patent_id | description | length
---|---|---|
11857284 | One or more of the illustrated elements may be exaggerated to better show the features, process steps, and results. Like reference numbers and designations in the various drawings may indicate like elements.

DETAILED DESCRIPTION Various embodiments of the present disclosure relate to surgical instruments for use with teleoperated robotic systems. More specifically, embodiments include drive assemblies for surgical instruments featuring steering input devices that are more efficiently manufactured, assembled, and/or tuned than in prior systems. For example, the input devices featured in certain embodiments include multiple components that can be manufactured by molding instead of machining. Further, certain embodiments provide drive assemblies that include a quick engage-release coupling that can be assembled (and disassembled) without special tools or fasteners. This simplifies the tuning (e.g., drive cable pre-tensioning) process, enabling full or partial automation. Still further embodiments provide input devices that are capable of withstanding high torque loads and specifically configured for use in applications requiring a compact footprint.

Minimally invasive surgery can be performed by inserting surgical instruments through orifices in a patient's body (e.g., natural orifices or body wall incisions) and controlling the surgical instruments via an interface on the outside of the body. In various embodiments of the present disclosure, the surgical instruments are teleoperated by surgeons. Thus, the surgeons do not move the instruments by direct physical contact, but instead control instrument motion from some distance away by moving master input devices ("masters"). The operating surgeon is typically provided with a view of the actual surgical site via a visual display, so that the surgeon may remotely perform surgical motions with the masters while viewing the surgical site. A controller of the surgical system causes the surgical instrument to be moved in accordance with movement of a master.

FIG. 1 depicts a patient-side portion 100 of a teleoperated surgical system in accordance with one or more embodiments of the present invention. Patient-side portion 100 is a teleoperated robotic system for performing minimally invasive surgery on a patient's body 10 positioned on an operating table 12. Patient-side portion 100 includes a column 102, a support assembly 104, and an instrument carriage 106. In this example, column 102 anchors patient-side portion 100 on a floor surface (not shown) proximate operating table 12. However, in other embodiments the patient-side portion may be mounted to a wall, to the ceiling, to the operating table supporting the patient's body, or to other operating room equipment. Support assembly 104 branches radially outward from the column 102, and instrument carriage 106 resides at a distal end of the support assembly. Instrument carriage 106 supports a detachable surgical instrument 108, and the carriage includes various actuators and control connections for controlling functionality of the instrument during a surgical procedure within the patient's body 10. In particular, the teleoperated actuators housed in instrument carriage 106 provide a number of controller motions that surgical instrument 108 translates into a corresponding variety of movements of the instrument's end effector.
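As a rough illustration of the master/instrument relationship described above, in which the controller moves the instrument in accordance with movement of a master, the following minimal sketch scales a master motion increment down to an instrument motion increment. The function name and motion-scaling factor are hypothetical assumptions; the patent does not specify how the controller maps master motion to instrument motion.

```python
# Hypothetical sketch of the master-to-instrument control mapping: the
# controller scales a master motion increment for fine surgical motion.
# The scale factor of 0.2 is an illustrative assumption.

def follow_master(master_pose_delta, motion_scale=0.2):
    """Scale a master motion increment down to an instrument increment."""
    return [motion_scale * d for d in master_pose_delta]

# Example: a 10 mm master translation commands a 2 mm instrument translation.
print(follow_master([10.0, 0.0, 0.0]))  # -> [2.0, 0.0, 0.0]
```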
In some examples, the surgical instrument includes a drive assembly housing an input device configured to facilitate controlled adjustment of the end effector in response to actuation signals from the instrument carriage. The particulars of the instrument's drive assembly and its individual components are provided below with reference to FIGS. 2A-4B.

Returning to FIG. 1, an entry guide 110 (e.g., a cannula) serves as a surgical port to an orifice of the patient's body 10 that receives surgical instrument 108 to guide the instrument into the patient. Entry guide 110 may perform various other functions, such as allowing fluids and other materials to pass into or out of the body and reducing trauma at the surgical site by isolating at least some motion of the surgical instrument (e.g., translating movement along an insertion axis, and/or axial (lengthwise) rotation of the instrument shaft around the insertion axis) from the body wall.

Support assembly 104 further includes an instrument manipulator 112 that controls positioning of surgical instrument 108 relative to the patient's body 10. In various implementations, instrument manipulator 112 may be provided in a variety of forms that allow surgical instrument 108 to move with one or more mechanical degrees of freedom (e.g., all six Cartesian degrees of freedom, five or fewer Cartesian degrees of freedom, etc.). Typically, mechanical or control constraints restrict instrument manipulator 112 to move surgical instrument 108 around a particular center of motion that stays stationary with reference to the patient's body 10. This center of motion is typically located proximate where surgical instrument 108 enters the patient's body 10 (e.g., at some point along entry guide 110, such as at the midpoint of the body wall). In this example, instrument manipulator 112 includes a joint 114 and an elongated spar 116 supporting instrument carriage 106 and entry guide 110. In this example, instrument carriage 106 is mounted to ride along the length of spar 116 while entry guide 110 is held fixed, so as to translate surgical instrument 108 through the entry guide along an insertion axis relative to the patient's body 10. Adjusting joint 114 locates surgical instrument 108 at a desired angular orientation about the center of motion, while movement of carriage 106 along spar 116 locates the surgical instrument at a desired insertion point through the center of motion. Thus, the teleoperated actuators of instrument manipulator 112 move surgical instrument 108 as a whole, as compared to the teleoperated actuators housed in instrument carriage 106, which move only the instrument's end effector or other individual instrument components. Manipulator 112 is illustrative of both manipulators that are configured to constrain the remote center of motion by fixed intersecting manipulator joint axes (hardware-constrained remote center of motion) and manipulators controlled by software to keep a defined remote center of motion fixed in space (software-constrained remote center of motion).

The term "surgical instrument" is used herein to describe a medical device for insertion into a patient's body and use in performing surgical or diagnostic procedures. A surgical instrument typically includes an end effector associated with one or more surgical tasks, such as a forceps, a needle driver, a shears, a bipolar cauterizer, a tissue stabilizer or retractor, a clip applier, an anastomosis device, an imaging device (e.g., an endoscope or ultrasound probe), and the like.
Some surgical instruments used with embodiments of the invention further provide an articulated support (sometimes referred to as a "wrist") for the end effector so that the position and orientation of the end effector can be manipulated with one or more mechanical degrees of freedom in relation to the instrument's shaft. Further, many surgical end effectors include a functional mechanical degree of freedom, such as jaws that open or close, or a knife that translates along a path. Surgical instruments appropriate for use in one or more embodiments of the present disclosure may control their end effectors (surgical tools) with one or more rods and/or flexible cables. In some examples, rods, which may be in the form of tubes, may be combined with cables to provide a pull, push, or combined "push/pull" or "pull/pull" control of the end effector, with the cables providing flexible sections as required.

A typical elongate shaft for a surgical instrument is small, for example five to eight millimeters in diameter. The diminutive scale of the mechanisms in the surgical instrument creates unique mechanical conditions and issues with the construction of these mechanisms that are unlike those found in similar mechanisms constructed at a larger scale, because forces and strengths of materials do not scale at the same rate as the size of the mechanisms. The rods and cables must fit within the elongate shaft and be able to control the end effector through the wrist joint. The cables may be manufactured from a variety of metal (e.g., tungsten or stainless steel) or polymer (e.g., high molecular weight polyethylene) materials. Polymer cables may be preferred in some embodiments to enable a discrete, multi-step pre-tensioning process. Polymer cables may be more suitable for such processes because they are not as stiff as metal cables and tend to release unintentional over-tensioning.

FIG. 2A illustrates a surgical instrument 108 including a distal portion 120 and a proximal drive assembly 122 coupled to one another by an elongate shaft 124 defining an internal bore. Drive assembly 122 includes a housing 125 supporting an input device 126. Input device 126 includes an instrument control surface 127. The input device facilitates controlled adjustment of the instrument's end effector via a drive cable extending along the internal bore of the elongate instrument shaft. Control surface 127 provides mechanical connections to the other control features of surgical instrument 108. During use, instrument control surface 127 couples to instrument carriage 106 (see FIG. 1), which controls surgical instrument 108, as generally described above. Distal portion 120 of surgical instrument 108 may provide any of a variety of surgical tools, such as the forceps 128 shown, a needle driver, a cautery device, a cutting tool, an imaging device (e.g., an endoscope or ultrasound probe), or a combined device that includes a combination of two or more various tools and imaging devices. Further, in the illustrated embodiment, forceps 128 are coupled to elongate shaft 124 by a wrist joint 130, which allows the orientation of the forceps to be manipulated with reference to the elongate shaft 124. The bottom view of surgical instrument 108 shown in FIG. 2B illustrates control surface 127 of input device 126. As shown, control surface 127 includes a set of eight steering inputs 132, each of which governs a different aspect of movement by wrist joint 130 and forceps 128. Of course, more or fewer steering inputs 132 can be provided in different implementations.
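The scale-dependence noted earlier in this passage, namely that forces and material strengths do not shrink in proportion to the mechanism, can be made concrete with a standard strength-of-materials relation (a textbook fact, not taken from the patent): the maximum tension a cable can carry scales with its cross-sectional area,

\[
F_{\max} = \sigma_{\text{allow}} \cdot A = \sigma_{\text{allow}} \cdot \frac{\pi d^2}{4},
\]

so halving the cable diameter \(d\) cuts the allowable tension to one quarter, while the forces demanded of the end effector are set by the surgical task rather than by the instrument's size.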
When control surface 127 is coupled to instrument carriage 106, each of steering inputs 132 interfaces with an actuator that drives the steering input. In this example, steering inputs 132 are configured to form a direct mechanical engagement with respective rotary actuators (e.g., servo motors) of instrument carriage 106. However, other suitable configurations for power transmission can also be used (e.g., indirect mechanical couplings including speed and/or torque converters, fluid couplings, and/or electrical couplings). Each of steering inputs 132 is part of a drive shaft (e.g., drive shaft 134 shown in FIGS. 3A-3B) that operates a drive cable (e.g., drive cable 166 shown in FIG. 5) controlling movement of forceps 128.

FIGS. 3A and 3B illustrate an isolated portion of input device 126. The illustrated portion of input device 126 includes a drive shaft 134 and a capstan 136. Drive shaft 134 and capstan 136 are separate and independent structures. These structures are depicted in FIG. 3B in an engaged state. As discussed in detail below, while in the engaged state, relative rotation between drive shaft 134 and capstan 136 is at least inhibited (or entirely prevented, in some examples). While in a disengaged state (see FIG. 5), the capstan 136 may be carried on the drive shaft 134, but relative rotation between them is freely permitted (i.e., uninhibited). Drive shaft 134 includes the disk-shaped steering input 132 and a cylindrical rod 138 extending outward from the steering input along the steering input's axis of rotation. Drive shaft 134 further includes a support stem 140 extending from a central bore of cylindrical rod 138. In this example, steering input 132 and cylindrical rod 138 are thermoplastic parts (e.g., nylon or polycarbonate) that are overmolded around the metallic support stem 140. Capstan 136 is a contiguous and monolithic tubular structure including a shank 142 and a head portion 144. Head portion 144 features a pair of opposing rectangular notches 146 that provide a structural coupling feature to facilitate engagement with an external device (e.g., drive mechanism 202 shown in FIG. 5) for rotating capstan 136 as part of a cable pre-tensioning process. As shown in FIG. 3B, capstan 136 includes a central through-bore 148 traversing both its shank 142 and head portion 144. Bore 148 includes an upper portion 150 and a lower portion 152. Drive shaft 134 and capstan 136 are simultaneously aligned and coupled to one another by inserting support stem 140 of drive shaft 134 into lower portion 152 of the capstan's central bore 148. When capstan 136 is disengaged from drive shaft 134 (yet still coupled (loosely) to the drive shaft), support stem 140 functions as a spindle that provides a central axis of rotation for the capstan. When capstan 136 is engaged with the drive shaft 134, mutual surface friction between the wall of bore 148 and support stem 140 provides a frictional force resisting relative rotation between the support stem and the capstan.

As noted above, input device 126 is specifically designed to carry a drive cable. During one exemplary use, one end of the drive cable is attached to drive shaft 134, and an opposite end of the drive cable is attached to capstan 136. In another exemplary use, one end of a first drive cable is attached to drive shaft 134, and one end of a second drive cable is attached to capstan 136. In some implementations, the drive cable end is crimped and coupled to the drive shaft 134 or capstan 136. In some implementations, purely frictional couplings may be used to attach the ends of the drive cable to drive shaft 134 and capstan 136.
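The reason a few wraps of cable can hold as a purely frictional coupling, as elaborated next, is captured by the classical capstan (belt friction) equation, a textbook result that is not stated in the patent:

\[
\frac{T_{\text{load}}}{T_{\text{hold}}} = e^{\mu \varphi},
\]

where \(\mu\) is the coefficient of friction between cable and drum and \(\varphi\) is the total wrap angle in radians. For example, with \(\mu = 0.2\) and three full wraps (\(\varphi = 6\pi \approx 18.85\)), the ratio is \(e^{3.77} \approx 43\), so the anchored tail needs to resist only about 1/43 of the working tension.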
For example, the cable ends may be wound about these components for multiple revolutions to provide sufficient surface friction to maintain the couplings intact. As shown, both drive shaft 134 and capstan 136 include outwardly facing helical grooves 154, 156 to guide the winding of the cable ends. The middle portion of the drive cable between the ends carried by input device 126 extends into the internal bore of the surgical instrument's elongate shaft 124. As described above, the drive cable traverses the internal bore and couples to an end effector or other distal end component of the surgical instrument. Power provided by an actuator of the instrument carriage is transmitted to drive shaft 134 via steering input 132, causing the drive shaft to rotate. With drive shaft 134 and capstan 136 in the engaged state, rotary motion imparted on the drive shaft is directly transferred to the capstan. Shared rotation of drive shaft 134 and capstan 136 may cause the respective ends of the drive cable to equally release from or further entwine these components. More specifically, the cable ends may be wound about the drive shaft and capstan in opposite directions, such that their simultaneous rotation in a clockwise direction causes one end of the cable to release from the capstan while the other end becomes further wound about the drive shaft, and vice versa with counter-clockwise rotation. Such controlled movement of the drive cable facilitates operation of a "push/pull" or "pull/pull" mechanism for working the end effector.

FIG. 4A depicts a portion of drive assembly 122, specifically housing 125, drive shaft 134, and capstan 136. In this example, housing 125 is a multi-component structure including a base 158 mounted to a carriage 160. Drive shaft 134 is rotatably mounted to housing 125, with steering input 132 supported within base 158 and cylindrical rod 138 supported in carriage 160. In addition to the structural features that accommodate the rod of drive shaft 134, carriage 160 also includes features for mounting other operative components of the drive assembly (e.g., spools, pulleys, etc.). Drive shaft 134 and capstan 136 are illustrated in FIG. 4A in an engaged state, such that the rotation of drive shaft 134 guided by the mounting hardware of housing 125 imparts identical motion to capstan 136. Engagement of drive shaft 134 and capstan 136 is facilitated by a taper friction fit between these components that at least inhibits, and in general prevents, relative rotation between them at the torques produced by cable tension. Structural features enabling the formation of a taper friction fit are illustrated most clearly in FIG. 4B. As shown, lower portion 152 of the capstan's central bore 148 and support stem 140 of drive shaft 134 are mutually sized for surface-to-surface contact. In this example, the mating surfaces of support stem 140 and lower bore portion 152 are rounded and smooth, forming a frictional coupling that is both keyless and unthreaded. Thus, drive shaft 134 and capstan 136 can transition from the disengaged state to the engaged state by simply imparting a downward vertical force on capstan 136. As such, no additional alignment steps are necessary after the capstan is placed on the support stem of the drive shaft, which greatly simplifies the assembly and cable pre-tensioning processes. In this example, the mating surfaces of support stem 140 and lower bore portion 152 are not only rounded, but also radially tapered, defining support stem 140 and lower bore portion 152 as frustoconical shapes.
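The frustoconical mating geometry just described underlies the self-locking behavior discussed in the next passage. The standard condition for a friction taper to self-lock, a textbook relation that is not derived in the patent, is

\[
\tan\alpha \le \mu,
\]

where \(\alpha\) is the taper half-angle and \(\mu\) is the coefficient of friction of the mating surfaces. The roughly 1.49-degree angle cited in the next passage corresponds to \(\tan\alpha \approx 0.026\), comfortably below typical dry-friction coefficients, which is consistent with a self-locking fit.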
The radial tapering aspect permits capstan 136 to sit loosely on the drive shaft's support stem 140 absent the external downward force (see FIG. 5). This permits the independent rotation of capstan 136 about the longitudinal axis of drive shaft 134 in the disengaged state. Radial tapering of these components further enables the taper friction fit to function as a self-locking coupling. The term "self-locking," as used in the present disclosure, means that the mating surfaces of the capstan and drive shaft provide sufficient frictional force to prevent relative rotation between them under the forces/loads transmitted during a surgical procedure, absent any external force. That is, with a self-locking coupling, the capstan is pressed down on the drive shaft to engage the two components and the pressing force is then removed, without disturbing the engagement. The self-locking coupling is maintained during use in a surgical procedure. A self-locking coupling is formed by providing the mating surfaces with a certain taper angle. This self-locking taper angle is a function of several variables, including material properties, surface roughness, expected rotational forces/loads, etc. In some particular implementations, we have found that the self-locking taper angle may be less than about 1.5 degrees (e.g., about 1.49 degrees).

As shown in FIG. 4B, the upper bore portion 150 of capstan 136 is coaxially aligned with a central blind bore 162 of support stem 140. These coaxial bores can be employed in conjunction with external hardware to facilitate engagement or disengagement of capstan 136 and drive shaft 134. For example, when the drive shaft-to-capstan taper friction fit forms a self-locking coupling, a lead screw may be used to release the capstan from the support stem. More specifically, in some implementations, upper bore portion 150 may include a pattern of internal threads designed to engage the threaded shank of a lead screw. Further, as shown, bore 162 includes a surface 164 (in this example a countersunk surface), which supports the lead screw and prevents further insertion of the screw into the bore. As shown, surface 164 forms a slight undercut with upper bore portion 150 to ensure contact between the lead screw and surface 164. Thus, removal of a self-locked capstan can be accomplished by inserting the lead screw into the threaded upper bore portion 150 and rotating the screw in the upper bore until the base of the screw presses against undercut surface 164 of support stem 140. Further rotation of the lead screw urges capstan 136 to ride up the threads of the lead screw, separating the capstan from the support stem 140. In some implementations, the capstan can be removed from the drive shaft by applying an upward external force to pull the two components apart.

The coaxial bores of drive shaft 134 and capstan 136 can also be used to maintain them in the engaged state (e.g., in the absence of a self-locking coupling). In this case, a mechanical fastener, such as a threaded set screw, can be used to lock the capstan to the drive shaft. More specifically, in some implementations, the blind bore 162 of support stem 140 may also be threaded and designed to interface with the threaded shank of the set screw. When the set screw is tightened, it bears down against the capstan and provides a constant downward force to augment and maintain the surface friction force of the coupling. In one implementation of the coupling between drive shaft 134 and capstan 136, the capstan is placed over the drive shaft so that it can rotate.
One cable end is secured to and wrapped around the drive shaft, and another cable end is secured to and wrapped around the capstan. The drive shaft is rotated until its corresponding cable is at a desired tension, and then the drive shaft is held in position to maintain the tension. Next, the capstan is rotated until its corresponding cable is at a desired tension, and then the capstan is held in position to maintain the tension. Optionally, the drive shaft and capstan are simultaneously rotated to establish the cable tensions. When both cables are at the desired tension, a force (e.g., a hammer strike) is applied to drive the capstan against the drive shaft and create an engaged first friction coupling between the capstan and drive shaft. The first friction coupling temporarily prevents relative rotation between the drive shaft and capstan, even though their corresponding cables urge the rotation. While the first friction coupling holds the capstan in place, an axially-aligned set screw or other suitable fastener is applied to further urge the capstan downward against the drive shaft to augment the first friction coupling and maintain a larger engaged second friction coupling, and so maintain the desired tension in the cables.

Referring next to FIG. 5, pre-tensioning of a drive cable 166 can be performed using an apparatus 200 appropriately configured to independently rotate drive shaft 134 and capstan 136 when these components are in the disengaged state. In this example, apparatus 200 includes a first drive mechanism 202 and a second drive mechanism 204. First drive mechanism 202 is powered by a first motor 206, and second drive mechanism 204 is powered by a second motor 208. As shown, drive shaft 134 is carried by first drive mechanism 202, and capstan 136 is carried by second drive mechanism 204. As discussed below, the two drive mechanisms can be used to pre-tension the drive cable by rotating the drive shaft and capstan alternately (i.e., one at a time) or simultaneously.

FIG. 6 illustrates a method 600 of tensioning a cable of a drive assembly for a surgical instrument. For purposes of clarity, the method 600 will be described in the context of apparatus 200 and input device 126, the individual components of which are described above. Step 602 of method 600 includes aligning capstan 136 with drive shaft 134 in a disengaged state. For example, the capstan may be placed over and on top of the drive shaft absent external force. In particular, the support stem of the drive shaft can be inserted into the lower portion of a central through-bore traversing the capstan. When radially tapered surfaces are used, the capstan sits loosely on the support stem, coupling the capstan to the drive shaft in a disengaged state. Step 604 includes coupling the respective ends of drive cable(s) 166 to drive shaft 134 and capstan 136. In some examples, the ends of the cable(s) are attached to the drive shaft and capstan by purely frictional couplings, absent additional connection hardware (e.g., crimps or other fasteners). For instance, the cable ends may be wound around the drive shaft and capstan. Step 606 includes independently rotating drive shaft 134 and capstan 136 to draw drive cable(s) 166 into tension. As discussed above, such independent rotation can be performed when the capstan is placed over the drive shaft, rotationally supported by the shaft's support stem, and the components are in the disengaged state.
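Before detailing how the independent rotation is driven, the overall sequence of method 600 (align, attach, tension, engage) can be summarized in a hypothetical control sketch. All function names, the apparatus interface, and the target tension below are illustrative assumptions; the patent describes the procedure only in mechanical terms.

```python
# Hypothetical sketch of the method 600 pre-tensioning sequence.
# The apparatus interface and the target tension are invented for
# illustration and are not taken from the patent.

TARGET_TENSION_N = 40.0  # assumed target cable tension

def pretension(apparatus):
    apparatus.place_capstan_on_stem()      # step 602: disengaged state
    apparatus.wind_cable_ends()            # step 604: frictional coupling
    # Step 606: alternately rotate one component while holding the other.
    while apparatus.cable_tension() < TARGET_TENSION_N:
        apparatus.hold_capstan()
        apparatus.rotate_drive_shaft(degrees=5)
        apparatus.hold_drive_shaft()
        apparatus.rotate_capstan(degrees=5)
    apparatus.press_capstan_axially()      # step 608: taper friction fit
```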
Independent rotation of the drive shaft 134 and capstan 136 may be performed by separately powering first and second drive mechanisms 202, 204 via first and second motors 206, 208. In some examples, the drive shaft and capstan can be rotated alternately, with one of the components being held fixed while the other is driven. In some other examples, the drive shaft and capstan can be rotated simultaneously. Step 608 includes securing capstan 136 to drive shaft 134 in an engaged state. Securing the capstan may include applying a downward vertical force against the capstan to drive it down against the stem portion of the drive shaft. The downward vertical force causes the radially tapered surface of the capstan's lower bore portion to bear against the radially tapered outer surface of the drive shaft's support stem. The mutual force exerted by these mating surfaces against one another provides sufficient friction to inhibit relative movement between the drive shaft and capstan. In some examples, the radial taper of the surfaces defines a self-locking taper, allowing the capstan and drive shaft to remain engaged absent the downward force. In some other examples, a set screw may be inserted through coaxially aligned bores of the capstan and support stem to maintain the downward force that facilitates the taper friction fit coupling. In some implementations, the capstan may be released from the support stem of the drive shaft to allow for further tensioning of the drive cable. If the taper friction fit between the capstan and drive shaft is not self-locking, the capstan can be released by removing the set screw. Release of a self-locking capstan may involve the use of a lead screw. For example, the lead screw may be inserted into a threaded bore of the capstan and rotated until it bears against a surface of the support stem to urge the capstan apart from the stem. As such, further tensioning can be performed by releasing the capstan, again independently rotating the capstan and drive shaft, and then re-engaging the capstan and the drive shaft.

FIGS. 7A and 7B illustrate an isolated portion of a second exemplary input device 726. Similar to input device 126 of FIGS. 3A and 3B, input device 726 includes a drive shaft 734 and a capstan 736 that are separate and independent structures capable of being reversibly adjusted between an "engaged state" (as shown) and a "disengaged state" for pre-tensioning and securing one or more drive cables. Drive shaft 734 includes a disk-shaped input 732 and a cylindrical rod 738 extending outward from the input. In this example, drive shaft 734 further includes a central blind bore 735 (see FIG. 7B) having interior screw threads for receiving and engaging a set screw 768. Drive shaft 734 still further includes exterior, outwardly facing helical grooves 754 to guide the winding of cable ends for forming a frictional coupling to the drive shaft. Alternatively, cable ends may be secured to the drive shaft and capstan as described above. Capstan 736 includes a shank 742 and a head portion 744. As shown, head portion 744 has a polygonal cross-section (hexagonal, in this example) with planar top and side surfaces 745, 746. Planar top surface 745 engages a first toothed washer 778a. Planar side surfaces 746 are configured to engage the bore of a socket-type tensioning tool 880, as described below with reference to FIG. 8. Shank 742 is generally cylindrical in shape, and, like drive shaft 734, includes a set of exterior, outwardly facing helical grooves 756 to guide the winding of cable ends.
Shank 742 also includes a planar bottom surface 747 that engages a second toothed washer 778b. Further still, as shown in FIG. 7B, capstan 736 includes a central through-bore 748 of constant diameter that axially traverses both shank 742 and head portion 744. As noted above, input device 726 includes a set screw 768. Set screw 768 is designed to couple capstan 736 to drive shaft 734, and to facilitate adjustment between the engaged and disengaged states of these components. Set screw 768 includes a radially enlarged head 770 centered atop a generally cylindrical shaft 772. Head 770 features a keyed blind bore 774 (e.g., a hex keyed bore) and a planar bottom surface for engaging the first toothed washer 778a. Shaft 772 includes an upper portion 772a and a lower portion 772b. The upper portion of the shaft has an enlarged diameter relative to the lower portion. In particular, the diameter of upper portion 772a closely matches the diameter of the capstan's through-bore 748, which enables shaft 772 to function as a spindle that provides a central axis of rotation for capstan 736 in the disengaged state. Lower portion 772b includes a set of exterior screw threads 776 that engage the interior threads of the drive shaft's blind bore 735. Engagement of these mating screw threads secures set screw 768 to drive shaft 734, and therefore also couples capstan 736 to the drive shaft. That is, the capstan is retained between the drive shaft and the head of the set screw. The initial engagement of the threads places the device in the disengaged state, leaving capstan 736 free to rotate about the upper portion (772a) of set screw shaft 772. Further rotation of set screw 768 progressively advances shaft 772 downward along the threaded blind bore 735 of drive shaft 734 until capstan 736 becomes effectively clamped between the head of the set screw and the drive shaft. This places the device in the engaged state.

In this example, input device 726 features an assembly of machined metal parts designed for withstanding relatively high torque loads during use of the surgical instrument. The advantage of utilizing metal-to-metal interfaces between clamping elements is increased slip resistance. Slippage between the capstan and drive shaft significantly degrades the degree of precision in controlling the surgical instrument's end effector because it introduces slack or less than desired tension in the drive cable(s). In a particular implementation, set screw 768 is composed of alloy steel, drive shaft 734 and capstan 736 are composed of aluminum (e.g., 6061-T651 aluminum), and toothed washers 778a, 778b are composed of stainless steel (e.g., 316 stainless steel). In these implementations, the hardness of the washers is greater than the hardness of the capstan, which allows the teeth of the washers to "bite" into the capstan to increase the hold between these components. This configuration exhibited an acceptable "slip limit" (i.e., the amount of torque applied at the onset of slippage) ranging between 80 and 160 ounce-inches (approximately 0.56 to 1.13 N·m) of torque.

FIGS. 8 and 9 depict a tool 880 for facilitating the pre-tensioning of one or more drive cables carried by input device 726. Tensioning tool 880 includes an enlarged flathead 882 and a shank 884. A central through-bore 886 axially traverses both flathead 882 and shank 884. As shown in FIG. 9, bore 886 includes a first portion 886a, a second portion 886b, and a third portion 886c of discretely increasing diameters. The first bore portion 886a is appropriately sized to receive a hex-profiled wrench 904 (i.e., an Allen wrench) that engages the hex-keyed blind bore 774 of set screw 768.
The second bore portion 886b is appropriately sized to receive the head 770 of set screw 768. The third bore portion 886c is appropriately sized to receive the head portion 744 of capstan 736. This third portion 886c further includes a polygonal profile that engages the planar side surfaces 746 of the head portion 744. To perform the cable pre-tensioning procedure, the device is placed in the disengaged state by loosely coupling capstan 736 to drive shaft 734 via set screw 768 (e.g., by initially engaging the set screw threads with the drive shaft threads, but not tightening down the set screw). The tensioning tool 880 is then fitted over the capstan 736, and a tensioning wand 902 inserted into a sidewall aperture 888 of flathead 882 is used to rotate the capstan relative to drive shaft 734 (e.g., by exerting a force on the wand that is tangential to the capstan). When a predetermined degree of pre-tensioning has been reached, the position of flathead 882 is held fixed as the profiled wrench 904 is inserted into the central bore 886 of tensioning tool 880 and used to tighten set screw 768 via the blind bore 774.

FIG. 10 illustrates a method 1000 of tensioning a cable of a drive assembly for a surgical instrument. For purposes of clarity, the method 1000 will be described in the context of tensioning tool 880 and input device 726, the individual components of which are described above. Step 1002 of method 1000 includes aligning capstan 736 with drive shaft 734. This alignment step places the central through-bore 748 of the capstan in a co-axial arrangement with the blind bore 735 of the drive shaft absent significant external forces. Step 1004 includes coupling the capstan 736 to the drive shaft 734 in a disengaged state. While in the disengaged state, the capstan and drive shaft are freely rotatable relative to one another. Coupling the capstan to the drive shaft in the disengaged state may include inserting the shaft of set screw 768 through the co-axially aligned through-bore of the capstan and blind bore of the drive shaft. Further, in some examples, a set of exterior screw threads on the shaft of the set screw can be engaged with a set of interior screw threads of the blind bore of the drive shaft. This initial threaded engagement retains the capstan between the drive shaft and the head of the set screw, but does not exert a clamping force to lock the capstan in place. Step 1006 includes coupling the respective ends of a drive cable to drive shaft 734 and capstan 736. As discussed above in connection with the method 600, in some examples, the ends of the cable are attached to the drive shaft and capstan by purely frictional couplings, absent additional connection hardware (e.g., crimps or other fasteners). For instance, the cable ends may be wound around the drive shaft and capstan. Step 1008 includes rotating capstan 736 relative to drive shaft 734 to tension the drive cable(s). And, step 1010 includes securing the capstan 736 to the drive shaft 734 in an engaged state. In some examples, tensioning tool 880 can be used to facilitate the rotating (step 1008) and securing (step 1010) of capstan 736. For example, the tensioning tool can be placed over the capstan, such that the bore of the tool is keyed to the head of the capstan. Then, tensioning wand 902 can be engaged with tensioning tool 880 to rotate the tool and the keyed capstan relative to the drive shaft.
When the desired cable tension is reached, the tensioning wand 902, and therefore flathead 882, is held in place to inhibit (or prevent) further rotation of the capstan while wrench 904 is inserted through the central bore of tensioning tool 880 to tighten set screw 768. Tightening the set screw advances the shaft of the screw through the blind bore of the drive shaft along the engaged threads until the head of the screw clamps down on the capstan with sufficient force to lock it in place.

The use of terminology such as "top," "bottom," "over," "downward," "upper," "lower," etc. throughout the specification and claims is for describing the relative positions of various components of the system and other elements described herein. Similarly, the use of any horizontal or vertical terms to describe elements is for describing relative orientations of the various components of the system and other elements described herein. Unless otherwise stated explicitly, the use of such terminology does not imply a particular position or orientation of the system or any other components relative to the direction of the Earth's gravitational force, or the Earth's ground surface, or any other particular position or orientation in which the system or other elements may be placed during operation, manufacturing, and transportation.

A number of embodiments of the invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the inventions. In addition, it should be understood that various described components and features optionally may be combined, so that one or more features of one embodiment may be combined with, or substituted for, one or more features of another embodiment consistent with the inventive aspects. | 36,471 |
11857285 | DETAILED DESCRIPTION For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended, with such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as illustrated therein, being contemplated as would normally occur to one skilled in the art to which the disclosure relates. It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof. Throughout the patent specification, a convention employed is that in the appended drawings, like numerals denote like components.

Reference throughout this specification to "an embodiment", "another embodiment", "an implementation", "another implementation" or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases "in an embodiment", "in another embodiment", "in one implementation", "in another implementation", and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures preceded by "comprises . . . a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or additional devices or additional sub-systems or additional elements or additional structures. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The apparatus, system, and examples provided herein are illustrative only and not intended to be limiting. The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Further, the terms "sterile barrier" and "sterile adapter" denote the same meaning and may be used interchangeably throughout the description.

Embodiments of the disclosure will be described below in detail with reference to the accompanying drawings. The disclosure relates to a robotic surgical system for minimally invasive surgery. The robotic surgical system will generally involve the use of multiple robotic arms. One or more of the robotic arms will often support a surgical tool which may be articulated (such as jaws, scissors, graspers, needle holders, micro dissectors, staple appliers, tackers, suction/irrigation tools, clip appliers, or the like) or non-articulated (such as cutting blades, cautery probes, irrigators, catheters, suction orifices, or the like).
One or more of the robotic arms will often be used to support one or more surgical image capture devices such as an endoscope (which may be any of a variety of structures such as a laparoscope, an arthroscope, a hysteroscope, or the like), or optionally, some other imaging modality (such as ultrasound, fluoroscopy, magnetic resonance imaging, or the like).

FIG. 1(a) illustrates a schematic diagram of multiple robotic arms of a robotic surgical system in accordance with an embodiment of the disclosure. Specifically, FIG. 1(a) illustrates the robotic surgical system (100) having four robotic arms (101a), (101b), (101c), (101d) mounted on four robotic arm carts (103a), (103b), (103c), (103d) around an operating table (105). The four robotic arms (101a), (101b), (101c), (101d) depicted in FIG. 1(a) are shown for illustration purposes, and the number of robotic arms may vary depending upon the type of surgery or the robotic surgical system. The four robotic arms (101a), (101b), (101c), (101d) may be arranged in different manners, including but not limited to: arranged along the operating table (105); separately mounted on the four robotic arm carts (103a), (103b), (103c), (103d); mechanically and/or operationally connected with each other; or connected to a central body (not shown) such that the robotic arms (101a), (101b), (101c), (101d) branch out of the central body.

FIG. 1(b) illustrates a schematic diagram of a surgeon console of the robotic surgical system in accordance with an embodiment of the disclosure. The surgeon console (107) aids the surgeon in remotely operating on the patient lying on the operating table (105) by controlling various surgical instruments and the endoscope mounted on the robotic arms (101a), (101b), (101c), (101d). The surgeon console (107) may be configured to control the movement of surgical instruments (as shown in FIG. 2) while the instruments are inside the patient's body. The surgeon console (107) may comprise at least an adjustable viewing means (109) and (111), including but not limited to 2D/3D monitors, wearable viewing means (not shown), and combinations thereof. The surgeon console (107) may be equipped with multiple displays which not only show a 3D high-definition (HD) endoscopic view of a surgical site at the operating table (105) but may also show additional information from various medical equipment which the surgeon may need during the robotic surgery. Further, the viewing means (109) and (111) may present various modes of the robotic surgical system (100), including but not limited to identification of the number of robotic arms attached, the current surgical instrument types attached, the current instrument end effector tip positions, and collision information, along with medical data like ECG, ultrasound display, fluoroscopic images, CT, and MRI information. The surgeon console (107) may further comprise an eye tracking camera system (113) for detecting the direction of the surgeon's eye gaze and accordingly activating/deactivating the surgical instrument controls. Furthermore, the surgeon console (107) may comprise mechanisms for controlling the robotic arms, including but not limited to one or more surgeon input devices (115L) and (115R), one or more foot pedal controllers (117), a clutch mechanism (not shown), and combinations thereof.
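As an illustration of the gaze-based activation described above, in which instrument control is enabled only while the surgeon's gaze is on the display, the following minimal sketch gates teleoperation on an eye-tracking signal. The class, its interface, and the grace period are assumptions invented for illustration, not taken from the patent.

```python
# Hypothetical gating of instrument control on the surgeon's eye gaze,
# as described for the eye tracking camera system (113). The names and
# the 0.5 s grace period are illustrative assumptions.
import time

GRACE_PERIOD_S = 0.5  # tolerate brief glances away before deactivating

class GazeGate:
    def __init__(self):
        self._last_on_screen = 0.0

    def update(self, gaze_on_display, now=None):
        """Return True if instrument control should remain active."""
        now = time.monotonic() if now is None else now
        if gaze_on_display:
            self._last_on_screen = now
        return (now - self._last_on_screen) <= GRACE_PERIOD_S
```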
The surgeon input devices (115L) and (115R) at the surgeon console (107) are required to seamlessly capture and transfer the complex actions performed by the surgeon, giving the perception that the surgeon is directly articulating the surgical instruments. Different controllers may be required for different purposes during the surgery. In some embodiments, the surgeon input devices (115L) and (115R) may be one or more manually-operated input devices, such as a joystick, exoskeletal glove, a powered and gravity-compensated manipulator, or the like. The surgeon may sit on a resting apparatus such as a chair (not shown) in proximity to the surgeon console (107) such that the surgeon's arms may rest on an arm rest (119) while controlling the surgeon console (107). The chair may be adjustable in height, elbow rest, and the like according to the surgeon's comfort, and various control means may also be provided on the chair and the arm rest (119). Further, the surgeon console (107) may be at one location inside an operating theater or may be placed at any other location in the hospital, provided connectivity to the robotic arms (101a, 101b, 101c, 101d) via wired or wireless means is maintained.

FIG. 1(c) illustrates a schematic diagram of a vision cart of the robotic surgical system in accordance with an embodiment of the disclosure. The vision cart (121) is configured to display the 2D and/or 3D view of the surgery captured by an endoscope mounted on any of the robotic arms. The vision cart (121) may be adjusted at various angles and heights depending upon the ease of view. The vision cart (121) may provide various functionality, including but not limited to a touch screen display, preview/recording/playback provisions, various input/output means, 2D-to-3D converters, and the like. The vision cart (121) may include a 3D monitor (123) to view a surgical site from outside the patient's body. One of the robotic arms typically engages a surgical instrument that has a video-image-capture function (i.e., a camera instrument) for displaying the captured images on the vision cart (121). In some robotic surgical system configurations, the camera instrument includes optics that transfer the images from the distal end of the camera instrument to one or more imaging sensors (e.g., CCD or CMOS sensors) outside of the patient's body. Alternatively, the imaging sensor(s) may be positioned at the distal end of the camera instrument, and the signals produced by the sensor(s) may be transmitted along a wire or wirelessly for processing and display on the vision cart (121). A 2D monitor (125) may be placed at the rear side of the vision cart (121), enabling a spectator or other non-operating surgeons to view the surgical site from outside the patient's body. The vision cart (121) may comprise various shelves (127) to hold a camera processing unit, a robotic arm control box, a robotic system processing unit, power back-up units, and the like.

FIG. 2 illustrates a perspective view of a tool interface assembly mounted on a robotic arm in accordance with an embodiment of the disclosure. The tool interface assembly (200) is mounted on the robotic arm (201) of the robotic surgical system (100). The tool interface assembly (200) is one of the main components for performing the robotic surgery on a patient. The robotic arm (201) as shown in FIG. 2 is for illustration purposes only, and other robotic arms with different configurations, degrees of freedom (DOF), and shapes may be used.
The robotic arm (201) is mounted on a robotic arm cart (203), on the opposite end from the tool interface assembly (200), such that the robotic arm (201) may be shifted freely within an operating theater. The tool interface assembly (200), as depicted by FIG. 2, comprises an ATI (Arm Tool Interface) connector (205) which enables a tool interface (207) to operationally connect with the robotic arm (201). The tool interface assembly (200) further comprises an actuator (209) mounted on a guiding mechanism (not shown) provided on the tool interface (207) and capable of moving linearly along the guiding mechanism, such as a guide rail. The movement of the actuator (209) along the guiding mechanism of the tool interface (207) is controlled by the surgeon with the help of the surgeon input devices (115L, 115R) on the surgeon console (107), as shown in FIG. 1(b). A sterile adapter (211) is releasably mounted on the actuator (209) to separate the non-sterile part of the robotic arm (201) from the sterile surgical instrument (213). A locking mechanism (not shown) is provided to releasably lock and unlock the sterile adapter (211) with the actuator (209). The sterile adapter (211) detachably engages with the actuator (209), which drives and controls the surgical instrument (213) in a sterile field. The surgical instrument (213) may also be releasably locked/unlocked or engaged/disengaged with the sterile adapter (211) by means of a push button (not shown) provided on the surgical instrument (213).

The surgical instrument (213) includes a housing (215), an end effector (217), and an elongated shaft (219) connecting the housing (215) to the end effector (217). The surgical instrument (213) may also contain stored information (e.g., on a semiconductor memory inside the instrument) that may be permanent or may be updatable by a processor of the robotic surgical system (100). The end effector (217) may be an instrument associated with one or more surgical tasks, such as a forceps, a needle driver, a shears, a bipolar cauterizer, a tissue stabilizer or retractor, a clip applier, an anastomosis device, an imaging device (e.g., an endoscope or ultrasound probe), and the like. Some instruments further provide an articulated support for the surgical instrument (213) such that the position and orientation of the surgical instrument (213) may be manipulated with one or more mechanical degrees of freedom in relation to the elongated shaft (219). A cannula locking assembly (221) is provided on the tool interface (207) and is configured to lock and unlock a cannula (223) having a hollow body. During a surgery, the surgical instrument (213) is mounted on the sterile adapter (211) and the elongated shaft (219) of the surgical instrument (213) is inserted through the hollow body of the cannula (223). For example, the cannula locking assembly (221) comprises a flap-like body which receives the cannula (223) and secures the cannula (223) thereon. Alternatively, the cannula locking assembly (221) may have a circular body for receiving the cannula (223) and comprise grooves to grip the cannula (223) at a stationary position.

FIG. 3 illustrates a perspective view of a surgeon console with surgeon input devices in accordance with an embodiment of the disclosure.
The surgeon console (300) may include a 3D monitor (301), an eye tracking camera system (303), a 2D monitor (305), at least one electromagnetic signal transmitter (307), a left hand surgeon input device (309L), a right hand surgeon input device (309R), a surgeon's hand rest (311), and a foot pedal switch assembly (313). The 3D monitor (301) may be equipped to not only show a 3D high-definition (HD) endoscopic view of a surgical site at an operating table but also to show additional information from various medical equipment which the surgeon may need during the robotic surgery. Similarly, the 2D monitor (305) may be placed below the 3D monitor (301), enabling the surgeon to view additional details regarding the robotic surgery. The eye tracking camera system (303) may be configured to detect the direction of the surgeon's eye gaze and accordingly activate/deactivate the surgical instrument controls. The electromagnetic signal transmitter (307) may be capable of transmitting an electromagnetic signal at a predefined boundary around the surgeon console (300). The predefined boundary of the electromagnetic signal may be varied around the surgeon console by a user/surgeon. Also, the predefined boundary of the electromagnetic signal may be varied dynamically around the surgeon console by a user/surgeon during the surgery. Further, the variation of the predefined boundary of the electromagnetic signal is facilitated by a processor/control system configured in the surgeon console (300). According to an embodiment, the electromagnetic signal transmitter (307) may be capable of moving in the x, y, and z directions. The movement of the electromagnetic signal transmitter (307) in the x, y, and z directions may be facilitated by a plurality of actuators such as linear actuators, telescopic actuators, and the like. The position of the electromagnetic signal transmitter (307) may be adjusted in the x, y, and z directions before the surgery based on the surgeon's body habitus, comfort, and the like. According to a specific embodiment, the position of the electromagnetic signal transmitter (307) in the x, y, and z directions may be adjusted dynamically during the surgery when the surgeon's hands holding the left and right surgeon input devices (309L, 309R) tend to move beyond the predefined boundary of the electromagnetic signal. The foot pedal switch assembly (313) includes several pedals which may be used for various purposes during surgery, such as clutching, toggling, cautery control, endoscope zoom in and out, and the like. The left hand surgeon input device (309L) and the right hand surgeon input device (309R) may be collectively referred to as a surgeon input device (400) throughout the disclosure, including the claims, and will be discussed later in more detail. A surgeon may sit on a chair (not shown) to manipulate the surgical instruments/tools (not shown) via the hand surgeon input devices (309L and 309R). The surgeon may sit on the chair (not shown) in proximity to the surgeon console (300) such that the surgeon's arms may rest on an arm rest (311) while controlling the surgeon console (300).

FIG. 4(a) illustrates a surgeon input device in accordance with an embodiment of the disclosure. The surgeon input device (400) may include the left hand surgeon input device (309L) and the right hand surgeon input device (309R) illustrated in FIG. 3. Both surgeon input devices (309L), (309R) are identical in nature and comprise the same assembly and functionality, which is explained below.
The surgeon input device (400) may comprise a housing (401), a button (403), a sensor wire (405), and a sensor (407). The surgeon input device (400) may be considered a handheld device connected to the surgeon console (300) by a wire. The housing (401) is an outer body that envelops all the aforesaid components of the surgeon input device (400). The housing (401) comprises a first end (401a) and a second end (401b), and extends along a length defined by a longitudinal axis between the first end (401a) and the second end (401b). The housing (401) also extends along a width defined along a transverse axis that extends transverse to the longitudinal axis, the length being greater than the width of the housing (401). The housing (401) may be made of any suitable resilient material such as a thermoplastic. In accordance with a specific embodiment of the disclosure, the housing (401) is made of polycarbonate plastic. The housing (401) may be painted or may have a protective coating such as a plastic spray paint. In accordance with an embodiment, a coating process may be used to form a protective coating of plastic paint on the surface of the housing (401). The housing (401) may be of any suitable size such that the surgeon input device (400) fits the hand of the surgeon. The housing (401) may be of a suitable thickness providing sufficient strength. The housing (401) may be of any shape that fits the hands of the surgeon. In accordance with a specific embodiment of the disclosure, the housing (401) is oval shaped. According to an embodiment, the housing (401) comprises an outer surface configured to be gripped by the surgeon's hand and to facilitate translation and rotation of the housing (401) by the surgeon's hand. The outer surface of the housing (401) may comprise contours or grooves for the surgeon's hand, which may assist the surgeon in gripping the surgeon input device (400).

The button (403) is positioned on one of the surfaces of the housing (401) and may protrude outwards from the housing (401). The button (403) is configured to be pressed by the surgeon to close a jaw of an end-effector (shown in FIG. 2) of any surgical instrument. The jaws may open when the surgeon releases the button (403). The button (403) may also be referred to as a pinch button for the opening and closing function of a jaw of the end effector. In accordance with a specific embodiment of the disclosure, the button (403) is circular in shape and made of polycarbonate plastic. The sensor (407) may be an electromagnetic sensor probe capable of detecting the electromagnetic signal from the electromagnetic signal transmitter (307) located at the surgeon console (300). According to an embodiment, the sensor (407) is a tracking sensor. The sensor wire (405) may be composed of one or more wires (not shown). The sensor wire (405) may be used for carrying power to the surgeon input device (400). The sensor wire (405) may also be used to transport signals or information between the control system and the electronics or sensors within the surgeon input device (400).

FIG. 4(b) illustrates an internal view of the surgeon input device in accordance with an embodiment of the disclosure. The housing (401) as shown in FIG. 4(a) is removed so that the internal components of the surgeon input device (400) can be seen.
The surgeon input device (400) further comprises at least one sensor (409) disposed within the housing (401) to sense the compression and decompression of the pinch button (403) by the surgeon. The at least one sensor (409) may be an optical sensor which senses the compression and decompression of the pinch button (403) and may send a signal via the sensor wire (405) to a control system to control opening and closing of the jaws of the end effectors of the surgical instruments. Further, the at least one sensor (409) may be a force sensor, an encoder, and the like. Also, a power source such as a battery may be embedded inside the surgeon input device (400). Alternatively, the surgeon input device (400) may be wirelessly connected with the control system by means of Bluetooth, ZigBee, and the like. FIG.4(c)illustrates an exploded view of the surgeon input device in accordance with an embodiment of the disclosure. The pinch button (403) as shown inFIG.4(c)comprises an extruding portion (411) protruding from the bottom side of the pinch button (403). The pinch button, as shown inFIG.4(c), is flipped by about 90 degrees so as to show the extruding portion (411); the operating position of the pinch button (403) is as shown inFIG.4(b). Further, the surgeon input device (400) comprises a support structure (413). The support structure (413) may have a portion (419) in which the sensor (407) is disposed. The portion (419) may have a shape similar to the shape of one end of the sensor (407). A lever (415) with a spring (417) is affixed in a hollow recess (421) of the support structure (413) by means of a shaft (423). In an embodiment, when the surgeon presses the pinch button (403), the extruding portion (411) presses the lever (415), and the at least one sensor (409) may detect the extent to which the lever (415) is pressed and send a signal to the control system. Based on the signal received from the at least one sensor (409), the control system regulates the opening and closing of the jaws of the end effector of the surgical instrument. When the surgeon releases the pinch button (403), the extruding portion (411) of the pinch button (403) releases the lever (415), and the spring (417) facilitates the return of the lever to its normal position. Further, the surgeon input device (400) is configured to provide an input to a control system to transform motion of a surgeon's hand into motion of the end-effector of the surgical instrument, the instrument's actuator, and the robotic arm. According to an embodiment, a capacitive sensor may be configured with the housing (401) to detect the presence of the surgeon's hand. When the surgeon holds the surgeon input device (400), the capacitive sensor senses the presence of the surgeon's hand and sends a signal to activate the robotic surgical system (100). According to another embodiment, the surgeon input device (400) further comprises a force sensor disposed within the housing (401), the force sensor being configured to detect a pressing and releasing of the at least one button (403). According to another embodiment, the surgeon input device (400) is configured to provide tactile feedback to the surgeon. According to another embodiment, the at least one sensor (407) disposed within the housing (401) is an electromagnetic sensor or an Inertial Measurement Unit (IMU) sensor. According to another embodiment, the surgeon input device (400) is in operative communication with the control system via a wired means.
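The control path just described (a sensor reads the extent of lever deflection and the control system maps that reading to jaw opening) can be illustrated with a short sketch. The mapping, the full-scale sensor count, and the jaw angle range below are hypothetical assumptions for illustration; the disclosure does not specify them.

```python
# Hypothetical sketch of the pinch-button-to-jaw control path described
# above. Sensor counts, angle range, and names are illustrative only.
LEVER_MAX_COUNTS = 1023      # assumed full-scale optical sensor reading
JAW_FULL_OPEN_DEG = 60.0     # assumed jaw angle when the button is released


def lever_to_jaw_angle(raw_counts: int) -> float:
    """Map the sensed lever deflection to a commanded jaw angle.

    A released lever (0 counts) leaves the jaws fully open; a fully
    pressed lever commands the jaws fully closed, as in the disclosure.
    """
    clamped = min(max(raw_counts, 0), LEVER_MAX_COUNTS)
    pressed_fraction = clamped / LEVER_MAX_COUNTS
    return JAW_FULL_OPEN_DEG * (1.0 - pressed_fraction)


# Example: a half-pressed button commands the jaws roughly half closed.
print(lever_to_jaw_angle(0))     # 60.0 (released, fully open)
print(lever_to_jaw_angle(512))   # ~30.0 (about half closed)
print(lever_to_jaw_angle(1023))  # 0.0 (fully closed)
```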
According to another embodiment, the surgeon input device (400) is in operative communication with the control system via a wireless means such as Bluetooth. According to another embodiment, the surgeon input device (400) may contain a power source, such as a battery pack, contained within the housing (401). According to another embodiment, the housing (401) is configured to receive control inputs from the surgeon via one or more of translation of the housing (401), rotation of the housing (401), pressing of the outer surface with the surgeon's hand, and changing of the angular orientation of the longitudinal axis of the housing (401). The surgeon input device (400) may be made of inexpensive materials such as, but not limited to, soft rubber and plastic. The sensor and the other related electronics may also be inexpensive, thus making the entire surgeon input device (400) inexpensive. Another advantage of using inexpensive materials for the surgeon input device (400) is that the design may be scalable in size. Thus, surgeon input devices (400) of differing sizes may be manufactured to accommodate the various hand sizes of surgeons. In some embodiments, the surgeon input device (400) may be a disposable component, e.g., for use in a single surgical operation. The foregoing descriptions of exemplary embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiment was chosen and described in order to best explain the principles of the disclosure and its practical application, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, and that this disclosure is intended to cover the application or implementation without departing from the spirit or scope of the claims of the present disclosure. Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims. While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person in the art, various working modifications may be made to the apparatus in order to implement the inventive concept as taught herein. | 27,300 |
11857286 | DETAILED DESCRIPTION Referring toFIG.1, glove100includes a finger portion A, a palm portion B, and a cuff portion C. The finger portion A includes an index finger108, a middle finger110, a ring finger112, and a pinky finger114. A thumb finger106is also included in the glove, and may or may not be considered included in or as an aspect of the finger portion A in certain embodiments. In fact, the finger portion A may include any one or more fingers and/or thumb, as applicable in the embodiments. The palm portion B includes a generally cylindrical extension102from the finger portion A. The extension102opposing the finger portion A forms a cuff portion C. The cuff portion C is sized to accommodate insertion of a hand, the palm portion B is sized to accommodate a palm and back of the inserted hand, and the finger portion A is sized to accommodate the respective pinky finger, ring finger, middle finger and index finger of the inserted hand in the pinky finger114, ring finger112, middle finger110, and index finger108, respectively, and the thumb of the inserted hand in the thumb finger106. The glove100may be of any material and/or combination of materials. For non-exclusive example, the glove may be nitrile, latex, vinyl, neoprene, rubber, synthetic, fabric, composite, natural material, or otherwise. The glove100may be stretchable, twistable, gatherable, semi-rigid, rigid, or otherwise, or combinations of these. For non-exclusive example, same or different materials may form distinct or differentiated portions or parts of the glove100. Additionally, the glove100may, but need not, include ribs, rolls, beading or other adornments or structures. Sizing of the glove100may depend on materials of construction; for non-exclusive example, the glove100may be of varied size, standard size, or combinations. If the glove100is for use in healthcare, law enforcement, industrial or laboratory applications, or otherwise, the glove may be of a standard or fixed length and width, and may be stretchable to fit tightly to a hand or else fit more loosely to the hand, as desired in the application. Although many variations of the glove100are possible in the embodiments, a particular non-exclusive type of the glove is a nitrile, latex, vinyl, neoprene or rubber glove that is stretched to accommodate the hand of a wearer. Referring toFIG.2, in conjunction withFIG.1, a pair200of gloves202,204are positioned in sequential alignment, with the finger portion A (which finger portion A may include any one or more fingers and/or the thumb, as applicable in the embodiment) of one glove202oriented away from the second glove204and with the cuff portion C of the glove202near the finger portion A of the second glove204. The second glove204is oriented in sequential alignment from the first glove202, such that the finger portion A of the second glove204is directed toward the cuff portion C of the first glove202and the cuff portion C of the second glove204is directed away from the first glove202. Referring toFIG.3, in conjunction withFIGS.1and2, a glove pair300includes glove302and glove304. The gloves302,304are manipulated such that a pinky finger324, ring finger322, middle finger320and index finger318, as well as a thumb finger316, of the glove302are spread away from the palm portion B of the glove302and the palm portion B is similarly extended to the cuff portion C of the glove302. The glove304is positioned with a pinky finger314, ring finger312, middle finger310, and index finger308of the glove304towards the cuff326of the glove302.
These fingers314,312,310,308of the glove304are inserted into the cuff326of the glove302. A thumb finger306of the glove304may extend outside the cuff326along the extension of the glove302, or else, according to certain embodiments, the thumb finger306may also be inserted in the cuff326or not. As inserted, a palm portion B of the glove304extends away from the cuff326of the glove302to form a cuff portion C of the glove304. A cuff328is opened to receive fingers of another glove, as further described. Referring toFIG.4, in conjunction withFIGS.1,2and3, packed gloves400include the glove302and the glove304. The glove302is positioned with the pinky finger324, the ring finger322, the middle finger320, and the index finger318folded against a first side of the palm portion B of the glove302. (The thumb finger316may also be inserted or not, as applicable in the embodiments.) The inserted pinky finger314, ring finger312, middle finger310and index finger308(not shown) of the glove304remain within the cuff326of the glove302. The palm portion B of the glove304is folded back against a second side of the palm portion B of the glove302opposite the first side. The thumb finger316of the glove302aligns along the palm portion B of the glove302. The thumb finger306of the glove304aligns in the opposite direction along the palm portion B of the glove302. Referring toFIG.5, in conjunction withFIG.4, another glove set500includes the packed gloves400and an additional glove502. The pinky finger324, ring finger322, middle finger320and index finger318(not shown inFIG.5but shown inFIG.4) are inserted into a cuff526of the glove502. The glove502is folded, such that a pinky finger524, ring finger522, middle finger520and index finger518lie against a palm portion B of the glove502. In certain embodiments, a thumb finger516of the glove502is positioned extending in the same direction as the thumb finger306of the glove304, alongside the palm portion B of the glove502; although, according to other embodiments, the thumb finger516may also or alternatively be inserted in the cuff526. As illustrated, successions of gloves are positioned with fingers (excluding thumb) of one glove within a cuff of another glove, in series from glove to glove. Referring toFIG.6, in conjunction withFIG.5, a chain of gloves600adds other gloves602,604,606in succession, each with a respective finger portion A (which finger portion A may include any one or more fingers and/or thumb, as applicable in the embodiment) of a glove inserted into a cuff portion C of a next in succession glove. A respective palm portion B of each glove is folded back on the finger portion of the glove and contacts a respective palm portion B of a preceding in succession glove. Referring toFIG.7, in conjunction withFIG.6, a glove dispenser700includes a container702formed with a hole704. As non-exclusive example, the container702may be formed of paper, cardboard, plastic, metal or otherwise or combinations. The container702may be initially presented with the hole704closed, and the hole704may be formed, such as for non-exclusive example, by tearing a perforated line around the hole704or otherwise in the embodiments. In any event, the chain of gloves600is positioned in the container702with a first glove606extending by a cuff portion C towards the hole704. When the hole704is opened, the cuff portion C of the glove606may be pulled or otherwise presented outward from the hole704. Within the container702, a succession of gloves606,604,602,502,302,304, etc. is retained.
Each successive glove (bottom to top in the Figure) has a finger portion A (which finger portion A may include any one or more fingers and/or thumb) positioned into a cuff portion C of a preceding glove. The gloves may be positioned in any manner within the container702; for non-exclusive example, the gloves may each be folded with a palm portion B folded back towards a finger portion A of the same glove, or other arrangements of gloves are possible. In any event, the positioning of the fingers of one glove within the cuff of a preceding glove in succession permits retrieval from the hole704of each next glove without simultaneous retrieval of a next in succession glove in the series. Referring toFIG.8, in conjunction withFIG.7, a glove and dispenser assembly800includes the container702formed with the hole704. Gloves contained in succession, with at least some fingers of each glove inserted into the cuff of the next glove in succession, are retrievable singly from the hole704. Because of the particular interconnection of fingers of one glove inside the cuff of another glove, the gloves are singly retrieved by cuff portion from the container702. Referring toFIG.9, a method900of a glove and dispenser assembly includes opening902a cuff of a first glove. A finger portion (which can include any one or more fingers and/or thumb) of a second glove is inserted904into the opened cuff of the first glove. Although not necessary in all embodiments, a palm portion of the second glove is folded back906towards the finger portion of the second glove. The method900proceeds with repeating steps902and904, and if included,906, for each successive glove of a plurality of gloves for a dispenser container. When the last glove of the plurality is encountered, and steps902and904(and if applicable,906) are performed for the glove, the interconnected gloves of the plurality are packaged in a dispenser, such as for non-exclusive example, the container702. In the foregoing, the invention has been described with reference to specific embodiments. One of ordinary skill in the art will appreciate, however, that various modifications, substitutions, deletions, and additions can be made without departing from the scope of the invention. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications, substitutions, deletions, and additions are intended to be included within the scope of the invention. Any benefits, advantages, or solutions to problems that may have been described above with regard to specific embodiments, as well as device(s), connection(s), step(s) and element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced, are not to be construed as a critical, required, or essential feature or element. | 9,907 |
11857287 | DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying figures, which form a part hereof. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, figures, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein. The methods and apparatus of the present disclosure are well suited for combination with many types of surgical instruments and robotic surgery devices, for example as described in PCT Application No. PCT/US2013/028441, filed on Feb. 28, 2013, entitled "AUTOMATED IMAGE-GUIDED TISSUE RESECTION AND TREATMENT", the entire disclosure of which is incorporated herein by reference, and suitable for combination in accordance with embodiments disclosed herein. As used herein, the terms proximal and distal in the context of the apparatus refer to proximal and distal as referenced from the apparatus outside the patient, such that proximal may refer to components outside the patient or nearer the operator and distal may refer to components inside the patient or further from the operator. As used herein, the terms proximal and distal in the context of anatomical locations are with respect to the operator of the apparatus, such that proximal may refer to anatomical locations nearer the operator and distal may refer to anatomical locations further from the operator. As used herein, the terms distal and proximal refer to locations referenced from the apparatus and can be opposite of anatomical references. For example, a distal location of a probe may correspond to a proximal location of an elongate member of the patient, such as a penis of the patient, and a proximal location of the probe may correspond to a distal location of the elongate member of the patient. Although specific reference is made to treatment of the prostate, the methods and systems disclosed herein can be used with many tissues. For example, the embodiments disclosed herein may be used in any urological, gynecological or proctological procedures. Embodiments as disclosed herein may be used in any surgical procedures to treat any tissue cavity comprising a proximal opening and a distal opening, the proximal and distal openings allowing the tissue volume to fluidly communicate with other organs or parts of the body adjacent the tissue volume. For example, although specific reference is made to the advancement of a hemostasis device through the urethra into the prostate, and through the bladder neck into the bladder, the hemostasis device as described herein may be advanced through any proximal opening of a tissue cavity into the cavity, and through any distal opening of the tissue cavity into another organ or body part adjacent the tissue volume. The surgical systems that are protected by the drape may relate to the administration of a hemostatic material or sealant to fill in whole, or in part, any bleeding closed tissue volume.
Such tissue volumes may comprise tissue spaces or voids occurring naturally, for example an aneurysm, fissure, or postpartum hemorrhage of the uterus. Such tissue volumes may, for example, be formed as a result of tissue removal of unnecessary or undesirable growths, fluids, cells, or tissues. The surgical systems as utilized in the surgical procedures are well-suited for treating closed tissue volumes remaining after tumor resection, endometrial ablation, polyp removal, cyst removal, and the like. The surgical systems involved in the surgical procedures may be well-suited for treating many types of closed tissue volumes such as within the rectum, prostate, uterus, cervix, liver, kidney, bowel, pancreas, muscle, and the like. The surgical system, or at least part of the surgical system, can be sterilized by normal methods that are compatible with the device, such as steam, heat and pressure, chemicals and the like. FIGS.1A and1Bshow a perspective view of a patient20and a surgical system10partially covered by a sterile drape100in accordance with some embodiments. The surgical drape100may be substantially flexible and may be impervious to liquids, such as bodily fluids. The surgical drape100may comprise a first portion110coupled to a second portion130. The second portion may comprise a non-transparent or opaque portion, and the first portion may comprise an optically transmissive material, such as one or more of a visually transparent material, a visually translucent material, or a semi-transparent material, whereby at least a portion of light is permitted to pass through the material to allow at least partial visualization through the material. For example, a transparent material or a translucent material may allow most of the light in the visible spectrum to pass through and allow at least partial visualization through the material. A semi-transparent material or semi-translucent material may allow only a portion of the visible light or certain wavelengths of light to pass through, thereby resulting in visibility being reduced to some extent. The first portion110may be at least partially transparent to the visible light spectrum, such that a user can see through the portion to view an underlying object. As shown inFIGS.1A and1B, the surgical drape100may be sized and shaped to cover at least a portion of a surgical system10. The surgical system10may comprise an imaging probe12for imaging tissue in a patient's body. The imaging probe12may comprise, for example, an ultrasonography probe. The imaging probe may comprise a transrectal ultrasound (TRUS) probe or other imaging modalities for providing real time image guidance to a physician during a surgical procedure. The imaging probe12may be coupled to an articulating or mechanical arm14configured to support and/or actuate the imaging probe. For example, the imaging probe may be operably coupled to a distal portion of the mechanical arm, and a proximal portion may be coupled to an operation table or stand. The mechanical arm may be configured to provide one or more degrees of freedom of motion to the imaging probe. For example, the mechanical arm can be used to move the imaging probe along a longitudinal axis towards a target tissue of the patient. The surgical drape100may comprise a first portion110comprising a canopy portion111. Part or all of the first portion110may comprise a visually transparent or translucent material.
In some cases, part of the first portion110may be visually transparent or translucent, while another part of the first portion may be opaque. The canopy portion111may preferably comprise a visually transparent or translucent material. The canopy portion111can be provided or disposed anywhere on the first portion110, for example at the center, edge, corner, top, bottom, and/or side of the first portion. The canopy portion111can be integrally formed with the first portion110or as part of the first portion. Alternatively, the canopy portion111may be provided as a separate piece from the first portion110such that the canopy portion can be fixedly or detachably coupled to the first portion. In some cases, the first portion110may comprise a cut-out or opening configured to couple to the canopy portion111. For example, the cut-out or opening of the first portion may be sized and shaped to match the canopy portion111, as described elsewhere herein. The canopy portion111may be sized and shaped to substantially cover a proximal end of the imaging probe12. In some embodiments, the canopy portion111may comprise a three-dimensional configuration that provides a working space for the imaging probe12to move in an unrestricted manner therein, for example with less physical impedance or interference. The imaging probe can be an ultrasonography probe having a proximal portion12-1that is supported by the articulating or mechanical arm14. The first portion110or the canopy portion111may be configured to at least partially cover the mechanical arm coupled to the ultrasonography probe and/or the proximal portion of the ultrasonography probe. The surgical drape100may also comprise a second portion130coupled to the first portion110. The first portion110and the second portion130may be fixedly or detachably coupled to each other. The second portion130may be sized and shaped to cover at least a portion of a torso22of the patient. As previously described, the second portion130may be a non-transparent or opaque portion, although the invention is not limited thereto. In some cases, one or more parts of the second portion130can be visually transparent or translucent. The surgical drape may cover at least a portion of an articulating or mechanical arm14of the imaging probe. In some cases, the entire imaging probe, including the articulating arm and a base from which the arm extends, may be covered by the surgical drape. In some situations, the entire imaging probe may be covered by the drape to create a sterile barrier to physically separate the imaging probe from the operation area of the patient. As mentioned previously, the surgical drape may be compatible with surgical systems utilized in male urology surgical procedures or prostate surgery. In some embodiments, the surgical system may comprise a treatment probe16(e.g. shown inFIG.1B) and an imaging probe. The patient may be placed on a patient support (e.g., examination table or operation table), such that the treatment probe and the imaging probe (e.g. ultrasound probe) can be inserted into the patient. The patient can be placed in one or more of many positions such as prone, supine, upright, or inclined, for example. In some embodiments, the patient may be placed in a lithotomy position, and stirrups may be used, for example. The treatment probe and the imaging probe can be inserted into the patient in one or more of many ways.
In some embodiments, the imaging probe may be inserted into the rectum of the patient and the treatment probe may be inserted into the urethra of the patient, and the drape disclosed herein may provide a transparent sterile barrier between the urethra and rectum. In some situations, the imaging probe is not sterilized, and a sterile barrier may be provided to physically separate the imaging probe from the operation area of the patient. In some cases, insertion of the treatment probe (e.g., sealant delivery device, tissue resection device) and/or delivery of sealant to a cavity or the tissue may be guided by the imaging probe. The imaging probe may be an ultrasonography probe. The imaging probe can comprise a transrectal ultrasound (TRUS) probe or other imaging modalities for providing visual guidance. TRUS may be used to guide actuation of the catheter during sealant delivery, for example by retracting or advancing the catheter within the cavity by mechanical or manual means. In some embodiments, the treatment probe may comprise a handpiece. In some cases, the treatment probe may be configured to image the target tissue. The treatment probe may comprise an elongate structure having a working channel sized and shaped to receive an endoscope and a carrier of a carrier tube. The carrier may be configured to direct and scan a light beam on the treatment area to determine a profile of the tissue removed. The carrier may also be configured to release a fluid stream comprising a waveguide and scan the light pattern of the fluid stream comprising the waveguide. The treatment probe may be a urethral probe for tissue resection and volumetric tissue removal. For example, the treatment probe may direct a fluid stream radially outwardly for controlled resection of tissue such as the prostate and luminal tissues. Optionally, the fluid stream may be used to deliver light, electrical, heat or other energy sources to aid in resection and/or to cauterize the treated tissue. Alternatively, the treatment probe can be any tool or robotic device that can perform or assist in the urologic surgery with or without manual operations. The imaging probe may comprise or be supported by an articulating arm or mechanical arm14extending from the base. The mechanical arm may be connected to a proximal end12-1of the elongate imaging probe12. In some embodiments, the articulating arm or mechanical arm may comprise an actuator117to manipulate the imaging probe under user control. In some cases, the entire base, or at least a portion of the base or the articulating arm, may be covered by the drape, and the proximal end of the imaging probe may be entirely covered by the canopy portion of the drape. For instance, as shown inFIGS.1A and1B, the proximal end of the TRUS probe may be covered by the canopy portion of the first portion of the surgical drape, and the articulating arm may be covered by a non-canopy portion of the first portion. The imaging probe, for example a distal portion12-2of the imaging probe12, can be inserted into the patient in one or more of many ways. The imaging probe can comprise an ultrasonography probe. A proximal portion of the ultrasonography probe may be mounted on the articulating or mechanical arm14and a distal portion12-2of the ultrasonography probe may be inserted into the patient. During insertion, the articulating arm14may have a substantially unlocked configuration such that the imaging probe can be desirably rotated and translated in order to insert a distal portion12-2of the probe into the patient.
When the imaging probe has been inserted to a desired location within the patient, the articulating or mechanical arm14can be locked. In some cases, the imaging probe and the treatment probe may be inserted into the patient sequentially or concurrently. In a locked configuration of the imaging probe, the imaging probe and/or the treatment probe can be oriented in relation to each other in one or more of many ways, such as parallel, skew, horizontal, oblique, or non-parallel, for example. It can be helpful to determine the orientation of the probes with sensors such as angle sensors, in order to map the image data of the imaging probe to coordinate references of the treatment probe. Having the tissue image data mapped to the treatment probe coordinate reference space can allow accurate targeting and treatment of tissue identified for treatment by an operator such as the physician. Accordingly, it is ideal for the imaging probe to be capable of moving unimpeded and with few restrictions while being covered by the surgical drape. In some embodiments, the treatment probe16may be coupled to the imaging probe, in order to align treatment with the treatment probe based on images from the imaging probe. The coupling can be achieved using a base that is common to the treatment probe and the imaging probe. The imaging probe can be coupled to the base with the articulating or mechanical arm14, which can be used to adjust the alignment of the imaging probe when the treatment probe is locked in position. The articulating or mechanical arm14may comprise a lockable and movable probe under control of an imaging system or of the console and of a user interface, for example. The articulating or mechanical arm14may be micro-actuable so that the proximal end12-1of the imaging probe12can be adjusted with small movements, for example a millimeter or so in relation to the treatment probe. The movement of the imaging probe12or the proximal end12-1of the imaging probe may range from millimeters to centimeters. For instance, the proximal end of the imaging probe may move within a space having a dimension (e.g., length, width, height, diameter, diagonal) of at least 10 cm, 20 cm, 30 cm, 40 cm, 50 cm, 60 cm, 70 cm, or 80 cm. The proximal end of the imaging probe may be configured to move freely with respect to up to six degrees of freedom (e.g., three degrees of freedom in translation and three degrees of freedom in rotation). As shown inFIGS.1A and1B, the canopy portion111provides a working space or volume114for the imaging probe12to move therein with reduced restrictions. The working space114helps to reduce physical interference between the imaging probe and the drape when the imaging probe moves. In some cases, the canopy portion may be configured to move with the proximal end of the imaging probe12as a whole while the remaining portion of the drape is still supported in place. Additional details regarding the canopy portion are described later herein. The surgical drape100may comprise an aperture or fenestration118allowing access to the urethra by the treatment device or probe. The fenestration118may allow access to the organ that is isolated from the remainder of the patient's body covered by the drape. Alternatively, the fenestration118may allow access of a surgical instrument through the drape. The aperture or fenestration may be a through hole formed in the second portion130of the surgical drape or the first portion110of the surgical drape.
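The disclosure motivates mapping image data into the treatment probe's coordinate reference using angle sensors, but does not give a method. The following is a minimal sketch of such a mapping under simplifying assumptions (a single angle sensor measuring relative rotation about one axis, plus a measured translation between probe origins); the function names and all numeric values are illustrative, not from the disclosure.

```python
# Hypothetical sketch: map a point from the imaging probe's image frame
# into the treatment probe's coordinate reference. A single rotation
# angle and a fixed offset are assumed; a real system would calibrate
# the full pose per setup.
import math


def rot_z(theta: float) -> list[list[float]]:
    """Rotation matrix about the z axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]


def map_image_to_treatment(p_img, angle_rad, offset):
    """Apply p_treat = R(angle) * p_img + t, with R from an angle sensor
    and t the translation between probe origins (both assumed)."""
    R = rot_z(angle_rad)
    return tuple(sum(R[i][j] * p_img[j] for j in range(3)) + offset[i]
                 for i in range(3))


# Example: a tissue target seen 20 mm off the TRUS probe axis, with the
# probes skewed by 10 degrees and offset by 50 mm (all values in mm).
print(map_image_to_treatment((0.0, 20.0, 0.0),
                             math.radians(10.0),
                             (0.0, 50.0, 0.0)))
```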
The aperture or fenestration, in some embodiments, may be located adjacent to the canopy portion111. The surgical drape may be removable from the patient or the surgical system. Removal of the drape can be achieved by a perforation132which may extend along, or be at any angle to, a midline of the drape to the fenestration. This helps to provide easy removal of the drape after the urology procedure with instruments like catheters still in place. In some embodiments, the perforation132need not extend along the midline of the drape, and may be offset from the midline of the drape by a distance, for example around 5 cm, 10 cm, 20 cm, 30 cm, 40 cm, or 50 cm. In some cases, the perforation may include various other mechanisms such as zippers, buttons, sliders, ripcords and the like. The surgical drape may further comprise a container portion140for receiving waste generated during the surgical procedure. For example, in the male urology procedure, it is common that sizable amounts of liquids may be released from the urethra or other instruments. The container portion140can be a collection repository for body and irrigation fluids flowing from the patient during examination and surgery. In some embodiments, the container portion140may comprise a hole142allowing bodily fluids to exit therefrom. The container portion140may include a drainage area and/or funnel, which can direct body and irrigation fluids to another container situated below the container portion. Moreover, the container portion140may provide effective fluid management and may be compatible with a suction irrigation system. In some cases, the container may comprise an opening and may be configured to receive and store waste including bodily fluids, surgical-related fluids, tissue or debris generated during the surgical treatment. The surgical drape may comprise an attachment150to assist in locating and supporting the container portion140in position, particularly when the container is holding fluids contributing additional weight to the container. The attachment may be a component of the surgical drape or the container portion. The attachment may be a standalone element releasably attached to the surgical drape or the container portion. The attachment is useful for supporting the container portion when the container is holding fluids. For instance, in the event of a malfunction in the suction irrigation system, a sizable amount of fluids may be disposed into the container, and the drape may sag downwards due to the weight of the liquids in the container. The attachment150may comprise tethers or straps to support the container and prevent the drape from sagging. The tethers or straps can be attached from an opening side of the container portion to the legs24of the patient or the operation table. For example, the straps or tethers may be wrapped around the legs24of the patient over the drape or may be affixed to the operation table. The tethers can be coupled to a bed rail, a surgical arm, or to a physician. In some cases, the straps or tethers may be attached to a garment/gown of the physician, or attached to a halter that the physician wears around his or her neck. In some cases, the attachment may comprise a U-ring device configured to be attached to one side of an operation table, or an end of an operating table. Accordingly, the attachment150can allow the weight of the container portion to be partially supported by the legs of the patient or the operation table.
The legs of the patient may be covered by the second portion130or the first portion110of the drape. The legs of the patient may or may not be visible to a physician through the drape. FIG.2shows a first portion110of a surgical drape, in accordance with some embodiments. The first portion110may comprise a single sheet of material or multiple sheets of material, such as extruded or coextruded material. In some embodiments, the first portion may comprise multiple sheets of a same material that are stacked together. In other embodiments, the first portion may comprise multiple sheets of different materials that are stacked or sandwiched together. The first portion110may comprise a visually transparent or semi-transparent portion. The first portion may comprise, for example, clear plastics or a latex thin film. The first portion may or may not have color. The first portion may be clear without color. Alternatively, the first portion may have a color such as blue, green, yellow, red, or any other color. The first portion may comprise a uniform color or a mixture of various different colors. The sheet(s) of material used for the first portion may be constructed from readily available plastic films used in the medical field, for example, vinyl (such as polyvinyl chloride), polyethylene, polypropylene, polycarbonate, polyester, silicone elastomer, acetate and so forth. The sheet of material used for the first portion110may be a transparent or translucent material that permits a user to see through the portion during insertion or operation of an imaging probe. The sheet of material may be translucent whereby at least a portion of light is permitted to pass through the material. The sheet of material may be at least partially transparent to the visible light spectrum, such that a user can see through the portion to view an underlying object. The sheet of material may allow visual viewing of a non-sterile instrument inserted into the patient, such as an imaging probe. The sheet of material used for the first portion110may be impervious to liquids. For example, the material may be a hydrophobic material to prevent moisture absorption by the drape. The sheet of material used for the first portion may possess a desirable stiffness or flexibility such that an operator is able to move an articulating arm coupled to the imaging probe without tearing the first portion. The sheet of material used for the first portion may be configured to have a predetermined bending stiffness, or a range of bending stiffness values. In another example, the sheet of material may have a tensile modulus in a range of 0.1 GPa to 5 GPa, where GPa refers to gigapascals, as will be understood by one of ordinary skill in the art. A material may be of a harder grade when its tensile modulus is greater. A material may be flexible or soft when its tensile modulus is small. In selecting a sheet material for use in the first portion, factors such as softness of the sheet, breathability, adaptability of the sheet to the body contour of a patient, patient comfort, or canopy portion stiffness properties can be evaluated in conjunction with the thickness of the sheet. For instance, a sheet made of a soft material may be provided having a greater thickness than a sheet made of a harder grade material, in order to provide similar stiffening effects.
The type of material and its thickness may be selected such that the canopy portion of the first portion may be configured to collapse to less than its full volume when the canopy portion is covering the proximal portion of the imaging probe, e.g. an ultrasonography probe. Additionally, the type of material and its thickness may be selected such that an operator is able to manipulate an articulating arm through the surgical drape without tearing any portion of the drape. In some embodiments, the canopy portion or the first portion of the drape may comprise a thin flexible material having a thickness ranging from about 0.05 mm to about 3 mm, for example from about 0.25 mm to about 3 mm. Alternatively, the thickness of the first portion or the canopy portion can be less than 0.25 mm or greater than 3 mm. For example, the first portion comprising the canopy portion may comprise a thin flexible material having a thickness within a range from about 0.05 mm to about 3 mm. The first portion may comprise a canopy portion111. The canopy portion111may comprise a shape and dimension to cover at least a proximal end of an imaging probe as described elsewhere herein. The canopy portion111may comprise a three-dimensional space114sized and shaped to cover the proximal portion of the imaging probe such that the imaging probe is permitted to move in a non-restrictive manner within the three-dimensional space. The space114provided by the canopy portion may be sufficient to accommodate movement of the imaging probe, without the canopy portion or other portions of the drape interfering with the movement of the imaging probe. The space114provided by the canopy portion may be greater than a working space required by the imaging probe. The canopy portion can be adapted for imaging probes of different sizes and dimensions. For example, the space provided by the canopy portion may be greater than the size or dimension of the imaging probe by at least 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%, 200%, or more. The space provided by the canopy portion may be defined by a volume, for example, 30-50 cm by 10-30 cm by 10-30 cm. The three-dimensional space may comprise a volume ranging from about 750 cm3to about 70,000 cm3. In some cases, the volume of the three-dimensional space may be less than 750 cm3or greater than 70,000 cm3. Alternatively, the space provided by the canopy portion may be defined by volumes of at least 40 cm×45 cm×25 cm, 45 cm×45 cm×30 cm, or 50 cm×50 cm×30 cm. The canopy portion may comprise a space having a volume that is greater than a volume occupied by the proximal portion of the ultrasonography probe. For example, the volume of the space provided by the canopy portion may be at least two times greater than the volume occupied by the proximal portion of the ultrasonography probe. The canopy portion may comprise any type of material, shape and/or dimensions that allows the canopy portion to collapse to less than its full volume in a free-standing configuration, such that the canopy portion can be wrapped around the proximal portion of the ultrasonography probe or cover the proximal portion of the ultrasonography probe. The canopy portion is described to be in a free-standing configuration when the canopy portion is not covering any underlying object and is allowed to collapse to a substantially flattened shape under its own weight.
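A short worked example may clarify how the quoted canopy dimensions relate to the volume guidance above. The probe envelope value below is an assumed placeholder for illustration; only the canopy dimensions and the 750-70,000 cm3 range come from the disclosure.

```python
# Illustrative arithmetic for the canopy sizing guidance above. The probe
# working volume is an assumed placeholder, not a disclosed value.
def box_volume_cm3(length_cm: float, width_cm: float, height_cm: float) -> float:
    return length_cm * width_cm * height_cm


canopy = box_volume_cm3(40, 45, 25)   # one of the example canopy sizes
probe_envelope = 8_000.0              # assumed probe working volume, cm^3

print(canopy)                          # 45000.0, within the 750-70000 cm^3 range
print(canopy >= 2 * probe_envelope)    # True: at least 2x the assumed probe volume
```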
It is noted that the volume of the three-dimensional space of the canopy portion may be substantially reduced when the canopy portion is in a free-standing configuration (as compared to when the canopy portion is covering the imaging probe or a proximal portion thereof). In some cases, the volume of the three-dimensional space of the canopy portion may be reduced by 50%, 60%, 70%, 80%, 90%, or more than 90% when the canopy portion is in the free-standing configuration. The canopy portion111may comprise any three-dimensional shape such as a cube, orb, cylinder, cone, semi-sphere, cuboid, triangular prism, hexagonal prism, pyramid, self-supporting geodesic dome, and various other forms. In some cases, the canopy portion may comprise a cuboid shape or rectangular shape. The three-dimensional shape of the canopy portion may comprise any number of facets or edges where the adjoining facets meet. For example, the three-dimensional shape may include one, two, three, four, five, six, seven, eight, nine, ten or more facets. The three-dimensional shape of the canopy portion can be formed using any manufacturing or fabrication methods known to those skilled in the art. For instance, the canopy portion can be formed by a thermoforming process to mold a sheet of plastic material into a desired three-dimensional geometry, or by joining a plurality of pieces of material together, such as by using heat sealing. In some cases, the first portion110may comprise a base portion116connected to the canopy portion. The base portion116may be an outer section of the first portion. The base portion may be a substantially flat portion of the first portion and may be configured to wrap around at least the mechanical arm coupled to the ultrasonography probe or the proximal portion of the ultrasonography probe, such that the canopy portion collectively moves with the mechanical arm and the proximal portion of the ultrasonography probe. The canopy portion111may comprise a separate sheet of material that is attachable to the base portion116. The first portion may comprise a first sheet of material, and the canopy portion111may comprise a second sheet of material that may or may not be the same as the first sheet of material. The canopy portion111may be attached to the outer section of the first portion110or the base portion116. For example, the edges of the opening of the canopy portion may be sealed or sewn to the inner edges124of the outer section of the first portion or the base portion116. The inner edges124may also be referred to as the inner cut-out of the outer section of the first portion. The inner cut-out124may comprise a shape and dimension to match the opening of the canopy portion. The canopy portion can be coupled to the outer section of the base portion in a variety of ways. For example, the edges of the opening of the canopy portion may comprise flanges to seal the canopy portion to the inner cut-out of the outer section of the base portion116. The inner edges or the inner cut-out124of the outer section of the first portion or the base portion116may define a hole122therein. The canopy portion may be attached to the outer section of the first portion or the base portion116to cover the hole122. Alternatively, the canopy portion may be integrally formed with the first portion. For example, the canopy portion may be formed from the same sheet of material as the first portion.
The canopy portion may be visually transparent or semi-transparent to allow viewing of the imaging probe and/or alignment of the probe to the patient's body. The first portion110may be coupled to the second portion130and a container portion140of the drape100.FIG.3Aprovides a top planar view of the drape100as seen from above the patient, andFIG.3Bprovides a perspective view of the drape100. In some embodiments, the first portion110may comprise a fenestration118sized and shaped to receive a treatment probe to be inserted into a urethra of the patient. Alternatively, the fenestration118can be formed in the second portion130. In some cases, the fenestration118may be formed in the first portion110above the canopy portion111. The first portion110may permit viewing of the ultrasonography probe and can maintain sterility of the treatment probe when the treatment probe is inserted into the urethra of the patient. The fenestration118may be formed having any shape, such as circular, rectangular, square and various others. The fenestration118may have any dimension. For instance, the fenestration may have a diameter or length in a range of about 1 cm to 20 cm. In some cases, the fenestration118may comprise reinforcement elements120attached to the periphery of the fenestration to provide reinforcement to the fenestration upon insertion of an instrument or male organ. The reinforcement elements120may comprise a diaphragm surrounding the fenestration or partially surrounding the fenestration. The reinforcement elements can comprise a material which possesses the same degree of flexibility and rigidity as, or may be softer or more rigid than, the material used for the fenestration/transparent portion. The reinforcement elements may comprise opaque, translucent or transparent materials, and these reinforcement elements may provide visual contrast to enhance identification of the fenestration. The reinforcement elements may comprise materials that are flexible or elastic, and that possess sufficient rigidity to prevent an instrument such as a treatment probe from penetrating or tearing the drape. In some cases, the reinforcement elements may comprise an elastic gather that cinches around the organ for maintaining integrity of the sterile field. Alternatively, the reinforcement elements may comprise an adjustable closure to cinch around the organ for maintaining integrity of the sterile field. Additionally or optionally, the reinforcement elements may comprise an adhesive material to gather the drape and attach around the organ for maintaining integrity of the sterile field. The surgical drape may comprise a container portion140for management of fluids generated during a surgical procedure. The container portion140may comprise a sheet of material which is impervious to liquids. The container portion140may be removable or releasably attached to other parts of the drape. For instance, the container portion may be a liquid pouch that can be attached to a lower portion of the drape under the canopy portion. The liquid pouch may be attached to the drape via any suitable means such as, for example, strips, ribbons, buttons, self-adhesive strips or tabs. Alternatively, the container portion140may be integrally formed with the first portion. In some embodiments, the container portion140and the first portion of the drape may be formed from the same sheet of material.
In some cases, the container portion140may comprise joined edges146such that the sheet of material, in a triangular shape, may be folded and joined at the joined edges to form the pouch. Alternatively, the container portion140may not be formed with the joined edges. As illustrated inFIG.3A, the container portion140may be positioned below the canopy portion111to receive fluids flowing downwards. The container portion140may comprise any shape that aids in collecting and/or directing fluids to flow towards the exit port. In some embodiments, the container portion may comprise a substantially conical funnel shape to allow fluids to drain to the exit port. In another example, the container portion may comprise a substantially rectangular form having an angled or sloped bottom that allows fluids to drain to the exit port. The container portion140may include a drainage area and/or funnel for directing body and irrigation fluids to another container or suction system (not shown). For example, the container portion may comprise a screen148attached to a lower inner side of the container portion allowing bodily fluids to pass through. The screen148may comprise mesh with pores to prevent clogging of the exit port144. The pore size may vary in a range, for example, from 1 mm to 20 mm. In some cases, the waste is passed through the screen, and the screen is configured to collect the tissues or debris generated during the surgical treatment. In some instances, the debris or tissues collected by the screen may be used for further diagnosis or analysis. In some instances, if the container portion prolapses, the screen may bunch up to allow airflow to the exit port. In some cases, the screen148may be detachable from the container portion, and can be sealed and used as a storage container for containing the collected samples. In some cases, the screen148may have sides made from material that is impervious to fluids to eliminate drips during storage or transport. In some cases, the screen may have a closure (zipper, zip-lock, adhesive seal, draw-string, clip, elastic, conformable wire, etc.) for securing the collected samples for storage or transport. In some cases, the screen148may comprise a transparent portion compatible with imaging modalities for tissue analysis. The container portion can provide effective management of fluids, and can be compatible with a suction irrigation system. In some embodiments, a suction irrigation system may be provided, and the container portion may comprise an exit port144at the bottom of the container configured to connect to the suction irrigation system. For example, the container portion may comprise an exit port with a sealing flange at the bottom of the container portion to be connected to a suction irrigation system (not shown). The container portion140may comprise an attachment150to support the container when it holds fluids. The attachment150may be used to attach an opening of the container portion to an upper portion of the non-transparent portion of the drape, in order to support the fluid-holding container. The attachment150may comprise straps or tethers that prevent the drape from sagging. For example, the straps or tethers may be used to attach an opening side of the container portion to the legs of the patient or to the operation table. For example, as illustrated inFIG.3C, the straps or tethers150may be wrapped around the legs24of the patient over the drape.
The straps or tethers may also be affixed to the patient using adhesive tape provided at the end of the straps or tethers150. Alternatively, the straps or tethers can be attached to the operation table, structures mounted to the operation table, or to the drape. The straps or tethers can be rolled up or folded prior to use (as shown inFIG.3A). For example, the straps or tethers may be folded and secured by securing tapes152prior to use. The straps or tethers150may have a length of at least 50 cm, 100 cm, 120 cm, 130 cm, 140 cm, 150 cm, 160 cm, 170 cm, 180 cm, 200 cm, or 250 cm. Any number or type of attachments can be used for holding the container portion in place. Although the illustrated invention shows two tethers attached to both sides of the opening of the container, any number of tethers (e.g. two or more) or any form of attachments (e.g., adhesives, anchoring mechanisms, Velcro, etc.) may be used to support the container portion. The surgical drape100may comprise a second portion130coupled to the first portion110. The second portion may be a non-transparent or opaque portion. The non-transparent portion may comprise a standard medical non-woven disposable material, and can be more opaque to provide privacy for the patient. The non-transparent portion may comprise a material that is impervious to liquids. The non-transparent portion may be sized and shaped to cover a torso22of the patient. The non-transparent portion may be sized and shaped to cover substantially the patient's body and at least part of the surgical system. For example, the non-transparent portion may cover a portion of the imaging probe. In some embodiments, the non-transparent portion may optionally and further comprise features such as slits136as shown inFIG.3Ato provide flexibility. The non-transparent portion can comprise any shape such as rectangular, triangular, square, or any other irregular shape. The first portion110and the second portion130may be detachably coupled to each other. Alternatively, the first portion110and the second portion130may be formed together as one piece. The first portion110and the second portion130may be stitched together as one piece. In some embodiments, as illustrated inFIG.3A, the first portion may be coupled to a cut-out134of the second portion. The cut-out may allow the surgical system or the imaging probe to be more easily viewed by an operator through the first portion. The second portion130may be configured to cover or wrap around the patient's legs24. The non-transparent portion may be wrapped around the legs of a patient as leggings such that a tether or strap for holding the container portion can be affixed to the legs by wrapping around them over the leggings (as shown inFIG.3C). The second portion130may or may not extend all the way down to the floor. The surgical drape100may comprise a separable line or perforation132along a midline of the surgical drape, or at any angle to the midline of the surgical drape. The perforation132may extend from an edge of the non-transparent portion to the fenestration. Alternatively, the perforation may not extend to the fenestration. The perforation may assist in removal of the drape after a urology procedure when instruments such as catheters are still in place. The perforation may also allow easy separation of the adjoining surfaces of the surgical drape. In some embodiments, the perforation may comprise a segment132-1located in the first portion110of the drape and a second segment132-2located in the second portion130of the drape.
In some embodiments, the perforation need not be positioned along the midline or central-line of the surgical drape. The perforation can be positioned anywhere on the drape, in a configuration permitting the drape to be split from one side through the fenestration without interference with instruments placed through the fenestration. In some cases, tapes133may be used to strengthen the perforation region. The tapes133may serve to provide a sealed barrier. The tape133may be applied to one side or both sides of the region where the perforation forms. The tapes133may be disposed on top and/or below the perforation and the tape may be impervious to fluids. For example, the tape may comprise a first tape layer on top of the perforation and a second tape layer below the perforation such that the perforation is sandwiched between the first and second tape layers. In the case when other mechanisms such as zippers, buttons, or sliders are used, the tape may also be applied to provide the seal. In some cases, the perforation comprises a sliding dove-tail mechanism that releasably opens and closes the perforation. The surgical drape100can be folded into a compact packet. The folded drape may be secured in an outer wrap or by adhesive tabs. The outer wrap or adhesive tabs may be removed when the packet is unfolded. FIGS.4A to4Cillustrate a method of using the surgical drape100with a patient20and a surgical system. As shown in steps A and B, the surgical drape100may be unfolded toward the patient's head, and the fenestration118of the drape may be positioned over the operative area of the patient. Next, the second portion130of the drape may be applied over the patient's legs24as leggings. The second portion130may be wrapped around to cover the legs24of the patient, as shown in step C. The second portion130may also cover a torso22of the patient. Proceeding to step D, the first portion of the drape110may be unfolded to substantially cover an imaging probe (not shown) of the surgical system, and the container portion140may be positioned in place below the surgical area. The first portion of the drape110provides less physical interference or impedance to the movement of the imaging probe. The visual transparency of the first portion of the drape100also allows an underlying site or tool to be viewed. The container portion140may comprise attachments such as straps or tethers150for attaching the container portion to the legs of the patient or to the operation table, as described elsewhere herein. The straps help to hold the container portion to a more stable support. As illustrated in step E, the tethers or straps150may comprise adhesive tabs to keep the tethers or straps folded and compacted when not in use. The tethers or straps may be extended and wrapped around the leggings of the surgical drape when in use. The tethers or straps150may be secured in place by placing the adhesive tabs on top of the strap or tether when not in use, as shown in step F. In some cases, the container portion may be configured to connect an exit port of the container portion to a suction irrigation system to remove fluids from the container portion. The surgical drape100can be easily removed by separation along the perforation when a surgical process is finished. AlthoughFIGS.4A to4Cillustrate a method of using a surgical drape in accordance with some embodiments, a person of ordinary skill in the art will recognize many adaptations and variations. 
For example, some of the steps can be omitted, some of the steps replicated, and the steps can be performed in any appropriate order. FIG.5shows a surgical drape comprising one or more frame structures160that may be combined with any embodiment disclosed herein. The frame structures can be adapted to structurally maintain one or more configurations of the surgical drape. For instance, the canopy portion111may comprise one or more frame structures160to maintain or provide the working space/volume114. The frame structures may be disposed in any location, or in one or more portions, of the drape. In some cases, the frame structures160may extend in an arc-like manner along sidewalls of the canopy portion111, for example as shown inFIG.5. In some embodiments, the frame structures may include a lining161along an opening of the container portion140. The frame structures can be made of any suitable materials such as metal, steel, plastic, fiberglass, and the like. In some cases, the frame structures can be folded and unfolded/deployed to support a variety of different configurations. The frame structures may include or utilize spring steel or any other type of elastic structure. In another example, the frame structures may include or utilize compliant stiffening elements that are integrated into the drape material. In some embodiments, at least one of the first portion110or the second portion130of the surgical drape100may be operably coupled to an actuation element (not shown). The actuation element can be configured to deploy one or more sections of the surgical drape from a compact configuration to an extended configuration. The compact configuration may comprise a substantially two-dimensional shape/profile, and the extended configuration may comprise a substantially three-dimensional shape/profile. The surgical drape may be in the compact configuration when the drape is not in use. The surgical drape can be deployed to the extended configuration prior to or during use of the drape for the surgical treatment of the patient. The extended configuration may correspond to a useable state/position for the drape. The actuation element may be integrated into the surgical drape, or included with the drape. In addition to deployment, the actuation element can further provide structural reinforcement to one or more sections of the surgical drape (e.g. the canopy portion111) in the extended configuration. Accordingly, the actuation element may comprise one or more stiffening elements. In some embodiments, the frame structures160shown inFIG.5may be the actuation element, or form part of the actuation element. The actuation element may comprise a stored energy device. The actuation element may include one or more spring elements. Non-limiting examples of spring elements can include a variety of suitable spring types, e.g., nested compression springs, buckling columns, conical springs, variable-pitch springs, snap-rings, double torsion springs, wire forms, limited-travel extension springs, braided-wire springs, etc. Further, the actuation element (e.g., spring elements) can be made from any of a number of metals (e.g. spring steel), plastics, or composite materials. In some cases, the actuation element may include fiberglass or plastic stiffeners, which also serve as stiffening elements as described elsewhere herein. 
In some cases, the one or more spring elements may include a deployment spring positioned to deploy one or more sections of the surgical drape from the compact configuration to the extended configuration. Similarly, the one or more spring elements may include a retraction spring positioned to retract one or more sections of the surgical drape from the extended configuration back to the compact configuration. In some instances, a monolithic spring can be configured to provide dual functions, e.g. (1) for deploying and also (2) for retracting one or more sections of the surgical drape. For example, the monolithic spring can be configured to transform between two or more states (fully compressed state, partially extended state, fully extended state, etc.). In any of the embodiments disclosed herein, the actuation element can include magnets, electromagnets, pneumatic actuators, hydraulic actuators, motors (e.g. brushless motors, direct current (DC) brush motors, rotational motors, servo motors, direct-drive rotational motors, DC torque motors, linear solenoids, stepper motors, ultrasonic motors, geared motors, speed-reduced motors, or piggybacked motor combinations), gears, cams, linear drives, belts, pulleys, conveyors, and the like. As described elsewhere herein, the first portion110of the surgical drape100may comprise an opening/aperture/fenestration118. In some embodiments, the opening may include an elastic strap or an adjustable closure that is configured to cinch around an organ of the patient as described herein, for maintaining integrity of a sterile surgical field or environment. The adjustable closure may include, for example, a zipper. Additionally or optionally, the opening may include an adhesive material to gather and wrap loose sections of the drape around an organ of the patient for maintaining integrity of a sterile surgical field or environment. The first portion110of the surgical drape may comprise material for covering the torso22or legs24of the patient, as described elsewhere herein. Referring toFIG.6, the material can be configured to hang loosely602. The material can also be wrapped around604the torso or underside of the legs of the patient or stirrups. The material can be secured using any means of attachment, for example straps, tethers, Velcro™, or tape. In some embodiments, the material may comprise an adhesive for attaching the material around the stirrups to form a holder, in which a container140for receiving and storing waste can be secured and suspended. Referring toFIG.7, the second portion130of the surgical drape can be mounted to a support structure18. The support structure may comprise one or more arms. The support structure can be attached to an operating table over or near the patient. For example, the support structure can be attached to bedrails11of an operating table. The second portion of the drape130can be mounted to the support structure, and similarly the support structure can be attached to the operating table using any means of attachment, for example straps, tethers, Velcro™, or tape. In any of the embodiments disclosed herein, the support structure18can be coupled19to a graphical display (not shown). The display may be supported beneath the drape, or supported by the drape. The display may be a screen. In some cases, the display may be a touchscreen. The display may include a light-emitting diode (LED) screen, OLED screen, liquid crystal display (LCD) screen, plasma screen, or any other type of screen. 
The display may be configured to show a user interface (UI) or a graphical user interface (GUI). The display can be operably coupled to the imaging probe12, articulating arm14, and/or treatment probe16. A physician may view real-time optical images collected by the imaging probe12on the display. In some cases, a physician can control one or more steps during the surgical treatment of the patient via the display (e.g. articulation of the treatment probe16and/or imaging probe12via the arm14). The second portion130of the surgical drape may comprise a translucent or transparent material138that permits the graphical display to be viewed through the drape. The transparent material138may be provided in one or more regions of the drape, for example as shown inFIG.7. The one or more regions may include a translucent or transparent viewing window139such that the display can be viewed. In some embodiments, the graphical display may include a touchscreen, and the translucent material138may be compatible for use with the touchscreen such that a user (e.g. a physician) is able to interact with the touchscreen, with the translucent material as an intervening layer. The graphical display may be located underneath the drape, but is visible through the viewing window139of the drape. In some embodiments, the translucent material may be flexible or loose-fitting so as to allow a user (e.g. physician) to manually manipulate one or more input/output (I/O) devices that are connected to the graphical display. Examples of I/O devices can include a joystick, mouse, trackball, trackpad, 3-dimensional cursor, button, knob, finger trigger, dials, touchscreen, touchpad, or keyboard. The translucent material may include excess or extra material to accommodate a user's manipulation of one or more underlying I/O devices. The container portion140(henceforth referred to as container) may comprise a funnel shape to allow fluids to drain to an exit port144. The container140may comprise a substantially conical funnel shape156, for example as shown inFIG.7. Alternatively, the container140may comprise a substantially rectangular funnel shape158, for example as shown inFIG.8. In some embodiments, the container140may comprise a bottom portion that is angled or sloped147, for example as shown inFIG.8. The angle or slope can be designed to enhance fluid flow towards the exit port144. In some embodiments, the container may comprise structures for supporting one or more configurations of the container.FIGS.9A and9Billustrate a container having a substantially rectangular funnel shape with an angled or sloped bottom, in accordance with some embodiments. The supporting structures can comprise interleaved structures, for example vertical pleats143as shown inFIG.9A, or horizontal pleats145as shown inFIG.9B. The vertical pleats143can serve or act as stiffeners that prevent the container140from collapsing and changing its shape/form when under load. Additionally or optionally, the vertical pleats143can be configured to allow airflow to vent displaced fluids as the fluid is being extracted or drained from the container140. Referring toFIG.9B, the horizontal pleats145can permit the container140to collapse into a collapsed or compact configuration in a telescoping manner. The container may be collapsible to a substantially planar configuration, and extendable to a substantially 3-dimensional configuration with the aid of the horizontal pleats145. The container140may comprise one or more flexible, semi-rigid, or rigid materials. 
The container140can be designed to achieve various desired characteristics such as strength, rigidity, elasticity, compliance, and durability. Non-limiting examples of materials may include fabric, silicone, polyurethane, silicone-polyurethane copolymers, polymeric rubbers, polyolefin rubbers, hydrogels, semi-rigid and rigid materials, elastomers, rubbers, thermoplastic elastomers, thermoset elastomers, elastomeric composites, rigid polymers including polyphenylene, polyamide, polyimide, polyetherimide, polyethylene, epoxy, partially resorbable materials, and the like. In any of the embodiments disclosed herein, the container140can be configured to serve as a packaging enclosure for storing the surgical drape100or portion thereof. The packaging enclosure can be used to store the surgical drape or portion thereof when the surgical drape is in its original state prior to use. Additionally or alternatively, the packaging enclosure can be used to store the surgical drape or portion thereof for subsequent disposal after the surgical drape has been used.FIGS.10A to10Dillustrate an embodiment in which the container140can be used as a packaging enclosure170for storing the surgical drape100. Referring toFIG.10A, the drape100may be coupled to the container140, and may extend outside of the container. Prior to storing the drape, the drape may be stretched out into a substantially planar configuration172. Next, the drape may be folded in the manner as shown inFIG.10B, by folding opposing ends174-1and174-2of the drape inward relative to a longitudinal axis175. Next, the drape may be compacted further by folding an end176inward relative to a transverse axis177, as shown inFIG.10C. Finally, the drape can be folded and tucked into the container for storage as shown inFIG.10D. The drape can be stored in the container when new (i.e. prior to use of the drape), and can be shipped in the container. Additionally or optionally, the drape can be stored in the container after the drape has been used (e.g. in a surgical treatment), for subsequent disposal or transportation to a disposal facility. In some embodiments, the container may include a lid (not shown) for covering the drape stored within the container. In any of the embodiments disclosed herein, the container140can be coupled to and supported by a user, e.g. carried by a user26, for example as shown inFIG.11. The user may be, for example a surgeon, and the container can be coupled to the user in many ways, for example with straps or tethers such that the user at least partially supports the container. In the example ofFIG.11, the container may be coupled to a halter180that is configured to be worn on or around the user's neck28, and the container may be supported by the halter around the user's neck when in use, for example. In some instances (not shown), the container may be coupled to a belt that is configured to be worn around the user's waist. In some embodiments, the container can be releasably attached to a user's gown using one or more quick release couplings154. The use of quick release couplings can help to improve operating room efficiency, for example by facilitating portability of containers and interchanging of used/new containers within the operating room. Examples of quick release couplings may include mechanical couplings, snapfits, adhesives, tapes, fasteners, magnets, and the like. 
The container may be releasably coupled to any portion of the user's gown, in a manner that aids disposal of wastes and without impeding the user's movement during the surgical treatment. In any of the embodiments disclosed herein, the container140may comprise one or more compliant stiffening elements for maintaining a structural form of the container. At least one of the first portion110or the second portion130may comprise one or more frame structures that support a configuration of at least the first portion or the second portion. The compliant stiffening elements may be used as frame structures. For example, as shown inFIG.12, stiffening elements190may be provided as rib liners extending across various parts of the surgical drape100and the container140. Structural reinforcement can be advantageous for prolonging the life of the container and the surgical drape for multiple use/patient encounters. The structural reinforcement can also be advantageous for fluid management, for example during long surgeries involving significant fluid management for multiple use/patient encounters. In some cases, the container140may comprise structures for management of fluid flow. The container140may comprise an integral perforated tubing matrix200to maintain fluid flow and air displacement, for example as shown inFIG.13. The perforated tubing matrix200may be connected to a drain/suction port located at a hole142or exit port144of the container140. The perforated tubing matrix may comprise one or more tubes202serving as fluidic pathways or channels. The fluidic pathways or channels may be provided adjacent to the exit port144to prevent blockage caused by a prolapsed drape. The tubes may be coupled to one another, and may intersect with one another. Each tube202may comprise a plurality of perforations204. The perforations may be provided in a manner that aids in air displacement from the tubes and prevents clogging within the matrix. The fluidic channels may extend for any length along a plurality of surfaces of the drape100under a filter screen. For example, a plurality of fluidic channels may extend for about 5 cm to about 40 cm up the walls of the container140and under screen148. The container can be designed to ensure sufficient suction of fluid from the container by (1) providing non-blockable passageways for the suction to act on the fluid, or (2) providing a mechanism that prevents material from folding over a vacuum port and blocking the vacuum port. In some alternative embodiments, the container may comprise rolled up tube-like areas210, for example as shown inFIG.14. The rolled up tube-like areas210may be formed from rolled up drape material, and may be connected to a drain/suction port to maintain fluid flow and air displacement, similar to the embodiment ofFIG.13. In some cases, the container140may comprise a flap to prevent splash onto a user (e.g. a physician).FIG.15shows an example of container140comprising a flap220. In some cases, the flap may be deployable. The container may be configured having an inner sterile portion and an external non-sterile portion to protect the physician from splash. The flap may comprise a non-sterile portion that extends outside of the surgical drape. In some cases, the flap220may have a self-supporting semi-cylindrical form. In some cases, one or more stiffening elements190can be used to support a structural configuration of the container and/or the flap. 
In some embodiments, one or more of the stiffening elements190can be adjustable in position to prevent splash onto the physician, for example by raising the flap higher. The stiffening elements190can extend in directions transverse to each other. For example, the stiffening elements190may extend circumferentially and longitudinally along the container140. The stiffening elements may comprise a bendable material such as a thin wire, for example, or any stiffening structure or element as described herein. In some cases, the container140may comprise one or more ports230for accepting fluid from an irrigation or aspiration pump, or from a drain line above or below a screen148.FIG.16shows an example of the ports230in accordance with some embodiments. The ports may comprise an opening, an aperture, a fenestration, a connecting feature, a sealing flange, and the like. In some embodiments, the container140may comprise one or more extruded portions extending from the container140. One or more of the extruded portions may have a 'tented' shape. Referring toFIG.17, the extruded portions may help to define an inner sterile surface242and an external non-sterile surface244. The external non-sterile surface244may provide a working space for placement of a support structure comprising the surgical arm (e.g. articulating arm14). The external non-sterile surface permits an ungloved hand to access a sterile space defined within the inner sterile surface242for manipulation of the transrectal device. In some embodiments, the container may comprise a detachable screen148, for example as shown inFIG.18. The screen may be detachably coupled to an inner lower portion of the container140. The screen may be detached252from the container, and can be folded up254and sealed256for collection and transportation of tissue or solid samples (for subsequent analysis or disposal). The screen may comprise a hole that is sized or shaped to permit capture of clots or intact tissue. The screen may comprise a material that is impervious to fluids provided along edges or sides of the screen. In some cases, the screen may comprise a closure element for securing samples for storage or transport (that enables the screen to be sealed256). The closure element may comprise a zipper, a zip-lock, an adhesive seal, a draw-string, a clip, or an elastic or conformable wire. In some cases, the screen may comprise a translucent region that is compatible with imaging modalities for tissue analysis. The screen may be removable from the container along the edges or sides of the screen to permit visualization through the translucent region. In some cases, the screen may comprise an area258for displaying information about the patient. The area258can be configured to receive thereon a preprinted label containing the information about the patient. Additionally or optionally, the area258can be configured to permit a user to write thereon. In some cases, the area258may comprise a plurality of sub-areas for displaying preprinted information or clinician notes. Referring toFIG.19, the screen can be configured to fold with collapse of the container240. The folding of the screen can be configured to permit airflow to a drain/suction port. The screen can be configured to fold in an interleaved manner. Referring toFIG.20, the canopy portion111may be designed such that fluid is conducted to flow downwards260toward the container140when the canopy portion is in an inverted configuration. 
The inverted configuration can prevent fluids from accumulating or pooling on the canopy portion111, instead directing the fluids to flow into the container140. The inverted configuration may comprise one or more sloping surfaces that aid the fluid to flow downward toward the container. The canopy portion111may be shaped and/or sized262such that the canopy portion does not sag and collect fluids under the weight of the fluids. In some embodiments, the surgical drape100may comprise one or more labels as described herein. The labels may comprise instructions for using the drape, and information on one or more of the following: (a) location of one or more access port holes, (b) location of one or more perforations, (c) location of one or more attachment points, (d) areas at which sections of the drape can be detached, (e) placement of the drape onto the patient, (f) location of the drape relative to an operating table, (g) attachment of the drape to the operating table, (h) location of the drape relative to one or more support structures proximal to the operating table, or (i) attachment of the drape to the one or more support structures. In some embodiments, the surgical drape100may comprise excess material in at least the first portion110or the second portion130to permit a non-sterile hand of the user (e.g. a physician), from a non-sterile working space outside of the drape, to access and manipulate the probe10comprising the transrectal device or the surgical arm14without contaminating a sterile field underneath the drape. The canopy portion111can be configured to permit the user to manipulate the surgical arm14that supports the transrectal device. FIG.21shows a surgical drape100comprising one or more labels. The labels may comprise a first label550, a second label552and a third label554, for example. Additional labels or fewer labels may be provided with the drape100. The labels may comprise any of the labels as described herein. The labels may be affixed to the drape, so as to identify structures of interest on the drape, such as access port holes, perforations, attachment points, areas of the drape that can be detached as sections, placement of the drape on the patient, location of the drape relative to the patient and the operating table, location of the drape relative to support structures, and attachment of the drape to support structures. The one or more labels may comprise instructions for use for the drape100, and the instructions for use can be attached to the drape or provided separately. The instructions for use can be affixed to the drape on a sterile side of the drape, in order to maintain sterility while the one or more instructions are referred to by a user. The one or more instructions may comprise an arrow or other indicia to identify portions of the drape that may be of interest to a user. For example, an arrow or a circle can identify the opening on the drape through which a penis of the patient is passed from the non-sterile side of the drape to the sterile side. One or more perforations can be identified with an indicium associated with the label to identify the perforations. FIG.22Ashows a method300of thermoforming the cover100as described herein with a mold310and sheet material320. The mold may comprise a container forming portion312comprising a curved surface, such as a concave surface to contact sheet material320, so as to define container portion140of drape100. The mold may comprise a canopy forming portion314comprising a curved surface, such as a convex surface to contact sheet material320. 
The canopy forming portion may comprise inverted portions comprising opposite curvature to facilitate drainage as described herein, for example inverted portions adjacent, near, or within a protrusion of canopy forming portion314. The mold may comprise a torso forming portion316, so as to define the torso portion of the drape. The mold may comprise additional structures corresponding to a patient placed in stirrups for a urological procedure as described herein, for example, structures corresponding to bent legs of a patient. FIG.22Bshows a surgical drape100thermoformed over mold310, so as to define the three-dimensional shape profile of thermoformed surgical drape330. Thermoformed surgical drape330may comprise any of the structures of surgical drape100as described herein. For example, the thermoformed drape330may comprise container portion140as described herein, first portion110comprising the canopy portion as described herein, and second portion130comprising the torso portion as described herein. The thermoformed drape330may comprise one or more stiffening structures as described herein, and the stiffening structures may be sandwiched between a plurality of layers of thermoformed sheet material, for example. The stiffening structures can be placed on a first thermoformed layer of the drape, and a second layer placed over the stiffening structures so as to sandwich the stiffening structure between the layers, and the layers can bond together as part of the thermoforming process. Alternatively, or in combination, actuators can be sandwiched between thermoformed layers of the drape, so as to automatically expand and extend the drape from a compact packaged configuration for sterile storage to an expanded and extended configuration for use on a patient. The sheet material may comprise any biocompatible barrier material impermeable to bodily fluids, and can be thermoformed on the mold as will be understood by one of ordinary skill in the art. The method300for thermoforming the drape100may comprise one or more steps as follows: 1) receive sheet material for thermoforming on the mold; 2) manufacture the mold with the three-dimensional shape profile; 3) place the sheet material on the mold; 4) thermoform the sheet material to the shape of the mold; 5) place appropriate structures on the thermoformed sheet of material at appropriate positions and orientations, e.g. stiffening structures; 6) thermoform a second sheet of material on the mold so as to bond the first sheet to the second sheet with the stiffening structures therebetween; 7) remove the thermoformed surgical drape330from mold310; 8) place the thermoformed surgical drape in a package or wrap the thermoformed drape within a packaging portion of the drape, in a compact storage configuration; and 9) sterilize the thermoformed drape. Although method300of thermoforming a surgical drape is described herein in accordance with an embodiment, a person of ordinary skill in the art will recognize many variations and adaptations. Some of the steps may be removed or repeated, and the steps may be performed in a different order, for example. FIG.23Ashows a user manipulating a transrectal device through canopy portion111of surgical drape100and a corresponding first position351of an engaged portion352of the proximal portion350of the transrectal device comprising probe10as described herein. The engaged portion of the transrectal device may comprise a portion of actuator117as described herein, for example a knob of the actuator. 
The knob may comprise a portion of actuator117and may comprise an engaged portion352of the transrectal device. An engaged portion354of canopy portion111is located between the engaged portion352of the proximal portion350of the transrectal device, and an engaged portion355of the hand of a user such as a finger or thumb of the user. The engaged portion355of the hand of the user is coupled to the engaged portion354of the canopy, with the engaged portion354of the canopy of the drape located between the engaged portion355of the hand of the user and the engaged portion352of the proximal portion350of the transrectal device. The coupling allows the canopy portion to move with the hand with a low resistance to movement, such that the proximal portion of the transrectal device appears to move freely with the hand of the user with the proximal portion of the transrectal device located within the canopy. FIG.23Bshows the user manipulating the proximal portion350of the transrectal device ofFIG.23Aand a corresponding second position of the proximal portion of the transrectal device. The engaged portion352of the proximal portion of the transrectal device has been moved to a second position353with the engaged portion355of the hand of the user and the engaged portion354of the canopy portion111. At the second position353the canopy has been moved from the first position351to the second position353with a small, substantially imperceptible amount of force. The movement of the canopy typically provides a resistance to movement that is less than the amount of force required to move the proximal portion350of the transrectal device. The amount of force to move the engaged canopy portion can be less than one tenth (1/10), for example less than one hundredth (1/100) of the amount of force used to move the engaged portion352of the proximal portion of the transrectal device. FIG.23Cshows return of the engaged portion354of the canopy portion toward the first position351of the engaged portion as shown inFIG.23A. This return of the engaged portion of the canopy portion allows the user to release the proximal portion of the transrectal device and engage the proximal portion of the transrectal device at a new location356, so as to allow the user to move the proximal portion of the transrectal device again, while encountering substantially imperceptible amounts of resistance from an engaged portion of the canopy as described herein. The amount of return can be a distance within a range from about 1 mm to about 100 mm, for example within a range from about 1 mm to about 25 mm. The proximal portion350of the transrectal device may comprise a proximal portion of any transrectal device as described herein. In some embodiments, the proximal portion of the transrectal device comprises a knob of the transrectal device that is coupled to a probe so as to allow movement of the probe of the transrectal device. For example, the probe of the transrectal device can be mounted on a carriage coupled to the knob, such that the probe can be advanced distally and retracted proximally with rotation of the knob. In some embodiments, the transrectal device comprises an ultrasound imaging probe as described in PCT Application PCT/US2013/028441, filed on Feb. 28, 2013, entitled "AUTOMATED IMAGE-GUIDED TISSUE RESECTION AND TREATMENT", published as WO/2013/130895, the entire disclosure of which is incorporated herein by reference. 
A transrectal ultrasound image can be shown on a display visible to a user, such that the transrectal ultrasound probe can be adjusted through the canopy portion with return of the engaged canopy portion as described herein. The transrectal device may comprise an input output ("I/O") device as described herein so as to allow computer control of the position of the transrectal device, and the engaged portion of the canopy can return as described herein so as to facilitate movement of the engaged portion of the hand of the user and interaction with the I/O device. The substantially imperceptible resistance provided by the canopy portion111when the engaged portion354moves from the first position351to the second position353stores potential energy in the canopy portion and in some embodiments additional portions of the drape100. This potential energy is released at least partially when the engaged portion354of the canopy111is released by the hand of the user, and the engaged portion354returns from the second position353toward the first position351. FIG.24Ashows a full volume of a canopy portion111in an extended configuration. In the extended configuration, the sheets of the canopy have been extended to substantially remove slack and folds. While this can be achieved in many ways, in some embodiments a lower perimeter of the canopy portion111can be supported and the canopy inflated with a gas such as air to extend the canopy into the extended configuration to determine a volume of the canopy. The canopy portion111may comprise a length L, a width W and a height H for embodiments in which the canopy comprises a rectangular protrusion. Each of these dimensions may be defined by distances between corresponding corners of the canopy portion. The full volume of the canopy portion corresponds to the length, width and height and may be calculated by the known formulas. In some embodiments, the protruding canopy may comprise one or more substantially straight sides, and may comprise a partially trapezoidal shape, for example. In some embodiments the canopy portion comprises a curved surface sized and shaped to receive at least a proximal portion of the transrectal device. The curved surface when extended substantially without slack or folds defines the volume of the canopy portion. The canopy portion111may comprise a combination of substantially flat surfaces and curved surfaces, for example. In some embodiments, the lower perimeter of the canopy portion is supported and the canopy portion inflated with a gas to expand the canopy portion to the fully extended configuration in order to determine the full volume. The amount of gas such as air within the fully expanded canopy portion can be measured by deflating the canopy portion to a fully deflated and compact configuration and measuring the amount of gas released. The amount of gas released may be measured in any number of ways known to a person of ordinary skill in the art. FIG.24Bshows a decreased volume of the canopy portion111ofFIG.24Ain a partially collapsed free standing configuration. The height H is decreased in the free standing configuration, and the barrier sheet material may comprise folds so as to decrease the height H by at least about 10%, for example. The length L and the width W defined by distances between corners of the canopy may be similarly decreased, e.g. by at least about 10%. 
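As an illustrative note (not language from the disclosure): for the rectangular protrusion described above, the "known formulas" reduce to the product of the corner-to-corner dimensions, and the same expression gives a worked example of the free standing collapse. The uniform ~10% decrease per dimension is taken from the passage above; the resulting volume ratio is a sample calculation only:

```latex
V_{\mathrm{full}} = L \, W \, H,
\qquad
\frac{V_{\mathrm{free}}}{V_{\mathrm{full}}} \approx (0.9)^{3} \approx 0.73
```

That is, a 10% decrease in each of L, W, and H corresponds to roughly a 27% decrease in volume, which falls within the 10% to 90% range described below.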
In some embodiments, the volume of the canopy decreases by an amount within a range from about 10% to 90% between the fully extended configuration and the partially collapsed free standing configuration. The canopy may comprise at least some weight, such that the volume of the canopy decreases in a partially collapsed free standing configuration as compared with the volume of the canopy in the fully extended expanded configuration. The weight and stiffness of the barrier material of the canopy can be configured to provide the partial collapse of the canopy. A heavier (e.g. thicker), less stiff barrier material will collapse more than a lighter (e.g. thinner) barrier material comprising similar stiffness. A stiffer barrier material may collapse less. A lower perimeter of the canopy can be supported, and the volume of the partially collapsed canopy determined, for example based on an amount of gas released when the canopy is compressed from the free standing partially collapsed configuration to the fully collapsed configuration. In some embodiments, the canopy barrier material inhibits the flow of air from the canopy. The seals of the canopy, if present, may comprise airtight seals to maintain the sterile field above the canopy. The barrier material of the canopy111may comprise a Young's modulus, a thickness, and a density configured to provide the partial collapse of the canopy in the free-standing configuration as described herein. The amount of return of the canopy portion as described herein can be related to the Young's modulus, the thickness, the density of the barrier material, and the full volume of the canopy portion in relation to the amount of movement of the proximal portion of the transrectal device. A person of ordinary skill in the art can determine suitable configurations of materials as described herein to configure the canopy with partial return during manipulation of the transrectal device, and partial collapse in the free standing configuration as described herein. FIG.25Ashows a surgical drape100coupled to an actuation element in a compact configuration in a side profile view. FIG.25Bshows the surgical drape100ofFIG.25Ain an extended profile configuration in a side profile view. FIG.25Cshows the surgical drape ofFIG.25Ain the compact profile configuration in a top profile view. The surgical drape may comprise actuation elements in any number and any location of the drape, so as to expand the drape from a compact configuration as described herein, e.g. as shown inFIG.25A, to an at least partially extended and at least partially expanded configuration as shown inFIG.25B. The first portion110of the drape100comprising canopy portion111may comprise one or more actuation elements410in order to at least partially extend and expand the canopy portion. The actuation elements may comprise a shape memory material, such as spring steel or thermoformed plastic, configured to extend and expand the canopy portion to an increased internal volume. In the compact configuration shown inFIG.25C, the actuation elements may comprise a bent configuration and straighten in the expanded profile configuration inFIG.25B. The second portion130comprising the portion of drape100configured to at least partially cover the torso of the patient comprises actuation elements in some embodiments, either alternatively or in combination with actuation elements410or one or more actuation elements420. 
The container portion140of the drape may comprise one or more actuation elements420configured to expand the container portion from an initial compact configuration to an expanded configuration to receive fluids as described herein. Although the compact configuration ofFIG.25Ais shown with the drape extended along a length of the drape, actuation elements can be provided that extend at least partially along the length of drape, so as to at least partially unroll the drape to the configuration shown inFIG.25A. For example, one or more actuation elements420can extend from the container portion140through the first portion110comprising canopy portion111and at least partially along the second portion130configured to at least partially cover the torso. The drape can be initially provided in a rolled configuration in the sterile package, such that the drape unrolls in response to actuation elements410and420to expand the drape from a compact configuration to the extended profile configuration as shown inFIG.25B. Although reference is made to a rolled configuration, the surgical drape100can be configured to expand from a compact folded configuration to the extended profile configuration. The surgical drape shown inFIGS.25A to25Ccan be configured with stiffening elements either alternatively to the actuation elements, or in combination with the actuation elements. The stiffening elements may comprise any stiffening element or structure as described herein and may comprise metal extensions such as wire or pleats, for example. The metal extensions may comprise a deformable material, and a cross-sectional thickness and length suitable for allowing the drape to be shaped as desired by the user. The wire may comprise a suitable diameter to allow the drape to be shaped as desired by the user. The extensions can be placed at one or more locations of the drape, such as the canopy portion or the container portion, and combinations thereof. The one or more actuation elements can be configured in many ways. In some embodiments, the actuation element comprises one or more spring elements, and optionally the one or more spring elements comprise spring steel. The container can be configured to receive, collect and store waste including bodily fluids, surgical-related fluids, tissue or debris generated during the surgical treatment as described herein. In some embodiments, the container portion140comprises a volume within a range from about 1000 cm3to about 70,000 cm3in the expanded deployed configuration and optionally the volume is within a range from about 1000 cm3to about 10,000 cm3. In some embodiments, at least one of a first portion comprising the canopy portion or a second portion comprising the torso portion is operably coupled to an actuation element configured to deploy one or more sections of the surgical drape from a compact configuration to an extended configuration. In some embodiments, the compact configuration comprises a substantially two-dimensional shape, and the extended configuration comprises a substantially three-dimensional shape. In some embodiments, the surgical drape is in the compact configuration when the surgical drape is not in use prior to deployment, and deployed to the extended configuration prior to or during use of the surgical drape for the surgical treatment of the patient. 
FIG.26shows an opening sized to receive a surgical urological probe and perforations132extending in a first direction corresponding to inferior and superior directions of a patient and perforations450extending in a second direction transverse to the first direction to allow the surgical drape to be removed around a base457of a tensioning device coupled to a patient with a catheter extending along the urethra of the patient. Examples of a base and tensioning devices are described in PCT application PCT/US2017/023062, filed on Mar. 17, 2017, entitled "MINIMALLY INVASIVE METHODS AND SYSTEMS FOR HEMOSTASIS IN A BLEEDING CLOSED TISSUE VOLUME", published as WO/2017/161331, the entire disclosure of which is incorporated herein by reference. The base may comprise a maximum dimension454across the base. The perforation450extending in the second direction may extend a distance452greater than the maximum dimension454across the base, so as to facilitate removal of the surgical drape when the base and tensioning device have been coupled to the patient. The maximum dimension across the base can be within a range from about 2.5 cm to about 30 cm, for example within a range from about 3 cm to about 20 cm. The distance452may comprise a distance within a range from about 2.5 cm to about 60 cm, for example within a range from about 3 cm to about 40 cm. Although reference is made to perforations, the surgical drape can be configured in many ways with weakened material functioning similarly to perforations132and450. In some embodiments, a second portion comprising the torso portion comprises a weakened material extending in a direction corresponding to an inferior-superior direction of the patient, e.g. along a midline. The second portion can be configured to assist removal of the surgical drape by allowing the second portion to separate along the weakened material. In some embodiments, the weakened material comprises one or more of perforations, thinned material relative to adjacent unweakened material, thermally or chemically weakened material or stressed material along the midline, or at any angle or offset to the midline, so as to extend generally along an inferior or superior aspect of the patient. In some embodiments, a second weakened material extends in a second direction transverse to the midline in order to facilitate removal of the surgical drape around a base of the tensioning device coupled to the patient with a catheter extending along a urethra of the patient. The second weakened material extending in the second direction can be weakened similarly to the weakened material extending in the first direction, and may comprise perforations extending in the second direction, for example. In some embodiments, the perforations allow insertion or access of a catheter into a urethra of the patient. The catheter may comprise a suprapubic catheter to drain urine from a bladder of the patient, for example. FIG.27shows a container portion140of a surgical drape100with a porous structure such as screen148upstream of a suction port462. The porous structure located upstream of suction port462may comprise any porous structure as described herein, such as one or more of a tube with holes on an outer wall, a screen, a mesh, a fabric, a grating, a plurality of apertures formed in a sheet of material, an open cell foam, a sponge, a perforated tubing matrix, a sintered material, or particles held together to define channels. The mesh may comprise a fine mesh, for example. 
The porous structure such as screen148can be located upstream of the suction port and coupled to an inner wall of container portion140to direct fluid entering container portion140through the porous structure. The suction port can be connected to a source of suction, such as a surgical suction pump, with a tube coupled to the suction port and the source of suction. The porous structure such as screen148can filter particles comprising blood clots and ablated tissue, to inhibit blockage of suction port462. In some embodiments, the porous structure comprises channels having a maximum cross-sectional size no larger than a minimum inner cross-sectional size (e.g. minimum diameter) of suction port462, to ensure passage of clots or tissue passed by the porous structure through the suction port. The porous structure may comprise a surface area to receive surgical fluids, clots and tissue. The container portion140may comprise a fluid inlet460to receive flowable material from a surgical procedure, such as a surgical fluid comprising tissue and clots from an ablation procedure. The ablation may comprise a water jet ablation procedure performed with an ablation probe as described in PCT Application No. PCT/US2013/028441, filed on Feb. 28, 2013, entitled "AUTOMATED IMAGE-GUIDED TISSUE RESECTION AND TREATMENT", which has been previously incorporated by reference. The fluid from the surgical probe can be coupled to the inlet with a tube, such that the ablated prostate tissue material can be collected on the porous structure and used for subsequent analysis. In some embodiments, the receiving surface area of the porous structure comprises a surface area greater than the minimum inner cross-sectional size of the suction port in order to provide additional channels of the porous structure to pass surgical fluids when solid material such as clots and tissue have been deposited on the porous structure. The porous structure may comprise channels extending through a thickness of the porous structure that are sized and shaped to collect tissue from the surgical procedure for subsequent analysis. The channels can be sized no larger than the approximate size of a prostate cell, for example no larger than about 5 microns (um). The channels of the porous structure may comprise a maximum cross-sectional size within a range from about 0.1 microns to about 5 microns, for example, in order to capture individual cells of the prostate. Alternatively, the channels of the porous structure may comprise a larger cross-sectional size and may comprise a maximum cross-sectional size within a range from about 5 um to about 1 mm in order to capture tissue of the prostate comprising cells and blood clots received from inlet460. In some embodiments, the porous structure148and suction port462are configured such that an amount of fluid464accumulates in the container. The porous structure can decrease the amount of fluid accumulated in the container portion140as compared to fluid accumulation without the porous structure. The surface area and the size of the channels of the porous structure can be configured to decrease the amount of fluid that accumulates between the porous structure and the opening to the suction port on an inner side of the container when the suction port is coupled to the suction source. The amount of accumulated fluid with the porous structure on the lower end of the container can be within a range from about 0.05 cm3to about 500 cm3, for example within a range from about 0.05 cm3to about 100 cm3. 
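The channel-sizing rules above lend themselves to a simple design check. The sketch below is an illustration only, not part of the disclosure: the function names, parameter names, and the example values are hypothetical, and only the two rules it encodes (channels pass nothing larger than the suction port's minimum inner cross-section, and channel size selects between cell capture at about 0.1-5 um and tissue capture at about 5 um-1 mm) are taken from the passage above.

```python
# Illustrative sketch of the porous-structure sizing rules described above.
# All identifiers and example numbers are hypothetical.

UM = 1e-6  # meters per micron

def channels_safe_for_port(channel_max_size_m: float,
                           port_min_inner_size_m: float) -> bool:
    """True if anything passed by the channels also fits through the suction port."""
    return channel_max_size_m <= port_min_inner_size_m

def capture_mode(channel_max_size_m: float) -> str:
    """Classify a channel size against the capture ranges described above."""
    if 0.1 * UM <= channel_max_size_m <= 5 * UM:
        return "captures individual prostate cells"
    if 5 * UM < channel_max_size_m <= 1000 * UM:
        return "captures tissue fragments and clots"
    return "outside the ranges described"

# Example: 2 um channels feeding a suction port with a 4 mm minimum bore.
print(channels_safe_for_port(2 * UM, 4e-3))  # True
print(capture_mode(2 * UM))                  # captures individual prostate cells
```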
In some embodiments the porous structure can be separated from the suction port with a gap extending in between, and the amount of fluid that accumulates between the porous structure and the opening to the suction port can be within similar ranges, e.g. from about 0.05 cm3to about 100 cm3. FIG.28shows a container510with a viewing window514, in which the container is sized to receive a porous structure such as screen148with material from the patient supported thereon. The viewing window may comprise an optically transparent material configured to allow viewing the material from the patient with a high-resolution microscope, for example. The container510may comprise a sealed container that can be sealed with the porous structure and material of the patient placed thereon. The container may comprise a barrier material configured to inhibit release of material from the patient. FIG.29shows an image of surgical drape100used to conduct experimental testing. The inventors conducted several experiments to determine the advantageous structures, elements and portions of the drape and other elements as described herein. The surgical drape shown inFIG.29includes many of the elements and structures shown and described above with reference toFIG.1B. This testing was used to determine appropriate shapes and structures and material properties of the drape to provide the beneficial functions and structures as described herein, such as fluid management, stiffening structures and dimensions of the canopy and container, return of the canopy, and the dimensions and material properties of the porous structure, as described herein. As one of ordinary skill in the art will appreciate in view of the present disclosure, the surgical drape comprises a sterile side that generally faces away from the patient and a non-sterile side that generally faces toward the patient, in order to provide a sterile barrier between the patient and the sterile side of the drape when placed on the patient. Alternatively or in combination, the surgical drape can be configured to cover the feet or legs of the patient, for example with the second portion comprising the torso portion as described herein extending so as to cover one or more of the feet or legs of the patient. In some embodiments the user of the drape places a handpiece and/or resectoscope down on top of the drape on the patient's stomach, and the drape may comprise a fastener, such as a strap or Velcro or tether or tape portion configured to allow the user to place the handpiece at locations away from or on the patient's stomach. Alternatively or in combination, the patient drape may comprise an adhesive material to adhesively couple to the handpiece and/or resectoscope to limit movement of the surgical instrument. The adhesive can be covered with a peel or other material, such that the adhesive material is not exposed until the peel has been removed from the surface of the adhesive by the user such as a physician or attendant. Securing the handpiece and/or resectoscope on the drape above the stomach can inhibit instruments from sliding down into the fluid container as described herein, which would be less than ideal because the container may contain fluids and/or wrappers/gauze from earlier in the surgical procedure. In some embodiments, the surgical drape comprises fasteners such as one or more of a tape or Velcro section on one or more sides of the patient's legs for wire and cable and tubing management, which can be used to bundle these together. 
In some embodiments, the surgical drape as described herein comprises a packaging enclosure to store the surgical drape or portion thereof when the surgical drape comprises an initial compact configuration prior to being expanded to the extended configuration, for example in its original state prior to use. The packaging portion may comprise an extension of the surgical drape configured to cover the drape in a compact configuration, for example with a lower non-sterile side of a portion of the drape folded so as to be exposed to a non-sterile exterior environment. An internal sterile side of the folded portion of the drape corresponding to the upper sterile side of the drape can be folded so as to face a remainder of the drape, when the drape comprises the compact configuration prior to expansion to the extended configuration as described herein. Although reference is made to alternatives in the present disclosure, one of ordinary skill in the art will recognize that these alternatives can be combined in accordance with the teachings of the present disclosure. While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby. | 98,845 |
11857288

DETAILED DESCRIPTION

In some aspects, the present disclosure relates to systems, methods, and computer-readable medium for phase unwrapping for displacement encoding with stimulated echoes (DENSE) MRI using deep learning. Although example embodiments of the disclosed technology are explained in detail herein, it is to be understood that other embodiments are contemplated. Accordingly, it is not intended that the disclosed technology be limited in its scope to the details of construction and arrangement of components set forth in the following description or illustrated in the drawings. The disclosed technology is capable of other embodiments and of being practiced or carried out in various ways.

It must also be noted that, as used in the specification and the appended claims, the singular forms "a," "an" and "the" include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from "about" or "approximately" one particular value and/or to "about" or "approximately" another particular value. When such a range is expressed, other exemplary embodiments include from the one particular value and/or to the other particular value.

By "comprising" or "containing" or "including" is meant that at least the named compound, element, particle, or method step is present in the composition or article or method, but does not exclude the presence of other compounds, materials, particles, or method steps, even if the other such compounds, materials, particles, or method steps have the same function as what is named.

In describing example embodiments, terminology will be resorted to for the sake of clarity. It is intended that each term contemplates its broadest meaning as understood by those skilled in the art and includes all technical equivalents that operate in a similar manner to accomplish a similar purpose. It is also to be understood that the mention of one or more steps of a method does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified. Steps of a method may be performed in a different order than those described herein without departing from the scope of the disclosed technology. Similarly, it is also to be understood that the mention of one or more components in a device or system does not preclude the presence of additional components or intervening components between those components expressly identified.

Some references, which may include various patents, patent applications, and publications, are cited in a reference list and discussed in the disclosure provided herein. The citation and/or discussion of such references is provided merely to clarify the description of the disclosed technology and is not an admission that any such reference is "prior art" to any aspects of the disclosed technology described herein. All references cited and discussed in this specification are incorporated herein by reference in their entireties and to the same extent as if each reference was individually incorporated by reference.
As discussed herein, a "subject" (or "patient") may be any applicable human, animal, or other organism, living or dead, or other biological or molecular structure or chemical environment, and may relate to particular components of the subject, for instance specific organs, tissues, or fluids of a subject, which may be in a particular location of the subject, referred to herein as an "area of interest" or a "region of interest."

Throughout the description, the following abbreviations may be used:

DENSE: Displacement Encoding with Stimulated Echoes;
MRI: Magnetic Resonance Imaging;
CNN: Convolutional Neural Network;
LV: Left Ventricular;
RV: Right Ventricular;
HF: Heart Failure;
EF: Ejection Fraction;
DL: Deep Learning;
MSD: Mean Surface Distance;
MSE: Mean Squared Error;
SNR: Signal to Noise Ratio;
NW: No Wrap; and
Ecc: Circumferential Strain.

A detailed description of aspects of the present disclosure, in accordance with various example embodiments, will now be provided with reference to the accompanying drawings. The drawings form a part hereof and show, by way of illustration, specific embodiments and examples. In referring to the drawings, like numerals represent like elements throughout the several figures.

Embodiments of the present disclosure include DL-based fully-automated methods for global and segmental strain analysis of short-axis DENSE MRI from a multicenter dataset. U-Nets were designed, trained and found to be effective for LV segmentation, identification of the anterior RV-LV insertion point, and phase unwrapping. Steps involving displacement and strain calculations can be automated; thus, with the DL methods, the entire DENSE analysis pipeline for global and segmental strain can be fully automated. Identification of the anterior RV insertion point, phase unwrapping, and the remaining steps to compute displacement and strain can also be performed automatically without user assistance, as described herein [4,5,17,18].

Embodiments of the present disclosure include a fully-automated post-processing approach for cine displacement encoding with stimulated echoes (DENSE). Deep learning (DL) methods, particularly convolutional neural networks (CNNs), can be used for segmentation and analysis of various CMR techniques [19-29]. Some embodiments of the present disclosure include a pipeline for fully-automated analysis of cine DENSE data using four CNNs to (a) identify the LV epicardial border, (b) identify the LV endocardial border, (c) identify the anterior RV-LV insertion point, and (d) after LV segmentation, perform phase unwrapping of the LV myocardium. Embodiments of the present disclosure include a pipeline that can eliminate all user intervention and can reduce the time for image analysis.

Embodiments of the present disclosure include a fully-automatic DENSE analysis pipeline. Some embodiments of the present disclosure include the following general steps: (a) LV segmentation, (b) identification of the anterior RV-LV insertion point, (c) phase unwrapping, and (d) displacement and strain analysis. Steps (a)-(c) can utilize CNNs, and step (d) can use other fully-automatic methods [5,31].

FIGS. 1A-1B illustrate flowcharts of methods for performing segmentation and phase unwrapping according to embodiments of the present disclosure. With reference to FIG. 1A, a flowchart illustrating a method for performing phase unwrapping is shown. At step 102, MRI data is acquired that corresponds to a region of interest of the subject (e.g. a cardiac region).
For embodiments herein, the MRI data is stored in computerized memory and may be manipulated to achieve the goals of this disclosure. For example, the data can be subject to computerized mathematical processes in which various forms of digital data are created from the original MRI data. In the examples herein, the MRI data may be referred to in terms of frames of image data, and the image data may be stored in appropriate software and hardware memory structures, including but not limited to image arrays configured to allow calculations and mathematical manipulation of the original image data. In some embodiments, the MRI images may be subject to segmentation operations, including but not limited to those set forth in U.S. patent application Ser. No. 16/295,939, filed on Mar. 7, 2019, and published as United States Pub. No. 2019/0279361, which is incorporated by reference herein. This disclosure utilizes segmented images of the epicardial contour and endocardial contour, such as the segmented images illustrated in FIG. 2, although the use of other types and configurations of images is contemplated.

At step 104, a phase image, which may be stored in a computer as a phase image or phase image array or matrix, is generated for each frame, including a phase value corresponding to the pixels of each frame. The method can include generating a phase image for each frame of the displacement-encoded MRI data. A chart showing a non-limiting example of the labels used for different types of wrapping is shown in FIG. 3, and the labels shown in FIG. 3 may be referred to throughout the present disclosure. The phase image can include potentially-phase-wrapped measured phase values corresponding to pixels of the frame.

At step 106, a convolutional neural network (CNN) is trained to compute a wrapping label map for the phase image, where the wrapping label map includes a number of phase wrap cycles present at each pixel in the phase image. The wrapping label map can, for example, use the labels shown in FIG. 3 or any other suitable labels. At step 108, the CNN is used to compute a wrapping label map, as shown in FIGS. 4A-4B. An example input image including phase wrapping is shown in FIG. 4A. The wrapping label map (FIG. 4B) includes regions classified by the CNN as corresponding to +2π and −2π wrapping. An unwrapping factor can be calculated for each region classified by the CNN, based on the classification of each region. As a non-limiting example, in some embodiments of the present disclosure, every "cycle" of wrapping corresponds to the phase being 2π off from the "true" phase value. Therefore, based on the classification of each pixel as being wrapped or not, and in which direction the phase is wrapped (i.e. in the positive or negative direction), the appropriate unwrapping factor can be calculated for each pixel.

At step 110, therefore, the method includes computing an unwrapped phase image by adding a respective phase correction to each of the potentially-wrapped measured phase values of the phase image, wherein the phase correction is based on the number of phase wrap cycles present at each pixel. In phase-reconstructed MR images, the phase value is inherently confined to the range (−2π, 2π). However, in cardiac DENSE, in order to balance displacement sensitivity, signal-to-noise ratio, and suppression of artifact-generating signals, displacement-encoding frequencies that lead to phase shifts of greater than 2π are typically used, and ±1 cycle of phase wrapping typically occurs during systole [5].
Thus, phase unwrapping can be required to convert phase to displacement. The unwrapped phase $\psi_{ij}$ can be estimated from the potentially-wrapped measured phase $\varphi_{ij}$ as follows:

$$\psi_{ij} = \varphi_{ij} + 2\pi k_{ij} \tag{1}$$

where $k_{ij}$ is an integer and where $-2\pi < \varphi_{ij} < 2\pi$. According to some embodiments of the present disclosure, the phase unwrapping problem requires determining $k_{ij}$ for each pixel indexed by i and j. Thus, phase unwrapping can be defined as a semantic segmentation problem [35], and the network can label each pixel as belonging to one of at least three classes (no wrap, −2π wrapped, or +2π wrapped) as shown in FIG. 3.

At step 112, with the unwrapping complete, the method of this disclosure may be used to compute myocardial strain using the unwrapped phase image for strain analysis of the subject.

To create the ground truth for unwrapped phase images, a highly accurate but very slow phase unwrapping method based on multiple phase prediction pathways and region growing can be used [36]. Additionally, a user can also check the results of this method, frame by frame, and discard all frames with unwrapping errors. The same dilated U-Net structure with three output classes was trained using a pixel-wise cross-entropy loss function. The network's input was the segmented phase-reconstructed DENSE image and the output was the wrapping label map. With this design, after applying the CNN, the value of $k_{ij}$ is known for each pixel. Then, by multiplying $k_{ij}$ by 2π and adding the result to the input wrapped image, the unwrapped phase is computed. Based on whether there is +2π wrapping or −2π wrapping, the appropriate +2π or −2π phase correction can be added to the image to compute (step 110) an accurate output image, as shown in FIG. 4C.

The CNN can be used to generate a more accurate wrapping label map than path-following approaches. As shown in FIG. 5, the top row illustrates low-noise images, and the bottom row illustrates high-noise images. The CNN correctly identified wrapping in the high-noise areas, while the path-following technique failed and did not correctly perform phase unwrapping on the high-noise image. Similarly, FIG. 6 illustrates a variety of phase unwrapping scenarios, showing the advantage of the U-Net and CNN based approach over path-following approaches.

Additionally, embodiments of the present disclosure can perform phase unwrapping for images with more than one "cycle" of phase wrapping. For example, with reference to FIGS. 7A-7C, a myocardial DENSE phase image is shown in the bottom row that includes regions with both 1 and 2 cycles of phase wrapping. The network output is the pixel-wise labels, which may be classified as no wrapping (red), +2π wrapping (blue), −2π wrapping (green), +4π wrapping (purple), or −4π wrapping (yellow) (FIG. 7B). The unwrapped image (FIG. 7C) is computed from (FIG. 7A) by unwrapping the classified pixels in (FIG. 7B). The first row shows an example with one wrap cycle; the second row shows the same example with two wrap cycles. It should be understood that in situations where there is additional wrapping (e.g. two cycles of wrapping, three cycles of wrapping, four cycles of wrapping, etc.), the network can be configured to classify the additional regions. For example, as shown in the non-limiting example in FIGS. 7A-7C, embodiments of the present disclosure configured to perform two cycles of phase unwrapping can include 5 classifications (no wrapping, +2π or −2π unwrapping, and +4π or −4π unwrapping).
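To make the correction step concrete, the following minimal NumPy sketch (illustrative only, not the patent's implementation) applies the per-pixel relation of equation (1) given a predicted label map, using the five-class convention enumerated in the next paragraph; the basic three-class network simply restricts the table to k in {0, -1, +1}:

```python
import numpy as np

# Five-class convention (see the enumeration that follows): class -> k.
# 1: no wrap (k=0), 2: -2*pi (k=-1), 3: +2*pi (k=+1),
# 4: -4*pi (k=-2), 5: +4*pi (k=+2). Index 0 is unused padding.
CLASS_TO_K = np.array([0, 0, -1, +1, -2, +2])

def unwrap_from_labels(wrapped_phase, class_map):
    """Apply psi = phi + 2*pi*k per pixel from the CNN's label map.

    wrapped_phase: 2D float array of potentially-wrapped phase values phi.
    class_map:     2D int array of per-pixel class indices (1..5).
    """
    k = CLASS_TO_K[class_map]              # integer wrap count per pixel
    return wrapped_phase + 2.0 * np.pi * k
```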
Optionally, these 5 types of wrapping can correspond to the following classifications: 1: no wrap (k=0), 2: −2π wrap (k=−1), 3: +2π wrap (k=+1), 4: −4π wrap (k=−2), and 5: +4π wrap (k=+2). It should be understood that these classifications are intended only as non-limiting examples, and that different numbers of classifications and different systems for naming, labeling, and organizing classifications are contemplated by the present disclosure. Similarly, it should be understood that in embodiments of the present disclosure capable of performing more than two cycles of phase unwrapping, more than 5 classifications can be used.

With reference to FIG. 1B, a flowchart illustrating a method for performing strain analysis is shown, according to one embodiment of the present disclosure. At step 152, phase-encoded MRI data corresponding to the cardiac region of interest of the subject is acquired. The MRI data can be acquired using a cine DENSE image acquisition protocol. Optionally, segmentation can be performed, including LV epicardial segmentation (step 154), LV endocardial segmentation (step 156), and LV myocardial segmentation (step 158). LV segmentation (steps 154, 156, 158) can be performed using a convolutional neural network. Embodiments of the present disclosure implement a 2D U-Net approach, which has been applied to LV segmentation of cine MRI (e.g. [19-22,24,26,28]), LGE [27], T1-weighted MRI [25] and phase contrast [23]. Three-dimensional convolutions may have advantages for segmentation of cine MRI data through time; however, they are less well studied for cardiac cine MRI than 2D approaches and can present unique issues (e.g. they can require a constant number of cardiac phases). For cine MRI, to date most studies use a 2D model and achieve very good results [26,28,41]. Since 2D models work well and DICE values can be reasonably good using a 2D approach, a 2D U-Net can be used. Also, values for HD and MSD can be similar to the mean contour distance of 1.14 mm and HD of 3.16-7.25 mm for myocardial segmentation reported by others [19], and to the average perpendicular distance of 1.1±0.3 mm also reported by others [26].

Embodiments of the present disclosure use two separate U-Nets for epicardial and endocardial segmentation, although in some applications training one network for myocardial segmentation based on the proposed network architecture can result in the same performance. Optionally, three classes (blood pool, myocardium, and background) can be defined and assigned class weights of 3, 5 and 1, respectively, which can overcome the imbalanced-classes problem. To create the ground-truth LV segmentation data, manual image annotation can be performed on DENSE magnitude-reconstructed images. The LV endocardial and epicardial borders can be manually traced for all frames using DENSEanalysis software [17]. To automatically segment the LV from DENSE magnitude images, one U-Net was trained to extract the epicardial border, and another to extract the endocardial border, and the myocardial pixels can be identified by performing a logical XOR between the two masks. The 2D U-Net networks utilized the structure presented by Ronneberger [32], with modifications to get the best results for the proposed application. Specifically, in the contracting path, each encoding block can contain two consecutive sets of dilated convolutional layers with filter size 3×3 and dilation rate 2, a batch normalization layer and a rectified linear activation layer.
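As a rough sketch of the encoding block just described, the following uses the Keras API bundled with TensorFlow (which the example implementation reported below uses); everything beyond the stated filter size, dilation rate, and pooling parameters (function names, padding inside the pooling layer, and so on) is an assumption, not the patent's code:

```python
import tensorflow as tf
from tensorflow.keras import layers

def encoding_block(x, n_filters):
    """One contracting-path block: two sets of 3x3 dilated convolution
    (dilation rate 2), each followed by batch normalization and ReLU,
    then 3x3 pooling with stride 2. Returns the pooled tensor and the
    pre-pooling tensor for the skip-connection to the matching
    decoding block."""
    for _ in range(2):
        x = layers.Conv2D(n_filters, 3, dilation_rate=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    skip = x
    x = layers.MaxPool2D(pool_size=3, strides=2, padding="same")(x)
    return x, skip
```

Four such blocks, doubling n_filters each time, would form the contracting path, mirrored by four decoding blocks joined through the skip-connections, consistent with the description that follows.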
Compared with traditional convolutions, dilated convolutions can increase the receptive field size without increasing the number of parameters, and showed improved performance in our experiments. Padding can be used in each convolutional operation to maintain the spatial dimension. Between each encoding block, pooling layers with size 3×3 and stride 2 were applied to reduce the spatial dimension in all directions. The number of features can be doubled for the next encoding block. Four symmetric encoding and decoding blocks were used in the contracting and expanding paths, respectively. Each decoding block can contain two consecutive sets of deconvolutional layers with filter size 3×3, a batch normalization layer and a rectified linear activation layer. The output of each encoding block in the contracting path was concatenated with those in the corresponding decoding block in the expanding path via skip-connections. The final segmentation map can include two classes: background and endocardium or epicardium. The loss function can be the summation of the weighted pixel-wise cross entropy and soft Dice loss. The assigned class weights were 1 for background, 2 for endocardium in the endocardial network, and 3 for the epicardial network.

During training, data augmentation was performed on-the-fly by applying random translations, rotations and scaling followed by a b-spline-based deformation to the input images and to the corresponding ground-truth label maps at each iteration. This type of augmentation has the advantage that the model sees different data at each iteration. The use of other network configurations, including networks with different numbers of layers, different filter sizes, stride numbers and dilation rates, is contemplated by the present disclosure, and the above are intended only as non-limiting examples of network parameters that can be used for segmentation. In one embodiment of the present disclosure, 400 epochs were used to train each network; therefore, each image was augmented 400 times. After applying the random transformations to the label maps, a threshold value of 0.5 was applied to the interpolated segmentation to convert back to binary values [33]. To improve the accuracy and smoothness of the segmented contours, during testing, each image can be rotated 9 times at an interval of 40 degrees, and the corresponding output probability maps rotated back and averaged [34] (a short sketch of this procedure is given below). Hereafter, this testing process is described in the present disclosure as "testing augmentation". It should be understood that the number of rotations (9), the interval (40 degrees), the number of epochs (400), and the threshold value (0.5), as well as this order and selection of steps for testing augmentation, are included only as non-limiting examples of ways to improve the accuracy of the described network, and that the use of other training techniques is contemplated.

Based on the segmentation (steps 154, 156, 158), the RV-LV insertion point can be identified (step 160). The anterior RV-LV insertion point is the location of the attachment of the anterior RV wall to the LV, and its location defines the alignment of the American Heart Association 16-segment model [16], which can be used for segmental strain analysis of the LV. As the first frame of cine DENSE images can have poor blood-myocardium contrast, a U-Net is trained to detect the anterior RV-LV insertion point on early-systolic frames (e.g. frames 5 and 6), where the insertion point is reliably well visualized.
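Before turning to the ground-truth creation for the insertion-point network, the "testing augmentation" procedure described above can be sketched as follows, assuming a generic predict_fn that maps an image to a probability map; the interpolation choices are assumptions:

```python
import numpy as np
from scipy.ndimage import rotate

def test_time_augment(predict_fn, image, n_rot=9, step_deg=40):
    """Rotate the input 9 times at 40-degree intervals, predict on each
    copy, rotate each probability map back, and average the results."""
    probs = []
    for i in range(n_rot):
        angle = i * step_deg
        rotated = rotate(image, angle, reshape=False, order=1)
        prob = predict_fn(rotated)
        probs.append(rotate(prob, -angle, reshape=False, order=1))
    return np.mean(probs, axis=0)
```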
To create the ground-truth data, an expert user can identify one point in these frames from magnitude-reconstructed DENSE images. During network training, instead of using that point as an absolute ground truth, which would provide only very limited information for the network to learn and would suffer from severe class imbalance, a circle with a six-pixel radius around that point can be defined as the network target. The network's inputs were the DENSE magnitude image and, as an additional input channel, the segmented LV binary mask obtained by the aforementioned myocardial segmentation networks. The network's output is the probability map of a circle, for which the center of mass is defined to be the detected RV-LV insertion point. The same aforementioned U-Net structure can be used. The loss function was the combination of the absolute difference and the soft Dice between the target and the output probability map computed using a Sigmoid function. The same on-the-fly data augmentation can be applied during training, and optionally testing augmentation may not be used for this network.

At step 162, phase unwrapping can be performed, for example according to the method illustrated in FIG. 1A, or other methods described herein. At step 164, the unwrapped phase image can be used to perform strain analysis, based on the relationship between the phases in the unwrapped phase image and displacement. This can include determining the correlation of the unwrapped phase image to strain values for strain analysis of the subject.

Optionally, the method can include testing and/or training augmentation. As shown in FIGS. 8A-8B, transformations can be applied to generate training images with different qualities. Training augmentation can also be performed by adding Gaussian noise with a mean of zero and a randomly chosen standard deviation (e.g. between 0 and 0.75) to simulate different signal-to-noise ratios, and by manipulating the unwrapped ground truth data to generate new wrapped data. Data augmentation can be important, as it can avoid overfitting and the network is trained on data with lower SNR and more wrapping patterns. To create augmented new wrapped data, an unwrapped ground-truth phase image can be multiplied by a random constant (e.g. between 0.8 and 2.0), and then wrapped to the range (−2π, 2π). For each augmented phase image, the $k_{ij}$ value is known, and if it is 0, 1, or −1, then the image is used for training. FIG. 8A illustrates how a new phase-wrapping pattern is generated during augmentation, and FIG. 8B demonstrates an example of how different operations can be applied to create augmented data. For this network, randomly generated transformations, including combinations of translation, rotation, scaling, shearing, and b-spline deformation, were applied to the training images along with random phase manipulation and random noise. Different augmentations/transformations can be applied to each image; as a non-limiting example, in FIGS. 8A-8B, 7 random augmentations were applied to each training image. Again, it should be understood that the standard deviations and constants used to perform training augmentation are intended only as non-limiting examples, and other types of training augmentation are contemplated by the present disclosure. For data augmentation, segmented and phase-unwrapped data obtained by applying the segmentation and phase unwrapping methods can be used.
Using simple manipulations of these data, as shown in FIGS. 8A-8B, augmented pairs of wrapped and unwrapped images can be generated with new wrapping patterns (sketched below), providing an effective data augmentation strategy for training the phase-unwrapping U-Net. This strategy can be used to create a robust and successful CNN.

The phase-unwrapping problem can potentially be treated with different approaches. One approach is to train a network to directly estimate the unwrapped phase from the potentially-wrapped input phase, i.e., treating the problem as a regression problem [42,43]. Another approach, used in some embodiments of the present disclosure, is to estimate the integer number of wrap cycles at each pixel of the phase map by training a semantic-segmentation network to label each pixel according to its wrap class as defined in FIG. 3 [35,44-46]. The semantic-segmentation approach can recognize DENSE phase wrap patterns, and embodiments of the present disclosure using the semantic-segmentation approach can be effective even for low-SNR images.

Example Implementations and Corresponding Results

The following description includes discussion of example implementations of certain aspects of the present disclosure described above, and corresponding results. Some experimental data are presented herein for purposes of illustration and should not be construed as limiting the scope of the disclosed technology in any way or excluding any alternative or additional embodiments.

An embodiment of the present disclosure including a semantic-segmentation phase-unwrapping network was compared to path-following for low-SNR data. To validate one embodiment of the present disclosure, each new step was compared with expert-user or ground-truth methods, and the end-to-end processing of global and segmental strains was compared to previously-validated user-assisted conventional DENSE analysis methods [17]. An embodiment of the present disclosure was tested using cine DENSE image acquisition parameters including a pixel size of 1.56×1.56 mm² to 2.8×2.8 mm², FOV=200 mm² (using outer volume suppression) to 360 mm², slice thickness=8 mm, a temporal resolution of 17 msec (with view sharing), 2D in-plane displacement encoding using the simple three-point method [30], displacement-encoding frequency=0.1 cycles/mm, ramped flip angle with a final flip angle of 15°, echo time=1.26-1.9 msec, and a spiral k-space trajectory with 4-6 interleaves.

Short-axis cine DENSE MRI data from 38 heart-disease patients and 70 healthy volunteers were used for network training and testing of a non-limiting example of the present disclosure. Twenty-six datasets were acquired using 1.5T systems (Magnetom Avanto or Aera, Siemens, Erlangen, Germany) and 82 were acquired using 3T systems (Magnetom Prisma, Skyra, or Trio, Siemens, Erlangen, Germany). The types of heart disease included dilated cardiomyopathy, hypertrophic cardiomyopathy, coronary heart disease, hypertension, acute coronary syndrome and heart failure with left bundle branch block. For each subject, 1-5 short-axis slices were acquired, each with 20-59 cardiac phases. Training data included 12,415 short-axis DENSE images from 64 randomly selected subjects, and 20% of all training data were used for model validation. Forty-four datasets, including 25 healthy volunteers and 19 patients imaged at both field strengths, were selected for the test data (10,510 total 2D images, including those with displacement encoded in both the x- and y-directions).
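A minimal sketch of the phase-manipulation augmentation described above follows; the convention used to recover the known wrap count k (rounding to the nearest multiple of 2π) and the placement of the noise injection are assumptions consistent with the stated ranges, not the patent's exact procedure:

```python
import numpy as np

def augment_phase_pair(unwrapped_gt, rng):
    """Generate a new (wrapped phase, wrap-count label) training pair by
    randomly scaling an unwrapped ground-truth phase image, re-wrapping
    it, and adding Gaussian noise; pairs with more than one wrap cycle
    are discarded for single-cycle training."""
    psi = unwrapped_gt * rng.uniform(0.8, 2.0)      # random constant in [0.8, 2.0]
    k = np.round(psi / (2.0 * np.pi)).astype(int)   # known wrap count per pixel
    phi = psi - 2.0 * np.pi * k                     # re-wrapped phase
    sigma = rng.uniform(0.0, 0.75)                  # simulate different SNRs
    phi = phi + rng.normal(0.0, sigma, phi.shape)
    return (phi, k) if np.abs(k).max() <= 1 else None

# Usage: rng = np.random.default_rng(0); pair = augment_phase_pair(gt, rng)
```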
In the experimental embodiment described herein, the final model of each network was trained using data from 64 subjects. Network training was performed on an Nvidia Titan Xp GPU with 12 GB RAM over 400 epochs using an Adam optimizer at a learning rate of 5E-4 and a mini-batch size of 10. The times to train the myocardial segmentation networks (endocardium and epicardium), the RV-LV insertion point network, and the phase unwrapping network (which uses the myocardial segmentation) were 34, 48, and 30 hours, respectively. The networks were implemented using Python (version 3.5; Python Software Foundation, www.python.org) with the Tensorflow machine-learning framework (version 1.12.0) [37].

To quantitatively evaluate the results of myocardial segmentation, the DICE similarity coefficient [38] was computed. This metric measures the overlap between the ground-truth segmentation (A) and the CNN's segmentation (B) as follows:

$$\mathrm{DICE} = \frac{2\,\lvert A \cap B \rvert}{\lvert A \rvert + \lvert B \rvert} \tag{2}$$

The DICE coefficient is normalized between 0 and 1, where "0" indicates complete dissimilarity and "1" indicates complete agreement. In addition, to measure the maximum and average distances between the myocardial ground-truth and CNN-generated contours, the Hausdorff distance ($D_H$) and the mean surface distance (MSD) were computed as follows. Given two sets of points $A = (a_1, \ldots, a_n)$ and $B = (b_1, \ldots, b_m)$, and an underlying distance defined as the Euclidean distance $d(a,b) = \lVert a - b \rVert$, $D_H$ and MSD are given by:

$$D_H(A,B) = \max\bigl(h(A,B),\, h(B,A)\bigr), \qquad h(A,B) = \max_{a \in A}\,\min_{b \in B}\, d(a,b) \tag{3}$$

$$\mathrm{MSD} = \mathrm{mean}\bigl(h_{\mathrm{mean}}(A,B),\, h_{\mathrm{mean}}(B,A)\bigr), \qquad h_{\mathrm{mean}}(A,B) = \frac{1}{n} \sum_{a \in A}\, \min_{b \in B}\, d(a,b) \tag{4}$$

To assess the accuracy of identifying the RV-LV insertion point position, the Euclidean distance between the expert-selected point and the centroid of the automatically-selected region was calculated. To evaluate the phase-unwrapping CNN, it was compared with the widely-used path-following method [5] using mean squared error (MSE). The ground-truth unwrapped phase was computed using the phase-unwrapping method based on multiple phase prediction pathways and region growing [36]. For images with SNR typical of routine DENSE protocols [15,39] (phase SNR of approximately 22), MSE referenced to ground truth was evaluated for the proposed U-Net and the path-following method. Similar to the phase SNR of velocity-encoded phase contrast imaging [40], the DENSE phase SNR was calculated as:

$$\text{phase SNR} = \frac{\mathrm{mean}(\text{unwrapped phase of end-systolic ROI})}{\mathrm{stdev}(\text{phase of end-diastolic myocardium})}$$

where the mean unwrapped phase of an end-systolic region of interest (ROI) measures the DENSE phase in the region with the greatest displacement (representing the signal of interest), and the standard deviation of the phase of the end-diastolic myocardium provides a measure of the standard deviation of phase at a cardiac frame where the mean phase is essentially zero.

Because SNR can be lower than typical in some circumstances (such as when imaging patients with implanted devices), the two methods were also analyzed for lower-SNR data generated by adding noise to our datasets. Because no ground-truth data are available for genuinely low-SNR acquisitions, low-SNR data (with phase SNR=5-10) can be synthetically created from the test data by adding noise with zero mean and a standard deviation of 0.75. Adding noise to the original wrapped phase data could change the wrapping class of any image pixel.
As the label of a pixel may then not be the same as that of the corresponding pixel in the original data, for the low-SNR data the U-Net was compared with the path-following method by calculating the MSE between the unwrapped phase and the typical-SNR unwrapped ground truth.

To evaluate the full pipeline shown in FIG. 1 for global and segmental circumferential strain analysis of the LV, correlations and Bland-Altman analyses were performed comparing the proposed deep-learning-based method and the conventional user-assisted semi-automated method (DENSEanalysis [17]). In DENSEanalysis, a 10th-order polynomial was used for temporal fitting and a spatial smoothing parameter of 0.8 was selected. This example focused on results for circumferential strain and not for radial strain. There are fewer pixels radially across the LV wall in short-axis images than circumferentially. For this reason, methods like DENSE and tagging can be less accurate and reproducible for the estimation of radial strain compared to circumferential strain, and many clinical applications of short-axis DENSE (and tagging) find that circumferential strain is diagnostically or prognostically useful, whereas radial strain may not perform as well.

In this non-limiting example implementation, all cardiac phases were segmented, with good results, although it is also contemplated that manually drawn contours could be used for segmentation. Further, the DL methods described herein provide a superset of the contours needed for the simplified method, and a DL-based simplified method is contemplated. While other strain imaging methods may provide reliable and reproducible global strain values and are well-suited to automatic DL-based analysis [20,28,29], cine DENSE has shown excellent reproducibility of segmental strain [7]. The example described herein shows excellent agreement of DL-based fully-automated segmental strain with user-assisted semi-automatically computed segmental strain. The limits of agreement for DL automatic vs. user-assisted segmental circumferential strain are better than those for DL vs. user-assisted analysis of myocardial-tagging-based global circumferential strain [29]. A potential explanation for the substantially better results for DENSE is that for tag analysis, DL is used to perform motion tracking, and even when trained using data from thousands of subjects, there is error in motion tracking [29]. In contrast, for DENSE, DL is used only for segmentation and phase unwrapping, not for automatic motion estimation. For DENSE, during data acquisition displacement is encoded directly into the pixel phase; thus there is no need to learn motion estimation from image features. In essence, the motion estimation problem for DENSE is much simpler than for methods like tagging and feature tracking, and the demands on DL to accomplish full automation are much less.

Evaluation of the U-Nets for LV segmentation using 5,255 test images resulted in a DICE coefficient of 0.87±0.04, a Hausdorff distance of 2.7±1 pixels (equivalent to 5.94±2.2 mm), and a mean surface distance of 0.41±0.29 pixels (0.9±0.6 mm). The computation times for determining the epicardial and endocardial contours for a single DENSE image, including test augmentation, were 0.16±0.02 s and 0.15±0.01 s, respectively. The typical semi-automatic LV segmentation time for DENSE is 3-5 minutes for all cardiac phases, which corresponds to about 6 s per frame. The RV-LV insertion point was detected within 1.38±0.9 pixels compared to the manually annotated data.
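The evaluation metrics of equations (2)-(4) and the phase SNR defined above can be computed with a short sketch like the following (illustrative only, not the study's code):

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(a_mask, b_mask):
    """DICE = 2|A n B| / (|A| + |B|) for two binary masks (equation (2))."""
    a, b = a_mask.astype(bool), b_mask.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_and_msd(a_pts, b_pts):
    """Hausdorff distance (equation (3)) and mean surface distance
    (equation (4)) between two contours given as (n, 2) and (m, 2)
    arrays of points."""
    d = cdist(a_pts, b_pts)                  # pairwise Euclidean distances
    h_ab, h_ba = d.min(axis=1), d.min(axis=0)
    return max(h_ab.max(), h_ba.max()), 0.5 * (h_ab.mean() + h_ba.mean())

def dense_phase_snr(unwrapped_es_roi, ed_myocardium_phase):
    """Phase SNR: mean unwrapped phase of an end-systolic ROI divided by
    the standard deviation of end-diastolic myocardial phase."""
    return unwrapped_es_roi.mean() / ed_myocardium_phase.std()
```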
The computation time for detecting the RV-LV insertion point was 2.4±0.15 s for all cardiac phases. An expert reader uses approximately 20 seconds to manually define the point. FIG. 2 shows examples of the automatically and manually segmented LV epicardial and endocardial contours and the identification of the anterior RV-LV insertion point on short-axis images at end-diastolic (ED) and end-systolic (ES) frames.

The phase-unwrapping U-Net performed well on both typical-SNR and low-SNR DENSE phase images. The MSE values for the semantic-segmentation U-Net and the standard path-following method are provided in Table 2. MSE was similar for typical-SNR data using the U-Net and conventional path following, and was lower for low-SNR data using the U-Net (p<0.05). The time for DL phase unwrapping for all cardiac phases was 3.52±0.21 s, which was similar to the path-following method at 3.50±0.65 s. FIG. 9 illustrates an example where the U-Net and the path-following method were both successful for typical-SNR data and where the semantic-segmentation U-Net outperformed the path-following method for low-SNR data.

The fully-automated DL methods described herein were used to compute global and segmental circumferential strain for all test data, and the results were compared with user-assisted DENSE analysis methods [17]. FIGS. 10A and 10B show two examples of end-systolic strain maps and global and segmental strain-time curves computed using the DL-based automated methods and the conventional method for a healthy volunteer and an HF patient with a septal strain defect. Very close agreement between the DL-based and conventional DENSE analysis methods is seen in FIGS. 10A-10D. FIG. 11A shows the Bland-Altman plot and the linear correlation comparing the DL and conventional DENSE analysis methods for end-systolic global circumferential strain. The bias was 0.001 and the limits of agreement were −0.02 and 0.02. For the linear correlation, r=0.97 and the slope was 0.99. A slice-by-slice analysis of segmental strain is provided in FIGS. 11B-11D, and shows very good agreement of segmental end-systolic strain between the fully-automated DL method and the conventional method. The biases were 0.00±0.03, and the limits of agreement were −0.04 to 0.04 for basal segments, −0.03 to 0.03 for mid-ventricular segments, and −0.04 to 0.05 for apical segments. Excellent correlations (r=0.94-0.97, slope=0.92-0.98) were found for all segments of all slices.

FIG. 12 shows the mean±SD of segmental circumferential strain and the variance±SD within each segment at end systole for the mid-ventricular slice of all test data. Two-way ANOVA showed that while there are differences between segments for both mean circumferential strain (p<0.05) and variance of circumferential strain (p<0.05), there are no significant differences between the conventional user-assisted and DL-based fully-automatic methods for mean circumferential strain or the variance of circumferential strain.

The performance of each individual step of an embodiment of the present disclosure was validated, including segmentation, identification of the RV-LV insertion point, and phase unwrapping; the end-to-end performance of the entire pipeline was also validated by showing excellent correlation and agreement of whole-slice and segmental strain with well-established user-assisted semi-automatic methods. Embodiments of the present disclosure were evaluated for short-axis cine DENSE data from multiple centers and different field strengths (1.5T and 3T).
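For reference, the bias and limits of agreement reported above follow the standard Bland-Altman computation, sketched here under the usual 1.96-SD convention (a generic illustration, not the study's analysis script):

```python
import numpy as np

def bland_altman(dl_strain, manual_strain):
    """Bias and 95% limits of agreement between paired DL-based and
    user-assisted strain values."""
    diff = np.asarray(dl_strain) - np.asarray(manual_strain)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```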
However, it is contemplated that the networks may be trained using long-axis cine DENSE data to compute longitudinal strain, and using data from any machine that can provide the DENSE pulse sequence. It is also contemplated that any number of readers can be used to manually contour the data, and the neural networks can be trained or retrained for use with different numbers of readers. Additionally, while the example embodiment described herein was tested using a phase unwrapping neural network trained for one cycle of phase wrap, it should be understood that the methods disclosed herein can be used to perform an arbitrary number of cycles of phase unwrapping (e.g. 2 cycles of phase unwrap). Further, the data augmentation method for phase manipulation can be particularly useful for training with more than one cycle of phase wrap, as comparatively few real datasets have two cycles of phase wrap. Additionally, it should be understood that the network can be trained on images with respiratory motion, other types of motion, or where the image is off-center, for example by performing further training using images with these qualities. Furthermore, it should be understood that the size of the dataset in the present example is intended only as a non-limiting example, and that embodiments of the present disclosure can perform phase unwrapping with an arbitrary amount of training data.

The computerized methods, systems, and products of this disclosure are set forth herein as applied to individual frames of MRI data. This disclosure, however, also encompasses using these phase unwrapping techniques in three-dimensional image analyses involving multiple frames of data of higher dimensionality, such as a set of frames of image data gathered over time.

The present study trained CNNs to perform LV segmentation, phase unwrapping, and identification of the anterior RV-LV insertion point for short-axis cine DENSE images, providing for fully-automatic global and segmental DENSE strain analysis with excellent agreement with conventional user-assisted methods. DL-based automatic strain analysis for DENSE may facilitate greater clinical use of DENSE for the assessment of global and segmental strain in heart disease patients.

FIG. 13 is a system diagram illustrating an imaging system capable of implementing aspects of the present disclosure in accordance with one or more embodiments. A magnetic resonance imaging (MRI) system 100 includes a data acquisition and display computer 150 coupled to an operator console 110, an MRI real-time control sequencer 152, and an MRI subsystem 154. The MRI subsystem 154 may include XYZ magnetic gradient coils and associated amplifiers 168, a static Z-axis magnet 169, a digital RF transmitter 162, a digital RF receiver 160, a transmit/receive switch 164, and RF coil(s) 166. The MRI subsystem 154 may be controlled in real time by the control sequencer 152 to generate magnetic and radio frequency fields that stimulate magnetic resonance phenomena in a living subject, patient P, to be imaged. A contrast-enhanced image of an area of interest A of the patient P may be shown on display 158. The display 158 may be implemented through a variety of output interfaces, including a monitor, printer, or data storage. The area of interest "A" corresponds to a region associated with one or more physiological activities in patient "P".
The area of interest shown in the example embodiment of FIG. 13 corresponds to a chest region of patient "P", but the area of interest for purposes of implementing aspects of the disclosure presented herein is not limited to the chest area. It should be recognized and appreciated that the area of interest can be one or more of a brain region, heart region, and upper or lower limb regions of the patient "P", for example. It should be appreciated that any number and type of computer-based medical imaging systems or components, including various types of commercially available medical imaging systems and components, may be used to practice certain aspects of the present disclosure. Systems as described herein with respect to example embodiments are not intended to be specifically limited to magnetic resonance imaging (MRI) implementations or the particular system shown in FIG. 13.

One or more data acquisition or data collection steps as described herein in accordance with one or more embodiments may include acquiring, collecting, receiving, or otherwise obtaining data such as imaging data corresponding to an area of interest. By way of example, data acquisition or collection may include acquiring data via a data acquisition device, or receiving data from an on-site or off-site data acquisition device or from another data collection, storage, or processing device. Similarly, data acquisition or data collection devices of a system in accordance with one or more embodiments of the present disclosure may include any device configured to acquire, collect, or otherwise obtain data, or to receive data from a data acquisition device within the system, an independent data acquisition device located on-site or off-site, or another data collection, storage, or processing device.

FIG. 14 is a computer architecture diagram showing a computing system capable of implementing aspects of the present disclosure in accordance with one or more embodiments described herein. A computer 300 may be configured to perform one or more specific steps of a method and/or specific functions for a system. The computer may be configured to perform one or more functions associated with embodiments illustrated in one or more of FIGS. 1-13. For example, the computer 300 may be configured to perform aspects described herein for implementing the classification and calculation used for phase unwrapping, according to FIGS. 1-13. It should be appreciated that the computer 300 may be implemented within a single computing device or a computing system formed with multiple connected computing devices. The computer 300 may be configured to perform various distributed computing tasks, in which processing and/or storage resources may be distributed among the multiple devices. The data acquisition and display computer 150 and/or operator console 110 of the system shown in FIG. 13 may include one or more components of the computer 300.

As shown, the computer 300 includes a processing unit 302 ("CPU"), a system memory 304, and a system bus 306 that couples the memory 304 to the CPU 302. The computer 300 further includes a mass storage device 312 for storing program modules 314. The program modules 314 may be operable to perform functions associated with one or more embodiments described herein. For example, when executed, the program modules can cause one or more medical imaging devices, localized energy producing devices, and/or computers to perform functions described herein for implementing the data acquisition used in the methods of FIGS. 1A-1B.
The program modules 314 may include an imaging application 318 for performing data acquisition and/or processing functions as described herein, for example to acquire and/or process image data corresponding to magnetic resonance imaging of an area of interest. The computer 300 can include a data store 320 for storing data that may include imaging-related data 322, such as data acquired from the implementation of magnetic resonance imaging pulse sequences in accordance with various embodiments of the present disclosure. The mass storage device 312 is connected to the CPU 302 through a mass storage controller (not shown) connected to the bus 306. The mass storage device 312 and its associated computer-storage media provide non-volatile storage for the computer 300. Although the description of computer-storage media contained herein refers to a mass storage device, such as a hard disk, it should be appreciated by those skilled in the art that computer-storage media can be any available computer storage media that can be accessed by the computer 300.

CONCLUSION

The specific configurations, choice of materials and the size and shape of various elements can be varied according to particular design specifications or constraints requiring a system or method constructed according to the principles of the disclosed technology. Such changes are intended to be embraced within the scope of the disclosed technology. The presently disclosed embodiments, therefore, are considered in all respects to be illustrative and not restrictive. The patentable scope of certain embodiments of the present disclosure is indicated by the appended claims, rather than the foregoing description.

REFERENCES

[1] Szymanski, C., Levy, F. & Tribouilloy, C. Should LVEF be replaced by global longitudinal strain? Heart 100, 1655-1656 (2014).
[2] Aletras, A. H., Ding, S., Balaban, R. S. & Wen, H. DENSE: displacement encoding with stimulated echoes in cardiac functional MRI. J. Magn. Reson. 137, 247-252 (1999).
[3] Kim, D., Gilson, W. D., Kramer, C. M. & Epstein, F. H. Myocardial tissue tracking with two-dimensional cine displacement-encoded MR imaging: development and initial evaluation. Radiology 230, 862-871 (2004).
[4] Zhong, X., Spottiswoode, B. S., Meyer, C. H., Kramer, C. M. & Epstein, F. H. Imaging three-dimensional myocardial mechanics using navigator-gated volumetric spiral cine DENSE MRI. Magn. Reson. Med. 64, 1089-1097 (2010).
[5] Spottiswoode, B. S. et al. Tracking myocardial motion from cine DENSE images using spatiotemporal phase unwrapping and temporal fitting. IEEE Trans. Med. Imaging 26, 15-30 (2007).
[6] Young, A. A., Li, B., Kirton, R. S. & Cowan, B. R. Generalized spatiotemporal myocardial strain analysis for DENSE and SPAMM imaging. Magn. Reson. Med. 67, 1590-1599 (2012).
[7] Lin, K. et al. Reproducibility of cine displacement encoding with stimulated echoes (DENSE) in human subjects. Magn. Reson. Imaging 35, 148-153 (2017).
[8] Spottiswoode, B. S. et al. Motion-guided segmentation for cine DENSE MRI. Med. Image Anal. 13, 105-115 (2009).
[9] Mangion, K. et al. Circumferential Strain Predicts Major Adverse Cardiovascular Events Following an Acute ST-Segment-Elevation Myocardial Infarction. Radiology 290, 329-337 (2019).
[10] Bilchick, K. C. et al. CMR DENSE and the Seattle Heart Failure Model Inform Survival and Arrhythmia Risk After CRT. JACC Cardiovasc. Imaging 13, 924-936 (2020).
[11] Jing, L. et al. Cardiac remodeling and dysfunction in childhood obesity: a cardiovascular magnetic resonance study. J. Cardiovasc. Magn. Reson. 18, 28 (2016).
[12] Ernande, L. et al. Systolic myocardial dysfunction in patients with type 2 diabetes mellitus: Identification at MR imaging with cine displacement encoding with stimulated echoes. Radiology 265, 402-409 (2012).
[13] Chen, X., Salerno, M., Yang, Y. & Epstein, F. H. Motion-compensated compressed sensing for dynamic contrast-enhanced MRI using regional spatiotemporal sparsity and region tracking: block low-rank sparsity with motion-guidance (BLOSM). Magn. Reson. Med. 72, 1028-1038 (2014).
[14] Chen, X. et al. Accelerated two-dimensional cine DENSE cardiovascular magnetic resonance using compressed sensing and parallel imaging. J. Cardiovasc. Magn. Reson. 18, 38 (2016).
[15] Tayal, U. et al. The feasibility of a novel limited field of view spiral cine DENSE sequence to assess myocardial strain in dilated cardiomyopathy. Magn. Reson. Mater. Physics, Biol. Med. 32, 317-329 (2019).
[16] Cerqueira, M. D. et al. Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart. J. Cardiovasc. Magn. Reson. 4, 203-210 (2002).
[17] Gilliam, A. D. & Suever, J. D. DENSEanalysis: Cine DENSE Processing Software. https://github.com/denseanalysis/denseanalysis.
[18] D'Errico, J. Surface Fitting using gridfit. MATLAB Central File Exchange, vol. 1, 1-6. http://uk.mathworks.com/matlabcentral/fileexchange/8998-surface-fitting-using-gridfit (2020).
[19] Bai, W. et al. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. J. Cardiovasc. Magn. Reson. 20, 65 (2018).
[20] Puyol-Anton, E. et al. Fully automated myocardial strain estimation from cine MRI using convolutional neural networks. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 1139-1143 (IEEE Computer Society, 2018).
[21] Tan, L. K., McLaughlin, R. A., Lim, E., Abdul Aziz, Y. F. & Liew, Y. M. Fully automated segmentation of the left ventricle in cine cardiac MRI using neural network regression. J. Magn. Reson. Imaging 48, 140-152 (2018).
[22] Zheng, Q., Delingette, H., Duchateau, N. & Ayache, N. 3-D Consistent and Robust Segmentation of Cardiac Images by Deep Learning With Spatial Propagation. IEEE Trans. Med. Imaging 37, 2137-2148 (2018).
[23] Bratt, A. et al. Machine learning derived segmentation of phase velocity encoded cardiovascular magnetic resonance for fully automated aortic flow quantification. J. Cardiovasc. Magn. Reson. 21, 1 (2019).
[24] Duan, J. et al. Automatic 3D Bi-Ventricular Segmentation of Cardiac Images by a Shape-Refined Multi-Task Deep Learning Approach. IEEE Trans. Med. Imaging 38, 2151-2164 (2019).
[25] Fahmy, A. S., El-Rewaidy, H., Nezafat, M., Nakamori, S. & Nezafat, R. Automated analysis of cardiovascular magnetic resonance myocardial native T1 mapping images using fully convolutional neural networks. J. Cardiovasc. Magn. Reson. 21, 7 (2019).
[26] Tao, Q. et al. Deep Learning-based Method for Fully Automatic Quantification of Left Ventricle Function from Cine MR Images: A Multivendor, Multicenter Study. Radiology 290, 81-88 (2019).
[27] Fahmy, A. S. et al. Three-dimensional Deep Convolutional Neural Networks for Automated Myocardial Scar Quantification in Hypertrophic Cardiomyopathy: A Multicenter Multivendor Study. Radiology 294, 52-60 (2019).
[28] Ruijsink, B. et al. Fully Automated, Quality-Controlled Cardiac Analysis From CMR: Validation and Large-Scale Application to Characterize Cardiac Function. JACC Cardiovasc. Imaging 13, 684-695 (2020).
[29] Ferdian, E. et al. Fully Automated Myocardial Strain Estimation from Cardiovascular MRI-tagged Images Using a Deep Learning Framework in the UK Biobank. Radiol. Cardiothorac. Imaging 2, e190032 (2020).
[30] Zhong, X., Helm, P. A. & Epstein, F. H. Balanced multipoint displacement encoding for DENSE MRI. Magn. Reson. Med. 61, 981-988 (2009).
[31] Verzhbinsky, I. A. et al. Estimating Aggregate Cardiomyocyte Strain Using In Vivo Diffusion and Displacement Encoded MRI. IEEE Trans. Med. Imaging 39, 656-667 (2020).
[32] Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI (2015).
[33] Feng, X., Qing, K., Tustison, N. J., Meyer, C. H. & Chen, Q. Deep convolutional neural network for segmentation of thoracic organs-at-risk using cropped 3D images. Med. Phys. 46, 2169-2180 (2019).
[34] Feng, X., Kramer, C. M. & Meyer, C. H. View-independent cardiac MRI segmentation with rotation-based training and testing augmentation using a dilated convolutional neural network. In ISMRM 27th Annual Meeting (2019).
[35] Spoorthi, G. E., Gorthi, S. & Gorthi, R. K. PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping. IEEE Signal Process. Lett. 26, 54-58 (2019).
[36] Auger, D. A., Cai, X., Sun, C. & Epstein, F. H. Improved phase unwrapping algorithm for automatic cine DENSE strain analysis using phase predictions and region growing. In SMRT 27th Annual Meeting (2018).
[37] Abadi, M. et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. CoRR abs/1603.04467 (2016).
[38] Zou, K. H. et al. Statistical validation of image segmentation quality based on a spatial overlap index. Acad. Radiol. 11, 178-189 (2004).
[39] Bilchick, K. C. et al. Impact of mechanical activation, scar, and electrical timing on cardiac resynchronization therapy response and clinical outcomes. J. Am. Coll. Cardiol. 63, 1657-1666 (2014).
[40] Lee, A. T., Bruce Pike, G. & Pelc, N. J. Three-Point Phase-Contrast Velocity Measurements with Increased Velocity-to-Noise Ratio. Magn. Reson. Med. 33, 122-126 (1995).
[41] Hammouda, K. et al. A New Framework for Performing Cardiac Strain Analysis from Cine MRI Imaging in Mice. Sci. Rep. 10, 7725 (2020).
[42] Wang, K., Li, Y., Kemao, Q., Di, J. & Zhao, J. One-step robust deep learning phase unwrapping. Opt. Express 27, 15100-15115 (2019).
[43] Dardikman-Yoffe, G. et al. PhUn-Net: ready-to-use neural network for unwrapping quantitative phase images of biological cells. Biomed. Opt. Express 11, 1107-1121 (2020).
[44] Yin, W. et al. Temporal phase unwrapping using deep learning. Sci. Rep. 9, 20175 (2019).
[45] Zhang, J., Tian, X., Shao, J., Luo, H. & Liang, R. Phase unwrapping in optical metrology via denoised and convolutional segmentation networks. Opt. Express 27, 14903-14912 (2019).
[46] Zhang, T. et al. Rapid and robust two-dimensional phase unwrapping via deep learning. Opt. Express 27, 23173-23185 (2019).
[47] Suever, J. D. et al. Simplified post processing of cine DENSE cardiovascular magnetic resonance for quantification of cardiac mechanics. J. Cardiovasc. Magn. Reson. 16, 94 (2014).
11857289

DETAILED DESCRIPTION

The systems and methods described herein relate to imaging systems and methods, and more specifically, imaging systems and methods used to generate optimized functional images of a lesion region in a subject. A subject as used herein is a human (live or deceased), an animal (live or deceased), an organ or part of an organ of a human or an animal, or part of a human or an animal. For example, a subject may be a breast, part of a human that includes an ovary, or part of a human that includes the colon or part of the colon. A lesion comprises abnormal tissue in a subject, such as a tumor, benign or malignant. A lesion region of a subject includes the region of the subject that comprises a lesion.

Functional images are images or maps of an imaging volume that includes a lesion region. Functional images may be maps of the optical properties of the voxels of the imaging volume, such as absorption maps of the imaging volume, depicting the absorption coefficient at each voxel. Functional maps may be hemoglobin maps, depicting the hemoglobin concentration at each voxel. Functional maps may be total hemoglobin concentration (tHb) maps, oxyhemoglobin (oxyHb) maps, or deoxyhemoglobin (deoxyHb) maps, depicting the tHb, oxyHb, or deoxyHb concentration at each voxel, respectively.

The systems and methods disclosed herein can be used to detect the vasculature distribution at the lesion region of the subject, assess the malignancy of the tumor, and reduce biopsies of low-risk benign tumors without compromising cancer detection sensitivity. As a result, health care costs can be reduced while needed patient care is still delivered. As to breast tumors, a vast majority of lesions recommended for biopsy are assessed as low to moderate suspicion (BI-RADS 4A, 4B), which may benefit from adjunctive DOT. Lesions diagnosed with a low suspicion of malignancy that also demonstrate low hemoglobin concentration may be recommended for follow-up instead of biopsy without compromising cancer detection. On the other hand, when a high-suspicion (BI-RADS 4C or 5) lesion has a benign pathology result, the imaging-pathology correlation is considered discordant, and a repeat biopsy or surgical excision is recommended in conventional treatment management. In such cases, an optical exam showing low vascularity can provide reassurance to allow for recommendation of less aggressive management, such as short-term follow-up rather than additional tissue sampling or surgery.

The systems and methods disclosed herein can also be used to improve the accuracy of the measured vasculature distribution and reduce the computation time needed to derive the measured vasculature distribution. In various embodiments, a regularized optimization method is used to reconstruct the functional data acquired by the DOT device. This method improves the stability of the computation and allows the optimization to converge quickly (such as converging after 3 or fewer iterations) while decreasing the computation time.

As seen in FIG. 1A, system 100 includes a diffuse optical tomography (DOT) device 116, including, but not limited to, a near-infrared (NIR) diffuse optical tomography device and an NIR imager.
In various embodiments, the DOT device116is a guided DOT device, defined herein as a DOT device operatively coupled to a second imaging device that obtains additional images of a subject in the form of guiding data used to identify a region of interest corresponding to a lesion within a breast of the subject for subsequent partitioning and additional data processing of the imaging data acquired by the DOT device116. The system100further comprises an imaging device102that is configured to acquire guiding data of a subject including the lesion region of the subject. The imaging device may be any suitable imaging device that makes use of an imaging modality different from the DOT device116including, but not limited to, an ultrasound device, a magnetic resonance imaging system, an x-ray device, or a computed tomography device. In various embodiments, the light spectrum used on the DOT device116is the near-infrared spectrum (wavelengths from ~700 to 900 nm). NIR DOT imaging is a noninvasive imaging technique that uses NIR light to estimate optical properties of tissue.FIG.1Bshows the absorption of the light as a function of the wavelength of the light for water (dashed-dashed line), oxyhemoglobin (oxyHb) (dashed line), and deoxyhemoglobin (deoxyHb) (solid line). The rectangular box includes the NIR spectrum range. As shown inFIG.1B, in the NIR spectrum range, water absorbs light much less than oxyHb and deoxyHb, and oxyHb and deoxyHb each absorb the light at different rates depending on the wavelength of the emitted light. Four arrows superimposed onFIG.1Bpoint to absorption properties at wavelengths of 740 nm, 780 nm, 808 nm, and 830 nm, respectively. Because of the minimal absorption of water in the NIR spectrum (~700 to 900 nm), NIR light penetrates several centimeters in tissue. Within the NIR spectrum, oxygenated and deoxygenated hemoglobin are the major chromophores that absorb light and can be used to characterize tumor vasculature, which is directly related to tumor angiogenesis. DOT systems are usually portable, require no contrast agents, and have relatively low cost. These features make DOT systems well-suited for diagnosis of cancer and for assessment of neoadjuvant treatment response. However, intense light scattering in tissue typically causes low spatial resolution and lesion location uncertainty in DOT images. The DOT device116is configured to emit optical waves of a plurality of wavelengths toward an imaging volume of the subject. In various embodiments, the DOT device116is configured to emit optical waves at wavelengths of 740, 780, 808 and 830 nm. The imaging volume includes a lesion region. The DOT device116is configured to acquire functional data representing the optical waves diffused by tissue in the imaging volume in response to the emitted optical waves. The DOT device116and the imaging device102may be co-registered, where the imaging probes of the imaging device and the DOT device are directed to the same imaging volume. In various embodiments, the DOT device and the imaging device acquire data through one probe. In the exemplary embodiment, system100also includes a computing device104coupled to imaging device102via a data conduit106aand operatively coupled to the DOT device116via a data conduit106b.
It should be noted that, as used herein, the term "couple" is not limited to a direct mechanical, electrical, and/or communication connection between components, but may also include an indirect mechanical, electrical, and/or communication connection between multiple components. Imaging device102and the DOT device116may communicate with computing device104using a wired network connection (e.g., Ethernet or an optical fiber), a wireless communication means, such as radio frequency (RF), e.g., FM radio and/or digital audio broadcasting, an Institute of Electrical and Electronics Engineers (IEEE®) 802.11 standard (e.g., 802.11(g) or 802.11(n)), the Worldwide Interoperability for Microwave Access (WIMAX®) standard, a short-range wireless communication channel such as BLUETOOTH®, a cellular phone technology (e.g., the Global Standard for Mobile communication (GSM)), a satellite communication link, and/or any other suitable communication means. IEEE is a registered trademark of the Institute of Electrical and Electronics Engineers, Inc., of New York, New York. WIMAX is a registered trademark of WiMax Forum, of Beaverton, Oregon. BLUETOOTH is a registered trademark of Bluetooth SIG, Inc. of Kirkland, Washington. In the exemplary embodiment, computing device104is configured to receive guiding data from the imaging device102and receive functional data from the DOT device116. The computing device104may also be configured to control the imaging device102and the DOT device116. System100may further include a data management system108that is coupled to computing device104via a network109. In some embodiments, the computing device104includes a data management system108. Data management system108may be any device capable of accessing network109including, without limitation, a desktop computer, a laptop computer, or other web-based connectable equipment. More specifically, in the exemplary embodiment, data management system108includes a database110that includes previously acquired data of other subjects. In the exemplary embodiment, database110can be fully or partially implemented in a cloud computing environment such that data from the database is received from one or more computers (not shown) within system100or remote from system100. In the exemplary embodiment, the previously acquired data of the other subjects may include, for example, a plurality of measurements of the lesion regions of other subjects. Database110can also include any additional information of each of the subjects that enables system100to function as described herein. Data management system108may communicate with computing device104using a wired network connection (e.g., Ethernet or an optical fiber), a wireless communication means, such as, but not limited to radio frequency (RF), e.g., FM radio and/or digital audio broadcasting, an Institute of Electrical and Electronics Engineers (IEEE®) 802.11 standard (e.g., 802.11(g) or 802.11(n)), the Worldwide Interoperability for Microwave Access (WIMAX®) standard, a cellular phone technology (e.g., the Global Standard for Mobile communication (GSM)), a satellite communication link, and/or any other suitable communication means. More specifically, in the exemplary embodiment, data management system108transmits the data for the subjects to computing device104. While the data is shown as being stored in database110within data management system108, it should be noted that the data of the subjects may be stored in another system and/or device. For example, computing device104may store the data therein.
In the exemplary embodiment, when in use, the imaging device102acquires guiding data of the subject including a lesion region. The guiding data is transmitted to the computing device104via the data conduit106a. The computing device104produces guiding images of the subject including the lesion region based on the guiding data. The DOT device116acquires functional data of the subject of an imaging volume that includes the lesion region. The functional data is transmitted to the computing device104via data conduit106b. Although one computing device104is depicted inFIG.1, two or more computing devices may be used in the system. The imaging device102and the DOT device116may be in communication with different computing devices (not shown), and the computing devices are in communication with each other. In the exemplary embodiment, the computing device104is further programmed to identify the lesion region based on the guiding image reconstructed from the guiding data. The imaging volume selected for further analysis is chosen to include the lesion region. The imaging volume may be segmented into a plurality of regions including, but not limited to, the lesion region and a background region outside the lesion region. In various embodiments, the functional data at the lesion region comprises first voxels having a fine voxel size and the functional data at the background region comprises second voxels having a coarse voxel size that is greater than the fine voxel size. As a result, the overall number of voxels of the functional data selected for data processing and image reconstruction is reduced relative to the raw imaging data obtained by the DOT device116, and consequently the computation complexity is reduced and computation speed is increased. The computing device104generates optimized functional images of the subject including the lesion region by reconstructing the functional data using methods described in additional detail below. In various embodiments, the optimized functional images are reconstructed at the plurality of segmented regions including the lesion region and the background region. In various embodiments, a two-step reconstruction method is used to produce DOT image estimates. In the first step, a preliminary estimate of the functional image is generated. In one aspect, the preliminary estimate may be generated by applying a pseudoinverse matrix of the weight matrix to the functional data. The pseudoinverse matrix may be a truncated pseudoinverse matrix. A weight matrix may represent the optical properties of the DOT device, as described in additional detail below. The weight matrix may also describe the distribution of diffused waves in a homogeneous medium, as well as characterizing the measurement sensitivity of the DOT device to the absorption and scattering change in the object being imaged. In one embodiment, a truncated Moore-Penrose Pseudoinverse (MPP) solution is used to compute a preliminary estimate of the functional images using the functional data or the measured data acquired by the DOT device, as described in additional detail below. In the truncated MPP solution, the pseudoinverse matrix of the weight matrix is truncated. In various embodiments, the size of the truncated pseudoinverse matrix is selected based on the number of singular values of the weight matrix greater than a threshold value. In various embodiments, this threshold value is chosen as 10% of the largest singular value of the weight matrix.
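The truncated pseudoinverse computation just described can be illustrated with a short numerical sketch. The following Python code is a minimal, illustrative sketch rather than the system's implementation: the function name truncated_mpp_estimate, the toy matrix dimensions, and the simulated data are assumptions introduced here. It retains only the singular components of the weight matrix whose singular values exceed 10% of the largest singular value, consistent with the threshold described above.

```python
import numpy as np

def truncated_mpp_estimate(W, U_sc, rel_threshold=0.10):
    """Preliminary image estimate via a truncated Moore-Penrose
    pseudoinverse of the weight matrix W (illustrative sketch only;
    real-valued here for simplicity, although DOT measurements are
    complex in practice).

    W    : (M, N) weight matrix (M measurements, N voxels)
    U_sc : (M,) measured scattered photon-density data
    """
    U_mat, s, Vh = np.linalg.svd(W, full_matrices=False)
    # Keep only singular values above 10% of the largest one.
    keep = s >= rel_threshold * s[0]
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    # X0 = V * diag(1/s) * U^T * U_sc over the retained components.
    return Vh.T @ (s_inv * (U_mat.T @ U_sc))

# Toy example with many more voxels than measurements, as in DOT.
rng = np.random.default_rng(0)
W = rng.standard_normal((252, 1000))   # e.g., 252 measurements
X_true = np.zeros(1000)
X_true[100:110] = 0.1                  # a small absorbing target
U_sc = W @ X_true + 0.01 * rng.standard_normal(252)
X0 = truncated_mpp_estimate(W, U_sc)
```

Because the truncation discards the components associated with small singular values, the preliminary estimate X0 suppresses the noise amplification that a full pseudoinverse would produce.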
The MPP method is particularly well-suited for computing the initial estimate of the functional images, target images, or target for several reasons. First, the truncated pseudoinverse produced using MPP, by definition, produces a least-squares estimate of the image that possesses the minimum norm. Consequently, the estimate can be interpreted as an orthogonal projection of the true target image onto a subspace that is the orthogonal complement to the null space of the imaging operator. Therefore, the truncated pseudoinverse produced using MPP describes an estimate of the target that is closest to the true target but contains no component in the null space. This is a reliable strategy for image reconstruction when no reliable a priori information about the target is available. Second, the MPP truncated pseudoinverse solution can be easily regularized by excluding contributions that correspond to small singular values. Therefore, the regularization parameter used for subsequent optimized image reconstruction may be selected with little ambiguity. Third, the MPP truncated pseudoinverse operator can be explicitly stored in memory to speed up the computation, enabling nearly real-time image reconstruction. The systems and methods disclosed herein enhance the image's accuracy and reconstruction speed relative to the images produced using existing reconstruction methods. In various embodiments, the second step of the two-step reconstruction method incorporates the preliminary estimate into the design of a penalized or regularized optimization method to generate optimized functional images, as described in additional detail below. In one embodiment, the optimization method may be a least squares estimator. In another embodiment, a Newton optimization method is used for the optimization method. In an additional embodiment, a conjugate gradient method is used for the optimization method. In various embodiments, the regularization parameter used in the optimization method is chosen as the largest singular value of the weight matrix multiplied by a factor proportional to the tumor size in the lesion region, as described below. The functional data acquired by the DOT device reflects the optical properties of the object being imaged or the imaging volume imaged by the DOT device (see Eq. (1) in Example 2). In various embodiments, the optical properties of the imaging volume are derived by reconstructing the functional data according to the methods described herein. In various embodiments, to calibrate the optical properties obtained by reconstruction, e.g., curve fitting, of the functional data acquired by the DOT device, phantoms constructed with a medium having known optical properties may be used to derive the relation between the phantom and the average tissue background optical properties. The background optical properties include the absorption coefficient μa, the reduced scattering coefficient μs′, and the diffusion coefficient D (D relates to μs′ as D = 1/(3μs′)). In various embodiments, the optimized functional images are indicative of the hemoglobin concentration of the tissue in the imaging volume. The value of each voxel of the optimized functional images is indicative of the hemoglobin concentration at that voxel. In various embodiments, the optimized functional images comprise absorption maps at each voxel of the optical waves. The absorption maps may be used to compute a hemoglobin concentration at each voxel.
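Under the usual Beer-Lambert assumption, the per-voxel relationship between absorption coefficients and hemoglobin concentrations is a small linear system, one equation per wavelength. The sketch below is illustrative only: the extinction coefficients are placeholder values, not the calibrated constants a real system would use, and the function name is introduced here for illustration.

```python
import numpy as np

# Placeholder extinction coefficients for [oxyHb, deoxyHb] at
# 740, 780, 808, and 830 nm. Illustrative values only -- a real
# system uses tabulated, calibrated constants.
E = np.array([
    [0.6, 1.4],   # 740 nm
    [0.7, 1.1],   # 780 nm
    [0.8, 0.8],   # 808 nm (near the hemoglobin isosbestic point)
    [1.0, 0.7],   # 830 nm
])

def hemoglobin_from_absorption(mua_maps):
    """Per-voxel least-squares unmixing of oxyHb and deoxyHb.

    mua_maps : (4, n_voxels) absorption coefficients, one row per
               wavelength, e.g., flattened absorption maps.
    returns  : (oxyHb, deoxyHb, tHb) arrays of length n_voxels.
    """
    conc, *_ = np.linalg.lstsq(E, mua_maps, rcond=None)  # (2, n_voxels)
    oxy, deoxy = conc
    return oxy, deoxy, oxy + deoxy
```

The resulting tHb map can then be compared against a threshold hemoglobin concentration, as described next.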
In various embodiments, the absorption coefficient is a key parameter for evaluating tumor angiogenesis and relates to the hemoglobin concentration (oxyHb, deoxyHb, and tHb) within the tissue of the object being imaged. With this relationship between the absorption coefficient and the hemoglobin concentration, once the absorption coefficient is computed, the hemoglobin concentration, including the oxyHb concentration, the deoxyHb concentration, and the tHb, can be derived from the computed absorption coefficient. In various embodiments, a threshold hemoglobin concentration is defined to assess various properties of the lesion tissue. In one embodiment, the malignancy level of the lesion is determined based on the measured tHb in comparison with the threshold hemoglobin concentration, and a recommendation for biopsy is generated based on the level of malignancy. In various embodiments, if the tHb is higher than the threshold hemoglobin concentration, the lesion is indicated as malignant. If the tHb is lower than the threshold hemoglobin concentration, the lesion is indicated as benign. In various embodiments, the tHb is evaluated in conjunction with the category of the breast tumor diagnosed by the radiologist. A more conservative or less conservative threshold hemoglobin concentration may be selected based on the tumor category. In various embodiments, for the purpose of biopsy recommendation, a conservative threshold hemoglobin concentration may be selected, or a more or less conservative threshold may be selected depending on the category diagnosed by the radiologist. In the exemplary embodiment, the optimized functional images are displayed to a practitioner for further evaluation. The optimized functional images may be transmitted to a display device such as a monitor, a television, a mobile device, or a tablet, for display. The display device may be part of the computing device104. FIG.2is a block diagram of a computing device104. In the exemplary embodiment, computing device104includes a user interface204that receives at least one input from a user, such as an operator of imaging device102or the DOT device116(shown inFIG.1). User interface204may include a keyboard206that enables the user to input pertinent information. User interface204may also include, for example, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad, a touch screen), a gyroscope, an accelerometer, a position detector, and/or an audio input interface (e.g., including a microphone). Moreover, in the exemplary embodiment, computing device104includes a presentation interface207that presents information, such as input events and/or validation results, to the user. Presentation interface207may also include a display adapter208that is coupled to at least one display device210. More specifically, in the exemplary embodiment, display device210may be a visual display device, such as a cathode ray tube (CRT), a liquid crystal display (LCD), an organic LED (OLED) display, and/or an "electronic ink" display. Alternatively, presentation interface207may include an audio output device (e.g., an audio adapter and/or a speaker) and/or a printer. Computing device104also includes a processor214and a memory device218. Processor214is coupled to user interface204, presentation interface207, and to memory device218via a system bus220.
In the exemplary embodiment, processor214communicates with the user, such as by prompting the user via presentation interface207and/or by receiving user inputs via user interface204. The term "processor" refers generally to any programmable system including systems and microcontrollers, reduced instruction set circuits (RISC), application specific integrated circuits (ASIC), programmable logic circuits (PLC), and any other circuit or processor capable of executing the functions described herein. The above examples are exemplary only, and thus are not intended to limit in any way the definition and/or meaning of the term "processor." In the exemplary embodiment, memory device218includes one or more devices that enable information, such as executable instructions and/or other data, to be stored and retrieved. Moreover, memory device218includes one or more computer readable media, such as, without limitation, dynamic random access memory (DRAM), static random access memory (SRAM), a solid state disk, and/or a hard disk. In the exemplary embodiment, memory device218stores, without limitation, application source code, application object code, configuration data, additional input events, application states, assertion statements, validation results, and/or any other type of data. Computing device104, in the exemplary embodiment, may also include a communication interface230that is coupled to processor214via system bus220. Moreover, communication interface230is communicatively coupled to imaging device102, DOT device116, and data management system108(shown inFIG.1). In the exemplary embodiment, processor214may be programmed by encoding an operation using one or more executable instructions and providing the executable instructions in memory device218. In the exemplary embodiment, processor214is programmed to select a plurality of measurements that are received from imaging device102or the DOT device116. The plurality of measurements may include, for example, a plurality of voxels of at least one image of the subject, wherein the image may be generated by processor214within computing device104. The image may also be generated by an imaging device (not shown) that may be coupled to computing device104, imaging device102and/or the DOT device, wherein the imaging device may generate the image based on the data received from imaging device102or the DOT device and then transmit the image to computing device104for storage within memory device218. Alternatively, the plurality of measurements may include any other type of measurement of the lesion region that enables system100to function as described herein. FIG.3is a flow chart depicting an example method300of generating optimized functional images of a lesion region of a subject. The method300includes receiving guiding data of the subject at302. The guiding data may be acquired by an imaging device as described above. The method300further includes generating guiding images of the subject by reconstructing the guiding data at304. The reconstruction method of the guiding images may be any method suitable for the imaging modality of the imaging device. The method300further includes identifying the lesion region within the guiding image of the subject at306. The method300further includes receiving functional data of an imaging volume at308. The functional data may be acquired by a DOT device as described above. The imaging volume includes the lesion region and surrounding tissues.
The method300further includes segmenting the imaging volume into a plurality of regions at310. The plurality of regions includes the lesion region identified in the guiding images and a background region outside the lesion region. The functional data at the lesion region may comprise first voxels having a fine voxel size. The functional data at the background region may comprise second voxels having a coarse voxel size greater than the fine voxel size. After segmentation, the number of voxels of the functional data may be reduced. The method300further includes generating functional images of the subject at312. The method300further includes displaying the optimized functional images at314. Referring toFIG.4A, generating functional images of the subject at312may comprise generating preliminary estimates of the functional images at402. As described herein, the preliminary estimates may be generated by applying a pseudoinverse matrix of the weight matrix to the functional data. In various embodiments, the pseudoinverse matrix comprises a truncated Moore-Penrose pseudoinverse matrix. In various embodiments, the pseudoinverse matrix is truncated to a matrix size comprising the number of singular values of the weight matrix larger than a threshold value. In various embodiments, the threshold value comprises 10% of the largest singular value of the weight matrix. Generating functional images at312may further comprise selecting a regularization parameter at404. Generating functional images at312may further comprise generating optimized functional images using the preliminary estimate and the regularization parameter at406. In various embodiments, the optimized functional images are generated by a regularized or penalized optimization method, where the optimization is regularized by the preliminary estimate weighted by the regularization parameter. The optimization method may be any suitable optimization method including, but not limited to, a conjugate gradient method and a Newton method. The Newton and conjugate gradient methods converge faster and take less computation time than other available optimization methods. In various embodiments, the systems and methods further include computation of a perturbation imaging dataset Uscbased on the functional data acquired by the DOT device, where the perturbation imaging dataset is used to reconstruct the functional images. As described in additional detail below, the perturbation imaging dataset Uscrepresents a change in optical properties of the lesion tissue relative to background optical properties representative of healthy tissue. The systems and methods may further include removal of outliers in the acquired functional data. FIG.4Bshows an example method1400of reconstructing functional images using the perturbation imaging dataset Usccomputed as described herein. The method1400includes receiving a computed perturbation imaging dataset at1401. The perturbation imaging dataset Uscmay be computed in a perturbation computation module of the computing device or may be precomputed on, and transmitted from, a separate computing device. The method1400further comprises performing a truncated pseudoinverse on the perturbation imaging dataset to obtain a raw initial image at1402. The method1400further comprises performing perturbation filtering on the initial image at1403to obtain an initial image estimate to be used as the preliminary estimate of the functional images.
The method1400further comprises reconstructing the optimized functional image at1404using the regularized optimization method regularized by the initial image estimate. The regularization parameters used in the regularized optimization may be derived from the target size and the singular values of the weight matrix. FIG.4Cis a block diagram illustrating additional steps of method1400associated with receiving the perturbation imaging dataset at1401as illustrated inFIG.4B. As illustrated inFIG.4C, receiving the perturbation imaging dataset at1401further comprises receiving functional data obtained using the DOT device at1410. In various aspects, the functional data includes a lesion imaging dataset obtained from a breast containing a lesion, and a compound reference imaging dataset derived from absorption measurements of corresponding healthy tissue including, but not limited to, a contralateral breast of the patient, as described in additional detail below. The method further comprises calculating the perturbation imaging dataset at1412. FIG.4Dis a flow chart illustrating additional steps of the method1400associated with receiving the lesion imaging dataset and compounded reference imaging dataset at1410as illustrated inFIG.4C. Referring toFIG.4D, receiving the functional data at1410further includes constructing a compound reference dataset based on absorption measurements of healthy tissue including, but not limited to, a healthy breast of the patient, as obtained by a DOT device. Constructing the compounded reference imaging dataset includes receiving a reference imaging dataset at1420that includes absorption measurements obtained from healthy tissue corresponding to the lesion tissue, as obtained by the DOT device from a contralateral breast of a patient in one embodiment. Receiving the functional data at1410further comprises removing outliers from each source-detector pair of the reference imaging dataset at1422. In one embodiment, a maximum normed residual (MNR) test may be used to remove outliers at1422. Receiving the functional data at1410may further comprise removing saturated detector effects from the reference imaging dataset at1424. In one embodiment, a piecewise-linear fitting method is used to remove saturated detector effects at1424. Receiving the functional data at1410may further comprise performing iteratively reweighted least squares (IRLS) fitting on the reference imaging dataset and discarding residuals above a threshold at1426. Receiving the functional data at1410further comprises eliminating duplicated data at each source-detector separation distance to obtain the compound reference dataset at1428. Fast data acquisition, automated and robust data processing, and image reconstruction in near real time make the systems and methods described herein suitable for use in clinical settings. In various embodiments, a plurality of imaging datasets are acquired using the DOT device from a breast of a patient containing a lesion and from a contralateral breast of a patient containing normal, healthy breast tissue. Each dataset, covering all source positions and detectors, takes only a few seconds to acquire, and several data sets are acquired at the lesion for computing average hemoglobin levels after reconstruction. In various embodiments, the probe of the DOT device is moved to the contralateral normal breast at the same quadrant as the lesion to acquire reference data used to estimate the average tissue optical properties (μaand μs′).
In various embodiments, several reference datasets are acquired for off-line selection of the best reference, using linear fitting criteria of the amplitude and phase profiles vs. source-detector distance. The average μaand μs′ are used to compute the weight matrix W for image reconstruction. Tissue optical properties determined from the contralateral breast may be the best estimate of the background tissue for reconstruction of images of the lesion region. The entire data acquisition, including several datasets at the lesion region and contralateral sites, takes a few minutes. In some aspects, to reduce the duration of the data acquisition phase, the data processing and image reconstruction are done off-line. These data processing and image reconstruction tasks may make use of manual operation by experienced users and can take up to 30-40 minutes, depending on the amount of patient imaging data. Reducing user interaction via automated processing can help the systems and methods move toward adoption for clinical use. In a hand-held operation, bad coupling between the light guides and the breast can result in measurement outliers. Additionally, tissue heterogeneity can cause measurement errors in some source-detector pairs. Recovered background and lesion optical properties depend on the boundary measurements of light propagating through the tissue underneath, and errors in these measurements can cause inaccuracy in the fitted background and reconstructed lesion optical properties. In various embodiments, an automated outlier removal, data selection, and perturbation filtering method is used to improve the robustness and speed of estimating background optical properties and DOT reconstruction. This method utilizes multiple sets of reference measurements acquired at the contralateral normal breast to produce a robust set of reference measurements. Multiple sets of reference measurements are used to form a single high quality reference dataset. Background optical properties and then the weight matrix W are computed from the selected reference dataset. A flowchart illustrating the steps of a method1410for automated outlier removal, data selection, and perturbation filtering is illustrated inFIG.4D. The method1410includes an outlier removal procedure at1422, e.g., MNR, to eliminate inaccurate measurements, with a criterion based on the statistical distribution of data collected at each source-detector pair. The method1410further includes piecewise-linear fitting at1424to reject the source-detector pair measurements obtained from saturated photomultiplier tubes (PMTs). Without being limited to any particular theory, a PMT can saturate at short source-detector distances, which vary for each individual PMT. An iterative fitting of the residuals of the remaining data further eliminates inaccurate measurements based on the linearity of the fitted results of the reference measurements of all source-detector pairs. The method1410further includes using a least-squares error method at1426to select, from the remaining measurements, the best reference dataset, which is used to form the compound reference at1428. In various aspects, the lesion measurement set obtained from the breast tissue containing the lesion is subtracted from the compound reference dataset and scaled by the compound reference dataset to form the normalized perturbation of the scattered field, Usc.
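A compact sketch of the normalized-perturbation step just described, combined with the phase and three-standard-deviation criteria discussed in the following paragraph, might look as follows. The function name, array shapes, and parameter defaults are assumptions for illustration; the actual processing also includes the MNR, piecewise-linear, and IRLS steps described above.

```python
import numpy as np

def normalized_perturbation(U_lesion, U_ref, max_phase_deg=90.0, n_sigma=3.0):
    """Form the normalized scattered-field perturbation Usc and drop
    likely measurement outliers (illustrative sketch).

    U_lesion, U_ref : complex arrays of per source-detector pair
                      measurements from the lesion breast and the
                      compound contralateral reference.
    """
    # Lesion data subtracted from the compound reference and scaled
    # by the reference, per the sign convention described above.
    u_sc = (U_ref - U_lesion) / U_ref

    # Reject pairs whose lesion/reference phase difference exceeds
    # the simulation-derived bound (90 degrees).
    phase_diff = np.angle(U_lesion / U_ref, deg=True)
    keep = np.abs(phase_diff) <= max_phase_deg

    # Reject points whose imaginary part lies more than n_sigma
    # standard deviations from the mean of the retained points.
    imag = u_sc.imag
    keep &= np.abs(imag - imag[keep].mean()) <= n_sigma * imag[keep].std()
    return u_sc[keep], keep
```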
Without being limited to any particular theory, outliers in the lesion data may be due to any one or more of a plurality of factors including, but not limited to, measurement errors and tissue heterogeneity. Lesion measurements are expected to be more heterogeneous than the reference measurements because the heterogeneity is partially caused by the lesion and partially by the background tissue. However, to separate the bad measurements caused by movement and/or bad coupling from those caused by lesion heterogeneity, caution may be needed in using an outlier removal procedure, such as MNR. In various embodiments, the threshold in the MNR statistical test may be adjusted to remove large outliers that are likely caused by motion or bad coupling. In various embodiments, simulations were performed using different background optical properties for both reference and lesion breasts, as well as different optical properties for lesions of different sizes located at different depths. The results show that the maximum phase difference of any of the source-detector pairs may be set to 90 degrees, even in extreme cases. Further, the mean and the standard deviation of the imaginary part of the perturbation were calculated. If the imaginary part of a data point is farther than three standard deviations from the mean, the data point is classified as an outlier and removed from the perturbation dataset. Any data points in the perturbation dataset that do not meet both of these criteria are rejected. This step removes outliers in the perturbation likely caused by measurement errors rather than heterogeneity of the lesion. In various embodiments, after the steps of outlier removal and filtering, the normalized perturbation is used to reconstruct the absorption map at each wavelength. The total hemoglobin map is calculated from the four-wavelength absorption data. The entire imaging reconstruction takes about 1 to 2 minutes, depending on lesion size. Therefore, the total time needed for imaging each patient can be reduced to less than 5 minutes, which is sufficient for radiologists to read the images and provide feedback on a revised BI-RADS score including the DOT data. Although, in some of the examples provided below, the systems and methods disclosed herein are used on breasts and breast tumors, the systems and methods are not limited to this part of the human or animal body or to this type of tumor or cancer.

EXAMPLES

The following examples illustrate various aspects of the disclosure.

Example 1: Imaging Malignant and Benign Breast Lesions

Two large scale clinical studies were performed with the system described herein. The findings are listed in Table 1 and summarized below. The two studies, performed at two sites, included more than 450 women. Study 1 included 162 patients, and Study 2 included 288 patients whose images were read by radiologists (readers). All subjects had been referred for an ultrasound (US) guided biopsy of a suspicious sonographic lesion. The ultrasound images and optical measurements were obtained with a commercial ultrasound transducer positioned in the hybrid handheld imaging probe of the DOT device. A minimum of three co-registered ultrasound/optical imaging datasets of the lesion were acquired at each imaging location of the lesion, as well as of the contralateral breast, which was used as a reference. Optical absorption distributions at each wavelength were reconstructed, and the tHb was computed from the absorption maps. Maximum and average lesion tHb concentrations were measured.
Sensitivity, specificity, and positive and negative predictive values (PPV and NPV) were calculated using a tHb threshold level (82 μmol/L for Study 1 and 80 μmol/L for Study 2) established to separate malignant from benign lesions. The thresholds were based on review of the data and determination of the value at which sensitivity would provide clinically relevant differentiation of benign and malignant tissue. Lesions that exceeded the tHb threshold level were considered malignant. The tHb threshold levels differ between the two studies because different numbers of optical wavelengths were used. Biopsies, which served as ground truth for disease status in both studies, were performed on all subjects and reviewed by pathologists. Approximately 35% of the lesions in Study 1 and 20% of the lesions in Study 2 were malignant.

TABLE 1. Summary of Two Clinical Studies

STUDY 2
  Objective: Investigate US-guided optical imaging in distinguishing functional differences in tHb, oxyHb, and deoxyHb in malignant and benign breast lesions. Evaluate US-guided optical imaging in conjunction with US in improving diagnosis of malignant and benign breast lesions.
  Population: N = 288 subjects; 60 malignant and 235 benign lesions.
  Outcome: Mean max tHb, oxyHb, and deoxyHb were significantly higher in the malignant groups than in the benign group. Sensitivity and NPV increased when tHb was used in combination with US reader data. Sensitivity for malignant lesions increased from 78.0% to 96.6% for Reader 1 and from 81.4% to 100% for Reader 2.

STUDY 1
  Objective: Investigate the role of US-guided optical imaging in differentiating early-stage cancers from benign lesions.
  Population: N = 162 subjects; 61 malignant and 114 benign lesions.
  Outcome: Mean max tHb was significantly higher in the malignant groups than in the benign group. For early-stage cancers (Tis-T1), sensitivity, specificity, PPV, and NPV based on tHb were 92%, 92%, 80%, and 97%, respectively. The corresponding values for T2-T4 tumors were 75%, 92%, 67%, and 95%, respectively.

FIG.5Ais a graph summarizing the measured total hemoglobin concentration among various patient groups in Study 1. In Study 1, maximum tHb levels were used to characterize malignant vs. benign lesions. It was found that tHb was two-fold higher in the malignant groups (Stage 2 to 4 tumors (T2-4), and Stage 1 tumors (Tis-T1)) than in the benign group (P<0.0001) (seeFIG.5A). Additionally, the proliferative lesion (PBL) group showed a higher tHb content than the other benign lesion group (P<0.04).FIG.5Bis a scatter plot of the maximum tHb of T2-T4 tumors, Tis-T1 tumors, proliferative benign lesions, non-proliferative fibrocystic changes (NPFCC), fibroadenoma (FA), fat necrosis and inflammation/reactive changes (FN-INF), complex cysts, breast tissue, lymph nodes, and the control group. When 82 μmol/L was chosen as the tHb threshold level to separate malignant and benign lesions, the sensitivity, specificity, PPV, and NPV for Tis-T1 tumors were 92%, 92%, 80%, and 97%, respectively. For T2-T4 tumors, the sensitivity, specificity, PPV, and NPV were 75%, 92%, 67%, and 95%, respectively. Illustrative examples of benign and malignant Stage 1 cancer cases assessed in the present example are provided below.FIG.6Ais a US image of a suspicious lesion (marked by a superimposed arrow) located in the left breast of a 37-year-old woman. The lesion was measured as 7 mm by US.FIG.6Bshows the tHb map having a low and diffused distribution with a maximum of 35.3 μmol/L at the corresponding depth location of slice #4. Core biopsy revealed a fibroadenoma.
In the functional images, the first slice was 0.3 cm from the skin surface, and the last slice was 3.3 cm toward the chest wall. Each slice is a 9 cm-by-9 cm image in the x-y plane, with a 0.5 cm depth difference between slices. The vertical scale is the tHb concentration in μmol/L.FIG.7Ais a US image of a suspicious tubular-like lesion at the 12 o'clock position in the right breast of a 71-year-old woman. The lesion was measured as 7 mm by US.FIG.7Bis the tHb map, which showed an isolated, well-defined mass with a maximum tHb concentration of 97.8 μmol/L at the corresponding depth location of slice #3. The first slice was 0.7 cm below the skin surface, and the spacing between the slices was 0.5 cm. Biopsy revealed an invasive ductal carcinoma, and the pathologic tumor stage was T1c (1.2 cm). The results of these experiments demonstrated that the hemoglobin maps acquired by the systems and methods disclosed herein can be used with mammography/US for distinguishing early-stage invasive breast cancers from benign lesions. As for qualitative features seen in total hemoglobin maps of large cancers, 38% of the T2-T4 tumors showed heterogeneous periphery enhancement (FIGS.8A-8D), which is unique and not observed in benign solid lesions.FIG.8Ais a US image of a suspicious lesion at the 11 o'clock position of the right breast of a 63-year-old woman. The lesion was measured as 3 cm in size by US.FIG.8Bshows the tHb map, which showed a heterogeneous distribution with a higher concentration at the periphery. The first slice was 0.2 cm below the skin surface. Biopsy revealed an invasive ductal carcinoma, and the pathologic tumor stage was T2 (2.2 cm). Additionally, 33% of the T2-T4 tumors showed posterior shadowing. This shadowing is partially due to the significant light absorption by the tumor, which causes a dramatic reduction of the reflected light from the deep portion of the tumor. This light shadow effect is similar to the posterior shadow of larger tumors seen in US images. Thus, for large US-visible lesions, the systems and methods disclosed herein can provide angiogenesis distribution features and provide additional diagnostic value. In the second, larger scale study (Study 2), the DOT device using four optical wavelengths allowed robust estimation of tHb, oxyHb, and deoxyHb concentrations. The maxima were significantly higher in the malignant groups than in the benign group (P<0.001, P<0.001, and P≤0.035 for tHb, oxyHb, and deoxyHb, respectively). For these malignant lesions, the maximum tHb moderately correlated with tumor histological grade (P=0.036) and nuclear grade (P=0.016). Thus, the tHb level can be used to identify aggressive cancers. The maximum oxyHb moderately correlated with tumor nuclear grade (P=0.042). When a threshold of 80 μmol/L was used for tHb, the system disclosed herein achieved sensitivity, specificity, PPV, and NPV of 84.6%, 90.0%, 57.9%, and 97.3%, respectively, for the Tis-T1 group; the corresponding values were 72.9%, 90.0%, 64.2%, and 93.1%, respectively, for the combined malignant group. Based on the US BI-RADS scores provided by the two radiologists, the corresponding values of sensitivity, specificity, PPV, and NPV for the combined malignant group were 78.0-81.4%, 83.0-90.1%, 55.8-66.7%, and 94.2-94.7%, respectively.
When the radiologists' US diagnoses and tHb were used together, such that a reading of 4C or 5, or a tHb higher than the 80 μmol/L threshold, was considered malignant, the corresponding values were 96.6-100%, 77.3-83.3%, 52.7-59.4%, and 99.0-100%, respectively, for the combined malignant group. As summarized in Table 2 (below), both sensitivity and NPV increased dramatically when the tHb data was used in combination with the diagnostic ultrasound data. The sensitivity for all malignant lesions, for example, increased from 78.0% to 96.6% for Reader 1, and from 81.4% to 100% for Reader 2.FIG.9Ais a US image of a hypoechoic lobulated mass with internal echoes and separations at 2 o'clock in the right breast of a 40-year-old woman. US readings from the two readers were 4B and 4C.FIG.9Bis a tHb map, showing a diffused mass with a maximum value of 52.5 μmol/L.FIG.9Cshows the oxyHb maps, andFIG.9Dshows the deoxyHb maps. Analysis of core biopsy samples revealed a hyalinized benign fibroadenoma. FIG.10Ais a US image of a hypoechoic lobulated mass (indicated by the arrow) in the right breast of a 72-year-old woman. US readings from the two readers were 4B and 4C.FIG.10Bshows the tHb maps showing an isolated mass, with a maximum of 106.2 μmol/L.FIG.10Cshows the oxyHb maps, andFIG.10Dshows the deoxyHb maps. The oxyHb map follows the tHb map closely, and the deoxyHb distribution is quite diffused. Core biopsy revealed DCIS, nuclear grade 2. Pathology at surgery revealed 0.6 cm DCIS with intraductal papilloma with atypia and microcalcification.

TABLE 2. Comparison of Reader Assessments without/with tHb

                                   Sensitivity   Specificity   PPV      NPV
Diagnostic ultrasound only, without tHb data
Reader 1: All malignant lesions    78.0%         90.1%         66.7%    94.2%
Reader 2: All malignant lesions    81.4%         83.7%         55.8%    94.7%
Diagnostic ultrasound, with tHb data
Reader 1: All malignant lesions    96.6%         83.3%         59.4%    99.0%
Reader 2: All malignant lesions    100%          77.3%         52.7%    100%

The Study 2 results were further analyzed to assess the potential clinical utility of guided DOT systems and methods in characterizing patients who may not need to be referred for biopsy. The great majority of biopsy lesions are rated as low suspicion BI-RADS 4A or moderate suspicion 4B. A subset of data from patients with 4A and 4B lesions was used for this analysis, where these patients, with low or moderate risk of malignancy, are routinely referred for biopsy. Table 3 shows the number of biopsy referrals categorized by the readers' BI-RADS scores. Many 4A and 4B lesions were benign (94% for both readers). Table 4 shows the number of biopsy referrals categorized by the readers' BI-RADS scores (4A and 4B) combined with tHb data. A conservative tHb threshold of 50 μmol/L was chosen to differentiate lesions based on the amount of vascularization and consequent angiogenesis and probability of malignancy. The number of 4A and 4B lesions that would be referred for biopsy was evaluated based on reader data alone or on reader data in combination with tHb data. Without tHb information, Reader 1 referred 210 patients and Reader 2 referred 195.
TABLE 3. BI-RADS scores compared with biopsy results (295 lesions)

Reader Score      Malignant   Benign   Total
Reader 1: 4A/B    14          210      224
Reader 1: 4C      12          18       30
Reader 1: 5       34          7        41
Reader 2: 4A/B    13          195      208
Reader 2: 4C      25          29       54
Reader 2: 5       22          11       33
Total             60          235      295

TABLE 4. 4A/4B scores and tHb (unit: μmol/L), and biopsy

                  Malignant               Benign
Reader Score      tHb ≤ 50   tHb > 50     tHb ≤ 50   tHb > 50    Total
Reader 1: 4A      0          0            23         23          46
Reader 1: 4B      1          13           71         93          178
Reader 1: Total   1          13           94         116         224
Reader 2: 4A      0          1            21         16          38
Reader 2: 4B      0          12           67         91          170
Reader 2: Total   0          13           88         107         208

When the tHb data were considered using the >50 μmol/L threshold, the number of lesions referred for biopsy decreased to 106 and 120, respectively, for Reader 1 and Reader 2. Compared with the 210 and 195 biopsy referrals without the tHb information, these reductions are significant. Biopsy referrals for 4A and 4B lesions were decreased by 50% and 39% (an average of 45%) when tHb data were considered, while maintaining a high sensitivity (see Table 5). Furthermore, the only malignant lesion missed by Reader 1 was a low grade 1 cm T1b tumor.

TABLE 5. Reader 1 (224 lesions) and Reader 2 (208 lesions) results for combined 4A/4B lesions with tHb > 50 μmol/L

                                            Malignant   Benign   Total
Reader 1   Test, tHb > 50 μmol/L  Positive  13          93       106
                                  Negative  1           117      118
                                  Total     14          210      224
Reader 2   Test, tHb > 50 μmol/L  Positive  13          107      120
                                  Negative  0           88       88
                                  Total     13          195      208

The tHb data obtained through co-registered guided DOT, therefore, can be used clinically in differentiating malignant breast lesions from benign ones. Used in combination with the diagnostic standard of care, it can reduce the number of unnecessary biopsies of lesions with a low risk of malignancy. As shown above, for 4C lesions, the cancer detection sensitivity for Reader 1 was 40% (12 out of 30) and for Reader 2 was 46.3% (25 out of 54) (Table 3). The benign categories read as 4C lesions were fibroadenoma (33%-41%), fibrocystic changes (22%-31%, including fibrosis), fat necrosis and inflammatory changes (10%-28%), and proliferative lesions (14%-17%). US-guided DOT had 100% cancer detection sensitivity for 4C lesions read by Reader 1 and 96% sensitivity for Reader 2 when 50 μmol/L was used as a conservative tHb threshold. As for BI-RADS 5 lesions, the cancer detection sensitivity for Reader 1 was 82.9% and for Reader 2 was 66.7% (Table 3). US-guided DOT data was unlikely to have changed the radiologists' decisions on biopsy, but might have downgraded these lesions to 4C or 4B and improved biopsy concordance.

Example 2. Image Reconstruction of the Optimized Functional Images

Constrained Optimization Method

In DOT, the propagation of diffused light through tissue can be described by the photon diffusion approximation as:

$$\left[\nabla^2 + k^2\right]U(\mathbf{r}) = -\frac{1}{D}S(\mathbf{r}), \qquad k^2 = \frac{-v\mu_a + j\omega}{D}, \qquad D = \frac{1}{3\mu_s'} \tag{1}$$

where S(r) is the equivalent isotropic source, U(r) is the photon density wave, D is the diffusion coefficient, j = √−1, v is the speed of light, ω is the modulation frequency of the optical wave, and μa and μs′ are the absorption and reduced scattering coefficients, respectively. The inverse problem may be linearized by the Born approximation. By digitizing the imaging space into N voxels, the resulting integral equations are formulated as follows:

$$[U_{sc}]_{M\times 1} = [W]_{M\times N}\,[\delta\mu_a]_{N\times 1} = WX \tag{2}$$

where Usc is the measured scattered photon density wave, M is the number of measurements, and δμa denotes the unknown changes of the absorption coefficient at each voxel. The weight matrix, W, describes the distribution of the diffused wave in the homogeneous medium and characterizes the measurement sensitivity to the absorption and scattering changes. The inverse problem can be formulated as an unregularized optimization problem as:

$$f(x) = \arg\min_X \lVert U_{sc} - WX \rVert^2 \tag{3}$$
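For illustration, the unregularized problem of Eqs. (2) and (3) can be solved with a conjugate-gradient iteration on the normal equations WᵀWX = WᵀUsc. The sketch below is an illustrative assumption (the function name and the fixed iteration count are introduced here), not the system's implementation; as reported later, three CG iterations were found sufficient in phantom studies.

```python
import numpy as np

def cg_normal_equations(W, U_sc, n_iters=3):
    """Conjugate-gradient solve of min_X ||U_sc - W X||^2 via the
    normal equations A X = b with A = W^T W, b = W^T U_sc
    (illustrative sketch; real-valued for simplicity)."""
    A = W.T @ W
    b = W.T @ U_sc
    x = np.zeros(W.shape[1])
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(n_iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```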
During the reconstruction, a dual-zone mesh scheme is used to segment the imaging volume into a lesion region identified by the guiding images, e.g., co-registered US images, and a background region, with fine and coarse voxel sizes, respectively. The coarse voxel size is greater than the fine voxel size. This scheme reduces the total number of voxels with unknown optical properties. The conjugate gradient (CG) method may be utilized to iteratively solve the inverse problem. As a result, the target quantification accuracy can be significantly improved. When the lesions are larger, the total number of fine voxels and coarse voxels, N, can be much larger than the total number of measurements, M, which is the number of sources × the number of detectors × 2 (e.g., 14 × 9 × 2 = 252 in some embodiments), counting both amplitude and phase data. Due to the correlated nature of diffused light measured at closely spaced source and detector positions, in addition to measurement noise, increasing the number of sources and detectors does not effectively mitigate the ill-conditioned nature of the DOT inversion problem. The inverse problem is formulated as:

$$f(x) = \arg\min_X \left( \lVert U_{sc} - WX \rVert^2 + \frac{\lambda}{2}\lVert X - X_0 \rVert^2 \right) \tag{4}$$

where X0 is a preliminary estimate of the optical properties that can be determined from the measured data and λ is a regularization parameter. A Newton or conjugate gradient optimization method is used to approximately solve Eq. (4). In various embodiments, no spatial or temporal filters were used on the solution of f(x).

Truncated Pseudoinverse as an Initial Estimate

A truncated pseudoinverse (PINV) operator W_PINV⁻¹ of W is used to form the preliminary estimate X0 = W_PINV⁻¹ Usc, which is included in the second part of Eq. (4). According to singular value decomposition (SVD) theory, W can be decomposed as:

$$W = \sum_{n=1}^{R} \sqrt{\sigma_n}\, u_n v_n^\dagger \tag{5}$$

where {u_n} and {v_n} are the left and right singular vectors of W (orthonormal eigenvectors of WW† and W†W, respectively), {σ_n} are the nonzero eigenvalues of W†W (or WW†), {√σ_n} are the singular values of W, and R is the number of nonzero singular values. The Moore-Penrose pseudoinverse (MPP) of W is:

$$W_{PINV}^{-1} = \sum_{n=1}^{R} \frac{1}{\sqrt{\sigma_n}}\, v_n u_n^\dagger \tag{6}$$

Based on the linear equation Eq. (2),

$$\tilde{X} = W_{PINV}^{-1} U_{sc} = \sum_{n=1}^{R} \frac{1}{\sqrt{\sigma_n}}\, v_n u_n^\dagger\, U_{sc} \tag{7}$$

Since the measurement contains noise, denoted as n, Usc = U_noiseless + n. The reconstructed absorption X̃ is then given as:

$$\tilde{X} = W_{PINV}^{-1}\left(U_{noiseless} + n\right) = X + X_{noise}, \qquad X_{noise} = \sum_{n=1}^{R} \frac{1}{\sqrt{\sigma_n}}\, v_n u_n^\dagger\, n \tag{8}$$

For small singular values, where √σ_n → 0, X_noise may contain image artifacts. In the truncated MPP approach, a threshold value √σ_th is used to choose the singular values, and the initial solution using MPP is:

$$X_0 = W_{PINV}^{-1} U_{sc} = \sum_{n=1}^{R'} \frac{1}{\sqrt{\sigma_n}}\, v_n u_n^\dagger\, U_{sc}, \qquad \sqrt{\sigma_1} \ge \sqrt{\sigma_2} \ge \cdots \ge \sqrt{\sigma_{R'}} \ge \sqrt{\sigma_{th}} \tag{9}$$

√σ_th may be chosen as 10% of √σ_1 as a cut-off value, where √σ_1 is the largest singular value of the weight matrix W. From the truncated pseudoinverse, a preliminary estimate of the unknown optical properties can be obtained. A projection operation is used to suppress pixels outside the region of interest, identified by a sphere B obtained from measurements of co-registered ultrasound images. This projected absorption map is used as the initial solution for the Newton or conjugate gradient search method.

Newton Optimization Method

The Newton method uses the second derivative of the objective function (the Hessian) to calculate a second-order search direction, resulting in a quadratic convergence rate.
We reformulate the penalized least squares problem as a quadratic optimization problem:

$$f(X) = \tfrac{1}{2}X^T Q X - b^T X - c, \qquad Q = 2W^T W + \lambda I, \qquad b = 2W^T U_{sc} + \lambda X_0 \tag{10}$$

The Hessian is positive definite when λ > 0. The solution is iteratively updated using the following equations:

$$X_{k+1} = X_k - \left(\nabla^2 f(X_k)\right)^{-1}\nabla f(X_k), \qquad \nabla f(X) = QX - b, \qquad \nabla^2 f(X) = Q \tag{11}$$

The iteration process is terminated when the change in the objective function between successive iterations becomes smaller than a preset tolerance level. In various embodiments, the regularization parameter is chosen based on the tumor size measured from the guiding images, such as ultrasound images. For example, the regularization parameter λ is chosen as λ = p√σ1, where √σ1 is the largest singular value of the weight matrix and p is proportional to the tumor size. The choice of the regularization parameter λ affects the reconstruction. If λ is too small, the penalty or regularization may not have any effect on the reconstruction. On the other hand, a large λ overwhelms the data fidelity term, and the solution may not converge near the true minimum of the unregularized objective function. In our approach, λ is chosen as λ = p√σ1, which decreases with the background reduced scattering coefficient μs0′ and increases with the background absorption coefficient μa0. Thus, for higher background μa0, λ regularizes more to improve the conditioning of the Q matrix (see Eq. (10)). Additionally, because of the large difference between the first and the remaining singular values, λ/√σn increases with n, and therefore λ regularizes more for smaller singular values and further improves the conditioning of the Q matrix. In various embodiments, the regularization parameter is determined by trial and error using phantom data to ensure convergence, reconstruction accuracy, and low image artifacts.

Conjugate Gradient Optimization Method

The conjugate gradient (CG) method is an iterative technique for solving symmetric positive definite linear systems of equations. This method was investigated both with and without regularization. For the unregularized optimization formulation given in Eq. (3), the system matrix is only positive semi-definite, because W possesses singular values that take on zero values. From phantom experiments using absorbers with known optical properties, three iterations were determined as a stopping criterion, because the reconstructed absorption coefficients after three iterations were close to the known values. For the regularized least squares formulation, Q = 2WᵀW + (λ/2)I is, by construction, symmetric and positive semi-definite. For a choice of λ > 0, Q is a positive definite matrix, since the lower bound for the singular values of Q is λ/2. Again, λ = p√σ1 is chosen, with p proportional to the target or tumor size measured from the ultrasound.

Comparison of Five Reconstruction Methods

Five reconstruction methods were compared using phantom and clinical data: regularized Newton optimization with a zero initial estimate of the target optical properties (Newton zero initial); regularized CG optimization with a zero initial estimate (CG zero initial); regularized Newton optimization with PINV as the initial estimate (Newton PINV initial); regularized CG optimization with PINV as the initial estimate (CG PINV initial); and unregularized CG with a zero initial estimate.

Data Acquisition of the US-Guided DOT System

In the exemplary embodiment, a US-guided DOT system was used, where an ultrasound device was the imaging device used to acquire guiding data.
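As a concrete illustration of Eqs. (10) and (11) above: because the objective is quadratic, a single Newton step from any starting point lands on the minimizer Q⁻¹b. The sketch below, with an assumed helper name and numpy's dense solver, is an illustrative sketch rather than the system's implementation.

```python
import numpy as np

def newton_regularized(W, U_sc, X0, lam):
    """One Newton step for the regularized objective of Eq. (4),
    using Q = 2 W^T W + lam * I and b = 2 W^T U_sc + lam * X0 from
    Eq. (10). Since f is quadratic, this single solve reaches the
    minimizer (illustrative sketch)."""
    n = W.shape[1]
    Q = 2.0 * (W.T @ W) + lam * np.eye(n)
    b = 2.0 * (W.T @ U_sc) + lam * X0
    return np.linalg.solve(Q, b)

def choose_lambda(W, p):
    """lam = p * (largest singular value of W), with p proportional
    to the US-measured tumor size, as described above."""
    return p * np.linalg.svd(W, compute_uv=False)[0]
```

In practice, X0 comes from the truncated pseudoinverse described above; with λ > 0 the Hessian Q is positive definite, so the solve is well-posed. The hardware used to acquire the measurements is described next.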
The system comprised a commercial US system and a DOT device. Four laser diodes with wavelengths of 740, 780, 808, and 830 nm were used to deliver light, modulated at a 140 MHz carrier frequency, to the tissue. Each laser diode was multiplexed to 9 positions on a hand-held probe, and 14 photomultiplier tube (PMT) detectors were used to detect the reflected light via light guides. A customized A/D board sampled the detected signals from each patient at both the lesion and the contralateral normal breast. The sampled signals were then stored in a computing device, such as a personal computer. Multiple datasets acquired from the contralateral normal breast were used to compute a composite or compound reference. The composite reference was treated as a homogeneous reference, and the fitted optical absorption (μa0) and reduced scattering (μs0′) coefficients were used to calculate W for each wavelength. Lesion absorption maps at the 4 wavelengths were reconstructed, and the total hemoglobin map was calculated from the absorption maps using the absorption coefficients for these four wavelengths.

Example 3. Phantom Experiments

Phantom experiments were performed with solid ball phantoms of different sizes and different optical contrasts to emulate tumors. These phantoms were submerged in an intralipid solution with μa in the range of 0.02-0.03 cm−1 and μs′ in the range of 7 to 8 cm−1, which emulated homogeneous background tissue. Three solid balls having diameters of 1, 2, and 3 cm, respectively, were submerged at depths from 1.5 cm to 3 cm in 0.5 cm increments. The calibrated high and low contrast phantoms had μa = 0.23 cm−1 and μa = 0.11 cm−1, mimicking malignant and benign lesions, respectively. An absorption map for each phantom location, size, and contrast was reconstructed, and the maximum μa was obtained for quantitative comparison. Average reconstructed maximum μa values from all phantoms using the five reconstruction methods are provided in Table 6(a) and shown inFIG.11.FIG.11is a box plot of phantom data obtained from phantoms having diameters from 1 to 3 cm of high contrast (dashed line) and low contrast (solid line) located at different depths (1.5-3.5 cm center depth), using zero and PINV as the initial guess with Newton as the optimization method (first and second columns), zero and PINV as the initial guess with CG as the optimization method (third and fourth columns), and unregularized CG (last column). Errors of both high and low contrast phantoms reconstructed using these five different methods are given in Table 6(b) below. As seen from Tables 6(a) and 6(b), Newton and CG with PINV as an initial image accurately estimated absorption coefficients, while Newton and CG with a zero initial produced larger errors. Unregularized CG provided better estimates for high contrast phantoms but resulted in under-reconstruction for low contrast phantoms.

TABLE 6(a). Maximum reconstructed absorption (cm−1) (mean ± standard deviation) for phantoms

                                  Newton with    Newton with    CG with        CG with        CG
                                  zero ini       PINV ini       zero ini       PINV ini       unconstrained
Reconstructed μa (low contrast)   0.097 ± 0.018  0.099 ± 0.016  0.093 ± 0.012  0.100 ± 0.017  0.107 ± 0.069
Reconstructed μa (high contrast)  0.191 ± 0.042  0.229 ± 0.021  0.191 ± 0.041  0.228 ± 0.021  0.222 ± 0.027

TABLE 6(b). Errors (mean ± standard deviation) in reconstructed absorption coefficient using different methods

                       Newton with   Newton with  CG with       CG with       CG
                       zero ini      PINV ini     zero ini      PINV ini      unconstrained
Error (low contrast)   11.6 ± 13.8%  0.04 ± 9.1%  11.8 ± 13.2%  0.1 ± 9.0%    3.5 ± 9.9%
Error (high contrast)  12.0 ± 16.1%  9.6 ± 14.6%  15.6 ± 10.9%  8.8 ± 15.8%   26.5 ± 8.5%
Example 4. Patient Data

Clinical data of 20 patients were obtained using the system and method disclosed herein. Based on biopsy results, 10 patients had benign lesions and 10 patients had cancers. An example of a cancer case is shown in FIGS. 12A, 12B, 12C, 12D, 12E, 12F, and 12G. FIGS. 12B, 12C, 12D, 12E, 12F, and 12G depict the reconstructed absorption maps at the 780 nm wavelength. FIG. 12A shows a co-registered US image with the suspicious lesion marked by a circle. FIG. 12B shows the absorption map reconstructed by PINV initial, where maximum μa = 0.194 cm−1. FIG. 12C shows the absorption map reconstructed by Newton with zero initial, where maximum μa = 0.179 cm−1. FIG. 12D shows the absorption map reconstructed by Newton with PINV initial, where maximum μa = 0.268 cm−1. FIG. 12E shows the absorption map reconstructed by regularized CG with zero initial, where maximum μa = 0.179 cm−1. FIG. 12F shows the absorption map reconstructed by regularized CG with PINV initial, where maximum μa = 0.267 cm−1. FIG. 12G shows the absorption map reconstructed by unregularized CG, where maximum μa = 0.216 cm−1. Each set of maps in each of FIGS. 12B, 12C, 12D, 12E, 12F, and 12G comprises 7 sub-images, marked as slices 1 to 7, and each sub-image shows an x-y plane distribution of absorption coefficients at depths from 0.5 cm to 3.5 cm from the skin surface. The spacing between the sub-images in terms of depth is 0.5 cm. The color bar indicates absorption coefficients in cm−1. We chose the μa display range from 0 to 0.2 cm−1 because most of the reconstructed absorption values fall within this range. The dimension of each sub-image is 8 cm × 8 cm, with scales marked from −4 cm to 4 cm in both the X and Y axes. Absorption maps reconstructed with the five methods showed similar lesion position and shape, but the method of Newton with PINV initial (FIG. 12D) yielded the highest reconstructed μa = 0.268 cm−1. An example of a benign lesion is shown in FIGS. 13A, 13B, 13C, 13D, 13E, 13F, and 13G. FIGS. 13B, 13C, 13D, 13E, 13F, and 13G show reconstructed absorption maps at the 780 nm wavelength for a benign case. FIG. 13A is a co-registered US image with the suspicious lesion marked by a circle. FIG. 13B shows the PINV initial estimate images, where maximum μa = 0.076 cm−1. FIG. 13C is reconstructed by Newton with zero initial, where maximum μa = 0.078 cm−1. FIG. 13D is reconstructed by Newton with PINV initial, where maximum μa = 0.087 cm−1. FIG. 13E is reconstructed by regularized CG with zero initial, where maximum μa = 0.077 cm−1. FIG. 13F is reconstructed by regularized CG with PINV initial, where maximum μa = 0.088 cm−1. FIG. 13G is reconstructed by unregularized CG, where maximum μa = 0.092 cm−1. The absorption maps in FIGS. 13B, 13C, 13D, 13E, 13F, and 13G have the same scale as in FIGS. 12B, 12C, 12D, 12E, 12F, and 12G. A box plot of the maximum tHb of the 20 clinical cases (10 patients having a malignant tumor, dashed line, and 10 patients having a benign tumor, solid line) reconstructed with the five different methods is shown in FIG. 14. The first column and the second column summarize tHb concentrations reconstructed by the Newton method, with the first column using zero as the initial estimate and the second column using PINV as the initial estimate. The third and fourth columns summarize tHb reconstructed by CG, with the third column using zero as the initial estimate and the fourth column using PINV as the initial estimate. The last column summarizes tHb reconstructed by unregularized CG. A two-sample t-test was performed between the malignant and benign groups for each method.
The Newton or CG method with PINV as an initial estimate provided the highest statistical significance. Additionally, the malignant-to-benign contrast ratios were 1.61, 2.11, 1.61, 2.07, and 1.93 for Newton with zero initial, Newton with PINV initial, CG with zero initial, CG with PINV initial, and unregularized CG, respectively. The average and standard deviation of the maximum tHb concentration obtained from each method are summarized in Table 7. For benign cases, the reconstructed tHb was comparable across the five methods; however, for malignant cases the tHb contrast was much higher when Newton and CG were used with the PINV initial estimate.

TABLE 7
tHb (μmol/L) for clinical cases using the different methods

|                             | Newton with zero ini | Newton with PINV ini | CG with zero ini | CG with PINV ini | CG unregularized |
|-----------------------------|----------------------|----------------------|------------------|------------------|------------------|
| Total Hb conc. (Benign)     | 47.5 ± 14.2          | 49.4 ± 10.6          | 47.5 ± 14.3      | 50.4 ± 9.8       | 48.5 ± 16.3      |
| Total Hb conc. (Malignant)  | 76.4 ± 23.9          | 104.2 ± 23.6         | 76.5 ± 23.8      | 104.2 ± 23.6     | 93.5 ± 26.9      |

Example 5: Convergence Speed of Reconstruction Methods

When an iterative image reconstruction algorithm converges quickly, reconstructed images can be provided for on-site diagnosis by physicians. To compare the convergence of the different reconstruction methods, the least square error (LSE), ∥Usc − WX∥², for each method was normalized to the power of the scattered field, ∥Usc∥², which served as the initial objective function for the unregularized CG method. Shown in FIG. 15 are the means and standard deviations of the normalized LSE for the five methods using phantom data. The truncated pseudoinverse provided a good initial guess, which reduced the initial LSE, ∥Usc − WX∥², to 4% of the power of the scattered field, ∥Usc∥². Newton and CG with PINV as an initial estimate converged in 1 and 2 iterations, respectively. Newton and CG with zero initial converged in 1 and 3 iterations, respectively, and the residual LSE of CG was slightly higher than that with PINV as an initial estimate. Unregularized CG converged in 3 iterations.

Example 6: Target Centroid Error

To compare the different reconstruction methods, the target centroid error, i.e., the absolute difference between the center of a phantom target measured by co-registered US and the centroid of the corresponding reconstructed target absorption map, was calculated as a measure of reconstruction quality. Phantom data of both low and high contrast phantom targets of 1 cm diameter located at different depths and measured at 780 nm were used to estimate the centroid error, and the results are shown in Table 8. The MATLAB function 'regionprops' was used to estimate the centroid of the target absorption map, and the difference between the estimated centroid and the measured target center from the corresponding co-registered US was calculated. As seen from Table 8, the target centroid error, which was less than the voxel size of 0.25 cm, did not depend on the reconstruction method. Thus, all reconstruction methods provided essentially the same target centroid.

TABLE 8
Object centroid error (Δx, Δy) (mean ± standard deviation) for phantom data

|                            | Newton with zero ini | Newton with PINV ini | CG with zero ini | CG with PINV ini | CG unregularized |
|----------------------------|----------------------|----------------------|------------------|------------------|------------------|
| Object centroid error (Δx) | 0.157 ± 0.093        | 0.157 ± 0.093        | 0.157 ± 0.093    | 0.157 ± 0.093    | 0.163 ± 0.091    |
| Object centroid error (Δy) | 0.225 ± 0.101        | 0.225 ± 0.101        | 0.225 ± 0.101    | 0.225 ± 0.101    | 0.190 ± 0.069    |
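The centroid-error metric of Example 6 can be reproduced with a few lines of array code. The source names only MATLAB's regionprops; the sketch below is a hypothetical NumPy equivalent, and the half-maximum threshold used to segment the target is our assumption.

```python
# Hypothetical NumPy analogue of the centroid-error computation of Example 6.
import numpy as np

def centroid_error(absorption_map, voxel_size_cm, us_center_cm):
    """absorption_map: 2D slice; us_center_cm: (x, y) target center from US."""
    mask = absorption_map >= 0.5 * absorption_map.max()  # segment the target
    ys, xs = np.nonzero(mask)
    w = absorption_map[ys, xs]                       # intensity weights
    cx = (xs * w).sum() / w.sum() * voxel_size_cm    # centroid x in cm
    cy = (ys * w).sum() / w.sum() * voxel_size_cm    # centroid y in cm
    return abs(cx - us_center_cm[0]), abs(cy - us_center_cm[1])
```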
Example 7: Automated Method of Outlier Removal and Data Selection

Fast data acquisition of the US-guided DOT system allows the collection of several data sets during one imaging session. The data contain multiple data sets from both the lesion and the contralateral reference breast. An automated method for outlier removal and data selection is introduced to eliminate the effect of inaccurate measurements. A block diagram of the procedure is provided in FIG. 16. As illustrated in the flowchart of FIG. 16, imaging datasets of a breast containing a lesion and of the contralateral breast of a patient are obtained at 1600. Multiple sets of reference measurements produced using the contralateral breast measurements are used to form a single high quality data set at 1602. An MNR outlier removal procedure is performed at 1604, as described above, to eliminate highly inaccurate measurements (outliers) with a criterion based on the distribution of the data collected at each source-detector pair. A piecewise-linear fitting is used at 1606 to reject the source-detector pair measurements obtained from saturated PMTs. An iterative fitting of the residue of the remaining data is calculated at 1608 to further eliminate inaccurate measurements based on the linearity of the fitted results of the reference measurements of all source-detector pairs. A least-square error method is used at 1610 to form the most accurate reference data set from the remaining measurements. Lesion breast imaging measurements are produced from the imaging dataset of the lesion breast of the patient at 1612 and combined with the compound reference as described above to produce a perturbation dataset, which is subjected to perturbation filtering at 1614. The perturbation filtering in various embodiments is based on analysis obtained from a semi-infinite analytical solution of light propagation in tissue as described above. The perturbation filtering at 1614 forms accurate perturbation sets that are more robust to outliers and inaccurate measurements. The perturbation set produced at 1614 is used as an initial image estimate to perform image reconstruction at 1616 to produce an optimized reconstructed image based on the measurements obtained by the DOT device. Additional details of selected steps illustrated in FIG. 16 are described below.

Outlier Rejection in Reference Measurements

Each data set contains measurements from s sources and d detectors, with a total number of m = s × d measurements. The system used in this study provides 90 source-detector measurements per data set. A total of k data sets, containing a total of k × s × d measurements collected at the reference site, are used for selecting the best reference data set. In general, k ranges from about 12 to about 18. Since a frequency domain DOT system is used for data acquisition, each data set consists of amplitude and phase data for each measurement. The maximum normed residual (MNR) test is a widely used statistical test to address the problem of outlier rejection and has shown outstanding performance for both linear and nonlinear data. The MNR test is based on the largest absolute deviation from the sample mean in units of the sample standard deviation. In various embodiments, the MNR test is applied to remove outliers for each source-detector pair within the k data sets. In various aspects, each outlier measurement is expunged from each source-detector pair data set based on a criterion described below. An upper critical value of the t-distribution with k−2 degrees of freedom is calculated, and an outlier threshold GThreshold(i) is obtained based on Eq. (12).
GThreshold(i) = ((k−1)/√k) · √( t_{α/(2k),k−2}²(i) / (k − 2 + t_{α/(2k),k−2}²(i)) )   Eq. (12)

where GThreshold(i) denotes the outlier threshold for the i-th source-detector pair, t_{α/(2k),k−2}(i) denotes the upper critical value of the t-distribution with k−2 degrees of freedom, and α represents the level of significance, which determines the strictness of the outlier removal procedure. By changing the value of α within the range from 0 to 1, the total number of outliers and the significance of the outliers removed from the database may be modulated. To find the optimal value of α, the outlier removal process is performed for different significance levels ranging from 0.01 to 0.5. In one embodiment, the optimal value of α is set to 0.05 based on visual examination of the removed outliers. This optimal value is selected such that the test removes only the significant outlier data. A G value is determined as the absolute deviation of a data point from the mean value of the measurements, normalized by the standard deviation. Any data point corresponding to the maximum G value whose absolute deviation is higher than the threshold is classified as an outlier and removed from the reference data set. In various aspects, the MNR test is iterated until no further outliers are classified in the reference dataset. The MNR test is performed for amplitude and phase measurements separately. If any data point in either the amplitude or phase measurements is classified as an outlier, both the amplitude and phase measurements are removed from the reference data sets.

Saturation and Noise Data Rejection in Reference Datasets

In addition to outliers in the reference measurements, detector saturation is another common problem in DOT that can occur as a result of the higher light intensity detected at shorter source-detector distances. Each PMT may saturate at a different light intensity level. A semi-infinite analytic solution predicts that the logarithm of the detected amplitude for each source-detector pair, multiplied by the squared distance of that specific source-detector pair and referred to as the logarithmic amplitude, should decrease linearly with the source-detector distance for homogeneous reference measurements. The phase measurement should increase linearly with the source-detector distance. A piecewise-linear fitting method is implemented for the amplitude measurements of all source-detector pairs remaining in the reference data after outlier rejection. In general, three sections covering the shorter source-detector distances, the mid-range, and the longer distance range were used. If a measured logarithmic amplitude at a shorter source-detector distance does not follow the linear profile plotted as a function of the source-detector separation in the mid and longer ranges, it can be assumed that the PMT is saturated at this detector distance. The measurements that fit the linear profile are kept for further processing. Additionally, the phase data corresponding to the saturated amplitude data are not reliable and are removed from both the reference and lesion data sets. Besides the saturated measurements, which often occur for source-detector pairs with shorter separation distances, there exist measurements of source-detector pairs with longer distances which are dominated by the noise of the system and are not reliable. To improve the reference data set, any measurements of longer source-detector pairs with amplitudes below the electronic noise of the system are classified as noisy measurements and expunged from the reference data. Since the corresponding phase data with amplitudes at the noise level are not reliable, both the amplitude and phase data are removed from the reference data.
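A minimal sketch of the iterated MNR (Grubbs-type) test of Eq. (12), applied to the k repeated measurements of a single source-detector pair, might look as follows; the function name and the use of scipy.stats for the t-distribution critical value are our choices, not from the source.

```python
# Iterated maximum-normed-residual (Grubbs) rejection per Eq. (12).
import numpy as np
from scipy.stats import t as t_dist

def mnr_outlier_mask(x, alpha=0.05):
    """x: (k,) measurements of one source-detector pair. Returns a boolean
    mask of values kept after iterated MNR rejection."""
    keep = np.ones(len(x), dtype=bool)
    while keep.sum() > 2:
        v = x[keep]
        k = len(v)
        g = np.abs(v - v.mean()) / v.std(ddof=1)          # normed residuals
        t_crit = t_dist.ppf(1 - alpha / (2 * k), k - 2)   # upper critical value
        g_thr = (k - 1) / np.sqrt(k) * np.sqrt(t_crit**2 / (k - 2 + t_crit**2))
        i = np.argmax(g)
        if g[i] <= g_thr:
            break                                         # no outlier remains
        keep[np.flatnonzero(keep)[i]] = False             # drop worst point
    return keep
```

Per the procedure above, a point flagged in either the amplitude or the phase channel would be removed from both.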
Iterative Reweighted Least Square Fitting

The MNR test operates on the measurements of each source-detector pair separately and removes the outliers at each source-detector pair. All reference measurements remaining after MNR and saturation data removal are fitted using an iterative reweighted least square (IRLS) method to obtain an accurate linear fit with the minimum fitting residue for both the log-scaled amplitude and phase measurements as functions of the source-detector separation. Without being limited to any particular theory, this IRLS method has enhanced the quality of results obtained in various p-norm minimization problems ranging from compressive sensing to baseline correction in industrial settings. As shown in Eq. (13), IRLS iteratively minimizes the bi-square weighted residual in the least-square sense:

β_{n+1} = argmin_β Σ_{i=1..m} w_{βn}(i) |y(i) − f(i, β)|²   Eq. (13)

where i is the index of the measurement, w is the bi-square weight function evaluated at the current estimate βn, y is the measurement value, β includes the slope and intercept of the line fitted to the data, and f(i, β) is the fitted measurement based on the current β. This method reduces the influence of large residuals on the fitted reference parameters and improves the fitted results. After the fitting for both the logarithmic amplitude and phase is completed, the distance of each amplitude and phase measurement from the corresponding value on the fitted line at the same source-detector distance is calculated. All measurements with an absolute residue higher than the threshold in either the amplitude or phase measurements are selected as non-accurate measurements and removed from the data set. In various embodiments, an absolute residue threshold of 0.5 is empirically selected for both amplitude and phase based on trial-and-error analysis using clinical data. Since the IRLS-based minimization is robust and less sensitive to noise bursts, this method further enhances the robustness of the data set selection.

Compound Reference Dataset

Even though accurately fitted lines for both the log-scaled amplitude and phase are obtained from the previous steps, there may still be more than one measurement for each remaining source-detector separation. In various embodiments, a single amplitude and phase measurement is selected for each source-detector pair to form a single robust reference data set. In various embodiments, a least square method is utilized to select the measurements with minimum distances from the center of the distribution of the remaining reference measurements for each source-detector pair. This selection process is performed separately on the remaining amplitude and phase data. Therefore, a final reference data set with high similarity to the fitted slope and intercept of the combination of all the reference data sets after outlier removal is produced. This reference data set consists of the selected amplitude and phase measurements for the remaining source-detector pairs. This robust set of reference measurements is referred to herein as the compound reference and is less sensitive to outliers, PMT saturation and noise.
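The IRLS fit of Eq. (13) above can be sketched as follows for one channel (log-scaled amplitude or phase versus source-detector distance). The Tukey bi-square weight with tuning constant c = 4.685 and the MAD-based robust scale are conventional robust-statistics defaults assumed here, not values stated in the source.

```python
# Minimal IRLS sketch for Eq. (13): robust line fit with bi-square weights.
import numpy as np

def irls_line_fit(dist, y, n_iter=20, c=4.685):
    A = np.column_stack([dist, np.ones_like(dist)])   # slope and intercept
    w = np.ones_like(y)
    beta = np.zeros(2)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        r = y - A @ beta                              # residuals
        s = np.median(np.abs(r)) / 0.6745 + 1e-12     # robust scale (MAD)
        u = r / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # bi-square weights
    return beta, y - A @ beta                         # fit and final residues
```

Measurements whose final absolute residue exceeds the empirically chosen 0.5 threshold would then be discarded from both the amplitude and phase data, as described above.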
To visualize the effect of the proposed process, the amplitude and phase profiles of one clinical case before and after preprocessing, as well as the final compound reference, are illustrated in FIG. 17. FIG. 17 shows the log-scaled amplitude and phase profiles of the reference data sets before (left column) and after preprocessing (center column), as well as the final compound reference (right column). Saturated source-detector pairs have been marked with an overlaid dashed rectangle in the amplitude part of the left column. The preprocessing includes outlier removal, saturation and noise rejection, and iterative reweighted fitting with removal of higher residues.

Perturbation Filtering

The procedures described above provide a robust set of reference measurements. For reconstruction of a lesion absorption map at each wavelength, the perturbation Usc is calculated by subtracting the compound reference from the lesion data as shown in Eq. (14), in which Al and Ar are the amplitudes, and φl and φr are the phases, of each source-detector pair obtained at the lesion and compound reference, respectively:

Usc(i) = [Al(i)·exp(jφl(i)) − Ar(i)·exp(jφr(i))] / [Ar(i)·exp(jφr(i))]
       = (Al(i)/Ar(i)·cos(φl(i) − φr(i)) − 1) + j·(Al(i)/Ar(i)·sin(φl(i) − φr(i)))   Eq. (14)

Without being limited to any particular theory, outliers may occur in the perturbation data due to 1) measurement errors caused by movements of the patient or the operator's hand, as well as bad coupling between the light guides and the breast; and/or 2) heterogeneity of the background tissue and the lesion. Lesion measurements are expected to be more heterogeneous than the reference measurements because the heterogeneity is caused partially by the lesion and partially by the background tissue heterogeneity. Therefore, the outlier removal procedures applied to the reference measurements, which include measurements of healthy tissue only, cannot be applied to the lesion measurement dataset. Instead, a filtering method is applied to the perturbation based on constraints imposed on the phase difference between the lesion data and the reference data given in Eq. (14). These conditions are determined based on predictions obtained from the semi-infinite analytical solution derived from the diffusion approximation. Simulations were performed using different background optical properties for both the reference and lesion breasts, as well as different optical properties for lesions of different sizes located at different depths. The simulations used the same probe geometry and the same number of sources and detectors as the experiments. Table 9 shows the range of the parameters used for the simulations. The results show that the maximum phase difference between the lesion and reference measurements of all source-detector pairs for most scenarios listed in Table 9 is in the range of a few degrees. For a large 4 cm lesion with an optical contrast 10 times higher than the background, the maximum phase difference is only 22 degrees. This implies that the maximum phase difference of the source-detector pairs cannot exceed 90 degrees even in extreme cases, and therefore the cosine term in Eq. (14) should always be positive. Consequently, the real part of the perturbation cannot be less than −1, assuming that the amplitude Al measured at the lesion is smaller than the amplitude Ar measured at the reference due to the generally higher absorption of the lesion. Furthermore, the mean and the standard deviation of the imaginary part of the perturbation are calculated. If the imaginary part of a data point is farther than three standard deviations from the mean, it is classified as an outlier. Any data point in the perturbation that does not meet both of these two criteria is classified as an outlier and rejected from the perturbation. This step removes outliers in the perturbation likely caused by measurement errors rather than by heterogeneity of the lesion data.
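Combining Eq. (14) with the two filtering criteria just described (real part not below −1; imaginary part within three standard deviations of its mean), a hypothetical per-source-detector-pair implementation might read as follows; all names are our choices.

```python
# Sketch of the normalized perturbation of Eq. (14) and the filter above.
import numpy as np

def perturbation(A_l, ph_l, A_r, ph_r):
    """Normalized perturbation U_sc per source-detector pair, Eq. (14)."""
    ratio = A_l / A_r
    dphi = ph_l - ph_r
    return ratio * np.cos(dphi) - 1 + 1j * ratio * np.sin(dphi)

def filter_perturbation(u_sc):
    im = u_sc.imag
    ok_real = u_sc.real >= -1.0                        # cosine-term constraint
    ok_imag = np.abs(im - im.mean()) <= 3 * im.std()   # 3-sigma criterion
    return u_sc[ok_real & ok_imag]                     # keep points meeting both
```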
TABLE 9
Range of parameters used for the analytical model and the obtained maximum phase delay

| Background μa (cm−1) | Background μ′s (cm−1) | Lesion Δμa (cm−1) | Lesion depth (cm) | Lesion radius (cm) |
|----------------------|-----------------------|-------------------|-------------------|--------------------|
| 0.02-0.08            | 5-10                  | 0.05-0.2          | 1.5-3.5           | 0.5-2              |

This normalized perturbation is used for reconstructing the absorption map at each wavelength. A total hemoglobin map may be calculated from the combined data of the absorption maps summarizing absorption at the four wavelengths. In various embodiments, a dual-zone mesh scheme is used for the inversion aspect of the image reconstruction. Using the dual-zone mesh scheme, the imaging volume is segmented into two regions consisting of the lesion and background regions. These two regions are identified by a separate analysis of the co-registered ultrasound images. The dual-zone mesh scheme reduces the total number of voxels with unknown optical properties by using a smaller, fine mesh size for the lesion region and a larger, coarse mesh size for the background region. Any suitable method may be utilized for iterative optimization of the inverse problem to reconstruct the absorption maps based on the perturbation dataset, including, but not limited to, a Newton method and a conjugate gradient method. In one aspect, a conjugate gradient method is utilized for iterative optimization of the inverse problem. Patient results are calculated from the selected data based on these automated outlier removal and data selection procedures.

Example 8: Iterative Imaging Reconstruction by Removing Measurement Outliers

In DOT image reconstruction, the scattered field, Usc, or perturbation, which is the normalized difference between the lesion breast and contralateral normal breast (reference) measurements, is used for mapping the lesion absorption at each wavelength. The total hemoglobin map is computed from the absorption maps at the four optical wavelengths. Tissue heterogeneity, bad coupling between tissue and probe, and motion of the patient or the operator's hand can contribute to outliers in the perturbation measurements. These outlier measurements can cause image artifacts. In various embodiments, a statistical method may be used to automatically remove outliers from the contralateral normal breast measurements based on the semi-infinite medium model. However, this method cannot be used for the perturbation measurements, because lesion measurements are expected to be more heterogeneous than the reference measurements. To separate measurement errors from lesion heterogeneity, additional information from multiple wavelength measurements is incorporated in the preprocessing of the perturbation prior to the optimized image reconstruction. FIG. 18 is a flow chart illustrating a method for data preprocessing and an iterative perturbation correction algorithm. In this approach, a Structural Similarity Index (SSIM) is used to quantitatively evaluate imaging quality. The SSIM measure is a function of image luminance, contrast, and structure. The SSIM between two images X and Y is defined as:

SSIM(X, Y) = [l(X, Y)]^α · [c(X, Y)]^β · [s(X, Y)]^γ   Eq. (15)
where l(X, Y), c(X, Y), and s(X, Y) are the luminance, contrast and structure similarity, respectively, and α > 0, β > 0, γ > 0 are three parameters used to adjust the relative importance of the luminance, contrast and structure similarity, respectively, in the similarity index. The luminance, contrast, and structure similarity of two images are computed from the means and standard deviations of the normalized images according to Eq. (16):

l(X, Y) = (2μXμY + C1)/(μX² + μY² + C1)
c(X, Y) = (2σXσY + C2)/(σX² + σY² + C2)
s(X, Y) = (σXY + C3)/(σXσY + C3)   Eq. (16)

where μX, μY, σX, σY and σXY are the mean of the pixel values of image X, the mean of the pixel values of image Y, the standard deviation of image X, the standard deviation of image Y, and the covariance of images X and Y, respectively. C1, C2, C3 are constants. For each wavelength, λi ∈ {740 nm, 780 nm, 808 nm, 830 nm}, the images at the other three wavelengths are used as references to compute SSIMs for three image pairs. The average of the resulting three SSIMs is the quantitative image quality index, SSIM(λi), used to evaluate the reconstructed image quality at wavelength λi, as given below:

SSIM(λi) = (1/(nwavelength − 1)) Σ_{j=1, j≠i}^{nwavelength} SSIM(imagei, imagej)   Eq. (17)

An iterative perturbation correction is performed based on SSIM(λi) for each wavelength. Data at the wavelength with the minimum SSIM(λi) are corrected first. The initial estimate is produced by forming a truncated pseudoinverse (PINV) from the perturbation as described above. If SSIM(λi) is lower than a preset threshold (e.g., 0.9), the perturbation from wavelength λi is corrected based on the original perturbation and the projected perturbation. The reconstructed image, δμa′, for λi is projected by multiplying by the weight matrix, W, to obtain the projected data according to Eq. (18):

[Uprojected] = [W][δμa′]   Eq. (18)

Based on the distance between the original perturbation data, Usc, and the projected data, Uprojected, a projection error, Eproj = ∥Uprojected − Usc∥², is calculated. The data point with the maximum projection error is removed from Usc. The modified perturbation is again used to reconstruct the absorption map for wavelength λi using regularized CG. SSIM(λi) is recomputed and compared with the threshold. This process is repeated until the lowest SSIM(λi) is greater than or equal to the threshold. This iterative correction procedure is performed for each wavelength until SSIM(λi) for all four wavelengths is above the threshold. FIGS. 19A, 19B, and 19C show an example of the proposed iterative perturbation correction algorithm applied to an image of a breast lesion. FIG. 19A shows the ultrasound image with the lesion marked by a white ellipse. FIG. 19B shows the reconstructed absorption maps at the four wavelengths. Each wavelength absorption map shows only 1 layer, at a depth of 1.5 cm. The SSIMs for the four wavelengths are 0.86, 0.84, 0.85, 0.81, respectively, and the reconstructed maximum absorption coefficients are 0.251 cm−1, 0.229 cm−1, 0.244 cm−1 and 0.318 cm−1, respectively. Image artifacts are present in the 808 and 830 nm absorption maps, as indicated by superimposed black arrows. FIG. 19C shows the reconstructed absorption maps after the perturbation correction as described above. The mean SSIMs for the four wavelengths have improved to 0.93, 0.94, 0.93, 0.92, respectively, while the reconstructed maximum absorption coefficients became more consistent at 0.251 cm−1, 0.229 cm−1, 0.266 cm−1 and 0.262 cm−1, respectively.
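A condensed sketch of the SSIM-guided correction loop (Eqs. (15)-(18)) is given below. Here reconstruct() stands in for the regularized-CG inversion and is assumed to return a flat absorption vector; the scikit-image SSIM implementation, the helper names, and the 0.9 threshold default are our assumptions, not the authors' code.

```python
# Sketch of the iterative perturbation correction driven by SSIM(lambda_i).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def correct_wavelength(W, u_sc, images, i, reconstruct, shape, thresh=0.9):
    """Drop worst perturbation points for wavelength i until its mean SSIM
    against the other wavelength images reaches the threshold."""
    keep = np.ones(len(u_sc), dtype=bool)
    while True:
        x = reconstruct(W[keep], u_sc[keep])          # regularized CG solve
        images[i] = x.reshape(shape)
        others = [j for j in range(len(images)) if j != i]
        q = np.mean([ssim(images[i], images[j], data_range=images[j].ptp())
                     for j in others])                # SSIM(lambda_i), Eq. (17)
        if q >= thresh:
            return u_sc[keep]
        err = np.abs(W @ x - u_sc) ** 2               # projection error, Eq. (18)
        err[~keep] = -np.inf                          # ignore removed points
        keep[np.argmax(err)] = False                  # remove worst data point
```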
FIGS. 20A, 20B, and 20C show an example of the proposed iterative perturbation correction algorithm applied to an image of a normal breast. FIG. 20A shows the ultrasound image with the lesion marked by an overlaid white ellipse. FIG. 20B shows the reconstructed absorption maps for the four wavelengths. Each wavelength absorption map has one 2D layer at a depth of 1 cm. The mean SSIMs for the four wavelengths 740, 780, 808, and 830 nm are 0.87, 0.91, 0.87, 0.82, respectively, and the reconstructed absorption coefficients are 0.246 cm−1, 0.207 cm−1, 0.133 cm−1 and 0.332 cm−1, respectively. The image SSIM indicates that there is an image artifact at the 830 nm wavelength, which is confirmed by visual inspection. FIG. 20C shows the reconstructed absorption maps after perturbation correction. The mean SSIMs for the four wavelengths change to 0.95, 0.97, 0.97, 0.96, respectively, while the absorption coefficients change to 0.158 cm−1, 0.147 cm−1, 0.135 cm−1 and 0.148 cm−1, respectively. This perturbation correction procedure is applicable to the removal of outliers caused by tissue heterogeneity and by bad coupling between the tissue and the source and detection fibers. In operation, a computer executes computer-executable instructions embodied in one or more computer-executable components stored on one or more computer-readable media to implement aspects of the invention described and/or illustrated herein. The order of execution or performance of the operations in embodiments of the invention illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments of the invention may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the invention. When introducing elements of aspects of the invention or the embodiments thereof, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Although described in connection with an exemplary computing system environment, embodiments of the invention are operational with numerous other general purpose or special purpose computing system environments or configurations. The computing system environment is not intended to suggest any limitation as to the scope of use or functionality of any aspect of the invention. Embodiments of the invention may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. The computer-executable instructions may be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Aspects of the invention may be implemented with any number and organization of such components or modules. For example, aspects of the invention are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein.
Other embodiments of the invention may include different computer-executable instructions or components having more or less functionality than illustrated and described herein. Aspects of the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices. This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. | 93,794 |
11857290 | FIG. 1 shows an example of a device 1 for endoscopic optoacoustic imaging comprising an imaging unit 2, a position stabilizing structure 3 and a processing unit 4. In the following, the position stabilizing structure 3 is also referred to as "position stabilizing unit". The imaging unit 2 is configured to be inserted into an object 5, for example the gastrointestinal (GI) tract of a human, via a natural orifice 5a thereof, and comprises an irradiation unit 6 configured to irradiate a region of interest 7, for example the walls of a lumen inside the object 5, with electromagnetic radiation, in particular with light of a wavelength in the visible or infrared spectrum. Acoustic waves generated in the region of interest 7 in response to the irradiation with the electromagnetic radiation are detected by a detection unit 8, whereby corresponding detection signals are generated. The imaging unit 2 is connected to the processing unit 4 via a connecting element 9, for example a catheter. The processing unit 4 is configured to generate an optoacoustic image of the region of interest 7 based on the detection signals. Preferably, the processing unit is configured to control the irradiation unit 6 and hence the irradiation of the region of interest 7 with the electromagnetic radiation. Preferably, the connecting element 9 comprises a first channel 23a, e.g. electrical wires, configured to guide the detection signals from the imaging unit 2 provided at a distal end of the connecting element 9 to the processing unit 4 provided at a proximal end of the connecting element 9. Alternatively or additionally, the first channel 23a may be configured to guide electrical signals from the processing unit 4 to the irradiation unit 6 to control and/or to provide electrical power to the irradiation unit 6 for irradiating the region of interest 7 with electromagnetic radiation. The position stabilizing structure 3 is arranged at the imaging unit 2, in particular at the distal end of the connecting element 9, and is preferably oval-shaped and/or elastically deformable to allow for an easy insertion through the orifice 5a of the object 5. In particular, the imaging unit 2 is disposed in the interior 3b, for example at the center, of the position stabilizing structure 3 such that the imaging unit 2 is surrounded by the position stabilizing structure 3. Because the imaging unit 2 is provided inside the surrounding position stabilizing structure 3, it can be—irrespective of its shape—easily brought to the region of interest 7 inside the object 5. An outer face 3a of the position stabilizing unit 3 is configured to come into contact with the object 5, in particular at the region of interest 7, for example with walls of a lumen of the object 5, thereby stabilizing the imaging unit 2 in a position and/or orientation with respect to the region of interest 7. Preferably, the size and/or shape of the position stabilizing unit 3 and/or the material and/or texture of the outer face 3a is or are configured such that the position stabilizing unit 3 is fixed and/or locked at a desired position relative to the region of interest 7 due to enhanced friction forces between the outer face 3a of the position stabilizing unit 3 and inner structures of the object 5 and/or due to form fit, i.e. positive locking, between the position stabilizing unit 3 and inner structures of the object 5. Preferably, the position stabilizing structure 3 is configured to be filled with a coupling medium for acoustically coupling at least one ultrasound transducer of the detection unit 8 with the region of interest 7.
Preferably, the connecting element 9 comprises a second channel 23b configured to guide the coupling medium from outside of the object 5 into the interior 3b of the position stabilizing unit 3. Preferably, the position stabilizing structure 3 comprises an elastic membrane configured to adapt its shape and/or size to the object 5 at the region of interest 7, in particular the walls of a lumen of the object 5. For example, in an empty state, when no or only small amounts of coupling medium are present in the interior 3b, the position stabilizing structure 3 can be inserted into the object 5 and moved through the object 5, in particular through the GI tract, with low resistance. As soon as the region of interest 7 is reached, coupling medium can be filled into the position stabilizing structure 3 via the connecting element 9 such that the outer face 3a, in particular the membrane, comes into contact with the object 5 at the region of interest 7, thereby fixing the imaging unit 2 in a position and/or orientation relative to the region of interest 7. By this means, optoacoustic imaging of the region of interest 7 can be performed by the imaging unit 2 without producing artifacts due to unintentional movement of the imaging unit 2 relative to the region of interest 7. In particular, by locking the position stabilizing structure 3 in the object 5 at the region of interest 7, a controlled alignment of the irradiation unit 6 and/or the detection unit 8, in particular with respect to the region of interest 7, is achieved. FIG. 2 shows a first example of an imaging unit 2 disposed inside a position stabilizing structure 3. In the present example, the imaging unit 2 is coupled to the connecting element 9, which connects the imaging unit 2, in particular the detection unit 8 comprising at least one ultrasound transducer exhibiting a sensitive surface 8c being sensitive to ultrasound waves and the irradiation unit 6, with the processing unit 4 and the radiation source 10, respectively. The irradiation unit 6 and the detection unit 8 together are also referred to as the optoacoustic imaging head. The position stabilizing structure 3 is located at a distal end of the connecting unit 9 and is configured to align and/or center a longitudinal axis of the optoacoustic imaging head inside an object at a region of interest, in particular in a lumen of the object. While, in the present example, the position stabilizing structure 3 is substantially resistant to deformation, the connecting unit 9 is designed as a flexible tube, which is preferably configured to minimize stretching. Alternatively or additionally, the connecting unit 9 is designed to be resistant to rotational deformation. For example, the connecting unit 9 is fabricated from dedicated plastics or a stainless steel flat wire coil or wire weave, coated on the outside with polyethylene, polyimide or Pebax® elastomers to assure smoothness and tear resistance. The position stabilizing structure 3 typically has a larger diameter than the connecting element 9, such that luminal organs will constrict around it and thus stabilize and/or center the imaging unit 2 inside the lumen. For example, a diameter of the position stabilizing unit 3 is at most 15 mm, and a length of the position stabilizing unit 3 is at most 30 mm. When inside the luminal organ, the position stabilizing structure 3 will orient itself with its longitudinal axis 24 parallel to the axis of the lumen. Preferably, the diameter of the position stabilizing unit 3 is adapted to the imaged lumen. Thus, for imaging different lumens, different position stabilizing structures 3 may be used.
Typically, the position stabilizing structure 3 is rotationally symmetric with respect to its longitudinal axis 24. For example, the shape of the position stabilizing unit 3 resembles the form of a pill, facilitating swallowing of the position stabilizing unit 3. To ease the swallowing procedure, the position stabilizing structure 3 can be dipped in a fluid, such as water or saline, or coated on the outer face 3a with a coating lowering the friction coefficient. The position stabilizing structure 3 preferably comprises a rigid or elastic shell with an outer face 3a that is at least optically and/or acoustically transparent and forms a cavity with an interior 3b. Thus, the interior 3b can be filled with a coupling medium, e.g. water, heavy water or mineral oil. The position stabilizing structure 3 can comprise a proximal end 3c, in particular a base, with an imaging window portion 3c′, and a distal end 3d, in particular a cap for sealing the position stabilizing structure 3. Sealing can be facilitated, for example, by permanent epoxying or by using a snap connection with an O-ring at the distal end 3d. Alternatively, a threaded interface between the proximal end 3c and the distal end 3d is provided. During assembly, the capsule can be filled with coupling medium, for example by submersion of the position stabilizing structure 3 in a reservoir filled with the coupling medium and attaching the distal end 3d inside the reservoir. The position stabilizing structure 3 can be made of rigid or soft biomedical grade material or a combination of both. For example, the proximal end 3c and the distal end 3d can be fabricated of PMMA, polycarbonate, polyethylene, polyurethane, polyimide, Pebax® or metal, e.g. stainless steel or titanium. Preferably, the proximal end 3c and the distal end 3d are connected by supporting structures 3e, which are preferably made of the same material. As shown in FIG. 2, the imaging window 3c′ is preferably located in a region of a circumference of the position stabilizing structure 3. The imaging window 3c′ is preferably optically and acoustically transparent, and its thickness can be designed to decrease its influence on the imaging properties. For example, the imaging window 3c′ can be tilted, drafted, wedged or coated with an antireflective film to reduce backreflection of electromagnetic radiation. Additional marks or irregularities, for example providing orientational and/or positional information for evaluation during imaging, can be added in a region of the imaging window 3c′. The imaging window 3c′ can also be made of a flexible material which extends if the pressure inside the position stabilizing unit 3 is increased, such that the outer face 3a contacts the surrounding object. This further promotes stabilization and/or orientation of the imaging unit 2 relative to the object and improves acoustic coupling. Materials which are particularly suitable for the imaging window 3c′ are all medical grade materials which have an acoustic impedance close to that of tissue and water, i.e. optimally between 1 and 2 MRayls, and which have low acoustic and optical attenuation coefficients. For example, polyurethane, polyethylene, RTV, ecothane, ethyl vinyl acetate, styrene butadiene, dimethyl pentene polymer or technogel are suitable. Preferably, the imaging window 3c′ is covered with a thin, e.g. at most 50 μm thick, polyurethane foil. Alternatively, the imaging window 3c′ is not sealed. The radiation source 10, e.g.
a pulsed or amplitude modulated tunable or single wavelength laser, a LED, a SLD or a laser diode, is optically coupled into the connecting element 9 at a proximal end of the connecting element 9. The connecting element 9 comprises a transmitting unit 11 configured to guide electromagnetic radiation generated by the radiation source 10 from the proximal end of the connecting element 9 to the distal end of the connecting element 9, in particular to the irradiation unit 6. For example, the transmitting unit 11 comprises a channel for free beam propagation or an optical fiber. The irradiation unit 6 at the distal end of the connecting element 9 is configured to irradiate the region of interest to be imaged with the guided electromagnetic radiation, e.g. visible or infrared light. To this end, the irradiation unit 6 comprises an optical arrangement and/or component configured to direct the light towards a field of view 8b of the detection unit 8, in particular into a field of view 8b of the at least one ultrasound transducer. This is achieved, e.g., by means of one or more micro-mirrors, prisms, or total internal reflection. The field of view 8b of the at least one ultrasound transducer may comprise a focus 8a, e.g. a focus spot, focus line or focus area or region, from which acoustic waves generated in response to irradiation of the region of interest with electromagnetic radiation are detected with particularly high sensitivity and, hence, high signal-to-noise ratio. Alternatively, the detection unit 8 and/or the at least one ultrasound transducer may exhibit a non-focused field of view 8b, e.g. a divergent or parallel or cylindrical field of view (not shown). Likewise, in some embodiments where the device for endoscopic optoacoustic imaging is further configured for ultrasound imaging, the field of view 8b of the detection unit 8 and/or the at least one ultrasound transducer may be focused or non-focused, so that ultrasound waves generated by the at least one ultrasound transducer are emitted into and, after being reflected, detected from an accordingly shaped, e.g. focused, divergent or parallel, field of view 8b. In some embodiments, the irradiation unit 6 is also configured to receive electromagnetic radiation emanating from the tissue, for example in reaction to irradiating the tissue with electromagnetic radiation, and to couple the received electromagnetic radiation into the transmitting unit 11 for being guided to the processing unit 4 provided at the proximal end of the connecting element 9 for further processing, in particular image generation. The detection unit 8 of the imaging unit 2 is configured to detect optoacoustic responses from tissue irradiated with the electromagnetic radiation. Preferably, the surface of the at least one ultrasound transducer of the detection unit 8 is manufactured of a minimally light absorbing material and/or coated with a highly reflecting material such as gold or silver. Such a coating is advantageous because it can prevent light backscattered from the region of interest from creating an undesired optoacoustic signal at the transducer surface, which might superpose and mask the (real) optoacoustic signal generated in the tissue. This is particularly advantageous in endoscopic applications, where the region of interest is typically very close to the detection unit 8.
The at least one ultrasound transducer may further comprise an acoustic matching layer to reduce acoustic impedance mismatches between the coupling medium and the at least one ultrasound transducer, thus improving transfer of the acoustic wave to the at least one ultrasound transducer. The detection unit 8 can comprise a single ultrasound transducer or a multi-element ultrasound transducer array. In particular, multiple ultrasound transducers with different detection characteristics can be utilized in the detection unit 8 to access different penetration depths and image characteristics. For example, a high frequency transducer may be utilized in combination with a low frequency transducer, wherein the high frequency transducer provides high resolution information from superficial structures, whereas the low frequency transducer provides high penetration depth. Using multiple transducers may be beneficial because standard piezoelectric ultrasound detection technology does generally not provide the ultra-broadband bandwidth characteristics preferably used in optoacoustic imaging to resolve different scales of sizes. The at least one ultrasound transducer is preferably configured as a piezoelectric transducer, a micromachined transducer (CMUT) or an optical transducer using interferometric techniques for ultrasound wave detection. Moreover, the detection unit 8 preferably exhibits acoustic focusing properties to limit ultrasound detection to a narrow area, in particular a focus 8a, coinciding with an illumination pattern in the region of interest formed by the irradiation with electromagnetic radiation. In the embodiment shown in FIG. 2, the irradiation unit 6 is arranged with respect to the detection unit 8 in a way such that the electromagnetic radiation passes through an opening in the detection unit 8. By this means, an optimal overlap between the sensitivity field of the at least one ultrasound transducer and the irradiation unit 6 can be achieved. An amplification unit 12 amplifies the electrical signals generated by the at least one ultrasound transducer and matches the electrical impedance of the at least one ultrasound transducer to the transmitting unit 11, in particular to an electrical wire, e.g. a microcoaxial cable, of the transmitting unit 11. It is advantageous to position the amplification unit 12 close to the at least one ultrasound transducer, because in this way the information in the detection signals generated by the at least one ultrasound transducer upon detection of ultrasound waves is conserved. Preferably, the amplification unit 12 is provided as an integrated circuit inside the at least one ultrasound transducer. By means of the transmitting unit 11, the detection signals are guided from the detection unit 8 to the processing unit 4 provided at the proximal end of the connecting element 9, e.g. via electrical cable(s) or wirelessly. Preferably, an analog-to-digital converter may be provided, e.g. inside the position stabilizing unit 3, for converting analog detection signals into digital detection signals. Preferably, the analog-to-digital converter is configured to transmit the analog and/or digital detection signals from the detection unit 8 to the outside, in particular to the processing unit 4. Alternatively or additionally to the analog-to-digital converter, a signal transmission unit (not shown) may be provided, which is configured to transmit the analog and/or digital detection signals from the detection unit 8 to the outside, in particular to the processing unit 4.
Preferably, the analog and/or digital detection signals are transferred wirelessly, e.g. via a WLAN or WiFi network. The processing unit 4 preferably comprises an ultrasound controller, in particular an amplifier and/or a filter, for amplifying and/or filtering the received detection signals, and/or a reconstruction unit (not shown) configured to reconstruct high-quality optoacoustic images based on the, optionally amplified and/or filtered, detection signals. The reconstructed images can be outputted by an output unit 4a, for example a computer screen. Preferably, the imaging unit 2 can be moved, in particular rotated and/or translated, by transmitting mechanical energy, i.e. forces or torques, from a drive unit 13 disposed at a proximal end of the connecting unit 9 to a carrier unit 16, on which the irradiation unit 6 and the detection unit 8 are mounted. To this end, the connecting element 9 preferably comprises a further transmitting unit 14, for example a driveshaft, in particular a torque coil. In the present embodiment shown in FIG. 2, the connecting element 9, for example a catheter, forms a hollow tube in which the driveshaft is disposed. Preferably, the driveshaft and/or the inside of the connecting element 9 is or are PTFE-coated for minimizing friction during rotation and/or translation of the driveshaft relative to the connecting element 9. Preferably, the imaging unit 2, in particular the carrier unit 16, is designed to prevent the generation of turbulence in the coupling medium surrounding the imaging unit 2 upon rotation of the imaging unit 2, in particular the carrier unit 16. In particular, the carrier unit 16 and/or the imaging unit 2 preferably has a substantially circular shape, in particular a substantially circular outer surface exhibiting no protrusions. By this means, the carrier unit 16 and/or the imaging unit 2 can be rotated without displacing significant amounts of coupling medium, therefore minimizing turbulence and hence improving imaging quality. The transmitting unit 14 is preferably coupled to the transmitting unit 11, in particular to an optical fiber and/or an electrical wire, such that the transmitting unit 14 and the transmitting unit 11 rotate synchronously, along with the imaging head on the carrier unit 16. Preferably, the transmitting unit 11 is centered inside the transmitting unit 14 with respect to the axis of rotation of the transmitting unit 14. In particular, an optical fiber may be centered on the axis of rotation of the transmitting unit 14, and an electrical wire may be wound around the optical fiber. Two-dimensional imaging and three-dimensional imaging may be performed by irradiating the region of interest with a specific pattern and detecting the optoacoustic responses at specific locations. Preferably, the irradiation unit 6 and the detection unit 8 are oriented substantially perpendicular to the longitudinal axis 24 of the position stabilizing unit 3. By rotating or translating the optoacoustic imaging head around or along the longitudinal axis 24, respectively, using the transmitting unit 14, two-dimensional images are acquired. By combining both rotation and translation of the optoacoustic imaging head, volumetric scans, i.e. three-dimensional images, are acquired without moving the connecting element 9. A typical translational scanning range is between 1 mm and 10 mm. Other translational scanning ranges are also possible, depending on the size of the position stabilizing structure 3.
To avoid movement of the position stabilizing structure 3 and the connecting element 9 due to the rotational and translational force transmitted via the transmitting unit 14, the connecting element 9 is preferably configured such that it is resistant with respect to torsion, stretching and compression, while still providing sufficient flexibility and bending for insertion inside luminal organs. Positional information of the imaging unit 2, in particular with respect to the position stabilizing unit 3 and/or the region of interest, is preferably obtained by the processing unit 4 from the drive unit 13. The processing unit 4 preferably uses the acquired detection signals and positional information to compute the two-dimensional and/or three-dimensional optoacoustic images, which can be outputted by the output unit 4a in real-time. This is advantageous, e.g., during operations for visualization and analysis. In some embodiments, the processing unit 4 is configured to generate optoacoustic images using three-dimensional rendering, allowing a more accurate representation of the anatomical structure. Preferably, the image generation is based on positional information of the distal end of the connecting element 9, which is obtained by tracking the location of the distal end of the connecting element 9 or the position stabilizing structure 3 relative to the region of interest which is imaged, or by tracking the position of the imaging unit 2 with respect to the position stabilizing structure 3. Preferably, in order to couple the radiation generation unit 10 and/or the processing unit 4 to the connecting unit 9, in particular to at least one of the transmitting units 11, 14, a rotary junction 15 is disposed at the proximal end of the connecting unit 9. The rotary junction 15 serves as an interface between a stationary part, i.e. the radiation generation unit 10 and the processing unit 4, and a rotating and/or translating part of the imaging device, i.e. the transmitting units 11, 14 and/or the imaging unit 2. Such a rotary junction 15 allows coupling light and/or electrical signals from the rotating part to the stationary part with minimal losses. By this means, the drive unit 13, for example an electric motor, can provide the torque for rotation and the pull/push force for translation of the imaging unit 2 via the rotary junction 15. The connecting element 9, in particular the transmitting units 11, 14, can be attached to the rotary junction 15, for example with a hybrid connector. To enable clinically viable imaging rates, a rotation rate of the imaging unit 2 generated by the transmitted torque is preferably above 1 Hz, in particular above 10 Hz, and the electromagnetic radiation is pulsed at high rates, e.g. above 1 kHz, in particular above 10 kHz. Moreover, the electromagnetic radiation can be intensity modulated with a complex periodic envelope at substantially high repetition rates, such as greater than 1 kHz, in particular above 10 kHz. A typical spectral range in which the electromagnetic radiation may be provided extends from 450 nm to about 980 nm. However, embodiments of the present invention may operate within other spectral windows. For instance, irradiation in the ultraviolet region, i.e. between 180 nm and 400 nm, where DNA and RNA show strong absorption, can be used for imaging of cell nuclei. Alternatively, using irradiation in the near-infrared region, i.e. between 700 nm and 1400 nm, lipids, collagen and water can be imaged.
Electromagnetic radiation at different selected wavelengths within the chosen spectral range can be delivered to the region of interest at different times, thereby providing acoustic signals proportional to the absorption at each respective wavelength. The detection signal corresponding to each wavelength can be processed to generate an optoacoustic image of the energy absorption in the tissue at the specific wavelength. The image may be further processed to obtain the absorption coefficient at the given wavelength. FIG. 3 shows a second example of an imaging unit 2 disposed inside a position stabilizing structure 3, wherein an irradiation unit 6 and a detection unit 8 are mounted on a carrier unit 16 which is configured to be rotated and/or translated in an interior 3b of the position stabilizing structure. The carrier unit 16 is coupled to a connecting element 9 for connecting the irradiation unit 6 and the detection unit 8 to a radiation source and a processing unit (see FIG. 2). To this end, the connecting element 9 comprises a transmitting unit 11 configured to relay electromagnetic radiation and/or electric signals, in particular detection signals generated by the detection unit 8. The connecting element 9 further comprises a transmitting unit 14 configured to transmit torque and/or a force from a drive unit (see FIG. 2) provided at a proximal end of the connecting element 9, wherein the transmitting unit 14 is rotatably arranged inside the static connecting element 9 and surrounds the transmitting unit 11 arranged in its center. The irradiation unit 6 and the detection unit 8, which form an optoacoustic imaging head, are preferably rigidly connected to the transmitting unit 14 and/or to each other via the carrier unit 16, such that the optical components, acoustic components, transmitting unit 11 and transmitting unit 14 are in a fixed position with respect to each other. In order to secure the position stabilizing structure 3 at a distal end of the connecting element 9, the connecting element 9 preferably comprises a collar 9a, e.g. fabricated of a metal, at its distal end. The collar 9a acts as an anchor to avoid pulling the connecting element 9 out of the position stabilizing structure 3. The carrier unit 16, which preferably forms a housing for the irradiation unit 6 and the detection unit 8, can further have a protruding structure 16a collinear to the longitudinal axis 24 of the position stabilizing structure 3, in particular to the rotational axis of the transmitting unit 14. This protruding structure 16a is preferably configured to be inserted into a recess 3d′ provided at a distal end 3d of the position stabilizing structure 3. In particular, the protruding structure 16a is configured to stabilize the rotational movement and/or translational movement of the carrier unit 16. In order to facilitate a uniform rotation and/or translation of the optoacoustic imaging head, the carrier unit 16 is fabricated of a material with a very low friction coefficient, e.g. PTFE, in particular at positions where it engages the position stabilizing structure 3. Alternatively or additionally, a lubricant or a ball-bearing ring is used for rotational decoupling between the position stabilizing structure 3 and the carrier unit 16. In some embodiments, a washer 16b, preferably fabricated of PTFE, is threaded over the transmitting unit 14 and/or connected to the carrier unit 16 to separate the rotating optoacoustic imaging head from a proximal end 3c of the position stabilizing structure 3, in particular from the static collar 9a.
The proximal end 3c preferably comprises an opening to facilitate assembling of the position stabilizing structure 3 with the connecting element 9. In order to prevent the connecting element 9 from kinking close to its connection with the position stabilizing structure 3, a strain relief tail 3f can be disposed at the proximal end 3c. At least a part of the tail 3f is shaped to match the shape of the position stabilizing structure 3, so as to facilitate a smooth transition between the position stabilizing unit 3 and the connecting element 9. The tail 3f can be made of urethane, nylon and/or another medical grade polymer. The tail 3f can also be shaped from epoxy and/or be epoxied to the proximal end 3c and the connecting element 9. Preferably, the tail 3f is shaped so as to allow attachment to the accessory channel of a video-endoscope. For further details, e.g. regarding the illumination unit 6, the detection unit 8 and the position stabilizing structure 3, the above elucidations with respect to the examples shown in FIGS. 1 and 2 apply accordingly. FIG. 4 shows an example of a multimodal imaging unit 2 comprising an optoacoustic imaging unit, which comprises, inter alia, an irradiation unit 6 and a detection unit 8, and an optical imaging unit, in particular an optical sensor 17. Preferably, both the optoacoustic imaging unit and the optical imaging unit are mounted on a common carrier unit 16, which may be configured to form a housing, in particular a closed and/or liquid-tight housing, for components of the imaging unit 2. For example, the carrier unit 16 may comprise a first face 16a and a second face 16b, wherein the detection unit 8 is mounted on the first face 16a of the carrier unit 16 and the optical sensor 17 is mounted on the second face 16b of the carrier unit 16. In the present embodiment, a transmitting unit 14, which is configured to transmit a torque from a drive unit (see FIG. 2) provided at a proximal end of a connecting element 9, is connected to the carrier unit 16. A transmitting unit 11, preferably comprising an optical fiber 11a and an electrical wire 11b, is located inside the transmitting unit 14. Preferably, a part of the transmitting unit 11, in particular the optical fiber 11a, is decoupled from the transmitting unit 14, such that the transmitted torque and/or force is not applied to, i.e. does not act on, the optical fiber 11a. Electromagnetic radiation emanating from the distal end of the optical fiber 11a passes through a beam shaping element 6a, e.g. a lens, and is reflected by a reflection element 6b. Preferably, the reflection element 6b is arranged at a 45° angle with respect to a longitudinal axis 24 of the position stabilizing structure 3, such that the electromagnetic radiation is guided through an opening of the detection unit 8. To ensure a liquid-tight sealing of the interior of the carrier unit 16, an optically transparent seal 16d is preferably provided on the detection unit 8 to seal the opening of the detection unit 8 in a liquid-tight manner. A washer 16b, preferably fabricated of PTFE or another material with a low friction coefficient, is connected to a distal end of the transmitting unit 11, in particular to the optical fiber 11a, and configured to at least partially engage a groove 16c of the carrier unit 16. By this means, the distance between the distal end of the optical fiber 11a or the beam shaping element 6a, respectively, and the reflection element 6b is kept constant.
Therefore, the optical fiber 11a translates together with the carrier unit 16 along a longitudinal axis 24 of the position stabilizing structure 3, while being rotationally decoupled from the transmitting unit 14. Preferably, electrical signals generated by the detection unit 8 in response to the detection of ultrasound waves are amplified by an amplification unit 12 and transmitted via the electrical wire 11b, e.g. a microcoaxial cable, to a processing unit (see FIG. 2) provided at a proximal end of the connecting element 9. The electrical wire 11b is disposed inside the transmitting unit 14 and arranged and/or configured to rotate together with the transmitting unit 14, in particular connected to the transmitting unit 14, to avoid coiling around the stationary optical fiber 11a during rotation. The optical sensor 17, for example a camera, comprises one or more light sources 17a. The reflection element 6b is preferably configured also to reflect electromagnetic radiation emitted from the light sources 17a through an optically transparent window 16e of the carrier unit 16 provided opposite to the detection unit 8. Vice versa, electromagnetic radiation emanating from the region of interest in response to the irradiation with electromagnetic radiation from the light sources 17a is reflected back to the optical sensor 17, such that an optical image of at least a part of the region of interest can be acquired, wherein the optical image shows an area of the region of interest opposite to an area from which an optoacoustic image is acquired based on the detection signals generated by the detection unit 8. Preferably, the optical sensor 17 is connected to the carrier unit 16, such that upon rotation of the carrier unit 16 the complete region of interest surrounding the position stabilizing unit 3 can be imaged by the optical sensor 17. Preferably, the imaging unit 2 and/or the carrier unit 16 can be moved inside the position stabilizing structure 3 along a helical path, such that volumetric scans of the region of interest can be obtained without moving the position stabilizing structure 3 with respect to the region of interest. This so-called helical scanning is preferably implemented by using two complementary threads, wherein a first thread 18a is disposed at the transmitting unit 14 and a complementary second thread 18b is disposed at the position stabilizing structure 3, in particular at a proximal end thereof. When torque is applied via the transmitting unit 14 to the carrier unit 16, the carrier unit 16 is forced by the first and second threads 18a, 18b to perform a helical movement with respect to the position stabilizing structure 3. The pitch of the threads 18a, 18b determines how far the carrier unit 16 translates along the longitudinal axis 24 of the position stabilizing structure 3 per rotation. A continuous movement of the carrier unit, and thereby a continuous scan of the region of interest by the imaging unit 2, in particular the detection unit 8 and/or the optical sensor 17, is preferably achieved by periodically changing the direction of the torque applied to the transmitting unit 14. To this end, the transmitting unit 14 preferably comprises a multilayer torque coil, which consists of two wires coiled in opposite directions, such that torque can be transmitted both in the clockwise and the counterclockwise direction. A combination of helical movement of the carrier unit 16 inside the position stabilizing unit 3 and a movement of the position stabilizing unit 3 along the region of interest can be used to image large areas.
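As a back-of-the-envelope illustration of the thread-driven helical motion just described, the sketch below converts a rotation profile into axial position using the thread pitch, including the periodic direction reversal; the pitch and rate values are assumptions for illustration only.

```python
# Minimal sketch: helical scan kinematics of the carrier unit, assuming
# illustrative values for thread pitch and rotation rate.
import numpy as np

pitch_mm = 0.5          # axial travel per full revolution (thread pitch), assumed
rot_rate_hz = 5.0       # rotation rate of the transmitting unit, assumed
reverse_every_s = 2.0   # period after which the torque direction is reversed

t = np.linspace(0.0, 8.0, 1601)                      # time axis in seconds
direction = np.where((t // reverse_every_s) % 2 == 0, 1.0, -1.0)
dt = t[1] - t[0]
angle_rev = np.cumsum(direction * rot_rate_hz * dt)  # signed revolutions
z_mm = pitch_mm * angle_rev                          # axial position from pitch

print(f"Max axial stroke: {z_mm.max() - z_mm.min():.1f} mm")  # 5.0 mm here
# Each torque reversal retraces the same volume, giving a continuous
# back-and-forth volumetric scan without moving the outer structure.
```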
The above described embodiment can be used as a standalone device or as an extension to a conventional video-endoscope, to provide the latter with an additional optoacoustic imaging modality. Since the position stabilizing structure 3 usually has a larger diameter than the accessory channels of the video-endoscope and thus does not fit through them, the proximal end of the connecting unit 9 is designed to allow back-loading of the video-endoscope. By this means, the clinical workflow can be matched. In an alternative embodiment, the drive unit (not shown) may be disposed in the interior 3b of the position stabilizing unit 3 and connected to and/or coupled with the carrier unit 16 directly and/or via a drive shaft and/or a threaded rod. Preferably, the threaded rod, which is driven by the driving unit, engages a threaded hole provided or fixed at the position stabilizing unit 3. The engaging threaded rod and threaded hole act similarly to the first and second threads 18a, 18b of the example described above. Therefore, also with the present embodiment the carrier unit 16 can be moved inside the position stabilizing structure 3 along a helical path, such that volumetric scans of the region of interest can be obtained without moving the position stabilizing structure 3 with respect to the region of interest. In another alternative embodiment (not shown), the optoacoustic imaging head allows parallel imaging in two directions. In this case, the detection unit 8 comprises at least two ultrasound transducers oriented in opposite directions perpendicular to the longitudinal axis 24 of the position stabilizing unit 3, in particular on two opposite faces of the carrier unit 16. Further, the electromagnetic radiation is guided in these directions by providing a corresponding reflection element 6b, in particular a triangular optical element, e.g. a prism. The parallel imaging helps speed up the imaging process, because a multitude of optoacoustic signals can be acquired per pulse of electromagnetic radiation. FIG. 5 shows an example of a backend 19 of a device for endoscopic optoacoustic imaging, wherein the backend 19 is coupled to a proximal end of a connecting element 9, which connects the backend 19 with an imaging head (see FIG. 2) at the distal end of the connecting element 9. A processing unit 4 of the backend 19 is configured to control an optoacoustic module 4b comprising a radiation source 10 and an ultrasound driver 8b, in particular an amplifier and/or filter, for processing detection signals generated by a detection unit (see FIG. 2). In the present example, the processing unit 4 is further configured to control an optical coherence tomography module 4c comprising an interferometer 20 and/or an imaging module 4d comprising optical sensor drivers and/or light source drivers 25 for controlling one or more optical sensors or light sources provided at the imaging head, respectively. Additionally or alternatively, the processing unit 4 may be configured to control a reflectance spectroscopy module 4e comprising a second radiation source 10a and first light detection and filtration electronics 21a and configured to perform reflectance spectroscopy, and/or a fluorescence module 4f comprising a third radiation source 10b and second light detection and filtration electronics 21b and configured to perform fluorescence imaging.
The radiation sources 10, 10a, 10b, the interferometer 20 and the light detection and filtration electronics 21a, 21b are preferably coupled via a beam splitter 22 or dichroic mirror to an optical fiber 11a arranged in the center of the connecting element 9. Due to said coupling, the optical fiber 11a is fixed with respect to the connecting element 9. The ultrasound driver 8b and the imaging module 4d are coupled via a rotary junction 15 to electrical wires 11b running along the optical fiber 11a through the connecting element 9. By that means, the electrical wires 11b are arranged to rotate synchronously with a first transmitting element 14 configured to transmit a torque generated by a drive unit 13 to an imaging head at the distal end of the connecting element 9. In particular, the electrical wires 11b are connected to the first transmitting element 14. Preferably, the rotatable first transmission unit 14 and the electrical wires 11b connected thereto are designed to surround the static optical fiber 11a. This arrangement of the first transmission unit 14, the electrical wires 11b and the optical fiber 11a is preferably accommodated in a first channel 22a of the connecting element 9, wherein the channel walls, the first transmission unit 14 and/or the optical fiber 11a may be coated with PTFE to reduce friction. Alternatively or additionally, a lubricant may be filled into the first channel 22a. The optical fiber 11a may include a plurality of optical transmission channels. For example, in some embodiments, the optical fiber 11a may be designed as a double clad fiber. The double clad fiber preferably comprises a small diameter core, in particular a single mode core, and a multi-mode inner cladding, which may be arranged concentrically around the single mode core. Preferably, the single mode core transmits OCT signals, i.e. electromagnetic radiation, from the proximal end to the distal end and/or from the distal end to the proximal end. Additionally or alternatively, the inner core may transmit electromagnetic radiation, in particular illumination light, for optoacoustic, fluorescence and/or other optical imaging modalities from the proximal end to the distal end of the optical fiber 11a or vice versa. Additionally or alternatively, the multi-mode inner cladding may transmit electromagnetic radiation, in particular illumination light, for optoacoustic, fluorescence and/or other optical imaging modalities from the proximal end to the distal end of the optical fiber 11a or vice versa. In particular, the multi-mode inner cladding may transmit electromagnetic radiation emanating from the region of interest in response to the irradiation with the electromagnetic radiation from the distal end to the proximal end of the optical fiber 11a. Preferably, in the present example the connecting element 9 comprises a second channel 22b configured to convey a coupling medium from the proximal end of the connecting element 9 into the position stabilizing unit (see FIGS. 1 to 4) provided at the distal end of the connecting element 9, or vice versa. In another embodiment, the connecting element 9 may comprise a further channel (not shown) in communication with the exterior of the position stabilizing unit 3. By this means, coupling medium can be guided into the space surrounding the position stabilizing unit 3, in particular into a lumen, e.g. a hollow organ, such that optical and/or ultrasound coupling can be established between the region of interest, e.g. the wall of the hollow organ, and the position stabilizing structure 3.
The coupling medium is preferably configured to facilitate transmission of the electromagnetic radiation between the irradiation unit 6 and the region of interest, as well as of ultrasound waves between the region of interest and the detection unit 8, with no or at least minimal absorption of the ultrasound waves and/or electromagnetic radiation and no distortion of their propagation direction. Preferably, the coupling medium exhibits an acoustic impedance value close to the acoustic impedance of tissue (~1.6 Mrayl), i.e. it may be between 1 and 2 Mrayl and preferably between 1.2 Mrayl and 1.8 Mrayl, in order to maximize transmission of the energy of the acoustic waves. The coupling medium may be a fluid so as to fill all available spaces between the detection unit 8 and the region of interest and avoid air gaps. In liquid form, the coupling medium can also be used as a means for inflating or deflating the at least partially elastic position stabilizing unit, for instance for adapting it to the shape of a surrounding lumen. Additionally, the coupling medium is preferably non-scattering and transparent for electromagnetic radiation at the wavelengths used in optoacoustic imaging and other optical imaging modalities. For transmitting electromagnetic radiation in the visible range (e.g. 400-700 nm), water may be a suitable coupling medium, being transparent, having an acoustic impedance of ~1.5 Mrayl and an optical absorption coefficient below 0.01 cm−1. When transmitting electromagnetic radiation in the near-infrared range, in particular above 850 nm, heavy water (D2O) may be a preferred coupling medium, as it has a significantly lower optical absorption coefficient than water at the corresponding wavelengths. Heavy water may also be suited as a coupling medium for optoacoustic imaging in conjunction with OCT, which typically uses light in the wavelength range above 1000 nm (e.g. 1060 nm, 1300 nm, 1700 nm, etc.). Alternatively or additionally, the coupling medium provides lubrication and, in contrast to conventional clinical ultrasound applications, a low viscosity for facilitating movement, in particular rotation, of the carrier unit 16 or the imaging head mounted thereon, and for preventing the generation of air bubbles. Preferably, silicone oil is used as a low viscosity couplant for ultrasound, but water and heavy water also have suitable characteristics. Preferably, the coupling medium comprises an additional preservative hindering the growth of microorganisms, especially bacteria and fungi, while preserving or improving the acoustic properties. For instance, alcohol is a suitable preservative, but other substances may also be possible. Preferably, the coupling liquid is biocompatible and may be applied, e.g. instilled, directly into the space surrounding the position stabilizing unit 3, in particular into the imaged lumen. Based on the signals, in particular the detection signals, provided by the optoacoustic module 4b, the optical coherence tomography module 4c, the imaging module 4d, the reflectance spectroscopy module 4e and/or the fluorescence module 4f, the processing unit 4 generates corresponding images and displays them on an output unit 4a. For example, an optical camera image 23a of a lumen of an object, into which the imaging head at the distal end of the connecting element 9 has been inserted, may be displayed along with an optoacoustic image 23b and an optical coherence tomography image 23c, wherein the latter two images 23b, 23c show a cross-section of the lumen shown in the camera image 23a in real time.
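Referring back to the impedance-matching requirement for the coupling medium, the rationale can be made quantitative with the standard normal-incidence energy-transmission formula for a planar interface; the sketch below compares a few candidate impedance values (the formula is textbook acoustics, and the non-water impedance values are assumed for comparison only).

```python
# Minimal sketch: fraction of acoustic energy transmitted across a planar
# interface at normal incidence, T = 1 - ((Z2 - Z1) / (Z2 + Z1))**2.
Z_TISSUE = 1.6  # acoustic impedance of tissue in Mrayl, as cited in the text

def energy_transmission(z_medium: float, z_tissue: float = Z_TISSUE) -> float:
    reflection = ((z_tissue - z_medium) / (z_tissue + z_medium)) ** 2
    return 1.0 - reflection

# Illustrative couplant impedances (Mrayl); water is ~1.5 per the text,
# the other two are assumed placeholder values for comparison.
for name, z in [("water", 1.5), ("low-Z couplant", 1.0), ("high-Z couplant", 2.0)]:
    print(f"{name:>15}: {100 * energy_transmission(z):.1f}% energy transmitted")
```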
Alternatively or additionally, hybrid images may be generated by fusing image information contained in at least two images which were acquired by different imaging modalities. Multimodal imaging may be done sequentially or in parallel. Sequential imaging means that one modality acquires a 1D, 2D or 3D image before another modality acquires a 1D, 2D or 3D image, i.e. imaging is done in a time-shared fashion. Parallel means that at least two modalities acquire images simultaneously. FIG. 6 shows an example of an amplification unit 12 configured to be at least partially disposed inside a position stabilizing structure and to amplify detection signals generated by a detection unit before transmission through a transmitting unit 11 (see FIGS. 2 to 5). This is advantageous because the detection signals are usually weak and thus prone to electromagnetic disturbance from any external interfering source. The detection signal from the detection unit enters the amplification unit 12 at IN. Blocking capacitors Cblock prevent DC biasing signals from coupling into the detection unit or the output of the amplification unit 12, respectively. In the case of a detection unit comprising a piezoelectric or CMUT sensor, no blocking capacitor Cblock is needed, since the sensor is by its nature capacitive. The second transmitting element 11 transmits the detection signal and conveys a DC supply voltage to the amplification module 12a. A blocking inductance RFC impedes higher frequency signals from reaching the power supply V00. A bypass capacitor Cbypass shunts any remaining AC components to ground. The amplified detection signal is output at OUT. Preferably, some of the components of the amplification unit, in particular the power supply V00, the blocking inductance RFC and a blocking capacitor Cblock, are part of an ultrasound driver (see FIG. 5), i.e. disposed outside the position stabilizing structure. By this means, the number of parts, in particular conventional electrical components, of the amplification unit 12 remaining inside the position stabilizing structure may be minimized. Further, the transmission of the detection signals as well as the DC power supply voltage for the amplification unit 12 may be relayed from the amplification unit 12 to the ultrasound driver or from a power supply to the amplification unit 12, respectively, over the same transmission line, e.g. a second transmitting unit. FIG. 7 shows another example of an imaging unit 2 disposed in the interior of a position stabilizing structure 3. In distinction to the example described above with reference to FIG. 2, a drive unit 13, e.g. an electrical motor, is provided in the interior 3b of the position stabilizing structure 3, so that an additional transmission unit 14 (as shown in FIG. 2), e.g. a torque coil, for transmitting a torque from a driving unit provided outside the position stabilizing structure 3 is dispensable. In the present example, the position stabilizing structure 3 comprises a, preferably rigid, proximal end structure 3c and a, preferably rigid, distal end structure 3d, wherein the proximal end structure 3c and the distal end structure 3d are arranged along the longitudinal axis 24 of the position stabilizing structure 3. As indicated in the figure, the driving unit 13 is preferably disposed at or in the region of the proximal end structure 3c in the interior 3b of the position stabilizing structure 3.
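The amplification unit of FIG. 6 shares a single line between the detection signal and its DC supply, which is essentially a bias-tee arrangement. As a rough design check, the sketch below computes the impedances of assumed blocking components at an ultrasound detection frequency; the component values and the 50-ohm load are illustrative assumptions, not values from the disclosure.

```python
# Minimal sketch: impedance check for the bias-tee-like circuit around the
# amplification unit, assuming illustrative component values.
import math

f_signal_hz = 20e6   # assumed center frequency of the detection signal
c_block_f = 10e-9    # assumed blocking capacitor Cblock, 10 nF
l_rfc_h = 10e-6      # assumed blocking inductance RFC, 10 uH
r_load_ohm = 50.0    # assumed load impedance of the transmission line

z_cap = 1.0 / (2 * math.pi * f_signal_hz * c_block_f)   # series cap impedance
z_ind = 2 * math.pi * f_signal_hz * l_rfc_h             # RFC impedance

print(f"Cblock impedance at {f_signal_hz/1e6:.0f} MHz: {z_cap:.2f} ohm (small, passes the signal)")
print(f"RFC impedance at {f_signal_hz/1e6:.0f} MHz: {z_ind:.0f} ohm (large, keeps the signal out of the supply)")
print(f"High-pass corner of Cblock into the load: {1/(2*math.pi*r_load_ohm*c_block_f)/1e3:.0f} kHz")
```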
Providing the driving unit 13 at the proximal end structure 3c of the position stabilizing structure 3 has the advantage that conducting wires for the electrical power supply of the driving unit 13 have to be laid only up to the proximal end 3c of the position stabilizing structure 3. Wires running through the position stabilizing structure 3, in particular through the region where the imaging unit 2 is positioned and/or towards the distal end structure 3d, become dispensable. This allows for a stable full 360° rotational scan of the imaging unit 2 which is not disturbed and/or adversely affected by conducting wires crossing the position stabilizing structure 3 and/or the field of view of the imaging unit 2. Regarding further components, aspects and advantages of the example shown in FIG. 7, the above description with reference to FIG. 2 applies accordingly. | 49,142 |
11857291 | DETAILED DESCRIPTION The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one embodiment or an embodiment in the present disclosure are not necessarily references to the same embodiment, and such references mean at least one. Reference in this specification to “an embodiment” or “the embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in an embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments. The present invention is described below with reference to block diagrams and operational illustrations of methods and devices that provide ultrasound and thermoacoustic imaging or data generation. It is understood that each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations, may be implemented by means of analog or digital hardware and computer program instructions. These computer program instructions may be stored on computer-readable media and provided to a hard-core or soft-core processor of a general purpose computer, special purpose computer, field-programmable gate array (FPGA), ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks. In some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. As used herein, the following terms and phrases shall have the meanings set forth below. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. As used herein, the term “a” or “an” may mean one or more. As used herein, “another” or “other” may mean at least a second or more of the same or different elements or components thereof. The terms “comprise” and “comprising” are used in the inclusive, open sense, meaning that additional elements may be included. As used herein, the term “or” refers to “and/or” unless explicitly indicated to refer to alternatives only or the alternatives are mutually exclusive. As used herein, the term “about” refers to a numeric value, including, for example, whole numbers, fractions, and percentages, whether or not explicitly indicated.
The term “about” generally refers to a range of numerical values (e.g., ±25% of the recited value unless explicitly stated otherwise) that one of ordinary skill in the art would consider approximately equal to the recited value (e.g., having the same function or result). In some instances, the term “about” may include numerical values that are rounded to the nearest significant digit. As used herein, the following terms and abbreviations have the following meanings:
ADC — analog to digital converter.
AFE — analog front end, an integrated US amplifier and ADC or a separate US amplifier and ADC.
Analog signal — an electrical signal whose value changes continuously. In modern US and TA instrumentation with digital image reconstruction algorithms, the amplified Rx analog signal from the transducer is digitized using an ADC device.
Bypass — bypassing an analog device connected in series in the analog signal chain means shorting its matching input(s) and output(s) together. The active device output has to be disconnected from the signal chain. In some cases, the device input has to be disconnected as well; for example, in FIG. 1C-1E the TA preamplifier input is disconnected in US mode to protect the TA preamplifier input from the US Tx high voltage.
DAQ — data acquisition device, typically based on an ADC with the addition of a PC interface and a (pre)amplifier or AFE.
EMI — electromagnetic interference.
FPGA — field programmable gate array IC.
Hard-core processor — a processor implemented at the hardware level, such as a microprocessor or programmable microcontroller.
HV — high-voltage.
HV switch — high-voltage switch.
IC — integrated circuit with a single die or multiple dies inside packaging.
Serial USTA system architecture — the system architecture in which the Rx US and TA analog signals share the signal path components, including the same ADC channels, arranged in series for both US and TA modalities. The shared Rx path excludes components which are required in one mode but not the other; for example, the TA preamplifier is typically bypassed in US mode.
LV — low voltage.
Parallel USTA system architecture — the system architecture in which the US and TA Rx analog signals split or are completely separated into US and TA analog signal chains, and in which separate ADC devices are used for US and TA signals.
PC — personal computer.
PCB — printed circuit board.
Preamplifier — the first stage amplification circuit.
Rx — receive mode.
Soft-core processor — a processor implemented at the firmware code level; for example, the MicroBlaze™ soft-core processor implemented as FPGA firmware code is available from Xilinx Inc. of San Jose, CA. Soft-core processors have functionality similar to that of hard-core processors.
SPST — single pole single throw. A type of electric switch with two terminals and one ON position, which can be implemented electronically or mechanically.
SPDT — single pole double throw. A type of electric switch with three terminals and two ON positions, which can be implemented electronically or mechanically.
TA — thermoacoustics, which includes optoacoustics, photoacoustics, microwave acoustics, X-ray acoustics and other thermoacoustic phenomena.
TA mode — thermoacoustic mode. TA mode is the Rx-only analog circuit operation mode used for TA.
TA preamplifier — a preamplifier dedicated to TA applications, typically with extra gain as compared to a US amplifier and a high input impedance value for broad-band applications.
The TA preamplifier is the only preamplifier discussed in this patent and may simply be called the preamplifier.
Transmission line — single-ended electric wiring for an AC (alternating current) signal with a specific impedance.
Tx — transmit mode.
US — ultrasound.
US mode — ultrasound mode with US Tx and US Rx regimes.
USTA — ultrasound and thermoacoustic.
Common Units of Measurement: Hz — Hertz; kHz — kilohertz; MHz — megahertz; μs — microsecond.
In an embodiment, the presently disclosed system and method provide a new serial architecture of a USTA system. Essentially the same analog and digital signal path is used for both US and TA modalities. All design modules are arranged in series along the signal path starting from the transducer array cable connector, as shown in FIG. 1. The modules which are not used in a particular operation mode are bypassed, i.e., excluded from the analog signal path. The HV pulser is bypassed by its internal switches in TA and US Rx modes. TA preamplifiers are disabled and bypassed in US modes. The serial architecture allows the instrument to maintain the best performance for each modality of a USTA system as well as to reduce component count, system size and cost. Performance of a serial USTA system in US modality is equivalent to the performance of a US system without a TA modality. Performance of the serial USTA system in TA modality is equivalent to the performance of a TA system without US modality, but in some cases the input impedance might be limited by the high-voltage bleed resistor value. In an embodiment, the size and the number of components of a serial USTA system with a custom TA preamplifier and switch IC are only marginally higher than the size and the number of components of the equivalent US system without dedicated TA abilities. The beamformer and HV circuitry, including the pulser and HV power, are used in US mode only. In an embodiment, the TA preamplifier is used in TA mode only. All other components, including the transducer array, AFE (ADC), FPGA, PC interface and software, are used in both modes. The reduced component count enables an increased number of parallel channels per DAQ PCB. Note that some US systems have a so-called Rx-only mode and reuse the AFE/ADC, FPGA, and PC interface for both modalities. One example of such a system is the Vantage™ system available from Verasonics, Inc. of Kirkland, WA. Such systems, however, cannot be considered dual modality USTA systems with a serial design because they lack a dedicated TA preamplifier with high input impedance and extra gain, which is required to achieve broad signal bandwidth, high sensitivity, and a high signal-to-noise ratio in TA mode. The TA mode of such a system provides poor image quality and is generally equivalent to US Rx mode. US systems with an Rx-only mode might be upgraded for TA applications using an external TA preamplifier, for example the Legion preamplifier available from PhotoSound Technologies, Inc. of Houston, TX, but such an upgrade lacks HV switches, such as those shown in FIG. 1C-1E. An ultrasound system with such an external TA preamplifier cannot be used in US mode without physical removal of the TA preamplifier. The serial dual modality USTA architecture uses the same transducer array elements (101) and the same analog signal path from transducer element to AFE chip (110). Switching between US and TA modes is performed using switches as described in FIG. 1. Digital control over the switch states is not shown, but the control sequence and time diagrams are described in FIG. 2.
FIG. 1: US mode employs the HV pulser, beamformer, and HV protection switches, which might be integrated inside a single pulser-beamformer IC (labeled as (102) in FIG. 1A-1E), for example Texas Instruments TX7332 or TX7316. The internal structure of the pulser block (102) is not shown. The HV circuitry is disabled in TA and US Rx modes. TA mode works with much weaker signals and much broader BW than US Rx mode. TA mode requires an extra amplification stage implemented as the TA preamplifier (104), which might have a high impedance input, programmable frequency filters and other features not required in US mode. In US mode the TA preamplifier output should be disconnected and the TA preamplifier bypassed, i.e. the preamplifier input should be directly connected to the next stage of the analog signal chain instead of the TA preamplifier output. The bypass connection and output connection might be controlled by two SPST switches (105), (106) (FIG. 1A) or one SPDT switch (107) (FIG. 1B) per analog channel to select TA or US modes. The state of the TA preamplifier (103) switches is shown in FIG. 1A-1E for TA (Rx) mode and US (Rx and Tx) modes. All TA preamplifier switches in FIG. 1C-1E must be HV tolerant for operation in US Rx mode. The TA preamplifier might be implemented using discrete components or as an IC. The TA preamplifier might have a single analog channel or multiple analog channels. The TA preamplifier IC might have the switches (105)-(109) integrated, or the switches (105)-(109) might be implemented as separate components. In TA and US Rx modes the pulser HV signal is disconnected from the analog signal chain and the pulser allows the signal through, as shown in (111). The Tx signal is a HV signal present in US mode only. The Tx signal is temporally separated from the Rx signal in US mode. The HV Tx signal propagates from the pulser (102) into the transducer and is not present on the low-voltage analog output from the pulser to the AFE side, as shown in (112). Signal acquisition in TA applications has a low duty cycle, i.e. the fraction of time during which the preamplifier operates while receiving input TA signals (202) (FIG. 2A) is small relative to the time when TA preamplifier operation is not required. Fast disabling and enabling of the DAQ electronics, including the TA preamplifier, the second stage amplifier and the ADC, enables significant power saving and reduction of the average power consumption. The time with the TA preamplifier enabled (full power mode) is shown as (204) and the time with the TA preamplifier disabled is shown as (203) and (205) in FIG. 2A. FIGS. 1A and 1B describe applications with the TA preamplifier located in the LV part of the circuit. A TA preamplifier in the LV part of the circuit can be equipped with LV switches. FIGS. 1A and 1B show the TA preamplifier with two SPST switches per channel and one SPDT switch per channel, respectively. FIG. 1A shows an analog signal path implementation with two SPST switches per channel. Each SPST switch has an open and a closed position. One switch is a bypass switch (105), and the other is an output switch (106). In TA mode the output switch (106) is closed to deliver the preamplifier output signal to the AFE (110); the bypass switch (105) is open. In US mode the output switch (106) is open and used to exclude the TA preamplifier from the signal path; the bypass switch (105) is closed and is used to pass the analog signal to the AFE (110). FIG. 1B shows an analog signal path implementation with one SPDT switch per channel. The SPDT switch (107) has a preamplifier output position (used in TA mode) and a preamplifier bypass position (used in US mode).
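To summarize the switch logic of FIG. 1A in executable form, here is a small behavioral sketch that maps each operating mode to the two SPST switch states; it is a model for illustration only, not firmware from the disclosure.

```python
# Minimal behavioral sketch of the per-channel switch states in FIG. 1A:
# bypass switch (105) and output switch (106) of the TA preamplifier.
from enum import Enum

class Mode(Enum):
    TA = "thermoacoustic receive"
    US = "ultrasound transmit/receive"

def switch_states(mode: Mode) -> dict:
    if mode is Mode.TA:
        # Preamplifier in the chain: output closed, bypass open.
        return {"bypass_105": "open", "output_106": "closed"}
    # US mode: preamplifier excluded, signal passes through the bypass.
    return {"bypass_105": "closed", "output_106": "open"}

for mode in Mode:
    print(mode.name, switch_states(mode))
```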
If the TA preamplifier (104) is equipped with HV switches, it can be placed right after the transducer (101) in the analog signal chain, as shown in FIG. 1C-1E. Such placement of the TA preamplifier allows its integration into the transducer array housing, which improves TA signal sensitivity, EMI rejection and quality, especially in the case of low capacitance or high-frequency TA transducers. It enables driving a transmission line directly out of the transducer array housing. Such an architecture allows use of long transducer array cables without sacrificing TA mode performance for any cable length. The pulser and subsequent components can be placed on the other end of the micro-coaxial cable bundle inside the USTA device housing. An extra HV protection switch (108) (FIG. 1C, 1D) or (109) (FIG. 1E) is required to protect the preamplifier input from HV in US Tx mode. The HV switches can be arranged as three SPST switches (105), (106), and (108) in FIG. 1C or as one SPDT and one SPST switch in FIG. 1D, 1E. The single SPST switch can be used as the TA preamplifier input switch (108) (FIG. 1D) or output switch (106) (FIG. 1E). FIG. 1C-1E describe configurations with the TA preamplifier located in the HV part of the circuit. A TA preamplifier located in the HV part of the circuit must be equipped with HV switches for HV protection. In US Tx mode, the TA preamplifier input and output must be disconnected from the analog signal chain, which requires an extra switch for the preamplifier input. FIG. 1C describes a TA preamplifier with HV SPST switches. FIGS. 1D and 1E describe a TA preamplifier with HV SPST and SPDT switches. FIG. 1C shows an analog signal path implementation with three HV tolerant SPST switches per channel. One switch is a bypass switch (105), another is an output switch (106), and the third switch is the input protection switch (108). In TA mode the input switch (108) is closed to allow transmission of the signal from the transducer to the preamplifier input; the output switch (106) is closed to allow transmission of the signal from the preamplifier output through the pulser (102) to the AFE (110); the bypass switch (105) is open. In US mode the input and output switches (108) and (106), respectively, are open to exclude the TA preamplifier from the analog signal path and to protect the preamplifier from the HV US Tx signal; the bypass switch (105) is closed and used to pass the analog Rx signal through pulser state (111) to the AFE (110), or the Tx signal from the pulser to the transducer. FIGS. 1D and 1E show an analog signal path implementation with one HV tolerant SPST switch and one HV tolerant SPDT switch per channel. In FIG. 1D, the SPDT switch (107) replaces the two SPST switches (105) and (106) (FIG. 1C). In FIG. 1E, the SPDT switch (109) replaces the two SPST switches (105) and (108) (FIG. 1C). FIG. 1F describes a system with a TA-only mode. The top panel describes Rx TA operation. The bottom panel describes an energy saving state with the TA preamplifier (104) in a standby mode. The switches are not shown in FIG. 1F, but might be present. If switches are present, they must operate in TA mode as described in the top panels of FIG. 1A-1E. FIG. 2A displays a time diagram for a USTA system with implemented fast energy saving technology. The dual modality USTA mode is activated for a limited time corresponding to the high USTA_on signal on the time line (207). TA_ex_in signals on the time line (201) indicate TA excitation events, for example, firing of the excitation laser in the case of photoacoustic imaging.
TA_aq signals (202) indicate TA data acquisition events, which typically last no longer than 100-200 μs and are offset by a fixed delay from the preceding TA excitation signal. The time intervals (203)-(205) represent TA preamplifier power modes. TA preamplifier power can be disabled before and after activation of USTA mode (see time line (207)), as indicated by the time intervals (203), and in the pauses between TA data acquisition events (202), as indicated by the time intervals (205), which can be used for acquisition of US frames (209). The time intervals (203) and (205) indicate a fast power saving mode of the TA preamplifier. The time intervals (204) indicate a full power mode of the TA preamplifier, including two transient stages of <100 μs each, when the TA preamplifier is being turned on or turned off. In the example shown in FIG. 2A, the TA preamplifier stays in the power saving mode for ≥99% of the time and is active for only ≤1% of the time, considering a conventional 10 Hz TA excitation rate. The time line (208) shows TA_sw signals whose high values are used to enable the series connection of the TA preamplifier used for the TA mode. TA_sw low values are used to enable the bypass connection of the TA preamplifier used for the US mode. FIG. 2B shows an operational sequence for a USTA imaging system with implemented fast energy saving technology. The USTA imaging system is normally operated in US mode (210), very much like its cousin, clinical ultrasound. When there is a requirement to enable USTA imaging, the system is initialized for a particular TA excitation frequency FexTA (211), corresponding, for example, to the onset of the high USTA_on signal (time line (207), FIG. 2A). The system then continues US imaging and waits for the first TA_ex_in signal (time line (201), FIG. 2A) to arrive. The controller switches on power in the TA preamplifiers and enables a TA acquisition event ((202) in FIG. 2A) with a time delay slightly shorter than 1/FexTA (time line (208), FIG. 2A). Subsequently, the system re-activates US imaging (214) until the next TA_sw signal is received (line (208), FIG. 2A). Such a sequence of interleaved TA and US imaging events continues until a USTA mode termination command arrives, for example in the form of a low-level USTA_on signal (time line (207), FIG. 2A), at which point the system returns to its default US imaging mode. The ability to switch to a low power mode when not performing thermoacoustic receive operations allows a substantial reduction in power requirements. Among the advantages of such a reduction is the ability to include the TA preamplifier, DAQ, analog-to-digital converter or ultrasound analog front-end within a housing of a probe or transducer array. At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a special purpose or general purpose computer system or other data processing system in response to its hard-core or soft-core processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device. Functions expressed in the claims may be performed by a processor in combination with memory storing code and should not be interpreted as means-plus-function limitations.
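Returning to the timing of FIG. 2A, the ≥99% power-saving figure can be sanity-checked with a quick calculation from the values given above (10 Hz excitation, 100-200 μs acquisitions, <100 μs transients); the sketch below uses the worst-case values quoted in the text.

```python
# Minimal sketch: TA preamplifier duty cycle for the timing in FIG. 2A,
# using the worst-case numbers quoted in the text.
excitation_rate_hz = 10.0    # conventional TA excitation rate
acquisition_s = 200e-6       # TA acquisition lasts no longer than 100-200 us
transient_s = 100e-6         # each on/off transient is < 100 us

period_s = 1.0 / excitation_rate_hz
active_s = acquisition_s + 2 * transient_s   # acquisition plus two transients
duty = active_s / period_s

print(f"Active time per cycle: {active_s*1e6:.0f} us of {period_s*1e3:.0f} ms")
print(f"Duty cycle: {duty:.2%} (power saving mode for {1-duty:.2%} of the time)")
```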
Routines executed to implement the embodiments may be implemented as part of an application, operating system, firmware, ROM, middleware, service delivery platform, SDK (Software Development Kit) component, web services, or other specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” Invocation interfaces to these routines can be exposed to a software development community as an API (Application Programming Interface). The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects. A machine-readable medium can be used to store software and data which, when executed by a data processing system, cause the system to perform various methods. The executable software and data may be stored in various places including, for example, ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in their entirety at a particular instance of time. Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others. In general, a machine-readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system. | 23,353 |
11857292 | DETAILED DESCRIPTION OF THE EMBODIMENT Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Where it is considered that the description of a related known configuration or function may cloud the gist of the present disclosure, that description will be omitted. Further, exemplary embodiments of the present invention will be described below. However, it should be understood that the technical spirit of the present disclosure is not limited to the specific embodiments, but may be changed or modified in various ways by those skilled in the art. Hereinafter, a method for diagnosing a vascular disease and an apparatus therefor proposed by the present disclosure will be described in detail with reference to the drawings. The method for diagnosing a vascular disease and the apparatus therefor according to the present disclosure may desirably be a method and an apparatus for diagnosing a cardiovascular disease, specifically cardiovascular stenosis, and determining a treatment method, but are not necessarily limited thereto and may be applied to various blood vessel diseases. FIG. 2 is a block diagram schematically illustrating a computing device for diagnosing a vascular disease according to an exemplary embodiment of the present disclosure. A computing device 200 for diagnosing a vascular disease according to the present exemplary embodiment includes an input unit 210, an output unit 220, a processor 230, a memory 240, and a database 250. The computing device 200 of FIG. 2 is an example, so not all blocks illustrated in FIG. 2 are essential components, and in other exemplary embodiments some blocks included in the computing device 200 may be added, modified, or omitted. In the meantime, the computing device 200 may be implemented as a diagnosing apparatus which diagnoses a vascular disease, and each component included in the computing device 200 may be implemented as a separate software device or as a separate hardware device in which the software is combined. The computing device 200 for diagnosing a vascular disease performs an operation of determining a vascular disease by reflecting biometric authentication information of a diagnosis subject in the fractional flow reserve calculated based on geometric feature parameter information generated based on patient information and flow feature information calculated by computational fluid dynamics, and determining whether to perform a surgery for the vascular disease. The input unit 210 refers to a unit which inputs or acquires a signal or data to perform the operation of diagnosing a vascular disease and determining whether to perform a surgery. The input unit 210 interworks with the processor 230 and inputs various types of signals or data, or directly acquires data by interworking with an external device, to transmit the signal or data to the processor 230. Here, the input unit 210 may be a device or a server which inputs a synthetic model, a virtual patient model, patient information, various condition information, or a control signal, but is not necessarily limited thereto. The output unit 220 may interwork with the processor 230 to display various information such as a first diagnostic result and a second diagnostic result. The output unit 220 may desirably display various information through a display (not illustrated) provided in the computing device 200 for diagnosing a vascular disease, but is not necessarily limited thereto.
The processor 230 performs a function of executing at least one instruction or program included in the memory 240. The processor 230 according to the exemplary embodiment performs an operation of generating a first learning model and a second learning model based on the synthetic model and the virtual patient model acquired from the input unit 210 or a database 250. Further, the processor 230 according to the exemplary embodiment performs an operation of determining a vascular disease based on patient information for a diagnosis subject acquired from the input unit 210 or the database 250 and determining whether to perform the surgery on the vascular disease. The memory 240 includes at least one instruction or program which is executable by the processor 230. The memory 240 may include an instruction or a program for generating the first learning model and the second learning model and performing a first diagnosis processing and a second diagnosis processing. The database 250 refers to a general data structure implemented in a storage space (a hard disk or a memory) of a computer system using a database management system (DBMS) and means a data storage format in which data can be freely searched (extracted), deleted, edited, or added. The database 250 may be implemented according to the object of the exemplary embodiment of the present disclosure using a relational database management system (RDBMS) such as Oracle, Informix, Sybase, or DB2, an object oriented database management system (OODBMS) such as Gemston, Orion, or O2, or an XML native database such as Excelon, Tamino, or Sekaiju, and has appropriate fields or elements to achieve its own function. The database 250 according to the exemplary embodiment stores the synthetic model, the virtual patient model, the patient information, the first diagnostic result, and the second diagnostic result and supplies the stored data. The data stored in the database 250 may be various information related to the vascular disease, such as the synthetic model, the virtual patient model, the patient information, the first diagnostic result, the second diagnostic result, the biometric authentication information, fractional flow reserve information, and flow feature information. In the meantime, it is described that the database 250 is implemented in the computing device 200 for diagnosing a vascular disease, but it is not necessarily limited thereto and may be implemented as a separate data storage device. FIG. 3 is a block diagram schematically illustrating a vascular disease diagnosing apparatus according to an exemplary embodiment of the present disclosure. The vascular disease diagnosing apparatus 300 according to the present disclosure may improve the quality and the quantity of data for diagnosing a vascular disease by applying the biometric authentication information of the diagnosis subject (patient) and the flow feature information based on computational fluid dynamics to a machine learning algorithm which is configured in two steps. The first step algorithm of the two-step machine learning algorithm aims to improve on the computational fluid dynamics of the related art.
The first step algorithm of the vascular disease diagnosing apparatus 300 receives a blood vessel shape of a patient acquired from CT as an input signal, like a fractional flow reserve (FFR) prediction simulator based on the computational fluid analysis technique of the related art, and derives the fractional flow reserve (FFR) prediction and the flow feature information, including flow factors such as vorticity or wall shear stress, as output signals. In the first step algorithm of the vascular disease diagnosing apparatus 300, the first learning model is used, and the first learning model is trained with results obtained by analyzing the synthetic model, rather than a blood vessel model of an actual patient, using the computational fluid technique. As a result, in the first step algorithm of the vascular disease diagnosing apparatus 300, the accuracy is similar to the accuracy when the computational fluid technique is used for analysis, and the calculation time of the computational fluid technique, which takes approximately 10 hours, may be shortened to a few minutes. The second step algorithm of the machine learning algorithm configured in two steps is intended to diagnose a vascular disease by overcoming a limitation of the computational fluid technique. Because the computational fluid technique does not fully reflect the patient's biometric authentication information, its accuracy may be just approximately 80%. In the present disclosure, in order to improve the diagnosis accuracy, various biometric information (for example, BMI, age, calcium concentration of the blood vessel) which cannot be reflected in the computational fluid technique of the related art needs to be considered. In the second step algorithm, in addition to the fractional flow reserve (FFR) predicted in the first step algorithm, the biometric authentication information of the diagnosis subject (patient) is received as an input to derive decision making information regarding the diagnosis of the vascular disease or whether to perform a treatment (for example, a medical procedure or a surgery). The second step algorithm of the vascular disease diagnosing apparatus 300 according to the present disclosure may additionally utilize the biometric authentication information of the diagnosis subject and the flow feature information of a blood flow, as compared with the algorithm of the related art which is trained only on a blood vessel image of the diagnosis subject. Further, the second step algorithm of the vascular disease diagnosing apparatus 300 may supplement the amount of data required for learning by using the synthetic model. The vascular disease diagnosing apparatus 300 according to the exemplary embodiment of the present disclosure includes an artificial information acquiring unit 310, a learning model generating unit 320, a patient information acquiring unit 330, a diagnosis processing unit 340, and a result processing unit 350. The vascular disease diagnosing apparatus 300 of FIG. 3 is an example, so not all blocks illustrated in FIG. 3 are essential components, and in other exemplary embodiments some blocks included in the vascular disease diagnosing apparatus 300 may be added, modified, or omitted.
In the meantime, the vascular disease diagnosing apparatus 300 may be implemented as a computing device which diagnoses a vascular disease, and each component included in the vascular disease diagnosing apparatus 300 may be implemented as a separate software device or as a separate hardware device in which the software is combined. The artificial information acquiring unit 310 acquires a synthetic model and a virtual patient model to generate a learning model for diagnosing a vascular disease. Here, the synthetic model and the virtual patient model may be information which is input by the manipulation of the user or information which is received from an external device. Here, the synthetic model may be a blood vessel image, but is not necessarily limited thereto and may be geometric feature information related to the blood vessel image. Further, the virtual patient model may be patient information about a diagnosis subject which is randomly collected, or patient information which is arbitrarily generated for a virtual diagnosis subject. The learning model generating unit 320 generates a learning model which allows the diagnosis processing unit 340 to diagnose a vascular disease. The learning model generating unit 320 includes a first learning model generating unit 322 and a second learning model generating unit 324. The first learning model generating unit 322 generates geometric feature parameter learning data based on a predetermined synthetic model and generates a first learning model based on supervised learning using fractional flow reserve data and flow feature data calculated using the geometric feature parameter learning data. Here, the geometric feature parameter learning data includes geometric feature parameters for a top portion, a middle portion, and a bottom portion of the blood vessel of the synthetic model. Here, the geometric feature parameters may include parameters for a length, a curvature, a diameter, eccentricity, etc. The first learning model generating unit 322 applies the geometric feature parameter learning data, the fractional flow reserve data, and the flow feature data to Gaussian process regression analysis to generate the first learning model. Here, the first learning model is a learning model which allows the diagnosis processing unit 340 to diagnose the vascular disease and performs an operation of receiving the geometric feature parameter information to calculate and output the fractional flow reserve (FFR) information. The second learning model generating unit 324 acquires biometric authentication data for the virtual patient model and generates a second learning model based on supervised learning using the biometric authentication data, the fractional flow reserve data, and the flow feature data. Here, the fractional flow reserve data refers to data calculated by applying the geometric feature parameter learning data generated based on the virtual patient model to the first learning model, and the flow feature data refers to data calculated by applying the geometric feature parameter learning data generated based on the virtual patient model to computational fluid dynamics (CFD). The second learning model generating unit 324 applies the biometric authentication data, the fractional flow reserve data, and the flow feature data to a support vector machine (SVM) to generate the second learning model.
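Before the second learning model is detailed further, the sketch below illustrates the first learning model's Gaussian process regression step described above using scikit-learn; the feature names, kernel choice, and synthetic training labels are illustrative assumptions, not details from the disclosure.

```python
# Minimal sketch of the first learning model: Gaussian process regression
# from geometric feature parameters to FFR, trained on synthetic data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Assumed features per vessel segment: [length_mm, curvature, diameter_mm, eccentricity]
X_train = rng.uniform([5, 0.0, 1.5, 0.0], [40, 0.5, 4.5, 0.8], size=(200, 4))

# Placeholder surrogate for CFD-derived FFR labels (illustration only):
# longer, narrower, more eccentric segments get lower FFR.
y_train = (1.0 - 0.004 * X_train[:, 0] - 0.05 * X_train[:, 3]
           + 0.03 * (X_train[:, 2] - 3.0) + rng.normal(0, 0.01, 200))

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)

segment = np.array([[25.0, 0.2, 2.0, 0.5]])          # one query lesion
ffr_pred, ffr_std = gpr.predict(segment, return_std=True)
print(f"Predicted FFR: {ffr_pred[0]:.3f} +/- {ffr_std[0]:.3f}")
```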
Here, the second learning model is a learning model which allows the diagnosis processing unit 340 to determine whether to perform a surgery on the vascular disease and performs an operation of receiving the biometric authentication data, the fractional flow reserve data, and the flow feature data to calculate and output decision making information about whether to perform a surgery. The patient information acquiring unit 330 acquires patient information for the diagnosis subject. Here, the patient information may include a blood vessel image of the diagnosis subject and biometric information. Here, the blood vessel image refers to an image obtained by capturing a lesion area, and the biometric information may include an age, a gender, a BMI (body mass index), vessel calcification, and hematocrit, which may identify the diagnosis subject. The diagnosis processing unit 340 performs an operation of determining a vascular disease and determining whether to perform a surgery on the vascular disease. The diagnosis processing unit 340 includes a first diagnosis processing unit 342 and a second diagnosis processing unit 344. The first diagnosis processing unit 342 generates geometric feature parameter information based on the patient information and applies the geometric feature parameter information to the first learning model to calculate the fractional flow reserve (FFR) information. Here, the geometric feature parameter information includes geometric feature parameters for a top portion, a middle portion, and a bottom portion of a blood vessel of the blood vessel image included in the patient information. Here, the geometric feature parameters may include parameters for a length, a curvature, a diameter, eccentricity, etc. Further, the first diagnosis processing unit 342 applies the geometric feature parameter information to computational fluid dynamics (CFD) to calculate the flow feature information. Here, computational fluid dynamics (CFD) discretizes the Navier-Stokes equations, which are non-linear partial differential equations describing fluid phenomena, using methods such as a finite difference method (FDM), a finite element method (FEM), and a finite volume method (FVM), to convert the Navier-Stokes equations into algebraic equations and solve a fluid flow problem using numerical algorithms. The first diagnosis processing unit 342 calculates the flow feature information including vorticity, a wall shear stress, a pressure, a velocity, WSS, OSI, and APS. The first diagnosis processing unit 342 transmits the calculated fractional flow reserve (FFR) information and flow feature information to the second diagnosis processing unit 344. In the meantime, the first diagnosis processing unit 342 transmits the calculated fractional flow reserve (FFR) information to a first result processing unit 352 to be output. The second diagnosis processing unit 344 acquires the biometric authentication information included in the patient information and acquires the fractional flow reserve (FFR) information and the flow feature information from the first diagnosis processing unit 342. The second diagnosis processing unit 344 applies the biometric authentication information, the fractional flow reserve information, and the flow feature information to the second learning model to diagnose the vascular disease and determine whether to perform the surgery on the vascular disease.
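A compact sketch of this two-step diagnosis flow is given below, reusing the first_model and second_model objects from the earlier sketch. The helpers extract_geometry() and run_cfd() are hypothetical placeholders for the geometric parameterization and CFD analysis, which the patent describes but does not specify as code.

```python
# Hedged sketch of the two-step diagnosis (units 342 and 344); extract_geometry()
# and run_cfd() are hypothetical placeholders supplied by the caller.
import numpy as np

def diagnose(blood_vessel_image, biometrics, first_model, second_model,
             extract_geometry, run_cfd):
    g = extract_geometry(blood_vessel_image)          # geometric feature parameters
    ffr = first_model.predict(g.reshape(1, -1))[0]    # STEP 1: FFR via first model
    flow = run_cfd(g)                                 # flow features via CFD
    x = np.hstack([biometrics, [ffr], flow]).reshape(1, -1)
    decision = int(second_model.predict(x)[0])        # STEP 2: 0 = no surgery, 1 = surgery
    return ffr, decision
```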
The second diagnosis processing unit 344 analyzes a stenosis state of the blood vessel based on the second learning model and determines whether to perform the surgery on the vascular disease based on the stenosis state. The second diagnosis processing unit 344 receives the biometric authentication information, the fractional flow reserve information, and the flow feature information to calculate decision making information about whether to perform the surgery. Here, the decision making information may be configured by binary numbers. For example, if the decision making information is "0", the vascular disease diagnosing apparatus 300 proposes another treatment without performing the surgery on the vascular disease. If the decision making information is "1", the vascular disease diagnosing apparatus 300 proposes to immediately perform the surgery on the vascular disease. The result processing unit 350 performs an operation of outputting a diagnostic result of the diagnosis processing unit 340. The result processing unit 350 includes a first result processing unit 352 and a second result processing unit 354. The first result processing unit 352 receives and outputs the fractional flow reserve (FFR) information calculated by the first diagnosis processing unit 342. Even though it is described that the first result processing unit 352 outputs only the fractional flow reserve (FFR) information, the present disclosure is not limited thereto, and the first result processing unit 352 may further output the flow feature information. The second result processing unit 354 receives and outputs the decision making information about whether to perform the surgery calculated by the second diagnosis processing unit 344. Even though it is described that the second result processing unit 354 outputs only the decision making information, the present disclosure is not limited thereto, and it may further output the biometric authentication information, the fractional flow reserve information, and the flow feature information which are used by the second diagnosis processing unit 344 to calculate the decision making information. FIGS. 4A and 4B are flow charts for explaining an operation of generating a learning model for diagnosing a vascular disease according to an exemplary embodiment of the present disclosure. Referring to FIG. 4A, the vascular disease diagnosing apparatus 300 acquires a synthetic model and virtual patient information (artificial information) in step S402. The vascular disease diagnosing apparatus 300 generates a first learning model based on the synthetic model in step S404. The vascular disease diagnosing apparatus 300 generates a second learning model based on the virtual patient information and the first learning model in step S406. FIG. 4B illustrates a detailed operation of generating the first learning model and the second learning model in the vascular disease diagnosing apparatus 300. Hereinafter, an operation of generating the first learning model in the first learning model generating unit 322 will be described (STEP 1). The first learning model generating unit 322 acquires a predetermined synthetic model in step S410 and generates geometric feature parameter learning data based on the synthetic model in step S412. Here, the geometric feature parameter learning data includes geometric feature parameters for a top portion, a middle portion, and a bottom portion of a blood vessel of the synthetic model. Here, the geometric feature parameters may include parameters for a length, a curvature, a diameter, eccentricity, etc.
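The per-segment parameterization just described can be pictured as a fixed-length feature vector. The sketch below assumes four scalars (length, curvature, diameter, eccentricity) for each of the three vessel segments, which is one plausible reading of the text rather than the patent's exact encoding.

```python
# Illustrative encoding of geometric feature parameters: four assumed scalars
# per segment (top, middle, bottom) flattened into one 12-dimensional vector.
import numpy as np

def geometric_feature_vector(segments: dict) -> np.ndarray:
    order = ("top", "middle", "bottom")
    keys = ("length", "curvature", "diameter", "eccentricity")
    return np.array([segments[seg][k] for seg in order for k in keys])

g = geometric_feature_vector({
    "top":    {"length": 12.0, "curvature": 0.08, "diameter": 3.1, "eccentricity": 0.2},
    "middle": {"length":  9.5, "curvature": 0.21, "diameter": 1.4, "eccentricity": 0.6},
    "bottom": {"length": 14.2, "curvature": 0.05, "diameter": 2.9, "eccentricity": 0.3},
})
assert g.shape == (12,)
```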
The first learning model generating unit 322 performs computational fluid dynamics (CFD) based on the geometric feature parameter learning data in step S420 to calculate the fractional flow reserve data and the flow feature data in steps S422 and S424. The first learning model generating unit 322 is trained for Gaussian process regression analysis with the geometric feature parameter learning data, the fractional flow reserve data, and the flow feature data as inputs to generate the first learning model 400 in step S430. Hereinafter, an operation of generating the second learning model in the second learning model generating unit 324 will be described (STEP 2). The second learning model generating unit 324 acquires a virtual patient model in step S440 and generates geometric feature parameter learning data for the virtual patient model in step S450. Here, the geometric feature parameter learning data includes geometric feature parameters for a top portion, a middle portion, and a bottom portion of the blood vessel image included in the virtual patient model. The geometric feature parameters may include parameters for a length, a curvature, a diameter, eccentricity, etc. The second learning model generating unit 324 applies the geometric feature parameter learning data to the first learning model 400 to calculate the fractional flow reserve data in steps S460 and S462. The second learning model generating unit 324 applies the geometric feature parameter learning data to computational fluid dynamics (CFD) to calculate the flow feature data in steps S470 and S472. The second learning model generating unit 324 acquires biometric authentication information included in the virtual patient model in step S480. The second learning model generating unit 324 applies the biometric authentication data, the fractional flow reserve data, and the flow feature data as inputs to a support vector machine (SVM) to generate the second learning model 402 in step S490. Even though in FIGS. 4A and 4B it is described that the steps are sequentially executed, the present disclosure is not necessarily limited thereto. In other words, the order of the steps described in FIGS. 4A and 4B may be modified, or one or more steps may be executed in parallel, so that FIGS. 4A and 4B are not limited to a time-sequential order. The learning model generating method for diagnosing a vascular disease according to the exemplary embodiment described in FIGS. 4A and 4B may be implemented by an application (or a program) and may be recorded in a terminal device (or computer) readable recording medium. The recording medium which has the application (or program) for implementing the learning model generating method according to the exemplary embodiment for diagnosing the vascular disease recorded therein and is readable by the terminal device (or a computer) includes all kinds of recording devices or media in which computing system readable data is stored. FIGS. 5A and 5B are flowcharts for explaining an operation of generating a first learning model and a second learning model according to an exemplary embodiment of the present disclosure. FIG. 5A illustrates an operation of generating a first learning model, and FIG. 5B illustrates an operation of generating a second learning model. Referring to FIG. 5A, the vascular disease diagnosing apparatus 300 acquires an artificial blood vessel geometric feature parameter G_S and computational fluid analysis data D_CFD in steps S510 and S512.
Here, the artificial blood vessel geometric feature parameter G_S may be fractional flow reserve data, and the computational fluid analysis data D_CFD may be flow feature data. The vascular disease diagnosing apparatus 300 generates data samples G_S and D_CFD based on the artificial blood vessel geometric feature parameter G_S and the computational fluid analysis data D_CFD in step S520. The vascular disease diagnosing apparatus 300 generates learning data X and Y and test data X′ and Y′ based on the data samples G_S and D_CFD in steps S530 and S532 and performs Gaussian process regression learning using the learning data X and Y in step S540. The vascular disease diagnosing apparatus 300 performs a test and a feedback for repeated learning in step S550 and generates the first learning model based thereon in step S552. Referring to FIG. 5B, the vascular disease diagnosing apparatus 300 acquires a patient blood vessel geometric feature parameter G_P, patient biometric information B_P, and a patient FFR measurement value FFR_P in step S560. The vascular disease diagnosing apparatus 300 generates data samples G_P and B_P based on the patient blood vessel geometric feature parameter G_P, the patient biometric information B_P, and the patient FFR measurement value FFR_P in step S570. The vascular disease diagnosing apparatus 300 generates learning data X and Y and test data X′ and Y′ based on the data samples G_P and B_P in steps S572 and S574 and performs support vector machine (SVM) learning using the learning data X and Y in step S580. The vascular disease diagnosing apparatus 300 performs a test and a feedback for repeated learning in step S590 and generates the second learning model based thereon in step S592. FIGS. 6A and 6B are flow charts for explaining a method for diagnosing a vascular disease according to an exemplary embodiment of the present disclosure. Referring to FIG. 6A, the vascular disease diagnosing apparatus 300 acquires patient information in step S602. The vascular disease diagnosing apparatus 300 performs a first diagnosis processing using the geometric feature parameter information generated based on the patient information in step S604. The vascular disease diagnosing apparatus 300 outputs a first diagnostic result for the first diagnosis processing in step S606. Here, the first diagnostic result may be at least one of the fractional flow reserve (FFR) information and the flow feature information. The vascular disease diagnosing apparatus 300 performs a second diagnosis processing using the first diagnostic result and the biometric authentication information included in the patient information. The vascular disease diagnosing apparatus 300 outputs a second diagnostic result for the second diagnosis processing in step S609. Here, the second diagnostic result may include information about whether to perform the surgery on the vascular disease. FIG. 6B illustrates a detailed operation of performing the first diagnosis processing and the second diagnosis processing in the vascular disease diagnosing apparatus 300. Hereinafter, an operation of performing the first diagnosis processing in the first diagnosis processing unit 342 will be described (STEP 1). The first diagnosis processing unit 342 acquires patient information in step S610 and generates geometric feature parameter information based on the patient information in step S620. Here, the geometric feature parameter information includes geometric feature parameters for a top portion, a middle portion, and a bottom portion of a blood vessel of the blood vessel image included in the patient information.
The geometric feature parameters may include parameters for a length, a curvature, a diameter, eccentricity, etc. The first diagnosis processing unit 342 applies the generated geometric feature parameter information to the first learning model 400 to calculate the fractional flow reserve (FFR) information in steps S630 and S640. The first diagnosis processing unit 342 outputs the calculated fractional flow reserve (FFR) information as a first diagnostic result in step S642. The first diagnosis processing unit 342 applies the generated geometric feature parameter information to computational fluid dynamics (CFD) to calculate the flow feature information in steps S650 and S652. Here, the flow feature information may include information about vorticity, a wall shear stress, a pressure, a velocity, WSS, OSI, and APS. Hereinafter, an operation of performing the second diagnosis processing in the second diagnosis processing unit 344 will be described (STEP 2). The second diagnosis processing unit 344 acquires the biometric authentication information included in the patient information in step S660 and acquires the fractional flow reserve (FFR) information and the flow feature information from the first diagnosis processing unit 342. The second diagnosis processing unit 344 applies the biometric authentication information, the fractional flow reserve information, and the flow feature information to the second learning model 402 to calculate and output a second diagnostic result in steps S680 and S690. The second diagnosis processing unit 344 applies the biometric authentication information, the fractional flow reserve information, and the flow feature information to the second learning model 402 to diagnose the vascular disease and determine whether to perform the surgery on the vascular disease. Even though in FIGS. 6A and 6B it is described that the steps are sequentially executed, the present disclosure is not necessarily limited thereto. In other words, the order of the steps described in FIGS. 6A and 6B may be modified, or one or more steps may be executed in parallel, so that FIGS. 6A and 6B are not limited to a time-sequential order. The vascular disease diagnosing method according to the exemplary embodiment described in FIGS. 6A and 6B may be implemented by an application (or a program) and may be recorded in a terminal device (or computer) readable recording medium. The recording medium which has the application (or program) for implementing the vascular disease diagnosing method according to the exemplary embodiment recorded therein and is readable by the terminal device (or a computer) includes all kinds of recording devices or media in which computing system readable data is stored. FIGS. 7A and 7B are exemplary views illustrating a diagnostic result of a vascular disease according to an exemplary embodiment of the present disclosure and a diagnostic result of a vascular disease according to the related art. FIG. 7A illustrates a diagnostic result calculated by a diagnosing method (FFR_EXP/DEC_EXP) of the related art and a diagnostic result calculated by a diagnosing method (FFR_GPR/DEC_SVM) of the present disclosure, for a plurality of patients. Referring to FIG. 7A, it is confirmed that the diagnostic results for 11 patients out of a total of 20 patients match between the diagnosing method (FFR_GPR/DEC_SVM) of the present disclosure and the diagnosing method (FFR_EXP/DEC_EXP) of the related art.
It is further confirmed that the diagnostic results for 4 patients out of the total of 20 patients match only the diagnosing method (DEC_SVM) of the present disclosure, and the diagnostic results for 2 patients out of the total of 20 patients match only the diagnosing method (FFR_GPR) of the present disclosure. It is confirmed that the diagnostic results for 3 patients out of the total of 20 patients match neither the diagnosing method (FFR_GPR/DEC_SVM) of the present disclosure nor the diagnosing method (FFR_EXP/DEC_EXP) of the related art. FIG. 7B illustrates a result of comparing the accuracy, the sensitivity, and the specificity of the present disclosure and the related art. In FIG. 7B, the accuracy may be calculated by "accurately predicted data/all data," the sensitivity is calculated by "the number of patients who require the surgery/accurately predicted data," and the specificity is calculated by "the number of patients who do not require the surgery/accurately predicted data." FIGS. 8 and 9 are exemplary views for explaining an operation of a vascular disease system according to an exemplary embodiment of the present disclosure. Referring to FIG. 8, a vascular disease system may determine a vascular disease and whether to perform a surgery in the following order:
step S810: perform non-invasive diagnosis
step S820: input or acquire vessel parameters, blood parameters, and other parameters (input FFR-related parameters)
step S830: transmit data using a terminal of a diagnostician (doctor)
step S840: calculate and estimate a vascular disease by the diagnosis processing using functional machine learning (a first learning model and a second learning model)
step S850: output a diagnostic result using an augmented reality or virtual reality device which interworks with the terminal of the diagnostician (doctor)
step S860: provide a clear guideline for decision making in the form of "yes" or "no" regarding whether to perform a surgery, based on the diagnostic result, and additionally provide the fractional flow reserve (FFR) and other parameters (a pressure, a velocity, WSS, OSI, and APS).
Referring to FIG. 9, a vascular disease system may determine a vascular disease and whether to perform a surgery in the following order to cure the vascular disease:
step S910: perform non-invasive diagnosis in an operating room
step S920: a diagnostician (doctor) transmits data to a vascular disease diagnosing apparatus through a terminal
step S930: perform simulation/machine learning based analysis in the vascular disease diagnosing apparatus
step S940: the diagnostician (doctor) receives data for the diagnostic result by the terminal
step S950: the diagnostician (doctor) checks the diagnostic result to immediately determine whether to perform a surgery
The vascular disease system allows the diagnosis subject (patient) and the diagnostician (doctor) to wait less than one hour at a predetermined site (a doctor's office or an operating room) to determine whether to perform the surgery, so that the diagnosis and the treatment can be performed quickly. It will be appreciated that various exemplary embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications and changes may be made by those skilled in the art without departing from the scope and spirit of the present disclosure.
Accordingly, the exemplary embodiments of the present disclosure are intended not to limit but to describe the technical spirit of the present disclosure, and the scope of the technical spirit of the present disclosure is not restricted by these exemplary embodiments. The protective scope of the exemplary embodiments of the present disclosure should be construed based on the following claims, and all technical concepts in the equivalent scope thereof should be construed as falling within the scope of the exemplary embodiments of the present disclosure. | 34,552 |
11857293 | DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS Overview Various embodiments can detect bleeding before, during, and after fluid resuscitation. In an aspect, such detection can be performed noninvasively. In some embodiments, the detection can be based on a calculation (or estimation) of a patient's compensatory reserve index ("CRI," also referred to herein and in the Related Applications as "cardiac reserve index" or "hemodynamic reserve index" ("HDRI")). In other cases, the assessments might be based on raw waveform data (e.g., PPG waveform data) captured by a sensor on the patient (such as the sensors described in the Related Applications, for example). In further cases, a combination of waveform data and calculated/estimated CRI can be used to calculate the effectiveness of resuscitation and/or the amount of fluid needed for effective resuscitation. In other aspects, such functionality can be provided by and/or integrated with systems and devices (such as a cardiac reserve monitor), tools, techniques, methods, and software described in the Related Applications, including in particular the '483 application. For example, various operations described in accordance with the methods disclosed by the Related Applications can be employed in a method of assessing effectiveness of resuscitation and/or calculating an amount of fluid needed for effective resuscitation. Similarly, such techniques can be performed by the systems and/or embodied by the software products described in the Related Applications. An embodiment can include a system that comprises one or more sensors placed on the patient and a computer system (such as those described in the Related Applications) that performs a method for using sensor data for estimating and predicting (in real-time, after every heartbeat, or as the information is needed) one or more of the relevant parameters outlined above. Other embodiments can comprise the computer system programmed to perform such a method, an apparatus comprising instructions to program a computer to perform such a method, and/or such a method itself. A sensor may include but is not limited to any of the following: a noninvasive blood pressure sensor such as the Nexfin (BMEYE, B.V.) or Finometer (Finapres Medical Systems B.V.); invasive arterial blood pressure, using an arterial catheter; invasive central venous pressure; an invasive or noninvasive intracranial pressure monitor; EEG (electroencephalograph); cardiac monitor (EKG); transcranial Doppler sensor; transthoracic impedance plethysmography; pulse oximetry; a sensor generating a photoplethysmograph (PPG) waveform; near infrared spectroscopy; electronic stethoscope; and/or the like. The '809 application describes several exemplary embodiments, but various embodiments are not limited to those described in the '809 application. For example, FIG. 1 of the '809 application illustrates an exemplary sensor that can be used to collect waveform data for analysis, but other sensors could be used as well. Similarly, the '809 application describes several techniques for estimating probability of blood loss. Many such techniques depend on an estimate of a patient's CRI, which can be calculated using the techniques described in the '483 application. It should be appreciated, however, that other techniques for estimating a probability of bleeding and/or estimating CRI can be employed in various embodiments.
Thus, in one aspect, a method can include receiving data from such a sensor and analyzing such data using techniques including, but not limited to, analyzing the data using models described in the Related Applications. Merely by way of example, a model might be constructed using test subject data from a study, such as the LBNP study, which can be used to predict or estimate a CRI (or HDRI) value, as described in the Related Applications, and in particular in the '483 application. From this calculated value of CRI (or, in some embodiments, from the waveform data itself, alone or in combination with the CRI value), a probability that a patient is bleeding internally before, during, and/or after fluid resuscitation procedures can be estimated, for example, using the techniques described in the '809 application. For example, in one embodiment, a method might comprise capturing waveform data from a patient with the sensor before, during, and/or after fluid resuscitation and/or calculating a CRI value for the patient at these times. In some cases, the variation in CRI values obtained during the procedure can be used to estimate a probability that the patient is bleeding. For instance, the standard deviation of the CRI values during the recording and/or the difference in CRI values before, during, and/or after fluid resuscitation can be used to estimate the probability of bleeding, as described more fully with regard to the clinical study detailed in the '809 application. Some embodiments further comprise normalizing an estimated probability of bleeding against a scale. For example, in some cases, an index from 0 to 1 could be used, with 0 indicating that the patient is not bleeding, 1 indicating that the patient is bleeding, and values between 0 and 1 indicating relative probabilities that the patient is bleeding, based on the estimates calculated from the CRI values. The following detailed description illustrates a few exemplary embodiments in further detail to enable one of skill in the art to practice such embodiments. The described examples are provided for illustrative purposes and are not intended to limit the scope of the invention. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent to one skilled in the art, however, that other embodiments of the present invention may be practiced without some of these specific details. In other instances, certain structures and devices are shown in block diagram form. Several embodiments are described herein, and while various features are ascribed to different embodiments, it should be appreciated that the features described with respect to one embodiment may be incorporated with other embodiments as well.
By the same token, however, no single feature or features of any described embodiment should be considered essential to every embodiment of the invention, as other embodiments of the invention may omit such features. Unless otherwise indicated, all numbers used herein to express quantities, dimensions, and so forth should be understood as being modified in all instances by the term "about." In this application, the use of the singular includes the plural unless specifically stated otherwise, and use of the terms "and" and "or" means "and/or" unless otherwise indicated. Moreover, the use of the term "including," as well as other forms, such as "includes" and "included," should be considered non-exclusive. Also, terms such as "element" or "component" encompass both elements and components comprising one unit and elements and components that comprise more than one unit, unless specifically stated otherwise. The tools provided by various embodiments include, without limitation, methods, systems, and/or software products. Merely by way of example, a method might comprise one or more procedures, any or all of which are executed by a computer system. Correspondingly, an embodiment might provide a computer system configured with instructions to perform one or more procedures in accordance with methods provided by various other embodiments. Similarly, a computer program might comprise a set of instructions that are executable by a computer system (and/or a processor therein) to perform such operations. In many cases, such software programs are encoded on physical, tangible, and/or non-transitory computer readable media (such as, to name but a few examples, optical media, magnetic media, and/or the like). In an aspect, a system might be provided that comprises one or more sensors to obtain physiological data from a patient and a computer system in communication with the one or more sensors. The computer system might comprise one or more processors and a non-transitory computer readable medium in communication with the one or more processors. The computer readable medium might have encoded thereon a set of instructions executable by the one or more processors to cause the computer system to receive the physiological data from the one or more sensors, analyze the physiological data, estimate a probability that the patient is bleeding, and display, on a display device, at least one of an assessment, prediction, or estimate indicating a probability that the patient is bleeding. In another aspect, a method might be provided that comprises monitoring, with one or more sensors, physiological data of a patient, analyzing, with a computer system, the physiological data, and estimating a probability that the patient is bleeding, based at least in part on the analyzed physiological data. The method might further comprise displaying, on a display device, an indication of at least one of an assessment, prediction, or estimate of a probability that the patient is bleeding. In some instances, one or more of monitoring the physiological data, analyzing the physiological data, estimating the probability that the patient is bleeding, or displaying the indication of at least one of an assessment, prediction, or estimate of the probability that the patient is bleeding are performed in real-time.
In some cases, estimating a probability that the patient is bleeding might comprise estimating a probability that the patient is bleeding based on one or more values of compensatory reserve index ("CRI") estimated based on the received physiological data. According to some embodiments, the one or more values of CRI are estimated based on physiological data that are at least one of received before, received during, or received after a fluid resuscitation procedure. In some embodiments, the one or more values of CRI might comprise a plurality of values of CRI. In some cases, estimating a probability that the patient is bleeding might comprise estimating the probability that the patient is bleeding based at least in part on one or more of an average value of CRI over a particular period of time, a standard deviation of at least some of the plurality of values of CRI, a skewness of at least some of the plurality of values of CRI, a rate of change of at least some of the plurality of values of CRI, a rate of rate change of at least some of the plurality of values of CRI, and/or a difference between at least some of the plurality of values of CRI. In some instances, the indication is a value between 0 and 1. According to some embodiments, a value of 0 might indicate that the patient is not bleeding, while a value of 1 might indicate that the patient is bleeding. In some cases, estimating a CRI of the patient comprises estimating a compensatory reserve index by comparing the physiological data to a model constructed using the following formula: CRI(t) = 1 - BLV(t)/BLV_HDD, where CRI(t) is the compensatory reserve at time t, BLV(t) is an intravascular volume loss of a test subject at time t, and BLV_HDD is an intravascular volume loss at a point of hemodynamic decompensation of the test subject. In some embodiments, the physiological data comprises waveform data, and estimating the CRI comprises comparing the waveform data with one or more sample waveforms generated by exposing one or more test subjects to a state of hemodynamic decompensation or near hemodynamic decompensation, or a series of states progressing towards hemodynamic decompensation, and monitoring physiological data of the test subjects. In some instances, the physiological data might comprise waveform data, and estimating the CRI might comprise comparing the waveform data with a plurality of sample waveforms, each of the sample waveforms corresponding to a different value of the CRI, to produce a similarity coefficient expressing a similarity between the waveform data and each of the sample waveforms, normalizing the similarity coefficients for each of the sample waveforms, and summing the normalized similarity coefficients to produce an estimated CRI value for the patient. According to some embodiments, estimating a probability that the patient is bleeding is based on a fixed time history of monitoring the physiological data of the patient. Alternatively, estimating a probability that the patient is bleeding is based on a dynamic time history of monitoring the physiological data of the patient.
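Read literally, the similarity approach above amounts to a similarity-weighted combination of the CRI values attached to the reference waveforms. The sketch below uses Pearson correlation as the similarity coefficient, which is an assumption; the passage does not fix a particular similarity measure.

```python
# Hedged sketch of the waveform-comparison CRI estimate: similarity against
# reference waveforms with known CRI values, normalized and summed. Using
# correlation as the similarity coefficient is an assumption, and the
# reference waveforms are assumed resampled to the same length as the input.
import numpy as np

def estimate_cri(waveform, sample_waveforms, sample_cri_values):
    sims = np.array([abs(np.corrcoef(waveform, s)[0, 1])
                     for s in sample_waveforms])       # similarity coefficients
    weights = sims / sims.sum()                        # normalize
    return float(np.sum(weights * np.asarray(sample_cri_values)))
```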
In some instances, at least one of the one or more sensors is selected from the group consisting of a blood pressure sensor, an intracranial pressure monitor, a central venous pressure monitoring catheter, an arterial catheter, an electroencephalograph, a cardiac monitor, a transcranial Doppler sensor, a transthoracic impedance plethysmograph, a pulse oximeter, a near infrared spectrometer, a ventilator, an accelerometer, an electrooculogram, a transcutaneous glucometer, an electrolyte sensor, and an electronic stethoscope. Merely by way of example, in some embodiments, physiological data might comprise at least one of blood pressure waveform data, plethysmograph waveform data, or photoplethysmograph (PPG) waveform data. In some cases, analyzing the physiological data might comprise analyzing the physiological data against a pre-existing model. In some embodiments, the method might further comprise generating the pre-existing model prior to analyzing the physiological data. In some instances, generating the pre-existing model might comprise receiving data pertaining to one or more physiological parameters of a test subject to obtain a plurality of physiological data sets, directly measuring one or more physiological states of the test subject with a reference sensor to obtain a plurality of physiological state measurements, and correlating the received data with the physiological state measurements of the test subject. According to some embodiments, the one or more physiological states comprise reduced circulatory system volume. In some instances, the method might further comprise inducing the physiological state of reduced circulatory system volume in the test subject. In some cases, inducing the physiological state comprises at least one of subjecting the test subject to lower body negative pressure ("LBNP"), subjecting the test subject to dehydration, and/or the like. In some embodiments, the one or more physiological states might comprise at least one of a state of cardiovascular collapse or near-cardiovascular collapse, a state of euvolemia, a state of hypervolemia, a state of dehydration, and/or the like. According to some embodiments, correlating the received data with the physiological state measurements of the test subject might comprise identifying a most predictive set of signals S_k out of a set of signals s_1, s_2, . . . , s_D for each of one or more outcomes o_k; autonomously learning a set of probabilistic predictive models ô_k = M_k(S_k); and repeating the operation of autonomously learning incrementally from data that contains examples of values of signals s_1, s_2, . . . , s_D and corresponding outcomes o_1, o_2, . . . , o_K. Here, the most-predictive set of signals S_k corresponds to a first data set representing a first physiological parameter, each of the one or more outcomes o_k represents a physiological state measurement, and ô_k is a prediction of outcome o_k derived from a model M_k that uses as inputs values obtained from the set of signals S_k. In yet another aspect, an apparatus might be provided that comprises a non-transitory computer readable medium that has encoded thereon a set of instructions executable by one or more computers to cause the apparatus to receive physiological data from one or more sensors, analyze the physiological data, estimate a probability that the patient is bleeding, and display, on a display device, at least one of an assessment, prediction, or estimate indicating a probability that the patient is bleeding.
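To make the CRI-statistics route concrete, here is a minimal sketch that turns a CRI time history into a bleeding-probability index on [0, 1]. The particular statistics follow the enumeration above (means, standard deviation, differences), but the weights and clipping are invented for illustration and are not the trained model of the '809 application.

```python
# Hedged sketch: bleeding-probability index from CRI statistics. The weights
# are illustrative assumptions; a real system would learn them from data.
import numpy as np

def bleeding_probability(cri_before, cri_after, w_drop=4.0, w_spread=6.0):
    cri_before, cri_after = np.asarray(cri_before), np.asarray(cri_after)
    drop = np.mean(cri_before) - np.mean(cri_after)      # CRI falling despite fluids
    spread = np.std(np.concatenate([cri_before, cri_after]))
    score = w_drop * max(drop, 0.0) + w_spread * spread
    return float(np.clip(score, 0.0, 1.0))               # 0 = no bleeding, 1 = bleeding

# Example: CRI drifting downward across a resuscitation procedure.
print(bleeding_probability([0.82, 0.80, 0.78], [0.70, 0.66, 0.61]))
```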
Various modifications and additions can be made to the embodiments discussed without departing from the scope of the invention. For example, while the embodiments described above refer to particular features, the scope of this invention also includes embodiments having different combinations of features and embodiments that do not include all of the above described features. Compensatory Reserve Index ("CRI") Various embodiments can assess the effectiveness of fluid intake hydration, where effectiveness can be defined as, but is not limited to, leading to a better hydration state or maintaining an optimal hydration state. In one aspect, optimal hydration might be defined as a fluid state that maximizes some performance index/measure, perhaps indicated by the patient's compensatory reserve index ("CRI," also referred to herein and in the Related Applications as "cardiac reserve index" or "hemodynamic reserve index" ("HDRI"), all of which should be considered synonymous for purposes of this disclosure). (While the term, "patient," is used herein for convenience, that descriptor should not be considered limiting, because various embodiments can be employed both in a clinical setting and outside any clinical setting, such as by an athlete before, during, or after an athletic contest or training, a person during daily activities, a soldier on the battlefield, etc. Thus, the term, "patient," as used herein, should be interpreted broadly and should be considered to be synonymous with "person.") In other cases, the assessments might be based on raw waveform data (e.g., PPG waveform data) captured by a sensor on the patient (such as the sensors described below and in the Related Applications, for example). In further cases, a combination of waveform data and calculated/estimated CRI can be used to calculate the effectiveness of hydration and/or the amount of fluid needed for effective hydration. In other aspects, such functionality can be provided by and/or integrated with systems, devices (such as a cardiac reserve monitor and/or wrist-worn sensor device), tools, techniques, methods, and software described below and in the Related Applications. For example, one set of embodiments provides methods. An exemplary method might comprise monitoring, with one or more sensors, physiological data of a patient. The method might further comprise analyzing, with a computer system, the physiological data. Many different types of physiological data can be monitored and/or analyzed by various embodiments, including without limitation, blood pressure waveform data, plethysmograph waveform data, photoplethysmograph ("PPG") waveform data (such as that generated by a pulse oximeter), and/or the like. In an aspect of some embodiments, analyzing the physiological data might comprise analyzing the data against a pre-existing model. In some cases, the method can further comprise assessing the effectiveness of hydration efforts, and/or displaying (e.g., on a display device) an assessment of the effectiveness of the hydration efforts. Such an assessment can include, without limitation, an estimate of the effectiveness at a current time, a prediction of the effectiveness at some point in the future, an estimate and/or prediction of a volume of fluid necessary for effective hydration, an estimate of the probability a patient requires fluids, etc.
An apparatus, in accordance with yet another set of embodiments, might comprise a computer readable medium having encoded thereon a set of instructions executable by one or more computers to perform one or more operations. In some embodiments, the set of instructions might comprise instructions for performing some or all of the operations of methods provided by certain embodiments. A system, in accordance with yet another set of embodiments, might comprise one or more processors and a computer readable medium in communication with the one or more processors. The computer readable medium might have encoded thereon a set of instructions executable by the computer system to perform one or more operations, such as the set of instructions described above, to name one example. In some embodiments, the system might further comprise one or more sensors and/or a therapeutic device, either or both of which might be in communication with the processor and/or might be controlled by the processor. Such sensors can include, but are not limited to, a blood pressure sensor, an intracranial pressure monitor, a central venous pressure monitoring catheter, an arterial catheter, an electroencephalograph, a cardiac monitor, a transcranial Doppler sensor, a transthoracic impedance plethysmograph, a pulse oximeter, a near infrared spectrometer, a ventilator, an accelerometer, an electrooculogram, a transcutaneous glucometer, an electrolyte sensor, and/or an electronic stethoscope. CRI for Assessing Blood Loss Before, During, and/or After Fluid Resuscitation A set of embodiments provides methods, systems, and software that can be used, in many cases noninvasively, to quickly and accurately assess blood loss in a patient (e.g., before, during, and/or after fluid resuscitation). Such an assessment can include, without limitation, an estimate of the effectiveness at a current time, a prediction of the effectiveness at some point in the future, an estimate and/or prediction of a volume of fluid necessary for effective hydration, an estimate of the probability a patient requires fluids, an estimate and/or prediction of blood loss (e.g., before, during, and/or after fluid resuscitation), etc. In a particular set of embodiments, a device, which can be worn on the patient's body, can include one or more sensors that monitor a patient's physiological parameters. The device (or a computer in communication with the device) can analyze the data captured by the sensors and compare such data with a model (which can be generated in accordance with other embodiments) to assess the effectiveness of hydration, as described in further detail in the '426 application, and/or to assess blood loss (e.g., before, during, and/or after fluid resuscitation). Different embodiments can measure a number of different physiological parameters from the patient, and the analysis of those parameters can vary according to which parameters are measured (and which, according to the generated model, are found to be most predictive of the effectiveness of hydration, including the probability of the need for hydration and/or the volume of fluids needed, or most predictive of blood loss). In some cases, the parameters themselves (e.g., continuous waveform data captured by a photoplethysmograph) can be analyzed against the model to make assessments of hydration effectiveness or assessments of blood loss (e.g., before, during, and/or after fluid resuscitation). 
In other cases, physiological parameters can be derived from the captured data, and these parameters can be used in the analysis. Merely by way of example, as described further below and in the '483 application (already incorporated by reference), direct physiological data (captured by sensors) can be used to estimate a value of CRI, and this value of CRI can be used to assess the effectiveness of hydration and/or to assess blood loss (e.g., before, during, and/or after fluid resuscitation). In yet other cases, the derived CRI values and raw sensor data can be used together to perform such assessments. For example, the '483 application describes a compensatory reserve monitor (also described as a cardiac reserve monitor or hemodynamic reserve monitor) that is able to estimate the compensatory reserve of a patient. In an aspect, this monitor can quickly, accurately, and/or in real-time determine the probability of whether a patient is bleeding. In another aspect, the device can simultaneously monitor the patient's compensatory reserve by tracking the patient's CRI, to appropriately and effectively guide hydration and ongoing patient care. The same device (or a similar device) can also include advanced functionality to assess the effectiveness of hydration, based on the monitored CRI values, as explained in further detail in the '426 application, and/or to rapidly assess blood loss (e.g., before, during, and/or after fluid resuscitation). CRI is a hemodynamic parameter that is indicative of the individual-specific proportion of intravascular fluid reserve remaining before the onset of hemodynamic decompensation. CRI has values that range from 1 to 0, where values near 1 are associated with normovolemia (normal circulatory volume) and values near 0 are associated with the individual-specific circulatory volume at which hemodynamic decompensation occurs. The mathematical formula of CRI, at some time "t," is given by the following equation:

CRI(t) = 1 - BLV(t)/BLV_HDD   (Eq. 1)

where BLV(t) is the intravascular volume loss ("BLV," also referred to as "blood loss volume" in the Related Applications) of a person at time "t," and BLV_HDD is the intravascular volume loss of a person when they enter hemodynamic decompensation ("HDD"). Hemodynamic decompensation is generally defined as occurring when the systolic blood pressure falls below 70 mmHg. This level of intravascular volume loss is individual specific and will vary from subject to subject. Lower body negative pressure (LBNP) stands in some linear or nonlinear relationship λ with intravascular volume loss:

BLV = λ·LBNP   (Eq. 2)

This relationship can be used in order to estimate the CRI for an individual undergoing an LBNP experiment as follows:

CRI = 1 - BLV(t)/BLV_HDD ≈ 1 - (λ·LBNP(t))/(λ·LBNP_HDD) = 1 - LBNP(t)/LBNP_HDD   (Eq. 3)

where LBNP(t) is the LBNP level that the individual is experiencing at time "t," and LBNP_HDD is the LBNP level at which the individual will enter hemodynamic decompensation. Using either CRI data, raw (or otherwise processed) sensor data, or both, various embodiments can assess the effectiveness of hydration. In one embodiment, the assessment of blood loss ("BL") can be expressed as a value between 0 and 1; when BL=1, blood loss is certain, when BL=0, there is no blood loss, and when BL is a value between 1 and 0, the value is indicative of the probability of blood loss, perhaps due to ongoing bleeding before, during, and/or after fluid resuscitation. (Of course, other embodiments can scale the value of BL differently.)
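Eq. 3 is simple enough to transcribe directly; the sketch below does so, noting that λ cancels and that LBNP_HDD is a subject-specific quantity that must be known or estimated.

```python
# Direct transcription of Eq. 3 above; lbnp_hdd is subject-specific.
def cri_from_lbnp(lbnp_t: float, lbnp_hdd: float) -> float:
    """CRI(t) = 1 - LBNP(t)/LBNP_HDD (the lambda of Eq. 2 cancels)."""
    return 1.0 - lbnp_t / lbnp_hdd

# Example: a subject who decompensates at LBNP = 70 mmHg, currently at 30 mmHg,
# retains roughly 57% of compensatory reserve.
print(cri_from_lbnp(30.0, 70.0))   # ~0.571
```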
In an aspect of some embodiments, a general expression for the estimate of BL is as follows:

BL = f_BL(CRI_t, FV_t, S_t)   (Eq. 4)

where BL is a measure or an estimate of blood loss, f_BL(CRI_t, FV_t, S_t) is an algorithm embodied by a model generated empirically, e.g., using the techniques described with respect to FIG. 4 below and/or in the Related Applications, CRI_t is a time history of CRI values (which can range from a single CRI value to many hours of CRI values), FV_t is a time history of fluid volume being given to the patient (which can range from a single value to many hours of values), and S_t is a time history of raw sensor values, such as physiological data measured by the sensors, as described elsewhere herein (which can range from one value to many hours of values). The functional form of Eq. 4 is similar to, but not limited to, the form of the CRI model in the sense that time histories of (CRI_t, FV_t, S_t) data gathered from human subjects at various levels of BL are compared to time histories of (CRI_t, FV_t, S_t) for the current patient being monitored. The estimated BL for the current patient is then that which is the closest in (CRI_t, FV_t, S_t) space to the previously gathered data. While Eq. 4 is the general expression for BL, various embodiments might use subsets of the parameters considered in Eq. 4. For instance, in one embodiment, a model might consider only the volume of fluid and CRI data, without accounting for raw sensor input. In that case, BL can be calculated as follows:

BL = f_BL(CRI_t, FV_t)   (Eq. 5)

Similarly, some models might estimate BL based on sensor data, rather than first estimating CRI, in which case BL can be expressed thusly:

BL = f_BL(FV_t, S_t)   (Eq. 6)

The choice of parameters to use in modeling BL is discretionary, and it can depend on what parameters are shown (e.g., using the techniques of FIG. 4, below) to result in the best prediction of BL. In another aspect, the effectiveness of hydration can be assessed by estimating or predicting the volume, V, of fluid necessary for effective hydration of the patient. This volume, V, can indicate a volume of fluid needed for full hydration if therapy has not yet begun, and/or it can indicate a volume remaining for fully effective hydration if therapy is underway. Like BL, the value of V can be estimated/predicted using the modeling techniques described herein and in the Related Applications. In a general case, V can be expressed as the following:

V = f_V(CRI_t, FV_t, S_t)   (Eq. 7)

where V is an estimated volume of fluid needed by a patient to prevent over- or under-hydration, f_V(CRI_t, FV_t, S_t) is an algorithm embodied by a model generated empirically, e.g., using the techniques described with respect to FIG. 4 below and/or in the Related Applications, CRI_t is a time history of CRI values, FV_t is a time history of fluid volume being given to the patient, and S_t is a time history of physiological data received from the one or more sensors. As with the estimate of BL, various embodiments can employ subsets of the parameters used in the general expression of Eq. 7. Thus, different embodiments might calculate V as follows:

V = f_V(CRI_t, FV_t)   (Eq. 8)

or

V = f_V(FV_t, S_t)   (Eq. 9)
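The "closest in (CRI_t, FV_t, S_t) space" comparison described above can be read as a nearest-neighbor search over flattened time histories. The sketch below takes that reading, which is one plausible concrete interpretation rather than the exact estimator of the disclosure.

```python
# Hedged nearest-neighbor reading of Eq. 4: return the BL label of the stored
# (CRI_t, FV_t, S_t) time history closest to the current patient's history.
import numpy as np

def estimate_bl(current_history, reference_histories, reference_bl):
    """Each history is one flattened vector of (CRI, fluid-volume, sensor) samples."""
    ref = np.asarray(reference_histories)
    d = np.linalg.norm(ref - np.asarray(current_history), axis=1)
    return float(np.asarray(reference_bl)[np.argmin(d)])
```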
Yet another way of assessing effectiveness of hydration (which can even include assessing the need for hydration) is estimating the probability P_f that the patient requires fluids; this probability can estimate the likelihood that the patient requires hydration if therapy has not been initiated, and/or, if hydration therapy is underway, the probability can estimate the likelihood that further hydration is necessary. The value of this probability, which can be expressed, e.g., as a percentage, as a decimal value between 0 and 1, etc., can be estimated using the following expression:

P_f = f_Pf(CRI_t, S_t)   (Eq. 10)

where P_f is the estimated probability that the patient requires fluid, f_Pf(CRI_t, S_t) is a relationship derived based on empirical study, CRI_t is a time history of CRI values, and S_t is a time history of physiological data received from the one or more sensors. Once again, this general expression can be employed, in various embodiments, using subsets of the parameters in the general expression, such as the following:

P_f = f_Pf(CRI_t)   (Eq. 11)

or

P_f = f_Pf(S_t)   (Eq. 12)

In the estimate of any of BL, V, or P_f, the function f expresses a relationship that is derived based on empirical study. In a set of embodiments, for example, various sensor data can be collected from test subjects before, during, and/or after hydration efforts, during hemorrhaging, or under other conditions that might simulate such situations. This sensor data can be analyzed to develop models, using techniques similar to those of FIG. 4 below, which can then be used to estimate various assessments of hydration effectiveness, using, e.g., the methods described below with respect to FIGS. 2 and 3. A measure of CRI, BL, V, and/or P_f can be useful in a variety of clinical settings, including but not limited to: 1) acute blood loss volume due to injury or surgery; 2) acute circulatory volume loss due to hemodialysis (also called intradialytic hypotension); and 3) acute circulatory volume loss due to various causes of dehydration (e.g., reduced fluid intake, vomiting, dehydration, etc.). A change in CRI can also herald other conditions, including without limitation changes in blood pressure, general fatigue, overheating, and certain types of illnesses. Accordingly, the tools and techniques for estimating and/or predicting CRI can have a variety of applications in a clinical setting, including without limitation diagnosing such conditions. Moreover, measures of CRI, BL, V, and/or P_f can have applicability outside the clinical setting. For example, an athlete can be monitored (e.g., using a wrist-wearable hydration monitor) before, during, or after competition or training to ensure optimal performance (and overall health and recovery). In other situations, a person concerned about overall wellbeing can employ a similar hydration monitor to ensure that he or she is getting enough (but not too much) fluid, infants or adults can be monitored while ill to ensure that symptoms (e.g., vomiting, diarrhea) do not result in dehydration, and the like. Similarly, soldiers in the field (particularly in harsh conditions) can be monitored to ensure optimal operational readiness. In various embodiments, a hydration monitor, compensatory reserve monitor, a wrist-wearable sensor device, and/or another integrated system can include, but is not limited to, some or all of the following functionality, as described in further detail herein and in the Related Applications:
A. Estimating and/or displaying intravascular volume loss to hemodynamic decompensation (or cardiovascular collapse).
B. Estimating, predicting, and/or displaying a patient's compensatory reserve as an index that is proportional to an approximate measure of intravascular volume loss to CV collapse, recognizing that each patient has a unique reserve capacity.
C. Estimating, predicting, and/or displaying a patient's compensatory reserve as an index with a normative value at euvolemia (for example, CRI=1), representing a state in which the patient is normovolemic; a minimum value (for example, CRI=0), which implies no circulatory reserve and that the patient is experiencing CV collapse; and/or an excess value (for example, CRI>1), representing a state in which the patient is hypervolemic; the patient's normalized compensatory reserve can be displayed on a continuum between the minimum and maximum values (perhaps labeled by different symbols and/or colors depending on where the patient falls on the continuum).
D. Determining and/or displaying a probability that bleeding or intravascular volume loss has occurred.
E. Displaying an indicator that intravascular volume loss has occurred and/or is ongoing, as well as other measures of reserve, such as trend lines.
F. Estimating a patient's current blood pressure and/or predicting a patient's future blood pressure.
G. Estimating the current effectiveness of fluid resuscitation efforts.
H. Predicting the future effectiveness of fluid resuscitation efforts.
I. Estimating and/or predicting a volume of fluid necessary for effective resuscitation.
J. Estimating a probability that a patient needs fluids.
K. Estimating a hydration state of a patient or user.
L. Predicting a future hydration state of a patient or user.
M. Estimating and/or predicting a volume of fluid intake necessary for adequate hydration of a patient or user.
N. Estimating a probability that a patient is dehydrated.
In various embodiments, CRI, BL, V, and/or P_f estimates can be (i) based on a fixed time history of patient monitoring (for example, a 30 second or 30 heart beat window); (ii) based on a dynamic time history of patient monitoring (for example, monitoring for 200 minutes, the system may use all sensor information gathered during that time to refine and improve CRI estimates, hydration effectiveness assessments, etc.); (iii) based on establishing baseline estimates when the patient is normovolemic (no volume loss has occurred); and/or (iv) based on NO baseline estimates when the patient is normovolemic. Certain embodiments can also recommend treatment options, based on the analysis of the patient's condition (including the estimated/predicted blood pressure, probability of bleeding, state of dehydration, and/or the patient's estimated and/or predicted CRI). Treatment options can include, without limitation, such things as optimizing hemodynamics, ventilator adjustments, IV fluid adjustments (e.g., controlling the flow rate of an IV pump or the drip rate of an IV drip), transfusion of blood or blood products, infusion of volume expanders, medication changes, changes in patient position, and surgical therapy. As one example, certain embodiments can be used to control an IV drip, IV pump, or rapid infuser. For instance, an embodiment might estimate the probability that a patient requires fluids and activate such a device in response to that estimate (or instruct a clinician to attach such a device to the patient and activate the device).
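As a toy version of the IV-pump control just described, the rule below raises the infusion rate while the estimated need for fluid is high and tapers it as CRI recovers. All thresholds and increments are invented for illustration; a deployed controller would be clinically validated.

```python
# Illustrative closed-loop drip-rate rule; thresholds and step sizes are
# invented for the example, not taken from the disclosure.
def adjust_drip_rate(rate_ml_hr, cri, p_fluid, max_rate=1000.0, min_rate=0.0):
    if p_fluid > 0.8 and cri < 0.6:
        return min(rate_ml_hr + 100.0, max_rate)   # likely needs fluid: step up
    if cri > 0.9:
        return max(rate_ml_hr - 100.0, min_rate)   # near normovolemia: taper
    return rate_ml_hr                               # otherwise hold steady
```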
The system might then monitor the progress of the hydration effort (through continual or periodic assessment of the effectiveness of hydration) and increase/decrease drip or flow rates accordingly. As another example, certain embodiments can be used as an input for a hemodialysis procedure. For example, certain embodiments can predict how much intravascular (blood) volume can be safely removed from a patient during a hemodialysis process. For example, an embodiment might provide instructions to a human operator of a hemodialysis machine, based on estimates or predictions of the patient's CRI. Additionally and/or alternatively, such embodiments can be used to continuously self-adjust the ultra-filtration rate of the hemodialysis equipment, thereby completely avoiding intradialytic hypotension and its associated morbidity. As yet another example, certain embodiments can be used to estimate and/or predict a dehydration state (and/or the amount of dehydration) in an individual (e.g., a trauma patient, an athlete, an elder living at home, etc.) and/or to provide treatment (either by providing recommendations to treating personnel or by directly controlling appropriate therapeutic equipment). For instance, if an analytical model indicates a relationship between CRI (and/or any other physiological phenomena that can be measured and/or estimated using the techniques described herein and in the Related Applications) and dehydration state, an embodiment can apply that model, using the techniques described herein, to estimate a dehydration state of the patient.

Specific Exemplary Embodiments

We now turn to the embodiments as illustrated by the drawings.FIGS.1-10illustrate some of the features of the method, system, and apparatus for implementing rapid detection of bleeding before, during, and after fluid resuscitation, as referred to above.FIGS.1-8illustrate some of the specific (although non-limiting) exemplary features of the method, system, and apparatus for implementing rapid detection of bleeding before, during, and after fluid resuscitation, whileFIGS.9A-9Hillustrate rapid detection of bleeding before, during, and after fluid resuscitation of patients in a clinical trial, andFIG.10illustrates an exemplary system and hardware implementation. The methods, systems, and apparatuses illustrated byFIGS.1-10refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments. The description of the illustrated methods, systems, and apparatuses shown inFIGS.1-10is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments. With reference to the figures,FIG.1Aprovides a general overview of a system provided by certain embodiments. The system includes a computer system or computational device100in communication with one or more sensors105, which are configured to obtain physiological data from the subject (e.g., animal or human test subject or patient)110. In one embodiment, the computer system100comprises a Lenovo THINKPAD X200 with 4 GB of RAM, running the Microsoft WINDOWS 7 operating system, and is programmed with software to execute the computational methods outlined herein. The computational methods can be implemented in the MATLAB 2009b and C++ programming languages. A more general example of a computer system100that can be used in some embodiments is described in further detail below.
Even more generally, however, the computer system100can be any system of one or more computers that are capable of performing the techniques described herein. In a particular embodiment, for example, the computer system100is capable of reading values from the physiological sensors105, generating models of physiological state from those values, and/or employing such models to make individual-specific estimations, predictions, or other diagnoses, displaying the results, recommending and/or implementing a therapeutic treatment as a result of the analysis, and/or archiving (learning) these results for use in future model building and predictions. The sensors105can be any of a variety of sensors (including without limitation those described herein) for obtaining physiological data from the subject. An exemplary sensor suite might include a Finometer sensor for obtaining a noninvasive continuous blood pressure waveform, a pulse oximeter sensor, and an Analog to Digital Board (National Instruments USB-9215A 16-Bit, 4 channel) for connecting the sensors (the pulse oximeter and/or the Finometer) to the computer system100. More generally, in an embodiment one or more sensors105might obtain, e.g., using one or more of the techniques described herein, continuous physiological waveform data, such as continuous blood pressure. Input from the sensors105can constitute continuous data signals and/or outcomes that can be used to generate, and/or can be applied to, a predictive model as described below. In some cases, the system might include a therapeutic device115(also referred to herein as a "physiological assistive device"), which can be controlled by the computer system100to administer therapeutic treatment, in accordance with the recommendations developed by analysis of a patient's physiological data. In a particular embodiment, the therapeutic device might comprise hemodialysis equipment (also referred to as a hemodialysis machine), which can be controlled by the computer system100based on the estimated CRI of the patient, as described in further detail below. Further examples of therapeutic devices in other embodiments can include a cardiac assist device, a ventilator, an automatic implantable cardioverter defibrillator ("AICD"), a pacemaker, an extracorporeal membrane oxygenation circuit, a positive airway pressure ("PAP") device (including without limitation a continuous positive airway pressure ("cPAP") device or the like), an anesthesia machine, an integrated critical care system, a medical robot, intravenous and/or intra-arterial pumps that can provide fluids and/or therapeutic compounds (e.g., through intravenous injection), intravenous drips, a rapid infuser, a heating/cooling blanket, and/or the like. FIG.1Billustrates in more detail an exemplary sensor device105, which can be used in the system100described above. (It should be noted, of course, that the depicted sensor device105ofFIG.1Bis not intended to be limiting, and different embodiments can employ any sensor that captures suitable data, including, without limitation, sensors described elsewhere in this disclosure and in the Related Applications.)
The illustrated sensor device105is designed to be worn on a patient's wrist and therefore can be used both in clinical settings and in the field (e.g., on any person for whom monitoring might be beneficial, for a variety of reasons, including without limitation assessment of blood pressure and/or hydration during athletic competition or training, daily activities, military training or action, etc.). In one aspect, the sensor device105can serve as an integrated hydration monitor, which can assess hydration as described herein, display an indication of the assessment, recommend therapeutic action based on the assessment, or the like, in a form factor that can be worn during athletic events and/or daily activities. Hence, the exemplary sensor device105(hydration monitor) includes a finger cuff125and a wrist unit130. The finger cuff125includes a fingertip sensor135(in this case, a PPG sensor) that captures data based on physiological conditions of the patient, such as PPG waveform data. The sensor135communicates with an input/output unit140of the wrist unit130to provide output from the sensor135to a processing unit145of the wrist unit130. Such communication can be wired (e.g., via a standard connector, such as USB, or a proprietary connector on the wrist unit130) and/or wireless (e.g., via Bluetooth, such as Bluetooth Low Energy ("BTLE"), near field communication ("NFC"), WiFi, or any other suitable radio technology). In different embodiments, the processing unit145can have different types of functionality. For example, in some cases, the processing unit145might simply act to store and/or organize data prior to transmitting the data through the I/O unit140to a monitoring computer100, which might perform data analysis, control a therapeutic device115, etc. In other cases, however, the processing unit145might act as a specialized computer (e.g., with some or all of the components described in connection withFIG.10below, and/or some or all of the functionality ascribed to the computer100ofFIGS.1A and1B), such that the processing unit145can perform data analysis onboard, e.g., to estimate and/or predict a patient's current and/or future blood pressure. As such, the wrist unit130might include a display, which can display any output described herein, including, without limitation, estimated and/or predicted values (e.g., of CRI, blood pressure, hydration status, etc.), data captured by the sensor (e.g., heart rate, pulse ox, etc.), and/or the like. In some cases, the wrist unit130might include a wrist strap155that allows the unit to be worn on the wrist, similar to a wrist watch. Of course, other options are available to facilitate transportation of the sensor device105with a patient. More generally, the sensor device105might not include all of the components described above, and/or various components might be combined and/or reorganized; once again, the embodiment illustrated byFIG.1Bshould be considered only illustrative, and not limiting, in nature. FIGS.2A,2B,3A,3B,4, and5illustrate methods and screen displays in accordance with various embodiments. While the methods ofFIGS.2A,2B,3A,3B,4, and5are illustrated, for ease of description, as different methods, it should be appreciated that the various techniques and procedures of these methods can be combined in any suitable fashion, and that, in some embodiments, the methods depicted byFIGS.2A,2B,3A,3B,4, and5can be considered interoperable and/or as portions of a single method.
Similarly, while the techniques and procedures are depicted and/or described in a certain order for purposes of illustration, it should be appreciated that certain procedures may be reordered and/or omitted within the scope of various embodiments. Moreover, while the methods illustrated byFIGS.2A,2B,3A,3B,4, and5can be implemented by (and, in some cases, are described below with respect to) the computer system100ofFIG.1A(or other components of the system, such as the sensor105ofFIGS.1A and1B), these methods may also be implemented using any suitable hardware implementation. Similarly, while the computer system100ofFIG.1A(and/or other components of such a system) can operate according to the methods illustrated byFIGS.2A,2B,3A,3B,4, and5(e.g., by executing instructions embodied on a computer readable medium), the system100can also operate according to other modes of operation and/or perform other suitable procedures. Merely by way of example, a method might comprise one or more procedures, any or all of which are executed by a computer system. Correspondingly, an embodiment might provide a computer system configured with instructions to perform one or more procedures in accordance with methods provided by various other embodiments. Similarly, a computer program might comprise a set of instructions that are executable by a computer system (and/or a processor therein) to perform such operations. In many cases, such software programs are encoded on physical, tangible and/or non-transitory computer readable media (such as, to name but a few examples, optical media, magnetic media, and/or the like). By way of non-limiting example, various embodiments can comprise a method for using sensor data to assess blood loss in a patient.FIG.2Aillustrates an exemplary method200in accordance with various embodiments. The method200might comprise generating a model, e.g., with a computer system, against which patient data can be analyzed to estimate and/or predict various physiological states (block205). In a general sense, generating the model can comprise receiving data pertaining to a plurality of physiological parameters of a test subject to obtain a plurality of physiological data sets. Such data can include PPG waveform data, to name one example, and/or any other type of sensor data including, without limitation, data captured by other sensors described herein and in the Related Applications. Generating a model can further comprise directly measuring one or more physiological states of the test subject with a reference sensor to obtain a plurality of physiological state measurements. The one or more physiological states can include, without limitation, states of various volumes of blood loss and/or fluid resuscitation, and/or various states of hydration and/or dehydration. (In other embodiments, different states can include a state of hypervolemia, a state of euvolemia, and/or a state of cardiovascular collapse (or near-cardiovascular collapse), and/or can include states that have been simulated, e.g., through use of an LBNP apparatus.) Other physiological states that can be used to generate a model are described elsewhere herein and in the Related Applications. Generating the model can further comprise correlating the physiological state(s) with the measured physiological parameters. There are a variety of techniques for generating a model in accordance with different embodiments, using these general functions.
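As a highly simplified, purely illustrative sketch of this correlate-states-with-parameters step, the fragment below stores one mean feature vector per directly measured physiological state and classifies new measurements by nearest centroid; the function names, the centroid approach, and the data layout are assumptions for illustration, not the model-generation method of the embodiments (which is described with respect toFIG.4below).

```python
import numpy as np

def build_state_model(feature_sets, state_labels):
    """Correlate measured parameter vectors with directly measured
    physiological states by storing one mean feature vector per state."""
    model = {}
    for state in set(state_labels):
        rows = [f for f, s in zip(feature_sets, state_labels) if s == state]
        model[state] = np.mean(rows, axis=0)
    return model

def classify_state(model, features):
    """Assign the physiological state whose centroid is nearest
    (Euclidean distance) to a new measurement vector."""
    features = np.asarray(features, dtype=float)
    return min(model, key=lambda state: np.linalg.norm(model[state] - features))
```

A nearest-centroid table is, of course, far cruder than the machine-learning approach ofFIG.4; it is shown only to make the correlation step concrete.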
One exemplary technique for generating a model of a generic physiological state is described with respect toFIG.4, below, which provides a technique using a machine-learning algorithm to optimize the correlation between measured physiological parameters (such as PPG waveform data, to name one example) and physical states (e.g., various blood volume states, including states where a known volume of blood loss has occurred and/or a known volume of fluid resuscitation has been administered, various states of hydration and/or dehydration, etc.). It should be appreciated, however, that any suitable technique or model may be employed in accordance with various embodiments. A number of physiological states can be modeled, and a number of different conditions can be imposed on test subjects as part of the model generation. For example, physiological states that can be induced (or monitored when naturally occurring) in test subjects include, without limitation, reduced circulatory system volume, known volume of blood loss, specified amounts of fluids added to blood volume, dehydration, cardiovascular collapse or near-cardiovascular collapse, euvolemia, hypervolemia, low blood pressure, high blood pressure, normal blood pressure, and/or the like. Merely by way of example, in one set of embodiments, a number of physiological parameters of a plurality of test subjects might be measured. In some cases, a subject might undergo varying, measured levels of blood loss (either real or simulated) or intravenous fluid addition. Using the method described below with respect toFIG.4(or other, similar techniques, many of which are described in the Related Applications), the system can determine which sensor information most effectively differentiates between subjects at different blood loss/addition volume levels. Additionally and/or alternatively to using direct (e.g., raw) sensor data to build such models, some embodiments might construct a model based on data that is derived from sensor data. Merely by way of example, one such model might use, as input values, CRI values of test subjects in different blood loss and/or volume addition conditions. Accordingly, the process of generating a model might first comprise building a model of CRI, and then, from that model, building a model of hydration effectiveness. (In other cases, a hybrid model might consider both raw sensor data and CRI data.) A CRI model can be generated in different ways. For example, in some cases, one or more test subjects might be subjected to LBNP. In an exemplary case, LBNP data is collected from human subjects being exposed to progressively lower levels of LBNP, until hemodynamic decompensation, at which time LBNP is released and the subject recovers. Each level of LBNP represents an additional amount of blood loss. During these tests, physiological data (including, without limitation, waveform data, such as continuous non-invasive blood pressure data) can be collected before, during, and/or after the application of the LBNP. As noted above, a relationship (as expressed by Equation 2) can be identified between LBNP and intravascular volume loss, and this relationship can be used to estimate CRI. Hence, LBNP studies form a framework (methodology) for the development of the hemodynamic parameter referred to herein as CRI and can be used to generate models of this parameter.
More generally, several different techniques that induce a physiological state of reduced volume in the circulatory system, e.g., to a point of cardiovascular collapse (hemodynamic decompensation) or to a point near cardiovascular collapse, can be used to generate such a model. LBNP can be used to induce this condition, as noted above. In some cases, such as in a study described below, dehydration can be used to induce this condition as well. Other techniques are possible as well. Similarly, data collected from a subject in a state of euvolemia, dehydration, hypervolemia, and/or other states might be used to generate a CRI model in different embodiments. At block210, the method200comprises monitoring, with one or more sensors, physiological data of a patient. As noted above, a variety of physical parameters can be monitored, invasively and/or non-invasively, depending on the nature of the anticipated physiological state of the patient. In an aspect, monitoring the one or more physical parameters might comprise receiving, e.g., from a physiological sensor, continuous waveform data, which can be sampled as necessary. Such data can include, without limitation, plethysmograph waveform data, PPG waveform data (such as that generated by a pulse oximeter), and/or the like. The method200might further comprise analyzing, with a computer system (e.g., a monitoring computer100and/or a processing unit145of a sensor unit, as described above), the physiological data (block215). In some cases, the physiological data is analyzed against a pre-existing model (which might be generated as described above and which, in turn, can be updated based on the analysis, as described in further detail below and in the Related Applications). Merely by way of example, in some cases, sensor data can be analyzed directly against a generated model to assess the effectiveness of hydration (which can include estimating current values, and/or predicting future values, for any or all of BL, V, and/or $P_f$, as expressed above). For example, the sensor data can be compared to determine similarities with models that estimate and/or predict any of these values. Merely by way of example, an input waveform captured by a sensor from a patient might be compared with sample waveforms generated by models for each of these values. The technique200′ ofFIG.2B, for example, provides one method for deriving an estimate of BL in accordance with some embodiments. It should be noted that the technique200′ is presented as an example only, and that while this technique200′ estimates BL from raw sensor data, similar techniques can be used to estimate or predict BL, V, and/or $P_f$ from raw sensor data, CRI data, and/or a combination of both. For example, one model might produce a first estimate of BL from raw sensor data, produce a second estimate of BL from estimated CRI values, and then combine those estimates (in either weighted or unweighted fashion) to produce a hybrid BL estimate. The illustrated technique200′ comprises sampling waveform data (e.g., any of the data described herein and in the Related Applications, including without limitation arterial waveform data, such as continuous PPG waveforms and/or continuous noninvasive blood pressure waveforms) for a specified period, such as 32 heartbeats (block270). That sample is compared with a plurality of waveforms of reference data corresponding to BL values (block275), which in this case range from 0 to 1 using the scale described above (but alternatively might use any appropriate scale).
These reference waveforms are derived as part of the model developed using the algorithms described in this and the Related Applications, might be the result of experimental data, and/or the like. In effect, these reference waveforms reflect the relationship $f$ from Eq. 6, above. According to the technique200′, the sample might be compared with waveforms corresponding to BL=1 (block275a), BL=0.5 (block275b), and BL=0 (block275c), as illustrated. (As illustrated by the ellipses inFIG.2B, any number of sample waveforms can be used for the comparison; for example, if there is a nonlinear relationship between the measured sensor data and the BL values, more sample waveforms might provide for a better comparison.) From the comparison, a similarity coefficient is calculated (e.g., using a least squares or similar analysis) to express the similarity between the sampled waveform and each of the reference waveforms (block280). These similarity coefficients can be normalized (if appropriate) (block285), and the normalized coefficients can be summed (block290) to produce an estimated BL value of the patient (block295). In other cases, similar techniques can be used to analyze data against a model based on parameters derived from direct sensor measurements. (In one aspect, such operations can be iterative in nature: the derived parameters, such as CRI, to name one example, are generated by analyzing the sensor data against a first model, and the derived parameters are then analyzed against a second model.) For example,FIG.3Aillustrates a method300of calculating a patient's CRI, which can be used (in some embodiments) as a parameter that can be analyzed to assess the effectiveness of hydration (including the probability that fluids are needed and/or the estimated volume of fluid necessary for effective hydration) and/or to assess blood loss (e.g., before, during, and/or after fluid resuscitation). The method300includes generating a model of CRI (block305), monitoring physiological parameters (block310), and analyzing the monitored physical parameters (block315), using techniques such as those described above and in the '483 application, for example. Based on this analysis, the method300, in an exemplary embodiment, includes estimating, with the computer system, a compensatory reserve of the patient, based on analysis of the physiological data (block320). In some cases, the method might further comprise predicting, with the computer system, the compensatory reserve of the patient at one or more time points in the future, based on analysis of the physiological data (block325). The operations to predict a future value of a parameter can be similar to those for estimating a current value; in the prediction context, however, the applied model might correlate measured data in a test subject with subsequent values of the diagnostic parameter, rather than contemporaneous values. It is worth noting, of course, that in some embodiments, the same model can be used to both estimate a current value and predict future values of a physiological parameter. The estimated and/or predicted compensatory reserve of the patient can be based on several factors. Merely by way of example, in some cases, the estimated/predicted compensatory reserve can be based on a fixed time history of monitoring the physiological data of the patient and/or a dynamic time history of monitoring the physiological data of the patient.
In other cases, the estimated/predicted compensatory reserve can be based on a baseline estimate of the patient's compensatory reserve established when the patient is euvolemic. In still other cases, the estimate and/or prediction might not be based on a baseline estimate of the patient's compensatory reserve established when the patient is euvolemic. Merely by way of example,FIG.3Billustrates one technique300′ for deriving an estimate of CRI in accordance with some embodiments, similar to the technique200′ described above with respect toFIG.2Bfor deriving an assessment of hydration effectiveness and/or an assessment of blood loss (e.g., before, during, and/or after fluid resuscitation) directly from sensor data. (In fact, CRI can be derived as described herein, and that derived value can be used, alone or with raw sensor data, to assess such effectiveness.) The illustrated technique comprises sampling waveform data (e.g., any of the data described herein and in the Related Applications, including, without limitation, arterial waveform data, such as continuous PPG waveforms and/or continuous noninvasive blood pressure waveforms) for a specified period, such as 32 heartbeats (block370). That sample is compared with a plurality of waveforms of reference data corresponding to different CRI values (block375). (These reference waveforms might be derived using the algorithms described in the Related Applications, might be the result of experimental data, and/or the like.) Merely by way of example, the sample might be compared with waveforms corresponding to a CRI of 1 (block375a), a CRI of 0.5 (block375b), and a CRI of 0 (block375c), as illustrated. From the comparison, a similarity coefficient is calculated (e.g., using a least squares or similar analysis) to express the similarity between the sampled waveform and each of the reference waveforms (block380). These similarity coefficients can be normalized (if appropriate) (block385), and the normalized coefficients can be summed (block390) to produce an estimated value of the patient's CRI (block395). Returning toFIG.3A, the method300can comprise estimating and/or predicting a patient's dehydration state (block330). The patient's state of dehydration can be expressed in a number of ways. For instance, the state of dehydration might be expressed as a normalized value (for example, with 1.0 corresponding to a fully hydrated state and 0.0 corresponding to a state of morbid dehydration). In other cases, the state of dehydration might be expressed as a missing volume of fluid or as a volume of fluid present in the patient's system, or using any other appropriate metric. A number of techniques can be used to model dehydration state. Merely by way of example, as noted above (and described in further detail below), the relationship between a patient's compensatory reserve and level of dehydration can be modeled. Accordingly, in some embodiments, estimating a dehydration state of the patient might comprise estimating the compensatory reserve (e.g., CRI) of the patient, and then, based on that estimate and the known relationship, estimating the dehydration state. Similarly, a predicted value of compensatory reserve at some point in the future can be used to derive a predicted dehydration state at that point in the future. Other techniques might use a parameter other than CRI to model dehydration state.
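Returning to the compare-normalize-sum pattern shared by techniques200′ and300′ above, the fragment below is a minimal sketch of that pattern, assuming reference waveforms are available at a few known index values (e.g., CRI or BL of 0, 0.5, and 1). The least-squares similarity measure and the reading of the final summation as a weighted average of the reference values are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def estimate_index(sample, references):
    """Estimate an index (CRI or BL) from a sampled waveform by comparison
    with reference waveforms tied to known index values (techniques 200'/300').

    sample     : 1-D array, e.g., a window of ~32 heartbeats of waveform data
    references : dict mapping known index values (e.g., 0.0, 0.5, 1.0) to
                 reference waveforms of the same length as `sample`
    """
    similarities = {}
    for value, ref in references.items():
        # Least-squares comparison (blocks 280/380); a smaller distance
        # means the sample looks more like this reference waveform
        distance = np.sum((np.asarray(sample) - np.asarray(ref)) ** 2)
        similarities[value] = 1.0 / (1.0 + distance)

    # Normalize the similarity coefficients (blocks 285/385)
    total = sum(similarities.values())
    weights = {v: s / total for v, s in similarities.items()}

    # Sum the weighted reference values (blocks 290/390) to produce
    # the estimate (blocks 295/395)
    return sum(value * weight for value, weight in weights.items())
```

Under this reading, the normalized coefficients act as weights on the known reference values, so a sample most similar to the CRI=1 and CRI=0.5 references would yield an estimate between those two values.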
The method300might further comprise normalizing the results of the analysis (block335), such as the compensatory reserve, dehydration state, and/or probability of bleeding, to name a few examples. Merely by way of example, the estimated/predicted compensatory reserve of the patient can be normalized relative to a normative normal blood volume value corresponding to euvolemia, a normative excess blood volume value corresponding to circulatory overload, and a normative minimum blood volume value corresponding to cardiovascular collapse. Any values can be selected as the normative values. Merely by way of example, in some embodiments, the normative excess blood volume value is >1, the normative normal blood volume value is 1, and the normative minimum blood volume value is 0. As an alternative, in other embodiments, the normative excess blood volume value might be defined as 1, the normative normal blood volume value might be defined as 0, and the normative minimum blood volume value at the point of cardiovascular collapse might be defined as −1. As can be seen from these examples, different embodiments might use a number of different scales to normalize CRI and other estimated parameters. In an aspect, normalizing the data can provide benefits in a clinical setting, because it can allow the clinician to quickly make a qualitative judgment of the patient's condition, while interpretation of the raw estimates/predictions might require additional analysis. Merely by way of example, with regard to the estimate of the compensatory reserve of the patient, that estimate might be normalized relative to a normative normal blood volume value corresponding to euvolemia and a normative minimum blood volume value corresponding to cardiovascular collapse. Once again, any values can be selected as the normative values. For example, if the normative normal blood volume is defined as 1, and the normative minimum blood volume value is defined as 0, the normalized value, falling between 0.0 and 1.0, can quickly apprise a clinician of the patient's location on a continuum between euvolemia and cardiovascular collapse. Similar normalizing procedures can be implemented for other estimated data (such as probability of bleeding, dehydration, and/or the like). The method300might further comprise displaying data with a display device (block340). Such data might include an estimate and/or prediction of the compensatory reserve of the patient and/or an estimate and/or prediction of the patient's dehydration state. A variety of techniques can be used to display such data. Merely by way of example, in some cases, displaying the estimate of the compensatory reserve of the patient might comprise displaying the normalized estimate of the compensatory reserve of the patient. Alternatively and/or additionally, displaying the normalized estimate of the compensatory reserve of the patient might comprise displaying a graphical plot showing the normalized excess blood volume value, the normalized normal blood volume value, the normalized minimum blood volume value, and the normalized estimate of the compensatory reserve (e.g., relative to the normalized excess blood volume value, the normalized normal blood volume value, and the normalized minimum blood volume value).
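As a trivial sketch of how a normalized estimate might be turned into the kind of quick qualitative judgment described above, the fragment below labels a CRI on the 0-to-1 convention; the cut-points and labels are illustrative assumptions only, not clinical guidance.

```python
def describe_reserve(cri):
    """Qualitative label for a CRI normalized so that 1 corresponds to
    euvolemia and 0 to cardiovascular collapse (illustrative cut-points)."""
    if cri > 1.0:
        return "hypervolemic (excess volume)"
    if cri >= 0.6:
        return "adequate reserve"
    if cri >= 0.3:
        return "reduced reserve"
    if cri > 0.0:
        return "low reserve, approaching collapse"
    return "cardiovascular collapse"
```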
In some cases, the method300might comprise repeating the operations of monitoring physiological data of the patient, analyzing the physiological data, and estimating (and/or predicting) the compensatory reserve of the patient, to produce a new estimated (and/or predicted) compensatory reserve of the patient. Thus, displaying the estimate (and/or prediction) of the compensatory reserve of the patient might comprise updating a display of the estimate of the compensatory reserve to show the new estimate (and/or prediction) of the compensatory reserve, in order to display a plot of the estimated compensatory reserve over time. Hence, the patient's compensatory reserve can be repeatedly estimated and/or predicted on any desired interval (e.g., after every heartbeat), on demand, before fluid resuscitation, during fluid resuscitation, after fluid resuscitation, etc., or a combination of one or more of these. In further embodiments, the method300can comprise determining a probability that the patient is bleeding, and/or displaying, with the display device, an indication of the probability that the patient is bleeding (block345). For example, some embodiments might generate a model based on data that removes fluid from the circulatory system (such as LBNP, dehydration, etc.). Another embodiment might generate a model based on fluid removed from a subject voluntarily, e.g., during a blood donation, based on the known volume (e.g., 500 cc) of the donation. Based on this model, using techniques similar to those described above, a patient's physiological data can be monitored and analyzed to estimate a probability that the patient is bleeding (e.g., internally). In some cases, the probability that the patient is bleeding can be used to adjust the patient's estimated CRI. Specifically, given a probability of bleeding expressed as Pr_Bleed at a time t, the adjusted value of CRI can be expressed as:

$CRI_{Adjusted}(t) = 1 - \left(\left(1 - CRI(t)\right) \times Pr\_Bleed(t)\right)$ (Eq. 13)

Given this relationship, the estimated CRI can be adjusted to produce a more accurate diagnosis of the patient's condition at a given point in time. The method300might comprise selecting, with the computer system, a recommended treatment option for the patient, and/or displaying, with the display device, the recommended treatment option (block355). The recommended treatment option can be any of a number of treatment options, including, without limitation, optimizing hemodynamics of the patient, a ventilator adjustment, an intravenous fluid adjustment, transfusion of blood or blood products to the patient, infusion of volume expanders to the patient, a change in medication administered to the patient, a change in patient position, surgical therapy, and/or the like. In a specific, non-limiting example, the method300might comprise controlling operation of hemodialysis equipment (block360), based at least in part on the estimate of the patient's compensatory reserve. Merely by way of example, a computer system that performs the monitoring and estimating functions might also be configured to adjust an ultra-filtration rate of the hemodialysis equipment in response to the estimated CRI values of the patient. In other embodiments, the computer system might provide instructions or suggestions to a human operator of the hemodialysis equipment, such as instructions to manually adjust an ultra-filtration rate, etc. In some embodiments, the method300might include assessing the tolerance of an individual to blood loss, general volume loss, and/or dehydration (block365).
For example, such embodiments might include estimating a patient's CRI based on the change in a patient's position (e.g., from lying prone to standing, from standing to lying prone, from lying prone to sitting, from sitting to lying prone, from standing to sitting, and/or from sitting to standing). Based on changes to the patient's CRI in response to these maneuvers, the patient's sensitivity to blood loss, volume loss, and/or dehydration can be measured. In an aspect, this measurement can be performed using a CRI model generated as described above; the patient can be monitored using one or more of the sensors described above, and the changes in the sensor output when the subject changes position can be analyzed according to the model (as described above, for example) to assess the tolerance of the individual to volume loss. Such monitoring and/or analysis can be performed in real time. Returning toFIG.2A, based on the analysis of the data (whether data collected directly by sensors or derived data, such as CRI, or both) against a model (which might include multiple sub-models, such as a model of BL against raw data and a model of BL against CRI), the method200can include assessing the blood loss of the patient (block220), based on analysis of the patient's physiological data against the model. As noted above, assessing blood loss can include estimating or predicting a number of values, such as the estimated effectiveness, BL, of the hydration effort, the volume, V, of fluid necessary for effective hydration, the probability, $P_f$, that the patient needs fluids, and/or the like. In some cases, the assessment of the blood loss will be based on the analysis of a plurality of measured (or derived) values of a particular physiological parameter (or plurality of parameters). Hence, in some cases, the analysis of the data might be performed on a continuous waveform, either during or after measurement of the waveform with a sensor (or both), and the assessment of the blood loss can be updated as hydration efforts and/or fluid resuscitation efforts continue. Further, the amount of fluids added to the patient's blood volume can be measured directly, and these direct measurements can be fed back into the model to update the model (at block225) and thereby improve performance of the algorithms in the model (e.g., by refining the weights given to different parameters in terms of estimative or predictive value). The updated model can then be used to continue assessing the treatment (in the instant patient and/or in a future patient), as shown by the broken lines onFIG.2A. In some cases, the method200comprises displaying data (block230) indicating the assessment of the effectiveness of hydration. In some cases, the data might be displayed on a display of a sensor device (such as the device105illustrated byFIG.1B). Alternatively and/or additionally, the data might be displayed on a dedicated machine, such as a compensatory reserve monitor, or on a monitor of a generic computer system. The data might be displayed alphanumerically, graphically, or both.FIGS.6-8, described below, illustrate several possible exemplary displays of assessments of blood loss and/or CRI. There are many different ways that the data can be displayed, and any assessments, estimates, or predictions generated by the method200can be displayed in any desired way, in accordance with various embodiments.
In certain embodiments, the method200can include selecting and/or displaying treatment options for the patient (block235) and/or controlling a therapeutic device (block240), based on the assessment of the blood loss of the patient. For example, a display might indicate to a clinician, or to the patient himself or herself, that the patient is losing (or has lost) blood, that fluid resuscitation therapy should be initiated or continued, an estimated volume of fluid to drink, infuse, or otherwise consume, a drip rate for an IV drip, a flow rate for an IV pump or infuser, or the like. Similarly, the system might be configured to control operation of a therapeutic device, such as dispensing a fluid to drink from an automated dispenser, activating or adjusting the flow rate of an IV pump or infuser, adjusting the drip rate of an IV drip, and/or the like, based on the assessment of the effectiveness of hydration. As another example, certain embodiments might include a water bladder (e.g., a backpack-based hydration pack, such as those available from Camelbak Products LLC) or a water bottle, and the hydration monitor could communicate with and/or control operation of such a dispensing device (e.g., to cause the device to dispense a certain amount of fluid, to cause the device to trigger an audible alarm, etc.). Further, in certain embodiments, the method200can include functionality to help a clinician (or other entity) monitor hydration, fluid resuscitation, and/or blood volume status. For example, in some cases, any measure of effectiveness outside of the normal range (such as a value of $P_f$ higher than a certain threshold value, a value of BL lower than a threshold value, etc.) would set off various alarm conditions, such as an audible alarm, a message to a physician, a message to the patient, an update written automatically to a patient's chart, etc. Such messaging could be accomplished by electronic mail, text message, etc., and a sensor device or monitoring computer could be configured with, e.g., an SMTP client, text messaging client, or the like to perform such messaging. In some cases, feedback and/or notifications might be sent to a third party, regardless of whether any alarm condition were triggered. For example, a hydration monitor might be configured to send monitoring results (e.g., any of the assessments, estimates, and/or predictions described herein) to another device or computer, either for personal monitoring by the patient or for monitoring by another. Examples could include transmitting such alarms or data (e.g., by Bluetooth, NFC, WiFi, etc.) to a wireless phone, wearable device (e.g., smart watch or glasses), or other personal device of the patient, e.g., for inclusion in a health monitoring application. Additionally and/or alternatively, such information could be sent to a specified device or computer (e.g., via any available IP connection), for example to allow a parent to monitor a child's (or a child to monitor an elderly parent's) hydration remotely, to allow a coach to monitor a player's hydration remotely, and/or to allow a superior officer to monitor a soldier's hydration remotely, or the like. In some cases (e.g., for a coach or superior officer), an application might aggregate results from a plurality of hydration monitors, to allow the supervisor to view (e.g., in a dashboard-type configuration) hydration effectiveness and/or blood loss (and/or any other data, such as CRI, blood pressure, etc.) for a group of people.
Such a display might employ, for example, a plurality of "fuel gauge" displays, one (or more) for each person in the group, allowing the supervisor to quickly ascertain any unusual results (e.g., based on the color of the gauge, etc.). Similarly, if an alarm condition were met for another physiological parameter (such as blood pressure, which can be estimated as described in the '171 Application, for example), that alarm could trigger an assessment of hydration effectiveness via the method200, to determine whether the first alarm condition has merit or not. If not, the original alarm condition might be automatically silenced, since all is well at present. More generally, the assessment techniques could be added to an ecosystem of monitoring algorithms (including, without limitation, those described in the Related Applications), which could work in combination to inform one another about how to maintain optimal physiological stability. FIG.4illustrates a method400of employing such a self-learning predictive model (or machine learning) technique, according to some embodiments. In particular, the method400can be used to correlate physiological data received from a subject sensor with a measured physiological state. More specifically, with regard to various embodiments, the method400can be used to generate a model for assessing, predicting, and/or estimating various physiological parameters, such as blood loss volume, effectiveness of hydration or fluid resuscitation efforts, estimated and/or predicted blood pressure, CRI, the probability that a patient is bleeding, a patient's dehydration state, and/or the like, from one or more of a number of different physiological parameters, including without limitation those described above and in the Related Applications. The method400begins at block405by collecting raw data measurements that may be used to derive a set of D data signals $s_1, \ldots, s_D$, as indicated at block410(each of the data signals $s$ being, in a particular case, input from one or many different physiological sensors). Embodiments are not constrained by the type of measurements that are made at block405and may generally operate on any data set. For example, data signals can be retrieved from a computer memory and/or can be provided from a sensor or other input device. As a specific example, the data signals might correspond to the output of the sensors described above (which measure the types of waveform data described above, such as continuous, non-invasive PPG data and/or blood pressure waveform data). A set of K current or future outcomes $\vec{o} = (o_1, \ldots, o_K)$ is hypothesized at block415(the outcomes $o$ being, in this case, past and/or future physiological states, such as probability that fluids are needed, volume of fluid needed for effective hydration or fluid resuscitation, BL, CRI, dehydration state, probability of bleeding, etc.). The method autonomously generates a predictive model M that relates the derived data signals $\vec{s}$ with the outcomes $\vec{o}$. As used herein, "autonomous" means "without human intervention." As indicated at block420, this is achieved by identifying the most predictive set of signals $S_k$, where $S_k$ contains at least some (and perhaps all) of the derived signals $s_1, \ldots, s_D$ for each outcome $o_k$, where $k \in \{1, \ldots, K\}$.
A probabilistic predictive model $\hat{o}_k = M_k(S_k)$ is learned at block425, where $\hat{o}_k$ is the prediction of outcome $o_k$ derived from the model $M_k$ that uses as inputs values obtained from the set of signals $S_k$, for all $k \in \{1, \ldots, K\}$. The method400can learn the predictive models $\hat{o}_k = M_k(S_k)$ incrementally (block430) from data that contains example values of signals $s_1, \ldots, s_D$ and the corresponding outcomes $o_1, \ldots, o_K$. As the data become available, the method400loops so that the data are added incrementally to the model for the same or different sets of signals $S_k$, for all $k \in \{1, \ldots, K\}$. While the description above outlines the general characteristics of the methods, additional features are noted. A linear model framework may be used to identify predictive variables for each new increment of data. In a specific embodiment, given a finite set of data of signals and outcomes $\{(\vec{s}_1, \vec{o}_1), (\vec{s}_2, \vec{o}_2), \ldots\}$, a linear model may be constructed that has the form, for all $k \in \{1, \ldots, K\}$:

$o_k = f_k\left(a_0 + \sum_{i=1}^{d} a_i s_i\right)$ (Eq. 14)

where $f_k$ is any mapping from one input to one output, and $a_0, a_1, \ldots, a_d$ are the linear model coefficients. The framework used to derive the linear model coefficients may estimate which signals $s_1, \ldots, s_d$ are not predictive and accordingly sets the corresponding coefficients $a_1, \ldots, a_d$ to zero. Using only the predictive variables, the model builds a predictive density model of the data, $\{(\vec{s}_1, \vec{o}_1), (\vec{s}_2, \vec{o}_2), \ldots\}$. For each new increment of data, a new predictive density model can be constructed. In some embodiments, a prediction system can be implemented that can predict future results from previously analyzed data using a predictive model and/or modify the predictive model when data does not fit the predictive model. In some embodiments, the prediction system can make predictions and/or adapt the predictive model in real-time. Moreover, in some embodiments, a prediction system can use large data sets not only to create the predictive model, but also to predict future results as well as adapt the predictive model. In some embodiments, a self-learning prediction device can include a data input, a processor, and an output. Memory can include application software that, when executed, can direct the processor to make a prediction from input data based on a predictive model. Any type of predictive model can be used that operates on any type of data. In some embodiments, the predictive model can be implemented for a specific type of data. In some embodiments, when data is received, the system can determine whether the data is understood according to the predictive model. If the data is understood, a prediction is made and the appropriate output provided based on the predictive model. If the data is not understood when received, then the data can be added to the predictive model to modify the model. In some embodiments, the device can wait to determine the result of the specified data and can then modify the predictive model accordingly. In some embodiments, if the data is understood by the predictive model and the output generated using the predictive model is not accurate, then the data and the outcome can be used to modify the predictive model. In some embodiments, modification of the predictive model can occur in real-time.
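The following is a minimal sketch of the kind of sparse linear fit suggested by Eq. 14: an ordinary least-squares fit followed by zeroing of small coefficients to drop non-predictive signals. The thresholding rule, the identity mapping for $f_k$, and all function names are illustrative assumptions; the actual framework may use any suitable coefficient-selection method.

```python
import numpy as np

def fit_sparse_linear_model(S, o, zero_threshold=1e-2):
    """Fit o ~ f(a0 + sum_i a_i * s_i) for one outcome (cf. Eq. 14), then
    zero out coefficients of signals judged non-predictive.

    S : (n_samples, d) array of derived signals s_1..s_d
    o : (n_samples,) array of observed outcomes o_k
    """
    X = np.column_stack([np.ones(len(S)), S])        # prepend intercept a0
    coeffs, *_ = np.linalg.lstsq(X, o, rcond=None)   # least-squares fit

    # Crude sparsity step: treat small coefficients as non-predictive
    # (a real system might instead use an information criterion,
    # cross-validation, or an L1-penalized fit)
    coeffs[1:][np.abs(coeffs[1:]) < zero_threshold] = 0.0
    return coeffs

def predict(coeffs, S, f=lambda x: x):
    """Apply the learned model; f plays the role of the mapping f_k."""
    X = np.column_stack([np.ones(len(S)), S])
    return f(X @ coeffs)
```

One naive way to realize the incremental loop of block430is simply to append each new increment of data and refit; a production system would presumably update the model more efficiently.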
Particular embodiments can employ the tools and techniques described in the Related Applications, in accordance with the methodology described herein, to perform the functions of a compensatory reserve monitor, a wrist-wearable sensor device, and/or a monitoring computer, as described herein (the functionality of any or all of which can be combined in a single, integrated device, in some embodiments). These functions include, but are not limited to, assessing fluid resuscitation of a patient, assessing hydration of a patient, monitoring, estimating and/or predicting a subject's (including, without limitation, a patient's) current or future blood pressure and/or compensatory reserve, estimating and/or determining the probability that a patient is bleeding (e.g., internally) and/or has been bleeding, recommending treatment options for such conditions, and/or the like. Such tools and techniques include, in particular, the systems (e.g., computer systems, sensors, therapeutic devices, etc.) described in the Related Applications, the methods (e.g., the analytical methods for generating and/or employing analytical models, the diagnostic methods, etc.), and the software programs described herein and in the Related Applications, which are incorporated herein by reference. FIG.5illustrates a method500of implementing rapid detection of bleeding before, during, and after fluid resuscitation, in accordance with various embodiments. In the embodiment ofFIG.5, method500, at block505, comprises estimating a patient's CRI before, during, and/or after resuscitation (e.g., fluid resuscitation, or the like). Estimation of the patient's CRI may be performed, for example, using the techniques described above with respect toFIGS.3A and3B, or using other techniques described above. At block510, method500might comprise recording the patient's CRI before, during, and/or after resuscitation. In some instances, the CRI may be recorded or stored on one or more of a data storage device that is part of processing unit145and/or a memory device that is part of the monitoring computer100ofFIG.1A, or the like. Method500might further comprise calculating an average CRI over a period of K seconds (where K>1), before, during, and/or after resuscitation (block515); calculating a standard deviation or variance of CRI over a period of K seconds (where K>1), before, during, and/or after resuscitation (block520); calculating Pearson's moment coefficient of skewness of CRI over a period of K seconds (where K>1), before, during, and/or after resuscitation (block525); calculating a rate of change of CRI over a period of K seconds (where K>1), before, during, and/or after resuscitation (block530); and calculating a rate of rate change (or a rate of change of rate change) of CRI (also referred to herein as "acceleration of CRI") over a period of K seconds (where K>1), before, during, and/or after resuscitation (block535). According to some embodiments, method500might further comprise, at block540, determining a probability of bleeding, based on one or more of the calculations in blocks515-535(which may be referred to herein as "variation results"). In other words, the variation results might be used to estimate one or more states of bleeding: namely, a (certain) non-bleeding state (perhaps designated by a symbol, "0"), a (certain) bleeding state (perhaps designated by a symbol, "1"), and some probability of bleeding state (perhaps designated by a symbol between "0" and "1").
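Purely as an illustration of the variation results of blocks515-535(and of the Eq. 13 adjustment above), the sketch below computes the windowed statistics over a set of CRI samples. Eq. 17 is read here as a true average (i.e., including a 1/K factor), and the polynomial-fit route to the slope and acceleration mirrors the least-squares formulations of Eqs. 20-23 in the definitions that follow; all names are assumptions.

```python
import numpy as np

def variation_results(times, cri):
    """Windowed CRI statistics used as 'variation results' (blocks 515-535).

    times : array of sample times t_1..t_K within the window
    cri   : array of CRI(t_k) values over the same window (K > 1)
    """
    times = np.asarray(times, dtype=float)
    cri = np.asarray(cri, dtype=float)

    mean = cri.mean()                           # average CRI (cf. Eq. 17)
    sd = np.sqrt(np.mean((cri - mean) ** 2))    # deviation (cf. Eq. 19)

    m, _ = np.polyfit(times, cri, 1)            # rate of change (cf. Eqs. 20-21)
    r, _, _ = np.polyfit(times, cri, 2)         # acceleration (cf. Eqs. 22-23)

    # Fisher-Pearson-style moment coefficient of skewness (cf. Eq. 24)
    skew = np.mean((cri - mean) ** 3) / sd ** 3 if sd > 0 else 0.0

    return {"mean": mean, "sd": sd, "slope": m, "accel": r, "skew": skew}

def adjusted_cri(cri_t, pr_bleed_t):
    """Fold the bleeding probability into the CRI estimate (Eq. 13)."""
    return 1.0 - (1.0 - cri_t) * pr_bleed_t
```

These statistics, computed separately before, during, and after resuscitation, are the quantities compared against the NB and BL thresholds in the classification rules that follow.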
In some embodiments, the following definitions might be used for (i) a CRI value sample, (ii) a set of values of CRI, (iii) average CRI, (iv) median CRI, (v) standard deviation of CRI, (vi) rate of change of CRI, (vii) rate of change of rate change of CRI, and (viii) skewness of CRI:

(i) A specific CRI value at time t:

$CRI(t)$; (Eq. 15)

(ii) A set of CRI values at times $\{t_1, t_2, \ldots, t_K\}$:

$CRI = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$; (Eq. 16)

(iii) Average CRI value over a specific set of times $\{t_1, t_2, \ldots, t_K\}$:

$\overline{CRI}_K = \frac{1}{K}\sum_{k=1}^{K} CRI(t_k)$; (Eq. 17)

(iv) Median CRI value over a specific set of times $\{t_1, t_2, \ldots, t_K\}$:

$CRI_K^{Med} = \mathrm{Median}\{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$; (Eq. 18)

(v) A measure of deviation of CRI over a specific set of times $\{t_1, t_2, \ldots, t_K\}$, perhaps variance, or standard deviation, defined by:

$SD(CRI_K) = \sqrt{\dfrac{\sum_{k=1}^{K}\left(CRI(t_k) - \overline{CRI}_K\right)^2}{K}}$; (Eq. 19)

(vi) Rate of change of CRI, denoted by $m_K$, over a set of CRI values $\{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$, where the rate of change measures some increase or decrease of CRI over a specific period of time and, for example, may be calculated as the slope of the line:

$\begin{bmatrix} m_K \\ b \end{bmatrix} = (A^{t}A)^{-1}A^{t}\begin{bmatrix} CRI(t_1) \\ \vdots \\ CRI(t_K) \end{bmatrix}$, (Eq. 20)

where A is a matrix defined by:

$A = \begin{bmatrix} t_1 & 1 \\ \vdots & \vdots \\ t_K & 1 \end{bmatrix}$; (Eq. 21)

(vii) Rate of change of rate change of CRI, denoted by $r_K$, over a set of CRI values $\{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$, where the rate of change of rate change measures some rate of change of increase or decrease of CRI over a specific period of time and, for example, may be calculated as a second-order increase or decrease of a curve:

$\begin{bmatrix} r_K \\ m_K \\ b \end{bmatrix} = (B^{t}B)^{-1}B^{t}\begin{bmatrix} CRI(t_1) \\ \vdots \\ CRI(t_K) \end{bmatrix}$, (Eq. 22)

where B is a matrix defined by:

$B = \begin{bmatrix} (t_1)^2 & t_1 & 1 \\ \vdots & \vdots & \vdots \\ (t_K)^2 & t_K & 1 \end{bmatrix}$; (Eq. 23)

(viii) Some measure of skewness, denoted by $S_K$ (not to be confused with the set of signals $S_k$ described above with respect toFIG.4), over a set of CRI values $\{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$, where $S_K$ is possibly a variant of the Fisher-Pearson coefficient of skewness:

$S_K = \dfrac{1}{\left(SD(CRI_K)\right)^3}\left[\dfrac{\sum_{k=1}^{K}\left(CRI(t_k) - \overline{CRI}_K\right)^3}{K}\right]$, (Eq. 24)

and/or $S_K$ is some other measure of skewness, possibly Galton skewness (or Bowley's skewness), defined as:

$S_K = \dfrac{Q_1 + Q_3 - 2Q_2}{Q_3 - Q_1}$. (Eq. 25)

A method for estimating a (certain) non-bleeding state might include, but is not limited to, one of the following calculations, or a combination of two or more such calculations, perhaps within a statistical and/or machine learning framework, or the like:

(1) average of CRI before resuscitation ("$\overline{CRI}_{BR}$") > NB1;
(2) average of CRI during resuscitation ("$\overline{CRI}_{DR}$") > NB2;
(3) average of CRI after resuscitation ("$\overline{CRI}_{AR}$") > NB3;
(4) $\overline{CRI}_{AR} - \overline{CRI}_{DR}$ > NB4;
(5) $\overline{CRI}_{DR} - \overline{CRI}_{BR}$ > NB5;
(6) $\overline{CRI}_{AR} - \overline{CRI}_{BR}$ > NB6;
(7) standard deviation or variance of CRI before resuscitation ("$[SD(CRI)]_{BR}$") < NB7;
(8) standard deviation or variance of CRI during resuscitation ("$[SD(CRI)]_{DR}$") < NB8;
(9) standard deviation or variance of CRI after resuscitation ("$[SD(CRI)]_{AR}$") < NB9;
(10) $[SD(CRI)]_{AR} - [SD(CRI)]_{BR}$ < NB10;
(11) moment coefficient of skewness of CRI (positive or negative) before resuscitation ("$S_{BR}$") < NB11;
(12) moment coefficient of skewness of CRI (positive or negative) during resuscitation ("$S_{DR}$") < NB12;
(13) moment coefficient of skewness of CRI (positive or negative) after resuscitation ("$S_{AR}$") < NB13;
(14) rate of change of CRI before resuscitation ("$m_{BR}$") > NB14;
(15) rate of change of CRI during resuscitation ("$m_{DR}$") > NB15;
(16) rate of change of CRI after resuscitation ("$m_{AR}$") > NB16;
(17) $m_{AR} - m_{BR}$ > NB17;
(18) $m_{DR} - m_{BR}$ > NB18;
(19) rate of rate change of CRI before resuscitation ("$r_{BR}$") > NB19;
(20) rate of rate change of CRI during resuscitation ("$r_{DR}$") > NB20;
(21) rate of rate change of CRI after resuscitation ("$r_{AR}$") > NB21;
(22) $r_{AR} - r_{BR}$ > NB22;
(23) $r_{DR} - r_{BR}$ > NB23;
and/or the like.

In some cases, each of, or one or more of, NB1 through NB23 might either be estimated experimentally or set by the user. Herein, the number K>0 may be different in each instance of the calculations (1) through (23), may be chosen by the user, or may be experimentally determined. With reference to (1), the average CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $\overline{CRI}_{BR}$ may be the average value of those points. Accordingly, for example, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{\overline{CRI}_{BR}}$ (e.g., NB1 above), and classifying non-bleeding may be determined if:

$\overline{CRI}_{BR} > NB_{\overline{CRI}_{BR}}$. (Eq. 26)

Referring to (2), the average CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $\overline{CRI}_{DR}$ may be the average value of those points. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{\overline{CRI}_{DR}}$ (e.g., NB2 above), and classifying non-bleeding may be determined if:

$\overline{CRI}_{DR} > NB_{\overline{CRI}_{DR}}$. (Eq. 27)

Regarding (3), the average CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $\overline{CRI}_{AR}$ may be the average value of those points. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{\overline{CRI}_{AR}}$ (e.g., NB3 above), and classifying non-bleeding may be determined if:

$\overline{CRI}_{AR} > NB_{\overline{CRI}_{AR}}$. (Eq. 28)

With reference to (4), $\overline{CRI}_{DR}$ and $\overline{CRI}_{AR}$ may be as defined above.
Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}NB_{\overline{CRI}_{DR}}$ (e.g., NB4 above), and classifying non-bleeding may be determined if:

$\overline{CRI}_{AR} - \overline{CRI}_{DR} > {}^{AR}NB_{\overline{CRI}_{DR}}$. (Eq. 29)

Referring to (5), $\overline{CRI}_{BR}$ and $\overline{CRI}_{DR}$ may be as defined above. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{DR}NB_{\overline{CRI}_{BR}}$ (e.g., NB5 above), and classifying non-bleeding may be determined if:

$\overline{CRI}_{DR} - \overline{CRI}_{BR} > {}^{DR}NB_{\overline{CRI}_{BR}}$. (Eq. 30)

Regarding (6), $\overline{CRI}_{BR}$ and $\overline{CRI}_{AR}$ may be as defined above. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}NB_{\overline{CRI}_{BR}}$ (e.g., NB6 above), and classifying non-bleeding may be determined if:

$\overline{CRI}_{AR} - \overline{CRI}_{BR} > {}^{AR}NB_{\overline{CRI}_{BR}}$. (Eq. 31)

With reference to (7), the variance of CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $[SD(CRI)]_{BR}$ may be the variation of those values (perhaps the standard deviation as defined above). Accordingly, for example, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{[SD(CRI)]_{BR}}$ (e.g., NB7 above), and classifying non-bleeding may be determined if:

$[SD(CRI)]_{BR} < NB_{[SD(CRI)]_{BR}}$. (Eq. 32)

Referring to (8), the variance of CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $[SD(CRI)]_{DR}$ may be the variation of those values (perhaps the standard deviation as defined above). Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{[SD(CRI)]_{DR}}$ (e.g., NB8 above), and classifying non-bleeding may be determined if:

$[SD(CRI)]_{DR} < NB_{[SD(CRI)]_{DR}}$. (Eq. 33)

Regarding (9), the variance of CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $[SD(CRI)]_{AR}$ may be the variation of those values (perhaps the standard deviation as defined above). Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{[SD(CRI)]_{AR}}$ (e.g., NB9 above), and classifying non-bleeding may be determined if:

$[SD(CRI)]_{AR} < NB_{[SD(CRI)]_{AR}}$. (Eq. 34)

Referring to (10), $[SD(CRI)]_{BR}$ and $[SD(CRI)]_{AR}$ may be as defined above. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}NB_{[SD(CRI)]_{BR}}$ (e.g., NB10 above), and classifying non-bleeding may be determined if:

$[SD(CRI)]_{AR} - [SD(CRI)]_{BR} < {}^{AR}NB_{[SD(CRI)]_{BR}}$. (Eq. 35)

With reference to (11), the skewness of CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $S_{BR}$ may be a measure of skewness of those points (perhaps as defined above). Accordingly, for example, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{S_{BR}}$ (e.g., NB11 above), and classifying non-bleeding may be determined if:

$|S_{BR}| < NB_{S_{BR}}$. (Eq. 36)

Referring to (12), the skewness of CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $S_{DR}$ may be a measure of skewness of those points (perhaps as defined above).
Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{S_{DR}}$ (e.g., NB12 above), and non-bleeding may be classified if: $|S_{DR}| < NB_{S_{DR}}$ (Eq. 37).

Regarding (13), the skewness of CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $S_{AR}$ may be a measure of skewness of those points (perhaps as defined above). Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{S_{AR}}$ (e.g., NB13 above), and non-bleeding may be classified if: $|S_{AR}| < NB_{S_{AR}}$ (Eq. 38).

With reference to (14), the rate of change of CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $m_{BR}$ may be a measure of rate of change of those points (perhaps as defined above). Accordingly, for example, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{m_{BR}}$ (e.g., NB14 above), and non-bleeding may be classified if: $m_{BR} > NB_{m_{BR}}$ (Eq. 39).

Referring to (15), the rate of change of CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $m_{DR}$ may be a measure of rate of change of those points (perhaps as defined above). Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{m_{DR}}$ (e.g., NB15 above), and non-bleeding may be classified if: $m_{DR} > NB_{m_{DR}}$ (Eq. 40).

Regarding (16), the rate of change of CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $m_{AR}$ may be a measure of rate of change of those points (perhaps as defined above). Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{m_{AR}}$ (e.g., NB16 above), and non-bleeding may be classified if: $m_{AR} > NB_{m_{AR}}$ (Eq. 41).

With reference to (17), $m_{BR}$ and $m_{AR}$ may be as defined above. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}NB_{m_{BR}}$ (e.g., NB17 above), and non-bleeding may be classified if: $m_{AR} - m_{BR} > {}^{AR}NB_{m_{BR}}$ (Eq. 42).

Referring to (18), $m_{BR}$ and $m_{DR}$ may be as defined above. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{DR}NB_{m_{BR}}$ (e.g., NB18 above), and non-bleeding may be classified if: $m_{DR} - m_{BR} > {}^{DR}NB_{m_{BR}}$ (Eq. 43).

With reference to (19), the rate of rate change of CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $r_{BR}$ may be a measure of rate of rate change of those points (perhaps as defined above). Accordingly, for example, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{r_{BR}}$ (e.g., NB19 above), and non-bleeding may be classified if: $r_{BR} > NB_{r_{BR}}$ (Eq. 44).

Referring to (20), the rate of rate change of CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $r_{DR}$ may be a measure of rate of rate change of those points (perhaps as defined above).
Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{r_{DR}}$ (e.g., NB20 above), and non-bleeding may be classified if: $r_{DR} > NB_{r_{DR}}$ (Eq. 45).

Regarding (21), the rate of rate change of CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $r_{AR}$ may be a measure of rate of rate change of those points (perhaps as defined above). Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $NB_{r_{AR}}$ (e.g., NB21 above), and non-bleeding may be classified if: $r_{AR} > NB_{r_{AR}}$ (Eq. 46).

With reference to (22), $r_{BR}$ and $r_{AR}$ may be as defined above. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}NB_{r_{BR}}$ (e.g., NB22 above), and non-bleeding may be classified if: $r_{AR} - r_{BR} > {}^{AR}NB_{r_{BR}}$ (Eq. 47).

Referring to (23), $r_{BR}$ and $r_{DR}$ may be as defined above. Accordingly, a classification of no bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{DR}NB_{r_{BR}}$ (e.g., NB23 above), and non-bleeding may be classified if: $r_{DR} - r_{BR} > {}^{DR}NB_{r_{BR}}$ (Eq. 48).

Similarly, in some instances, a method for estimating a (certain) bleeding state might include, but is not limited to, one of the following calculations or a combination of two or more such calculations, perhaps within a statistical and/or machine learning framework, or the like:

(1) Average of CRI before resuscitation ("$\overline{CRI}_{BR}$") < BL1;
(2) Average of CRI during resuscitation ("$\overline{CRI}_{DR}$") < BL2;
(3) Average of CRI after resuscitation ("$\overline{CRI}_{AR}$") < BL3;
(4) $\overline{CRI}_{AR} - \overline{CRI}_{DR} <$ BL4;
(5) $\overline{CRI}_{DR} - \overline{CRI}_{BR} <$ BL5;
(6) $\overline{CRI}_{AR} - \overline{CRI}_{BR} <$ BL6;
(7) standard deviation of CRI before resuscitation ("$[SD(CRI)]_{BR}$") > BL7;
(8) standard deviation of CRI during resuscitation ("$[SD(CRI)]_{DR}$") > BL8;
(9) standard deviation of CRI after resuscitation ("$[SD(CRI)]_{AR}$") > BL9;
(10) $[SD(CRI)]_{AR} - [SD(CRI)]_{BR} >$ BL10;
(11) moment coefficient of skewness of CRI (positive or negative) before resuscitation ("$S_{BR}$") > BL11;
(12) moment coefficient of skewness of CRI (positive or negative) during resuscitation ("$S_{DR}$") > BL12;
(13) moment coefficient of skewness of CRI (positive or negative) after resuscitation ("$S_{AR}$") > BL13;
(14) rate of change of CRI before resuscitation ("$m_{BR}$") < BL14;
(15) rate of change of CRI during resuscitation ("$m_{DR}$") < BL15;
(16) rate of change of CRI after resuscitation ("$m_{AR}$") < BL16;
(17) $m_{AR} - m_{BR} <$ BL17;
(18) $m_{DR} - m_{BR} <$ BL18;
(19) rate of rate change of CRI before resuscitation ("$r_{BR}$") < BL19;
(20) rate of rate change of CRI during resuscitation ("$r_{DR}$") < BL20;
(21) rate of rate change of CRI after resuscitation ("$r_{AR}$") < BL21;
(22) $r_{AR} - r_{BR} <$ BL22;
(23) $r_{DR} - r_{BR} <$ BL23;
and/or the like.

In some cases, each of, or one or more of, BL1 through BL23 might either be estimated experimentally or set by the user.

With reference to (1), the average CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $\overline{CRI}_{BR}$ may be the average value of those points. Accordingly, for example, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{\overline{CRI}_{BR}}$ (e.g., BL1 above), and bleeding may be classified if: $\overline{CRI}_{BR} < B_{\overline{CRI}_{BR}}$ (Eq. 49).

Referring to (2), the average CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $\overline{CRI}_{DR}$ may be the average value of those points.
Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{\overline{CRI}_{DR}}$ (e.g., BL2 above), and bleeding may be classified if: $\overline{CRI}_{DR} < B_{\overline{CRI}_{DR}}$ (Eq. 50).

Regarding (3), the average CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $\overline{CRI}_{AR}$ may be the average value of those points. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{\overline{CRI}_{AR}}$ (e.g., BL3 above), and bleeding may be classified if: $\overline{CRI}_{AR} < B_{\overline{CRI}_{AR}}$ (Eq. 51).

With reference to (4), $\overline{CRI}_{DR}$ and $\overline{CRI}_{AR}$ may be as defined above. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}B_{\overline{CRI}_{DR}}$ (e.g., BL4 above), and bleeding may be classified if: $\overline{CRI}_{AR} - \overline{CRI}_{DR} < {}^{AR}B_{\overline{CRI}_{DR}}$ (Eq. 52).

Referring to (5), $\overline{CRI}_{BR}$ and $\overline{CRI}_{DR}$ may be as defined above. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{DR}B_{\overline{CRI}_{BR}}$ (e.g., BL5 above), and bleeding may be classified if: $\overline{CRI}_{DR} - \overline{CRI}_{BR} < {}^{DR}B_{\overline{CRI}_{BR}}$ (Eq. 53).

Regarding (6), $\overline{CRI}_{BR}$ and $\overline{CRI}_{AR}$ may be as defined above. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}B_{\overline{CRI}_{BR}}$ (e.g., BL6 above), and bleeding may be classified if: $\overline{CRI}_{AR} - \overline{CRI}_{BR} < {}^{AR}B_{\overline{CRI}_{BR}}$ (Eq. 54).

With reference to (7), the variance of CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $[SD(CRI)]_{BR}$ may be the variation of those values (perhaps the standard deviation, as defined above). Accordingly, for example, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{[SD(CRI)]_{BR}}$ (e.g., BL7 above), and bleeding may be classified if: $[SD(CRI)]_{BR} > B_{[SD(CRI)]_{BR}}$ (Eq. 55).

Referring to (8), the variance of CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $[SD(CRI)]_{DR}$ may be the variation of those values (perhaps the standard deviation, as defined above). Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{[SD(CRI)]_{DR}}$ (e.g., BL8 above), and bleeding may be classified if: $[SD(CRI)]_{DR} > B_{[SD(CRI)]_{DR}}$ (Eq. 56).

Regarding (9), the variance of CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $[SD(CRI)]_{AR}$ may be the variation of those values (perhaps the standard deviation, as defined above). Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{[SD(CRI)]_{AR}}$ (e.g., BL9 above), and bleeding may be classified if: $[SD(CRI)]_{AR} > B_{[SD(CRI)]_{AR}}$ (Eq. 57).

Referring to (10), $[SD(CRI)]_{BR}$ and $[SD(CRI)]_{AR}$ may be as defined above. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}B_{[SD(CRI)]_{BR}}$ (e.g., BL10 above), and bleeding may be classified if: $[SD(CRI)]_{AR} - [SD(CRI)]_{BR} > {}^{AR}B_{[SD(CRI)]_{BR}}$ (Eq. 58).
With reference to (11), the skewness of CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $S_{BR}$ may be a measure of skewness of those points (perhaps as defined above). Accordingly, for example, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{S_{BR}}$ (e.g., BL11 above), and bleeding may be classified if: $|S_{BR}| > B_{S_{BR}}$ (Eq. 59).

Referring to (12), the skewness of CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $S_{DR}$ may be a measure of skewness of those points (perhaps as defined above). Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{S_{DR}}$ (e.g., BL12 above), and bleeding may be classified if: $|S_{DR}| > B_{S_{DR}}$ (Eq. 60).

Regarding (13), the skewness of CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $S_{AR}$ may be a measure of skewness of those points (perhaps as defined above). Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{S_{AR}}$ (e.g., BL13 above), and bleeding may be classified if: $|S_{AR}| > B_{S_{AR}}$ (Eq. 61).

With reference to (14), the rate of change of CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $m_{BR}$ may be a measure of rate of change of those points (perhaps as defined above). Accordingly, for example, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{m_{BR}}$ (e.g., BL14 above), and bleeding may be classified if: $m_{BR} < B_{m_{BR}}$ (Eq. 62).

Referring to (15), the rate of change of CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $m_{DR}$ may be a measure of rate of change of those points (perhaps as defined above). Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{m_{DR}}$ (e.g., BL15 above), and bleeding may be classified if: $m_{DR} < B_{m_{DR}}$ (Eq. 63).

Regarding (16), the rate of change of CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $m_{AR}$ may be a measure of rate of change of those points (perhaps as defined above). Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{m_{AR}}$ (e.g., BL16 above), and bleeding may be classified if: $m_{AR} < B_{m_{AR}}$ (Eq. 64).

With reference to (17), $m_{BR}$ and $m_{AR}$ may be as defined above. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}B_{m_{BR}}$ (e.g., BL17 above), and bleeding may be classified if: $m_{AR} - m_{BR} < {}^{AR}B_{m_{BR}}$ (Eq. 65).

Referring to (18), $m_{BR}$ and $m_{DR}$ may be as defined above. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{DR}B_{m_{BR}}$ (e.g., BL18 above), and bleeding may be classified if: $m_{DR} - m_{BR} < {}^{DR}B_{m_{BR}}$ (Eq. 66).

With reference to (19), the rate of rate change of CRI before resuscitation, $CRI_{BR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times before resuscitation, and $r_{BR}$ may be a measure of rate of rate change of those points (perhaps as defined above).
Accordingly, for example, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{r_{BR}}$ (e.g., BL19 above), and bleeding may be classified if: $r_{BR} < B_{r_{BR}}$ (Eq. 67).

Referring to (20), the rate of rate change of CRI during resuscitation, $CRI_{DR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times during resuscitation, and $r_{DR}$ may be a measure of rate of rate change of those points (perhaps as defined above). Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{r_{DR}}$ (e.g., BL20 above), and bleeding may be classified if: $r_{DR} < B_{r_{DR}}$ (Eq. 68).

Regarding (21), the rate of rate change of CRI after resuscitation, $CRI_{AR} = \{CRI(t_1), CRI(t_2), \ldots, CRI(t_K)\}$ may be any set of points sampled at times after resuscitation, and $r_{AR}$ may be a measure of rate of rate change of those points (perhaps as defined above). Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by $B_{r_{AR}}$ (e.g., BL21 above), and bleeding may be classified if: $r_{AR} < B_{r_{AR}}$ (Eq. 69).

With reference to (22), $r_{BR}$ and $r_{AR}$ may be as defined above. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{AR}B_{r_{BR}}$ (e.g., BL22 above), and bleeding may be classified if: $r_{AR} - r_{BR} < {}^{AR}B_{r_{BR}}$ (Eq. 70).

Referring to (23), $r_{BR}$ and $r_{DR}$ may be as defined above. Accordingly, a classification of bleeding may be made by choosing a threshold, either experimentally or user set, denoted by ${}^{DR}B_{r_{BR}}$ (e.g., BL23 above), and bleeding may be classified if: $r_{DR} - r_{BR} < {}^{DR}B_{r_{BR}}$ (Eq. 71).

Likewise, in some instances, a method for estimating a probability of bleeding (e.g., expressed as a value between 0 and 1) might include, but is not limited to, one of the above calculations or a combination of two or more such calculations, perhaps within a statistical and/or machine learning framework, or the like, to estimate the probability of bleeding. In some cases, the method might include, without limitation, empirical estimations of probability density functions and/or cumulative distribution functions using graphical and/or nonparametric models, and/or the like. Other methods might include, but are not limited to: (i) probability of bleeding being proportional to the number of times the bleeding threshold is achieved; (ii) probability of no bleeding being proportional to the number of times the no bleeding threshold is achieved; (iii) probability of bleeding being proportional to the number of times the bleeding threshold is achieved minus the number of times the no bleeding threshold is achieved; (iv) probability of bleeding being expressed as

$$\Pr(\text{bleeding}) = f\left(\overline{CRI}_{BR}, \overline{CRI}_{DR}, \overline{CRI}_{AR}, [SD(CRI)]_{BR}, [SD(CRI)]_{DR}, [SD(CRI)]_{AR}, S_{BR}, S_{DR}, S_{AR}, m_{BR}, m_{DR}, m_{AR}, r_{BR}, r_{DR}, r_{AR}\right) \qquad \text{(Eq. 72)}$$

where $f$ is some empirical estimation of the probability density function and/or cumulative distribution functions using graphical and/or nonparametric models.
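To make the foregoing concrete, the following is a minimal Python sketch of how a few of calculations (1)-(23) and the count-based probability methods (i)-(iii) might be combined. It assumes the CRI samples have already been segmented into before/during/after resuscitation phases; the particular subset of rules, the helper names, and all NB*/BL* threshold values are illustrative placeholders that, per the text, would be estimated experimentally or set by the user.

```python
import numpy as np

def phase_features(times, cri):
    """Summary features for one phase (before/during/after resuscitation):
    mean, standard deviation, Fisher-Pearson skewness (Eq. 24), rate of
    change m (least-squares slope), and rate of rate change r."""
    t = np.asarray(times, dtype=float)
    x = np.asarray(cri, dtype=float)
    mean, sd = x.mean(), x.std()
    skew = ((x - mean) ** 3).mean() / sd ** 3 if sd > 0 else 0.0
    m = np.polyfit(t, x, 1)[0]                  # slope of CRI vs. time
    r = np.polyfit(t, np.gradient(x, t), 1)[0]  # slope of dCRI/dt
    return {"mean": mean, "sd": sd, "skew": skew, "m": m, "r": r}

def bleeding_probability(br, dr, ar, nb, bl):
    """Apply a subset of the NB/BL threshold rules and convert the counts
    to a probability per methods (i)-(iii). br/dr/ar are phase_features()
    outputs; nb and bl map rule names to placeholder thresholds."""
    nb_hits = sum([
        br["mean"] > nb["NB1"],                  # calculation (1)
        dr["mean"] > nb["NB2"],                  # calculation (2)
        ar["mean"] - dr["mean"] > nb["NB4"],     # calculation (4)
        ar["sd"] - br["sd"] < nb["NB10"],        # calculation (10)
        abs(br["skew"]) < nb["NB11"],            # calculation (11)
        dr["m"] > nb["NB15"],                    # calculation (15)
        ar["r"] - br["r"] > nb["NB22"],          # calculation (22)
    ])
    bl_hits = sum([
        br["mean"] < bl["BL1"],                  # calculation (1)
        ar["mean"] - dr["mean"] < bl["BL4"],     # calculation (4)
        ar["sd"] - br["sd"] > bl["BL10"],        # calculation (10)
        dr["m"] < bl["BL15"],                    # calculation (15)
        ar["r"] - br["r"] < bl["BL22"],          # calculation (22)
    ])
    total = nb_hits + bl_hits
    # Normalized variant of method (iii): more bleeding-threshold crossings
    # than non-bleeding crossings pushes the estimate toward 1.
    return bl_hits / total if total else 0.5
```

A statistical and/or machine learning framework, as contemplated above, would replace the hand-picked rules with a model fit on the same per-phase features (e.g., the function $f$ of Eq. 72).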
In some embodiments, estimated CRI values might include, but are not limited to, one or more of CRI values estimated or measured after every heartbeat, CRI values averaged over the preceding or last N seconds (where N>1), and/or the median value of CRI over the preceding or last N seconds (where N>1), or the like. According to some embodiments, the calculations described above with respect to blocks 515-535 might utilize these estimated CRI values. According to some embodiments, instead of using CRI measurements, a method might use all or some of the calculations above that replace CRI values with values corresponding to measurements related to any measure of compensatory reserve, or derivative thereof, using one or more of the sensor types described above.

FIGS. 6-8 illustrate exemplary screen captures from a display device of a compensatory reserve monitor, showing various features that can be provided by one or more embodiments. Similar screens could be shown by other monitoring devices, such as a display of a wrist-wearable sensor device, a display of a monitoring computer, and/or the like. While FIGS. 6-8 use BL or CRI as an example condition for illustrative purposes, other embodiments might also display values for the volume, V, of fluid necessary for effective hydration, or the probability, Pf, that the patient needs fluid (including additional fluid, if hydration efforts already are underway).

FIG. 6 illustrates an exemplary display 600 of a compensatory reserve monitor implementation where a normalized CRI estimate of "1" implies that blood loss is certain, and "0" implies that there is no blood loss. Values in between "0" and "1" imply a continuum of probability of blood loss.

FIG. 7A illustrates four screen captures 700 of a display of a compensatory reserve monitor implementation that displays BL as a "fuel gauge" type bar graph for a person undergoing central volume blood loss and subsequent hydration efforts, or for a person who is about to undergo, is undergoing, or has undergone fluid resuscitation. While FIG. 6 illustrates a trace of CRI over time, the bar graphs of FIG. 7A provide snapshots of BL at the time of each screen capture corresponding to the CRI of FIG. 6. (In the illustrated implementation, the bar graphs are continuously and/or periodically updated, such that each bar graph could correspond to a particular position on the X-axis of FIG. 6.)

A variety of additional features are possible. Merely by way of example, FIG. 7B illustrates similar "fuel gauge" type displays, but the displays feature bars of different colors (for example, green, illustrated by diagonal cross-hatching; yellow, illustrated by a checked pattern; and red, illustrated by gray shading) corresponding to different levels of CRI, along with arrows 710 indicating trending in the CRI values (e.g., rising, declining, or remaining stable), the CRI values and trends being indicative of blood loss occurring and/or resuscitation efforts being active.

In some embodiments, such a "fuel gauge" display (or other indicator of BL or CRI and/or different physiological parameters) can be incorporated in a more comprehensive user interface. Merely by way of example, FIG. 8 illustrates an exemplary display 800 of a monitoring system.
The display 800 includes a graphical, color-coded "fuel gauge" type display 805 of the current estimated BL (similar to the displays illustrated by FIG. 7B), along with a historical display 810 of recent CRI estimates; in this example, each bar on the historical display 810 might correspond to an estimate performed every minute, but different estimate frequencies are possible, and in some embodiments, the operator can be given the option to specify a different frequency. In the illustrated embodiment, the display 800 also includes a numerical display 815 of the current BL as well as a trend indicator 820 (similar to that indicated above). In particular embodiments, the display 800 can include additional information (and, in some cases, the types of information displayed and/or the type of display can be configured by the operator). For instance, the exemplary display 800 includes an indicator 825 of the patient's current heart rate and an indicator 830 of the patient's blood oxygen saturation level (SpO2). The exemplary display 800 also includes an indicator of the estimated volume, V, necessary for effective hydration, as well as a numerical indicator 840, a trend indicator 845, and a similar color-coded "fuel gauge" display 850 of the current CRI. Other monitored parameters might be displayed as well, such as an ECG tracing, blood pressure, probability of bleeding estimates, and/or the like.

Exemplary Clinical Study

FIGS. 9A-9H (collectively, "FIG. 9") are graphical diagrams 900 illustrating rapid detection of bleeding before, during, and after fluid resuscitation of patients in a multi-trauma clinical study at Denver Health Medical Center ("DHMC"), in accordance with various embodiments. In one exemplary multi-trauma clinical study at DHMC, 50 patients were enrolled, of which 45 patients met the required criteria while 5 were excluded (as having incomplete data and/or device). Of the 45 patients, 12 were bleeding (with initial CRI values of 0.17±0.07 and mean injury severity score ("ISS") of 27±12.7), 30 were non-bleeding (with initial CRI values of 0.56±0.17 and mean ISS of 7.5±8.7), and 3 were indeterminate.

With reference to FIG. 9, FIG. 9A illustrates a receiver operating characteristic ("ROC") curve that is used for classification of bleeding using compensatory reserve. The sensitivity is 0.93, with specificity of 0.92, and area under the curve ("AUC") of 0.97.

FIG. 9B illustrates the CRI for the non-bleeding patients (indicated in the graph as "Trauma No Hemorrhage") and for the bleeding patients (indicated in the graph as "Trauma + Hemorrhage"). As shown in FIG. 9B, CRI values are low during bleeding.

FIGS. 9C-9E illustrate line tracings of actual CRI curves for three representative patients among the non-bleeding group. The CRI values for the non-bleeding patients before infusion of intravenous fluid ("IVF") were 0.56±0.17. FIG. 9C depicts the CRI curves for non-bleeding trauma patient 003, who had a CRI of >0.3 before infusion of IVF, with the IVF containing 2 L of saline solution. There was no sustained drop in CRI in this patient during or after infusion of IVF. FIG. 9D depicts the CRI curves for non-bleeding trauma patient 042, who had a CRI of 0.4 before infusion of IVF, with the IVF containing 1 L of saline solution.
There was no wound exploration and no sustained drop in CRI in this patient during or after infusion of IVF. FIG. 9E depicts the CRI curves for non-bleeding trauma patient 018, who had a CRI of 0.65 before infusion of IVF, with the IVF containing 2 L of saline solution, 1 L of lactated Ringer's ("LR") solution, and 2 packets of packed red blood cells ("PRBC"). There was no sustained drop in CRI in this patient during or after infusion of IVF. As shown in FIGS. 9C-9E, CRI is high or generally increasing during and after fluid resuscitation for the non-bleeding group.

FIGS. 9F-9H illustrate line tracings of actual CRI curves for three representative patients among the bleeding group. The CRI values for the bleeding patients before infusion of IVF were 0.17±0.07. FIG. 9F depicts the CRI curves for bleeding trauma patient 019, who had a CRI of 0.15 before infusion of IVF (at time 905), and with an infusion of a first IVF (at time 910), the first IVF containing 7 L of saline solution, 3 packets of PRBC, 1 packet of platelets ("PLTs"), and 3 packets of fresh frozen plasma ("FFP"). The CRI dropped after an initial increase (as shown at time 915). At time 920, a second IVF was infused, the second IVF containing 4 L of saline solution, 3 packets of PRBC, and 3 packets of FFP. FIG. 9G depicts the CRI curves for bleeding trauma patient 006, who had a CRI of 0.15 before infusion of IVF (at time 925), and with an infusion of a first IVF (at time 930), the first IVF containing 2 L of saline solution. The CRI dropped after an initial increase (as shown at time 935). At time 940, a second IVF was infused, the second IVF containing 1 L of saline solution. Again, the CRI dropped (as shown at time 945). FIG. 9H depicts the CRI curves for bleeding trauma patient 012, who had a CRI of 0.15 before infusion of IVF, with infusions of a first IVF (at time 950) and a second IVF (at time 955), the first IVF containing 1 L of saline solution and the second IVF containing 2.25 L of saline solution. As shown in FIGS. 9F-9H, CRI drops after an initial increase (during and after fluid resuscitation) for the bleeding group.

Exemplary System and Hardware Implementation

FIG. 10 is a block diagram illustrating an exemplary computer or system hardware architecture, in accordance with various embodiments. FIG. 10 provides a schematic illustration of one embodiment of a computer system 1000 that can perform the methods provided by various other embodiments, as described herein, and/or can function as a monitoring computer, a CRI monitor, a processing unit of a sensor device, and/or the like, as described above. It should be noted that FIG. 10 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate. FIG. 10, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner. The computer or hardware system 1000 is shown comprising hardware elements that can be electrically coupled via a bus 1005 (or may otherwise be in communication, as appropriate).
The hardware elements may include one or more processors 1010, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 1015, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 1020, which can include, without limitation, a display device, a printer, and/or the like.

The computer or hardware system 1000 may further include (and/or be in communication with) one or more storage devices 1025, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, or a solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.

The computer or hardware system 1000 might also include a communications subsystem 1030, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like. The communications subsystem 1030 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein. In many embodiments, the computer or hardware system 1000 will further comprise a working memory 1035, which can include a RAM or ROM device, as described above.

The computer or hardware system 1000 also may comprise software elements, shown as being currently located within the working memory 1035, including an operating system 1040, device drivers, executable libraries, and/or other code, such as one or more application programs 1045, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.

A set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 1025 described above. In some cases, the storage medium might be incorporated within a computer system, such as the system 1000. In other embodiments, the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon.
These instructions might take the form of executable code, which is executable by the computer or hardware system 1000, and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 1000 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code.

It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware (such as programmable logic controllers, field-programmable gate arrays, application-specific integrated circuits, and/or the like) might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.

As mentioned above, in one aspect, some embodiments may employ a computer or hardware system (such as the computer or hardware system 1000) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer or hardware system 1000 in response to processor 1010 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 1040 and/or other code, such as an application program 1045) contained in the working memory 1035. Such instructions may be read into the working memory 1035 from another computer readable medium, such as one or more of the storage device(s) 1025. Merely by way of example, execution of the sequences of instructions contained in the working memory 1035 might cause the processor(s) 1010 to perform one or more procedures of the methods described herein.

The terms "machine readable medium" and "computer readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer or hardware system 1000, various computer readable media might be involved in providing instructions/code to processor(s) 1010 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer readable medium is a non-transitory, physical, and/or tangible storage medium. In some embodiments, a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like. Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 1025. Volatile media includes, without limitation, dynamic memory, such as the working memory 1035. In some alternative embodiments, a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 1005, as well as the various components of the communications subsystem 1030 (and/or the media by which the communications subsystem 1030 provides communication with other devices). In an alternative set of embodiments, transmission media can also take the form of waves (including, without limitation, radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).
Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.

Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 1010 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 1000. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.

The communications subsystem 1030 (and/or components thereof) generally will receive the signals, and the bus 1005 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 1035, from which the processor(s) 1010 retrieves and executes the instructions. The instructions received by the working memory 1035 may optionally be stored on a storage device 1025 either before or after execution by the processor(s) 1010.

CONCLUSION

This document discloses novel tools and techniques for assessing blood loss in patients (e.g., before, during, and/or after fluid resuscitation), compensatory reserve, and similar physiological states. While certain features and aspects have been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the methods and processes described herein may be implemented using hardware components, software components, and/or any combination thereof. Further, while various methods and processes described herein may be described with respect to particular structural and/or functional components for ease of description, methods provided by various embodiments are not limited to any particular structural and/or functional architecture but instead can be implemented on any suitable hardware, firmware, and/or software configuration. Similarly, while certain functionality is ascribed to certain system components, unless the context dictates otherwise, this functionality can be distributed among various other system components in accordance with the several embodiments.

Moreover, while the procedures of the methods and processes described herein are described in a particular order for ease of description, unless the context dictates otherwise, various procedures may be reordered, added, and/or omitted in accordance with various embodiments. Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems.
Hence, while various embodiments are described with—or without—certain features for ease of description and to illustrate exemplary aspects of those embodiments, the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise. Consequently, although several exemplary embodiments are described above, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims. | 135,704 |
11857294 | DETAILED DESCRIPTION OF THE INVENTION

As used herein, the term "a" or "an" when used in conjunction with the term "comprising" in the claims and/or the specification may mean "one," but it is also consistent with the meaning of "one or more," "at least one," and "one or more than one." Some embodiments of the invention may consist of or consist essentially of one or more elements, method steps, and/or methods of the invention. It is contemplated that any method described herein can be implemented with respect to any other method described herein. As used herein, the term "or" in the claims is used to mean "and/or" unless explicitly indicated to refer to alternatives only or the alternatives are mutually exclusive, although the disclosure supports a definition that refers to only alternatives and "and/or." As used herein, "comprise" and its variations, such as "comprises" and "comprising," is understood to imply the inclusion of a stated item, element or step or group of items, elements or steps but not the exclusion of any other item, element or step or group of items, elements or steps unless the context requires otherwise. Similarly, "another" or "other" may mean at least a second or more of the same or different claim element or components thereof. As used herein, the terms "subject" and "patient" are interchangeable and refer to any individual on which any of the medical devices described herein are used.

In one embodiment of the present invention there is provided a medical device for measuring tissue properties in a subject, comprising a light-guiding cone comprising an opaque, anti-reflective, sloped surface and having optical properties that direct light along an optical excitation path into a homogeneous field on a tissue of interest in the subject; a plurality of excitation light sources disposed at an open end of the light-guiding cone, each of the plurality emitting light at a wavelength from visible to near infrared; an image sensor configured to measure intensities of light with different wavelengths reflected from the tissue of interest; means for optically blocking light not reflected from the tissue of interest; a printed circuit board in operable communication with the device and configured to enable wireless communications; and a processor and a memory tangibly storing an algorithm comprising processor-executable instructions for processing the reflected wavelengths as a measurement of tissue properties in electronic communication with the device.

Further to this embodiment the device may comprise a removable optically clear cap comprising a sterile barrier and disposed between the device and the tissue of interest, an optical diffuser positioned on the optical excitation path configured to direct the light into the homogeneous field on the tissue of interest, a temperature sensor to measure a surface temperature of the tissue of interest, at least one accelerometer to remove effects of tissue or device movement during data calibration or during data acquisition, or a display to monitor tissue properties, or a combination thereof. In another further embodiment the light-guiding cone may comprise an impedance sensor for detecting moisture content in the tissue. In yet another further embodiment the printed circuit board may comprise at least one accelerometer therewithin to quantify movements of the device.
In all embodiments the means for optically blocking light not reflected from the tissue of interest comprises a radially extruded lip at the end of the light-guiding cone disposed to cover an area of the tissue of interest under interrogation and to prevent ambient light from impinging on the area, the radially extruded lip comprising at least one pressure sensor configured to sense conformal attachment of the medical device to the surface of the tissue of interest. Also in all embodiments the light-guiding cone may comprise at least one reflective material and may be configured for automatic self-calibration. In addition one of the wavelengths emitted from the plurality of excitation light sources may be an isosbestic point of about 805 nm that enables a ratiometric image sensor measurement of a non-isosbestic wavelength to the isosbestic wavelength. In all embodiments the algorithm may comprise processor-executable instructions configured to predict an Ankle Brachial Index (ABI) from measurements of tissue oxygenation, tissue temperature, or perfusion index or a combination thereof at at least one wavelength; correlate tissue oxygenation measurements from an upper limb and a lower limb of the subject to the Ankle Brachial Index (ABI); measure a photoplethysmography (PPG) signal; calculate distance from the image sensor to the tissue of interest via an analysis of patterns of light formed on the surface of the tissue of interest; predict the stage of at least one pressure ulcer in the subject as stage 1, stage 2, stage 3, or stage 4; predict sub-clinical stage 1 pressure ulcers in the subject; or predict peripheral artery disease in the subject; or a combination thereof. In another embodiment of the present invention there is provided method for measuring tissue properties in a subject, comprising the steps of a) illuminating a tissue of interest in the subject with a non-isosbestic wavelength emitted from the plurality of excitation light sources comprising the medical device, as described supra; b) measuring a reflected non-isosbestic wavelength via the image sensor comprising the device; c) illuminating the tissue of interest with an isosbestic wavelength; d) measuring a reflected isosbestic wavelength; e) determining a ratiometric image sensor measurement of the reflected non-isosbestic wavelength to the reflected isosbestic wavelength via the algorithm comprising the medical device; f) correlating the ratiometric image sensor measurement with at least one tissue property of the tissue of interest; and g) repeating steps a) to f) at least once with another non-isosbestic wavelength and the isosbestic wavelength. Further to this embodiment the method may comprise measuring the tissue properties to determine a baseline; measuring the tissue properties as the subject exercises; measuring the tissue properties during a recovery period after exercise is completed; measuring a recovery time of the tissue properties; and correlating, via the algorithm, the recovery time with an ankle brachial index in the subject or to predict severity of peripheral arterial disease in the subject. 
In both embodiments steps a) to d) may comprise illuminating sequentially the tissue of interest with a plurality of non-isosbestic wavelengths of differing wavelengths; measuring sequentially the plurality of reflected non-isosbestic wavelengths; illuminating sequentially the tissue of interest with a plurality of isosbestic wavelengths of differing wavelengths; and measuring sequentially the plurality of reflected isosbestic wavelengths.

In yet another embodiment of the present invention there is provided a medical device for detecting pressure ulcers in a tissue, comprising a plurality of excitation light sources to produce an excitation signal; at least one optical sensor configured to detect a spectral response to the excitation signal from the tissue; and at least one processor in operable communication with the optical sensor(s) and having a wireless network connection. Further to this embodiment the medical device may comprise a disposable optically clear material removably positionable between the device and the tissue. In another further embodiment the medical device may comprise, in operable communication with the at least one processor, at least one temperature sensor, at least one pressure inducer, or at least one pressure sensor, or a combination thereof.

In all embodiments the plurality of excitation light sources may transmit at least two of a light with a 660 nm wavelength, a light with a 950 nm wavelength, or a light with an 800 nm wavelength. Also in all embodiments the processor may be configured to measure a spectral response of the tissue, to quantitate blood capillary refill rates, to determine a likelihood of the tissue developing a pressure ulcer, to communicate wirelessly with a smart device, or to update an electronic health record, or a combination thereof.

In a related embodiment of the present invention there is provided a system to detect a pressure ulcer in a tissue, comprising the medical device described supra; an optically clear material removably positionable between the device and the tissue; and a smart device in wireless communication with the processor.

In yet another embodiment of the present invention there is provided a method for detecting a pressure ulcer in a tissue of interest in a subject, comprising the steps of a) placing the medical device described supra on the tissue of interest; b) delivering the excitation signal from at least two of the plurality of excitation light sources to the tissue of interest; c) detecting with the optical sensor an intensity of the light reflected from the tissue of interest as electrical signals; d) converting the electrical signals to a ratiometric measure of deoxyhemoglobin and water in the tissue of interest, which correlates with the presence or absence of the pressure ulcer in the tissue of interest; and e) repeating steps a)-d) zero or more times to determine whether the pressure ulcer is healing or worsening.

Further to this embodiment the medical device may comprise a pressure sensor, the method comprising measuring pressure returned from the illuminated tissue of interest; measuring a time decay of the intensity of the light reflected from the tissue of interest; and quantitating capillary refill based on the time decay at the measured pressure. In another further embodiment the method comprises sending the ratiometric measure to the smart device or updating an electronic health record or a combination thereof.
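As an illustration of steps c) and d) and of the capillary refill quantitation just described, the sketch below forms the ratiometric deoxyhemoglobin and water measures from photodiode currents at the 660 nm, 950 nm, and 800 nm (isosbestic) wavelengths named in the surrounding text, and estimates a refill time constant from the decay of the reflected-light signal at a known applied pressure. The function names, the threshold arguments, and the exponential-recovery model are assumptions for illustration, not the specified algorithm.

```python
import numpy as np

def ratiometric_measures(i_660, i_950, i_800):
    """Step d): normalize the deoxyhemoglobin (~660 nm) and water (~950 nm)
    photodiode currents by the ~800 nm isosbestic current; per the text, the
    ratios cancel confounders such as melanin and sensor-to-tissue distance."""
    return {"deoxy": i_660 / i_800, "water": i_950 / i_800}

def ulcer_suspected(measures, deoxy_thresh, water_thresh):
    """Higher deoxyhemoglobin and water ratios correlate with pre-stage 1 /
    stage 1 ulcers; both thresholds would come from clinical-study data."""
    return (measures["deoxy"] > deoxy_thresh
            and measures["water"] > water_thresh)

def capillary_refill_tau(t, intensity, plateau_samples=5):
    """Quantitate capillary refill from the time decay of the reflected-light
    signal at a known applied pressure, assuming an exponential recovery
    I(t) = I_inf - A*exp(-t/tau) toward a plateau I_inf."""
    t = np.asarray(t, dtype=float)
    i = np.asarray(intensity, dtype=float)
    i_inf = i[-plateau_samples:].mean()        # plateau after refill
    resid = np.clip(np.abs(i_inf - i[:-plateau_samples]), 1e-9, None)
    slope, _ = np.polyfit(t[:-plateau_samples], np.log(resid), 1)
    return -1.0 / slope                        # tau, in the units of t
```

The time constant tau is a quantitative stand-in for the qualitative capillary refill assessment practitioners make today: a longer tau at a given pressure indicates slower refill.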
In all embodiments the plurality of excitation light sources may deliver excitation signals with a 660 nm wavelength to measure deoxyhemoglobin in the tissue of interest, transmit light with a 950 nm wavelength to measure water in the tissue of interest, and transmit a reference light with an 800 nm wavelength that is an isosbestic point of deoxyhemoglobin and oxyhemoglobin.

In yet another embodiment of the present invention there is provided a platform for remote monitoring of a subject post flap surgery, comprising a flap patch removably positionable on a flap on the subject post flap surgery that is configured to obtain periodically measurements of oxygen saturation (StO2) and temperature at the flap; a control patch removably positionable on healthy tissue on the subject proximate to the flap patch that is configured to obtain periodically measurements of temperature of the healthy tissue; and a reusable receiver in wireless electronic communication simultaneously with the flap patch and the control patch and configured to transmit the measurements received from the flap patch and the control patch to a cloud server. Further to this embodiment the reusable receiver may comprise a display.

In one aspect of all embodiments the flap patch may comprise an oxygen saturation (StO2) patch with a plurality of LEDs that emit multispectral light and a photodiode that receives reflected light; a temperature patch with an ambient temperature sensor and a body temperature sensor; an insulator disposed around the temperature patch; and a controller in operable electronic communication with the oxygen saturation patch and the temperature patch. In this aspect the plurality of LEDs may emit multispectral light with wavelengths of 625 nm, 680 nm, 805 nm, and 870 nm. In another aspect of all embodiments the healthy tissue patch may comprise a temperature patch with an ambient temperature sensor and a body temperature sensor; an insulator disposed around the temperature patch; and a controller in operable electronic communication with the temperature patch.

In yet another embodiment of the present invention there is provided a method for remotely monitoring in real time a surgical flap on a post-operative subject, comprising positioning the flap patch and the control patch comprising the platform described supra on the surgical flap and on surrounding healthy tissue; measuring simultaneously in real time oxygen saturation (StO2) and flap temperature of the surgical flap via the flap patch and temperature of the healthy tissue via the control patch; wirelessly transmitting measured values of oxygen saturation of the surgical flap and of temperatures of the surgical flap and healthy tissue to a cloud server via the receiver; and comparing remotely the measured values over time to monitor tissue health of the surgical flap on the post-operative subject.

In one aspect of this embodiment the step of measuring the oxygen saturation may comprise delivering to the surgical flap light with a wavelength at an isosbestic point of blood oxygen, light at two non-isosbestic wavelengths below the isosbestic point, and light at a non-isosbestic wavelength above the isosbestic point; measuring via the photodiode light reflected from the surgical flap; and calculating a ratiometric measurement of the non-isosbestic wavelengths to the isosbestic wavelength to determine the value in real time of the oxygen saturation in the surgical flap.
In this aspect the isosbestic point may be 805 nm, the non-isosbestic wavelengths below the isosbestic point may be 625 nm and 680 nm, and the non-isosbestic wavelength above the isosbestic point may be 870 nm.

Provided herein are medical devices, systems, and methods for measuring tissue properties, for example, but not limited to, oxygen saturation, the temperature of core and peripheral tissues, and/or tissue edema. Such measurements enable a healthcare provider to quickly diagnose such conditions as peripheral artery disease, pressure ulcers, and the status of wounds and post-surgical flap tissue, so that medical intervention may be initiated as is known and standard in the art. The medical device may be handheld for ease of use on a patient, for example, in an office or clinic setting. The medical device may be portable, for example, of a size to fit in a pocket, where the healthcare provider can carry the device to the patient. The medical device may be embodied in a wireless-enabled platform comprising a patch that can be adhered to the patient for remote monitoring from the patient's home. The medical devices may comprise a display on which the results are read or may be enabled for wireless communication with an app on a smart device, such as, but not limited to, a smart phone or a tablet on which results are displayed. As such, the medical devices are in wireless communication with a cloud server, for example, a HIPAA compliant cloud server, from which a healthcare provider may download the results for review.

The handheld medical device has a cone with optical material properties, such as opacity and/or anti-reflectivity, that direct the light into a homogeneous field; an aperture at the small end of the cone that is directed at the tissue; means for optically blocking non-reflected light, for example, an extruded lip radially disposed around the aperture at the small end; a consumable sterile cap that is optically transparent and is positioned on the surface of the tissue; a ring light of LEDs at the large end of the cone aperture with a plurality of wavelengths, one of which is the isosbestic point (~805 nm) for oxygenated and deoxygenated blood; an optical sensor with sensitivity from visible light to near infrared at the large aperture of the cone; an optical wall between the sensor and LED ring light to minimize light from the LEDs leaking into the sensor; an optical diffuser in the optical path; X, Y, and Z accelerometers for detecting motion of the patient; a processor for algorithmic calculations; a look-up table to predict ABI; and a temperature sensor to measure the temperature at the surface of the tissue.

The light-directing cone is placed on the ankle and wrist to measure optical tissue properties. The cone is designed to direct eight wavelengths of light from a ring of light emitting diodes (LEDs), emitted from an aperture at the small end of the cone, to form a homogeneous field of light for each wavelength. Both visible and infrared light wavelengths emitted from the ring light onto the tissue are scattered and reflected by the red blood cells coursing through the area of illumination. Returning light is detected by the sensor at the large end of the cone. Tissue oxygenation, or oxygen saturation in the medical field, is measured and calculated using the change in intensity at the eight wavelengths and a software algorithm. Also, a blood flow (photoplethysmogram, PPG) waveform is instantaneously constructed.
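A minimal sketch of the ratiometric oxygen saturation computation used across these devices follows, illustrated with the four flap-patch wavelengths named above (625, 680, 805, and 870 nm), with 805 nm as the isosbestic reference. Each non-isosbestic intensity is divided by the isosbestic intensity; the linear calibration mapping the ratios to an StO2 value is an assumption, since the text states only that a software algorithm uses the wavelength features, so the coefficients would have to be fit to clinical data.

```python
import numpy as np

ISOSBESTIC_NM = 805  # reflected intensity here is insensitive to oxygenation

def isosbestic_ratios(intensities):
    """intensities: {wavelength_nm: mean reflected intensity}. Returns each
    non-isosbestic intensity divided by the 805 nm reference, per the
    ratiometric measurement described in the text."""
    ref = intensities[ISOSBESTIC_NM]
    return {wl: v / ref for wl, v in intensities.items() if wl != ISOSBESTIC_NM}

def sto2_estimate(intensities, coeffs):
    """Placeholder calibration: a linear model on the ratios stands in for
    the unspecified StO2 algorithm; coeffs maps each wavelength to a weight
    plus a 'bias' term, all of which would be fit to clinical data."""
    ratios = isosbestic_ratios(intensities)
    raw = coeffs["bias"] + sum(coeffs[wl] * r for wl, r in ratios.items())
    return float(np.clip(raw, 0.0, 1.0))  # StO2 expressed as a fraction

# Example with hypothetical numbers throughout:
# sto2 = sto2_estimate({625: 1.10, 680: 0.95, 805: 1.00, 870: 1.05},
#                      {"bias": 0.2, 625: 0.1, 680: 0.2, 870: 0.3})
```

Because the ratios share the same optical path and detector, slow multiplicative drifts (coupling, distance, skin tone) largely cancel, which is the stated rationale for referencing every measurement to the isosbestic wavelength.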
The changes in the wavelength intensity are a function of the oxygen in the blood. A software algorithm using the features of the eight wavelengths is used to calculate the StO2. Unlike traditional SpO2 readings, a pulse waveform is not required to measure oxygen concentration. Both upper extremities (thenar eminence, wrist, and forearm) and lower extremities (ankle, heel, arch, metatarsal, and toe) are interrogated, which takes about 10 seconds each. A report form is generated that displays waveforms and the ratio of each leg measurement compared with the arms. Results are classified as Flow Obstruction or No Flow Obstruction. A regression to the Ankle Brachial Index is performed, and an ABI score is predicted.

The portable medical device is a low-cost device which measures the spectral response of ulcerated tissue using a minimum of two wavelengths, preferably three or more wavelengths, of light that are specific to pre-stage 1 pressure ulcers, wounds, and edema. A third wavelength is used as a reference signal. Light emitting diodes (LEDs) are used for the excitation signal, and photodiodes or area sensors are used to sense the light returned from the tissue. The device is configured to measure deoxyhemoglobin (approximately 660 nm, red light) and water (approximately 950 nm, near-infrared light). The reference wavelength is the isosbestic point for deoxyhemoglobin and oxyhemoglobin in blood, i.e., approximately 800 nm (a second near-infrared wavelength). The ratio of the electrical current measured at the photodiode for the deoxyhemoglobin signal to the isosbestic electrical current, and the ratio of the electrical current measured at the photodiode for the water signal to the isosbestic current, create a ratiometric measurement that minimizes error and confounders, such as the amount of melanin in the dermal layer and the distance of the LED excitation source and photodiode from the tissue. The result is the equivalent of a sub-epidermal measurement of the amount of deoxyhemoglobin and water in the tissue. Pre-stage 1 and stage 1 pressure ulcers, edema, and wounds have a higher ratio of deoxyhemoglobin and water. An algorithm based on the results from clinical studies for normal tissue, pre-stage 1, and stage 1 pressure ulcers results in an early indication or alarm to alert the healthcare worker that an ulcer is forming. In addition, there is a unique optically clear device that opto-mechanically couples the device to the tissue. This optically clear component is disposable and acts as a barrier between the instrumentation reader and the patient. A pressure sensor may be added to the system to simultaneously measure pressure as well as the light intensity returned from the tissue. The measure of the time decay of the returned light signal given a known pressure is a quantitative measure of capillary refill, which is the qualitative measure made by practitioners today.

The wireless platform comprises a BLUETOOTH-enabled flap patch, a control or healthy tissue patch, and a reusable receiver which is a display device. Both the flap patch and control patch continuously send data via BLUETOOTH to the receiver. The components are compressed into a 30×30 mm flap patch and include the electronic circuitry to drive the light-emitting diodes and a photodiode that measures the reflected light. All the electronic circuitry is controlled via a computing center where the captured results are collected. The returning light is sensed by a photodiode in the flap patch.
The data is sent to a HIPAA-compliant cloud server by a BLUETOOTH connection to the receiver/display device via a mobile app and data server architecture. Thus, the patients may be monitored remotely from their homes by the healthcare workers. The LEDs in the flap patch are sources of the wavelength at the isosbestic point, 805 nm, two wavelengths, 625 nm and 680 nm, below the isosbestic point, and one wavelength, 870 nm, above the isosbestic point. The wavelengths below the isosbestic point have higher sensitivity and hence using two wavelengths below the isosbestic point increases measurement accuracy. The intensities of the reflected light are used to calculate oxygen saturation via a mobile app on the receiver/display as done for the handheld medical device. Particularly, healthy tissue temperature data from the control patch in addition to oxygen saturation and flap temperature data from the flap patch are sent via BLUETOOTH to the receiver which automatically uploads the data to the cloud.

Particular embodiments of the present invention are better illustrated with reference to the Figure(s), however, such reference is not meant to limit the present invention in any fashion. The embodiments and variations described in detail herein are to be interpreted by the appended claims and equivalents thereof.

FIG.1Ais a perspective view of the exterior of the medical device showing the optical cone110having an extruded lip120around the open bottom end and an optically clear cap or tip130covering the bottom end. The bottom half of a casing150is formed at the top end of the optical cone. An accelerometer mount160extends from the bottom half of the casing. With continued reference toFIG.1A,FIG.1Bis a top view of the optical cone110showing the accelerometer mount160.FIG.1Cis a perspective view of the top half of the casing170with an open interior cavity180configured to contain the accelerometer mount therein and to secure to the bottom half of the casing.

FIG.2is an exploded view of the medical device200showing the components in optical alignment. The clear tip130is disposed on the extruded lip120at the bottom end of the light guiding optical cone110. The extruded lip blocks ambient light. An optical diffuser210is disposed behind the accelerometer mount160on the bottom half of the casing150and is in optical alignment with the clear tip. A printed circuit board (PCB)220has a plurality of LEDs represented by222as excitation light sources positioned on the PCB in line with the optical diffuser. The printed circuit board comprises LED drivers represented by224communicating on an I2C line and limiting resistors represented by226as are known in the art. The PCB has an aperture in the center aligned with the clear tip to accommodate an image sensor or camera sensor230to capture reflected light. The PCB and image sensor are in electronic communication with a processor board240that is in operable communication with a power supply board250. A display260is disposed on the outer surface of the top half of the casing170and is in electronic communication with the processor board.

With continued reference toFIG.2,FIG.3is a cross-sectional view of the handheld medical device200. The clear tip130and extruded lip120are disposed exteriorly on the bottom end of the light guiding cone110which forms a light-guiding space115.
The printed circuit board220with the excitation light sources222and the image sensor230or camera sensor are secured to a camera mount270and disposed in the bottom half of the casing150formed on the light guiding cone. The processor board240and power supply board250are disposed within the top half of the casing170between the image sensor230and the accelerometer mount160. The display260is on the outer surface of the casing170top half and the casing top half is secured to the casing bottom half to enclose the components. With continued reference toFIGS.2-3,FIG.4is an exploded view showing the alignment280of the printed circuit board220with LEDs222to the camera sensor230and attachment of both to the camera mount270.

FIG.5Aillustrates the arrangement of the components of the portable medical device300. A disposable silicone layer310is placed between the tissue400on the subject, for example, pressure ulcer tissue, and the glass window320on the portable medical device. The infrared LED330and the photodiode340are in optical contact with the glass window such that light335from the infrared LED passes through the glass window and silicone layer to impinge on the pressure ulcer tissue. Reflected light345passes back through the silicone layer and glass window and is detected by the photodiode as raw data. The raw data is wirelessly transmitted to a smart device500which processes it to determine if and where edema is present in the tissue. The results may be displayed on the smart device and/or wirelessly transmitted to the cloud550. With continued reference toFIG.5A,FIG.5Bshows the placement of a pressure sensor350on the portable medical device300.

FIGS.6A-6Bare diagrams of the flap patch610and a healthy tissue patch650or control patch comprising a wireless platform600for remote monitoring of a flap on a post-surgical patient. The wireless platform also comprises a reusable receiver having a display (seeFIG.6C).FIG.6Ais the diagram of the flap patch which is removably adhered to the flap tissue (seeFIG.6C) to monitor oxygen saturation (StO2) and body and ambient temperature. The flap patch utilizes a re-engineered MAXREFDES282 Health Patch Platform621to drive four LEDs622a,b,c,deach emitting light with one of the wavelengths of 625 nm, 680 nm, 805 nm, and 870 nm and a photodiode623to capture light reflected from the flap tissue. The flap patch comprises an StO2patch620to measure oxygen saturation with the MAXREFDES282 patch as described and a temperature patch630for ambient temperature632and body temperature635monitoring with temperature sensors633,636and metal contacts634,637to better conduct heat from the surface of the tissue to the temperature sensor embedded within the device. An insulator640is disposed around the temperature patch to protect the sensors from the heat of the LEDs. The flap patch is BLUETOOTH625enabled and has a built-in antenna626to wirelessly transmit data to the reusable receiver. The flap patch comprises a microcontroller624, for example an ISP1807 ARM CORTEX-M4 microcontroller, in electronic and operable communication with the StO2patch and the temperature patch and with an accelerometer627and integrated circuitry configured for power management628of BLUETOOTH and circuitry to boost629the built-in antenna signal.

With continued reference toFIG.6A,FIG.6Bis a diagram of the healthy tissue patch650or control patch. The healthy tissue patch differs from the flap patch610in that a controller unit660replaces the MAXREFDES282 platform621, the LEDs622a,b,c,dand the photodiode623.
The healthy tissue patch comprises, as does the flap patch, the temperature patch630with ambient temperature632and body temperature635sensors633,636, metal contacts634,637and insulator640, and the microcontroller624, BLUETOOTH625, built-in antenna626and accelerometer627and associated circuitry628,629to wirelessly transmit temperature data to the reusable receiver (seeFIG.6C).

With continued reference toFIGS.6A-6B,FIG.6Cis a cartoon illustrating the set-up of the wireless platform on a post-operative patient. The flap patch is placed on the surgical flap on the body of the post-operative patient and the healthy tissue patch is placed on healthy tissue. Both the flap patch and the healthy tissue patch wirelessly and continuously transmit raw data to the reusable receiver that automatically, wirelessly transmits the data to a cloud server. The healthcare provider is able to review the data to monitor the patient.

FIG.7is a flowchart of the operation of the wireless platform. In a first step the flap patch and the control patch are removably secured to the post-surgical flap710and to healthy tissue720on the post-operative patient whereupon the oxygen saturation (StO2) and temperature of the flap tissue and the temperature of the healthy tissue, as a control, are simultaneously and continuously monitored at715,725. In subsequent steps all the data is wirelessly sent to the reusable receiver with display at730and is automatically sent to a cloud server at735, for example, a HIPAA compliant cloud server. A healthcare provider can review the data remotely at740. The healthcare provider determines at745whether or not the flap temperature has increased compared to healthy tissue temperature and/or whether or not oxygen saturation of the flap tissue has decreased over time. If the answer is no at step750, monitoring of the post-surgical flap continues at step755. If flap temperature has increased and/or oxygen saturation decreased in the flap tissue, medical intervention with continued monitoring is provided at step765. The healthcare provider determines the length of time it is necessary to monitor the patient.

The following examples are given for the purpose of illustrating various embodiments of the invention and are not meant to limit the present invention in any fashion.

Example 1

Medical Device: Handheld Device for Measuring Tissue Properties

Reflectance: Comparison of Reflected Wavelength Intensities to Reference White Reflectance

A reference white material is interrogated at 525 nm and the intensities of the reflected wavelengths are measured (FIG.8A). The standard deviation over mean error is calculated at 0.17%. The clear tip of the medical device (FIG.2) is placed across the thenar area of a test subject's palm, excitation light with wavelengths of 525 nm, 590 nm, 625 nm, 690 nm, 780 nm, 810 nm, 870 nm, and 930 nm from the multispectral LEDs is delivered through the clear tip on the light guiding cone to the thenar whereupon light is reflected back through the clear tip and detected by the camera sensor in the image sensor and processed as reflected light intensity or reflectance. At 525 nm, 590 nm and 690 nm photoplethysmogram (PPG) data is observed (FIGS.8B-8D). For all wavelengths (FIGS.8B-8I) the blood circulation to the hand remains unoccluded. Standard deviations over mean error of 3.66%, 5.60%, 1.08%, 0.58%, 1.01%, 0.93%, 1.31%, and 2.76% (FIGS.8B-8I) were observed for the thenar measurements. Table 1 shows the mean, standard deviation and error at each wavelength.
TABLE 1

Excitation wavelength (nm)    Mean           Standard Deviation    Error

Reference White Reflectance
525                           166.5795033    0.2756406461          0.001654709256

Thenar
525                           73.51602601    2.692706624           0.03662747798
590                           39.9464242     2.239496496           0.056062502
625                           65.72921322    0.710308578           0.010806589
690                           79.4244128     0.4670522             0.00588046
780                           53.5456747     0.54229869            0.01012778
810                           83.230678      0.77458502            0.00930648
870                           62.9219917     0.92628568            0.01313191
930                           53.45018267    1.477670207           0.027645747

Cuff Test

The blood circulation to the subject's arm was occluded by placing a blood pressure cuff on the arm and pressurizing it above the subject's systolic pressure, and normalized intensities (the intensity of reflected light at a given wavelength over the intensity of the reflected light at 810 nm) were recorded. Excitation light was delivered and reflected light intensities measured as described for the reflectance test. As time progresses, it is expected that the arm consumes the oxygen attached to hemoglobin cells and converts it to the deoxygenated form. Deoxygenated hemoglobin has a higher absorption rate at wavelengths below 810 nm and a lower absorption rate at wavelengths above 810 nm. While the blood circulation is occluded, it is expected that the ratio of intensity of light at any wavelength below 810 nm over the intensity of light at 810 nm keeps declining and that the ratio of intensity of light at any wavelength above 810 nm over the intensity of light at 810 nm keeps ascending (FIG.9). The results of the cuff test are plotted as normalized wavelength intensities and demonstrate that signal intensity decreases for 590 nm, 625 nm, and 680 nm (FIGS.10B-10D) as oxygen is decreasing and that signal intensity increases for 870 nm and 930 nm (FIGS.10G-10H) as oxygen is decreasing. Signal intensity is flat for 780 nm (FIG.10E), which is near the isosbestic point of ˜805 nm, and for 525 nm (FIG.10A), which is a second isosbestic point.FIG.10Ffor 810 nm thus shows a normalized wavelength intensity of 1.

Correlation with Predicate Device

The cuff test was performed while simultaneously acquiring data from a predicate device (InSpectra StO2, Hutchinson Technology) as done for the cuff test above (FIGS.11A-11B). The device was placed across the thenar region of the test subject's palm. The InSpectra device from Hutchinson Technology was also placed to capture the gold standard StO2value. A pressure cuff was attached to the test subject's arm and was set at 160 mmHg, higher than the test subject's systolic pressure. Towards the end of the experiments, the cuff pressure was released to let arterial blood back into the subject's palm, which consequently increases tissue oxygenation levels. The normalized reflected intensities (Ni=Ai/A805 nm, where Aiis the reflected intensity of light at the ith wavelength index) of the 8 different wavelengths were then given to a regression model to predict a StO2value as close as possible to the gold standard StO2values: Predicted StO2(t)=Σ(i=1 to 8) riNi(t), where the coefficients riare chosen so that |Predicted StO2(t)−Gold standard StO2(t)| is minimized. For a single wavelength, the correlation of the 590 nm wavelength with the predicate device is 0.88 (FIG.11B). Multiple wavelengths with a greater than 0.70 correlation are used with a multiple regression algorithm to predict oxygen saturation (StO2).FIG.11Cis a plot where the predicted StO2values and the gold standard, ground truth StO2values are overlaid. In this prediction a correlation of >90% was observed between the predicted StO2and the ground truth and the average prediction error was <5%.
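The regression described above can be expressed as a short computational sketch. The following Python snippet is illustrative rather than the actual algorithm of the disclosure: it fits the coefficients ri by ordinary least squares on synthetic stand-in data, since the clinical intensities and gold-standard values are not reproduced here.

```python
# Minimal sketch of the Example 1 regression, on synthetic stand-in data:
# fit coefficients r_i so that predicted StO2(t) = sum_i r_i * N_i(t)
# tracks the gold-standard StO2. N_i(t) is the reflected intensity at the
# i-th wavelength normalized by the isosbestic-reference intensity.
import numpy as np

rng = np.random.default_rng(0)
T, W = 600, 8                          # time samples, eight wavelengths
N = rng.uniform(0.5, 1.5, (T, W))      # stand-in normalized intensities N_i(t)
true_r = rng.normal(0.0, 10.0, W)      # hypothetical underlying coefficients
sto2_gold = N @ true_r + rng.normal(0.0, 1.0, T)  # stand-in predicate StO2

# Ordinary least squares minimizes |predicted StO2(t) - gold standard StO2(t)|
# in the squared sense over all time samples.
r, *_ = np.linalg.lstsq(N, sto2_gold, rcond=None)
sto2_pred = N @ r

mae = np.mean(np.abs(sto2_pred - sto2_gold))
corr = np.corrcoef(sto2_pred, sto2_gold)[0, 1]
print(f"mean absolute error: {mae:.2f}, correlation: {corr:.3f}")
```

On real data, wavelengths whose individual correlation with the predicate device exceeds 0.70 would be selected as regressors, matching the multiple regression approach described above.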
Example 2

Medical Device: Portable Device for Detection of Pressure Ulcers

Wavelengths for Pressure Ulcers

FIG.12is an in vitro absorption spectra of hemoglobin and water showing a spectral window in tissue in the near infrared (NIR) window. The window occurs due firstly to a decrease in blood (oxy- and deoxyhemoglobin) absorption and secondly to an increase in water absorption with increasing wavelength. With less oxygen in the blood, there is greater red absorption (660 nm) and a lower signal at the photodetector. With more water due to edema, there is greater absorption at 950 nm and a lower intensity at the photodetector.

Classifier Algorithm Fuses Data

The wavelength response is used to classify tissue damage as a pressure ulcer. Raw data is collected from the subject. The wavelength intensities of the light in each pixel of the raw data are fused in a cluster analysis by a classifier algorithm (ncbi.nlm.nih.gov/pmc/articles/PMC4991589). The resulting classified image identifies regions of injury.

Example 3

Medical Device: Wireless Platform for Monitoring Post-Surgical Flap

Patch Design

The wireless platform, for example BLUETOOTH-enabled (Bluetooth SIG), has a control patch, a flap patch, and a reusable receiver with a display device. The control patch monitors the healthy tissue temperature, while the flap patch monitors both flap temperature and StO2.FIGS.6A-6Billustrate the components and electronic communication among them in the wireless platform framework. The MAXREFDES282 Health Patch Platform (Analog Devices, Inc., Wilmington, MA) is the building block on which the patch is constructed and can measure the peripheral capillary oxygen saturation (SpO2), the surface temperature of the tissue to which it adheres, and the ambient temperature of the environment. The temperature measurements, however, are affected by the proximity of active light-emitting diodes (LEDs) required for StO2monitoring. To eliminate this error, the patch is re-engineered to extract and separate the LEDs from the temperature sensor, i.e., re-engineering the printed circuit board design of the MAXREFDES282 that is provided with the device. Moreover, an insulating material is placed between the ambient measuring temperature sensor and the tissue surface temperature sensor to minimize any heat flow from the tissue surface to the ambient measuring temperature sensor. Additionally, the MAXREFDES282 platform only utilizes two LEDs, and, therefore, two wavelengths, i.e., one at a red wavelength and one at an infrared (IR) wavelength, to measure SpO2. Disruption of perfusion to a flap can result in flap necrosis and tissue loss. If blood supply is low, the pulse to the flap may also be minimal. Unlike a SpO2measurement that requires the ratio of pulse amplitudes of the reflected red and IR light that passes through pulsing tissue, a StO2measurement can be done without the pulsing amplitude information. Therefore, StO2is a more suitable solution to continuously monitor the tissue health of surgical flaps and prevent failures. To make the patch multispectral and to increase accuracy for StO2monitoring, additional wavelengths are required. In addition to separating the LEDs from the temperature measuring side of the patch, the configuration is designed to include additional LEDs, increasing the number of wavelengths from two to four wavelengths of 625 nm, 680 nm, 805 nm, and 870 nm. Two wavelengths, 625 nm and 680 nm, are below the isosbestic point of 805 nm and one wavelength, 870 nm, is above the isosbestic point.
The isosbestic point is the wavelength where oxygenated and deoxygenated blood absorbs light equally. This ratio of wavelengths to the isosbestic point improves the repeatability and accuracy of the measurement. As oxygen in the blood decreases, the intensities of reflected light at wavelengths below the isosbestic point decrease while those above the isosbestic wavelength increase. The wavelengths below the isosbestic point have higher sensitivity and hence using two wavelengths below the isosbestic point increases measurement accuracy.

In Vitro Testing Using Phantom Tissues

A phantom tissue created with a thermal conductivity similar to that of the soft tissue of the flap is used for in vitro tests. The thermal conductivity of the tissue is calculated as in Example 1. The thermal conductivity in the phantom tissue is similar to human tissue and calculations can accurately predict how the patches perform on a patient. The phantom tissues are created by stacking three layers of low-density polyethylene (LDPE), with Avery MED 3044 double-sided adhesive used to secure the LDPE layers together. Temperature testing was performed using the phantom and a hot plate. The temperature patch, a SPOTON (3M Company, St. Paul, MN), a non-invasive system that measures the core body temperature of patients, core temperature sensors, and an air temperature sensor were placed on the phantom. Temperature recordings were made with the temperature sensing patch with the MAX30205 sensor and the SPOTON. Raw data (FIG.13A) from this set up shows how the temperature sensing patch reacts to changes in core temperature over time in comparison to the SPOTON. The error for the temperature sensing patch at a steady state was <0.2° C. (FIGS.13B-13C). The test was performed on healthy human subjects using the temperature sensing patch and the SPOTON. The results show that the patch can estimate the core temperature of a human subject comparable to that of the SPOTON (FIG.13D).

In Vivo Testing of the Patch in Porcine Models

Eight pigs are used for in vivo testing of the patch, where two rectus abdominis myocutaneous flaps are harvested per pig.

Anesthesia: Swine are housed individually in pens, fed ad libitum with standard hog feed, and are fasted for 24 h before the procedure, with free access to water and up to two 500 cc bottles of regular Gatorade™. On day zero, each pig undergoes induction with ketamine (Zoetis, 2.2 mg/kg), Telazol® (Zoetis; 4.4 mg/kg), and xylazine (Zoetis; 2.2 mg/kg), given as a single IM injection. Each pig is weighed and endotracheally intubated. EKG, pulse oximetry, rectal temperature, and lingual end-tidal CO2 monitors are placed. The pig is allowed to rest on a water-circulated warming blanket set at 102° F. An auricular intravenous (IV) line is placed. Anesthesia is maintained with isoflurane (0.5-1%) and supplemental oxygen (3-5 L/min) using a MATRX ventilator (midmark.com). The ventilator rate initially is set at 12-15 breaths per minute with a tidal volume of 10 mL/kg, and subsequently is adjusted to maintain the EtCO2at 40-50 mm Hg. Cotton blankets are placed over non-surgical areas to minimize any decrease in body temperature. Vital signs are continuously recorded on a laptop computer via a Bionet BM5 monitor.

Rectus myocutaneous flap harvest: Flap harvesting procedures are performed under 4× binocular loupe magnification. A pedicled rectus abdominis myocutaneous flap is raised based on the deep superior epigastric artery and veins in addition to the superficial superior epigastric vein.
A plastic surgeon specialized in flap and microsurgery performs the procedure. The main pedicle is detected on the skin using an 8-MHz pencil Doppler probe. With the pig under general anesthesia and in the supine position, the chest, abdomen, groins, and bilateral lower extremities are shaved with an electric clipper, washed with soap and water, and then prepped using ChloraPrep™ applicators (chlorhexidine gluconate/isopropyl alcohol). The flap is harvested by creating 2 rectangular flaps designed over the rectus muscle, one flap on each side of the abdomen, and each flap is centered over the underlying rectus muscle. The skin flap is designed with a surgical marker over the rectus muscle. The skin flap always remains attached to the underlying rectus muscle; perforators are not explored, identified, or dissected. The skin superior border begins at the subchondral border, followed by a midline incision down to the umbilicus. The inferior border of the flap is located at the umbilicus level and extends laterally past the lateral border of the rectus muscle. Each flap on both sides of the abdominal midline is the same dimension. The skin paddle of the flap represents the surface area and boundaries of the underlying rectus muscle and is shaped as a rectangle (superior border at the subchondral region, medial border at the midline, lateral border at the lateral border of the rectus muscle, and inferior border at the umbilicus level). The superior skin incision is made first to identify the location and width of the underlying rectus muscle. The skin incision width is adjusted based on the width of the underlying rectus muscle. Once the width of the rectus muscle is confirmed, this determines the width of the skin incision in the subchondral region. The vertical flap skin incision is then made just lateral to the edge of the rectus muscle down to the umbilicus level of the abdomen. A transverse skin incision is made at the umbilicus level and equal to the width of the rectus muscle. A central vertical midline incision is made in between both rectus muscles to connect the superior border of the flap with the inferior border of the flap. Once the skin incision is made circumferentially around the flap and once all borders of the rectus have been identified, the superior superficial epigastric vein is identified within the subcutaneous tissue and is found superficial to the rectus muscle. The superior superficial epigastric veins are found more lateral and more superficial than the superior epigastric vein and artery. The rectus muscle is dissected at the superior border of the flap to identify the underlying SEA and SEV (superior epigastric artery and superior epigastric vein). The superior epigastric artery and vein are identified deep in the rectus muscle and are found medial to the superior superficial epigastric vein. Once the superior rectus muscle is dissected, attention is given to the inferior border of the flap where the rectus muscle is dissected and cut in a transverse direction (same procedure as superior rectus dissection). The SEA, SEV, and SSEV are always kept intact for each flap. At this point, the rectus muscle flap is completely detached and is separated from the rest of the caudal and cephalad portions of the rectus muscle as well as the anterior sheath of the abdominal wall fascia underneath the rectus, and the midline centrally.

Monitoring StO2and Temperature Via the Patch

The patch is attached to the central portion of the flap along with the Vioptix probe which serves as a control.
Both probes are positioned over the central portion of the flap and are 2-3 cm from each other. Flap perfusion readings are measured at 1-minute intervals for 15 minutes until a baseline tissue oxygenation reading is reached for both the Vioptix probe and the patch probe. A BLUETOOTH connection is established with the patch probe. The ViOptix T.Ox probe is connected to the external monitor via the fiber optic cable which is attached to the monitor.

Baseline flap readings: A stable reading is taken after 15 minutes and recorded for both the experimental probe and the Vioptix probe. Three readings are taken at 5-minute intervals after an initial baseline of 15 min.

Venous congestion experiment: An Acland clamp is applied to the superior epigastric vein and the superior superficial epigastric vein for 15 min. After 15 min the readings on the experimental probe and Vioptix probe are taken, and three readings are taken at 5-minute intervals. After the last reading, the Acland clamp is removed and the flap is left to re-stabilize for 15 min before starting the arterial ischemia experiment.

Arterial ischemia experiment: An Acland clamp is applied to the deep superior epigastric artery for 15 min. Tissue oxygenation measurements are taken after the 15 min baseline with the Vioptix probe and experimental probe. Readings are taken every 5 minutes after the 15 min baseline. Three recordings are taken in total every 5 minutes for both probes. The entire procedure (baseline readings, venous congestion experiment and readings, arterial ischemia experiment, and readings) is repeated three times for each flap. After completion of all experiments, the flap skin is closed to the peripheral wound using resorbable Vicryl sutures and skin staples, and the Vioptix probe is removed. A surgical dressing is placed on the surgical wound. The experimental probe is kept in place and secured for 5 days. The experimental probe is securely covered to avoid trauma and contact loss from the underlying skin flap. Measurements are taken every 5 minutes for StO2. Oxygen saturation is measured as in Example 1. | 47,134 |
11857295 | DETAILED DESCRIPTION

FIG.1illustrates a patient monitoring system100, according to an example embodiment of the present disclosure. The system100can be configured to monitor a patient, and in some embodiments, to determine a hemodynamic parameter of the patient. As used herein, the term “hemodynamic parameter” can include an indication of cardiac or vascular health, such as, for example, an indication of cardiac, circulatory, or vascular functionality. Specifically, a hemodynamic parameter can include a heart rate, a blood pressure, a vessel compliance, a saturation of hemoglobin with oxygen in arterial blood (i.e., an SpO2measurement), an aortic index, an augmentation index, reflected wave ratio, or an indication of treatment. Blood pressure can include systolic, suprasystolic, diastolic, or mean arterial pressure. It is understood that such blood pressures may be represented as a systolic blood pressure over a diastolic blood pressure, and that a mean or average blood pressure may be represented as an average systolic blood pressure over an average diastolic blood pressure. Moreover, an indication of treatment can include a parameter reflecting the effect of a drug treatment, or one or more treatments of a disease state.

The system100can include a cuff12configured to at least partially occlude the movement of blood through a blood vessel10of a patient14such as an artery, vein, or the like. In some embodiments, the cuff12can be configured to completely occlude an artery of patient14. In any of the embodiments described herein, however, the system100may be tuned and/or otherwise configured to determine one or more hemodynamic parameters of the patient14, such as a blood pressure of the patient14, without completely occluding the blood vessel10. In such embodiments, the system100, and/or components thereof, may determine the blood pressure of the patient14before the cuff12is inflated to a pressure associated with complete occlusion of the blood vessel10and/or before a systolic blood pressure of the patient14is reached. Although shown inFIG.1surrounding an arm22of the patient14, the cuff12may be adapted for placement on (i.e., around) any suitable body part of patient14, including, for example, a wrist, a finger, an upper thigh, an ankle, or any other like limb or body part. In addition, one or more cuffs12could be placed at different locations about the patient14for use with the system100. The cuff12can include one or more bladders or other like inflatable devices, and the pressure or volume within the cuff12may be controlled by any known inflation device (not shown). Such inflation devices can include a pump or similar device configured to controllably inflate and/or deflate the inflatable device of the cuff12. For example, such inflation devices could supply the cuff12with a fluid to increase the pressure or volume of the cuff12. In other embodiments, one or more inflation devices could include mechanical, electrical, or chemical devices configured to effect occlusion of the blood vessel10via the cuff12. Such inflation devices may comprise a component of the system100and may be included within and/or operably connected to, for example, a controller20of the system100. In some embodiments, such inflation devices can inflate the cuff12to or towards a target inflation pressure, and may be configured to generally maintain the cuff12at any desired inflation pressure for a desired period of time.
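As an illustration of how an inflation device might hold the cuff near a target pressure, the following Python sketch implements a simple proportional control step. It is a hypothetical example, not the control scheme of the disclosure; the callback names, gain, and tolerance are invented for illustration.

```python
# Illustrative proportional control step for holding a cuff near a target
# inflation pressure. The pump/pressure interfaces, gain, and tolerance are
# hypothetical stand-ins, not taken from the disclosure.

def regulate_cuff(read_pressure, drive_pump, target_mmhg: float,
                  gain: float = 0.5, tolerance_mmhg: float = 1.0) -> None:
    """One control step: command the pump in proportion to pressure error."""
    error = target_mmhg - read_pressure()   # positive when under-inflated
    if abs(error) > tolerance_mmhg:
        drive_pump(gain * error)            # positive inflates, negative vents

# Example with stand-in hardware callbacks backed by a dictionary:
state = {"pressure": 40.0}
regulate_cuff(read_pressure=lambda: state["pressure"],
              drive_pump=lambda u: state.update(pressure=state["pressure"] + u),
              target_mmhg=60.0)
print(state["pressure"])  # 50.0: one proportional step from 40 toward 60 mmHg
```

In practice such a loop would run repeatedly until the pressure settles within the tolerance band, and would be cut off early where, as described herein, inflation is discontinued below the target to avoid patient discomfort.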
In some embodiments, the target inflation pressure may be less than or equal to the systolic pressure of the patient14. Alternatively, in further embodiments the target pressure may be greater than the systolic pressure of the patient14. In example embodiments, the system100may determine the blood pressure of the patient14without inflating the cuff to the systolic pressure. Accordingly, even in embodiments in which algorithms, controllers, and/or other components of the system100employ a target inflation pressure that is equal to or greater than the systolic pressure, the system100may discontinue inflation of the cuff12at an inflation pressure less than such a target inflation pressure. Although such embodiments may use a target inflation pressure equal to or greater than the systolic pressure, discontinuing inflation of the cuff12at a pressure below such a target inflation pressure may avoid patient discomfort during blood pressure determination.

The system100can further include a sensor18configured to receive a signal associated with the patient14. In some embodiments, the sensor18can be configured to receive a signal associated with an at least partially occluded vessel10of the patient14. Such an input signal can arise from blood movement through the partially occluded vessel10or from a signal associated with an occluded blood vessel10. The sensor18could sample multiple times at various intervals. In yet other embodiments, the sensor18could provide an indication of blood vessel movement, such as, for example, oscillations arising from vascular expansion or contraction. For example, the sensor18could be configured to detect a pressure or volume of cuff12that may vary periodically with the cyclic expansion and contraction of the blood vessel10of the patient14. In particular, the sensor18could determine a blood pressure, various pulses of blood through the blood vessel10, an oxygen saturation of the blood, or any other hemodynamic parameter associated with the patient14using an auscultation, oscillometric, or other known measurement method. In some embodiments, the sensor18could detect a volume or a pressure associated with cuff12. For example, the sensor18could include a pressure transducer or other like pressure sensor, and may be located within, on, or about the cuff12or other parts of the system100. In such embodiments, the sensor18may be configured to sense, measure, detect, monitor, calculate, and/or otherwise “determine” one or more blood pressure pulses associated with the patient14. Each blood pressure “pulse” may be indicative of, for example, the movement of blood through the blood vessel10by the heart of the patient14during systole, and the number of such pulses per minute may comprise the heart rate of the patient14.

The controller20may comprise and/or otherwise include one or more processors, microprocessors, programmable logic controllers, and/or other like components configured to control one or more operations of the cuff12, the cuff inflation devices, the sensor18, and/or other components of the system100connected to the controller20. For example, the controller20can control inflation and/or deflation of the cuff12via control of the inflation devices described above. In some embodiments, the controller20can sense, measure, detect, monitor, calculate, and/or otherwise determine a blood pressure of the patient14based on one or more of the hemodynamic parameters determined by the sensor18.
This determination may be based on one or more output signals received from sensor18, as described above. In some embodiments, the controller20may also include one or more sensors, similar to the sensor18, configured to sense, measure, detect, monitor, calculate, and/or otherwise determine one or more blood pressure pulses associated with the patient14, a pressure or volume of cuff12, and/or any of the other hemodynamic parameters described herein. The controller20may also control inflation of cuff12(via one or more of the inflation devices described herein) toward a target inflation pressure, or generally maintain inflation of cuff12at about the target pressure. Such a target inflation pressure may be a pressure that is greater than, equal to, or less than, for example, a systolic pressure of the patient14and/or the mean arterial pressure of the patient. For example, as noted above, the system100may determine the blood pressure of the patient14without inflating the cuff to the systolic pressure. Accordingly, even in embodiments in which the controller20employs a target inflation pressure that is equal to or greater than the systolic pressure for purposes of cuff inflation, algorithms of the controller20may discontinue inflation of the cuff12at an inflation pressure less than such a target inflation pressure. Despite the use of such example target inflation pressures, the controller20may determine the blood pressure of the patient14without completely occluding the blood vessel10.

Although not shown inFIG.1, in additional example embodiments, the system100can optionally include a signal analysis module. For example, the signal analysis module may be configured to analyze one or more signals received from the sensor18using one or more processors of the controller20. For example, the signal analysis module can include one or more filters configured to filter a signal associated with the sensor18or the controller20. Such filters can include band-pass, high-pass, or low-pass filters (an illustrative filtering sketch follows below).

As illustrated inFIG.1, the system100may also include a memory24operably connected to the controller20. The memory24may include, for example, a hard drive, a thumb drive, and/or any other like fixed or removable storage device known in the art. Such memory24may comprise random access memory, read-only memory, transient memory, non-transient memory, and/or any other like information storage means. In such embodiments, the memory24may be configured to store signals, data, values, curves, thresholds, and/or any other like information received from the sensor18. The memory24may also be configured to store signals, data, values, thresholds, curves, and/or any other like information determined by the controller20during the various operations described herein. For example, the memory24may be configured to store one or more pressure pulses, pulse profiles, pulse heights, pulse curves, target inflation pressures, pressure thresholds, and/or other like information. Additionally, the memory24may be configured to store one or more algorithms, protocols and/or other like programs associated with calculating and/or otherwise determining the blood pressure of the patient14. Additionally, the memory24may be configured to store one or more sets of values corresponding to points on one or more pulse curves. Such information may be recalled and/or otherwise utilized by the controller20during one or more blood pressure determination methods described herein.
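A brief sketch of the kind of band-pass filtering such a signal analysis module might apply is given below in Python. The sample rate, cutoff frequencies, and the synthetic cuff signal are assumptions introduced for illustration; the disclosure does not specify these values.

```python
# Hedged sketch: band-pass filtering a cuff pressure signal to isolate the
# small oscillometric pulses riding on the slow inflation ramp. Sample rate,
# cutoffs, and the synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                    # Hz, assumed sample rate
t = np.arange(0.0, 10.0, 1.0 / fs)
ramp = 60.0 + 2.0 * t                         # slow cuff inflation ramp (mmHg)
pulses = 1.5 * np.sin(2 * np.pi * 1.2 * t)    # ~72 bpm arterial oscillations
cuff_signal = ramp + pulses

# A 0.5-5 Hz pass band keeps heart-rate oscillations and rejects the ramp.
b, a = butter(2, [0.5, 5.0], btype="bandpass", fs=fs)
oscillations = filtfilt(b, a, cuff_signal)

# For a sinusoid, amplitude ~= std * sqrt(2); expect roughly 1.5 mmHg here.
amplitude = oscillations.std() * np.sqrt(2.0)
print(f"recovered pulse amplitude: {amplitude:.2f} mmHg")
```

In a complete system, the heights of the recovered oscillations at successive cuff pressures would feed the pulse curves stored in the memory24and used by the controller20in the blood pressure determination methods described herein.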
The system100can further include a user interface16configured to provide communication to the patient14or one or more operators. For example, the user interface16could include a display configured to communicate and/or otherwise output one or more hemodynamic parameters. The user interface16may further include one or more speakers or other like audio devices configured to communicate and/or otherwise output information to the patient14and/or a user operator of the system100. In further embodiments, the system100may include one or more transmitters, network devices, routers, Bluetooth® devices, WiFi® devices, radio devices, and/or other like communication devices26configured to transmit data to a remote location and/or to a remote device. In such embodiments, the communication device26may enable the transmission of information to or from the controller20. It is understood that such communication devices26may facilitate the transmission of such information via wired or wireless means. For example, in any of the embodiments described herein, one or more components of the system100, such as the controller20, may be disposed remote from a remainder of the components of the system100. In such embodiments, for example, the controller20may be disposed in a different location of a healthcare facility than the cuff12, user interface16, or other components of the system100. Alternatively, in further embodiments, the controller20may be in a first healthcare facility and a remainder of the components of the system100may be located in a second healthcare facility different from the first facility. In such embodiments, the various components of the system100may be in communication and/or otherwise operably connected via the communication devices26described herein.

In addition to the components outlined above, the system100may include various other components, such as, for example, a power source and/or a user input device. One or more components described herein may be combined or may be separate independent components of the system. Moreover, the various components of the system100could be integrated into a single processing unit or may operate as separate processors. In operation, one or more processors can be configured to operate in conjunction with one or more software programs to provide the functionality of the system100. For example, one or more of the components described above with respect to the system100may include one or more hardware components and/or one or more software components configured to control operation of such components and/or of the system100.

The system100of the present disclosure may also include one or more components configured to fluidly connect the cuff12with the controller20, and in particular, with one or more inflation devices operably connected to the controller20. For example, the controller20may include first and second connectors28a,28bfluidly coupled to one or more of the inflation devices described herein. The first and second connectors28a,28bmay comprise male barbs or other like connectors defining a respective lumen through which pressurized air or other fluids may pass from the inflation devices to tubing30fluidly connected to the one or more connectors28a,28b. For example, the tubing30may comprise dual-lumen tubing having first and second connected conduit sections32a,32bsharing a substantially smooth integrated outer surface.
In such embodiments, an orifice34aof the first section32aat a proximal end36of the tubing30may be configured to form a substantially fluid-tight connection with the first connector28a. Similarly, an orifice34bof the second section32bat the proximal end36of the tubing30may be configured to form a substantially fluid-tight connection with the second connector28b. Alternatively, in other embodiments, the tubing30may comprise single-lumen tubing, and a first section of the single-lumen tubing may be configured to form a substantially fluid-tight connection with the first connector28awhile a second section of the single-lumen tubing30may be configured to form a substantially fluid-tight connection with the second connector28b. For ease of discussion, the tubing30shall be described herein as dual-lumen tubing unless otherwise noted. In any of the embodiments described herein, the tubing30may comprise a flexible, durable, medically approved material such as a thermoplastic elastomer, and the tubing30may be made from processes including extrusion molding. The first section32aof the tubing30may also include an orifice38aat a distal end40of the tubing30, and the second section32bmay include a similar orifice38bat the distal end40. The orifices38a,38bmay be configured to form a substantially fluid-tight connection with a fitting42of the present disclosure. As will be described in greater detail below, the fitting42may have various different configurations, and any of the fittings described herein may be employed by the system100in order to assist in fluidly connecting the cuff12with the controller20and/or other components of the system100. In some examples, the fitting42may comprise a dual-shaft connector (e.g., a connector having two shafts configured to mate with dual-lumen tubing30), while in other examples, the fitting42may comprise a single-shaft connector. For ease of discussion, the fitting42shall be described herein as a dual-shaft connector unless otherwise noted. The fitting42may include, for example, first and second shafts44a,44b, and a proximal end portion of the first shaft44amay be configured to form a substantially fluid-tight connection with the first section32aof the tubing30, while a proximal end portion of the second shaft44bmay be configured to form a substantially fluid-tight connection with the second section32bof the tubing30. In particular, a barb or other like connector may be formed at proximal end portions of the first and second shafts44a,44b, and such barbs may be inserted into the respective orifices38a,38bof the tubing30to form such a substantially fluid-tight connection between the fitting42and the tubing30. In some examples, the barbs formed at each of the proximal end portions may be substantially similar to and/or the same as the first and second connectors28a,28bdescribed above. The first and second shafts44a,44bmay define respective lumens passing therethrough and configured to fluidly connect the tubing30with, for example, the cuff12via one or more adapters50of the cuff12. For example, as will be described in greater detail below, the fitting42may be removably attachable to a corresponding adapter50of the cuff12, and a substantially fluid-tight seal may be formed between the fitting42and such an adapter50when the fitting42is removably attached to the adapter50. The fitting42may include one or more grips46a,46bconnected to a body48of the fitting42, and such grips46a,46bmay be configured to assist in removably attaching the fitting42to the adapter50.
It is understood that when the fitting42is removably attached to the blood pressure cuff adapter50, the fitting42may be fluidly connected to the adapter50and/or to the cuff12. Thus, when the fitting42is removably attached to the blood pressure cuff adapter50, the fitting42may be configured to direct pressurized air or other fluids to the cuff12, via the adapter50, to assist in at least partially inflating the cuff12. FIG.2illustrates the example fitting42and the example blood pressure cuff adapter50ofFIG.1in greater detail. As shown inFIG.2, in an example system100or other environment, the grips46a,46bof the fitting42may extend from the body48, and at least part of the fitting42coupling the grips46a,46bto the body48may be relatively flexible. For example, the body48may be made from a first substantially rigid material, and such a material may include, for example, one or more metals, alloys, plastics, polymers, or other materials. Such materials may include, for example, polyethylene, polypropylene, and/or other medically approved materials. In such examples, the grips46a,46bmay be connected to the body48via respective first and second stands66a,66bextending from the body48. In some examples, the stands66a,66bmay be made from any of the materials described above with respect to the body48, while in other examples, one or more of the stands66a,66bmay be made from a material that is relatively more flexible than the material used to form the body48. In any such examples, the shape, size, materials, and/or other configuration of the stands66a,66bmay provide for movement of the respective grips46a,46brelative to the body48when force is applied to the respective grips46a,46bby a user of the fitting42. Such movement may enable the fitting42to be removably attached to the blood pressure cuff adapter50, and may also enable the fitting42to be detached from the adapter50. For example, the fitting42may include one or more arms62a,62bextending from a respective grip46a,46band/or from a respective stand66a,66b. In such examples, a first arm62amay extend substantially perpendicularly from the first grip46a, and a second arm62bmay extend substantially perpendicularly from the second grip46b. The first and second arms62a,62bmay include respective first and second shelves64a,64bextending substantially parallel to the respective grips46a,46b. As can be seen fromFIG.2, the first and second shelves64a,64bmay also extend substantially perpendicularly from the respective arms62a,62b. In particular, such first and second shelves64a,64bmay include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly from the respective arms62a,62b, and such surfaces may be configured to mate with a corresponding surface of the adapter50in order to removably attach the fitting42to the adapter. The fitting42may also include one or more extensions, passages, and/or other like channels68extending from the body48. In some examples, such as the example shown inFIG.2, the channel68may extend substantially along a longitudinal axis Z of the fitting42, and the longitudinal axis Z may extend substantially centrally through the channel68. As will be described in further detail below, in some examples, the channel68may form at least part of a central fluid passage extending through at least part of the fitting42. 
The channel68may also form an opening70configured to permit the passage of air or other fluids into the cuff12via the fitting42(e.g., via the central passage of the fitting42), and/or to otherwise fluidly connect the fitting42with the cuff12when the fitting42is removably attached to the adapter50. As shown inFIG.2, in some examples the adapter50may include a substantially rigid body52that is at least partly connected to the cuff12. For example, the body52may include a distal portion extending outwardly from a top or outer surface54of the cuff12. The body52may also include a proximal portion embedded within the cuff12and/or extending inwardly from the outer surface54. In some examples, the proximal portion of the body52may extend at least partly along and/or may be connected to an inner surface of the cuff12disposed opposite the outer surface54. Alternatively, in any of the example embodiments described herein the adapter50may be fluidly connected to the cuff12via one or more lengths of tubing30and/or other components of the system100. For instance, in such examples a length of tubing30may be fluidly connected to an internal bladder and/or other inflatable portion of the cuff12, and the length of tubing30may extend outwardly from the bladder and/or other inflatable portion by any desired distance (e.g., one foot, 18 inches, two feet, etc.). In such examples, the adapter50may be fluidly, removably, permanently, and/or otherwise connected to an end of the tubing30opposite the cuff12, and the adapter50may be configured to facilitate a removable and/or releasable connection with the fitting42at a location spaced from the cuff12. With continued reference toFIG.2, the body52may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter50may be made from more than one such material. For example, one or more components or other parts of the distal portion may be made from a first material, and one or more components or other parts of the proximal portion may be made from a second material different from the first material. In such examples, the use of such first and second materials may result in the various components or other parts of the body52having different rigidities, durabilities, sealing characteristics, and/or other properties. For example, a ridge58or other part of the body52mating with the shelves64a,64bof the fitting42may be made from a first relatively rigid material to assist in securely removably attaching the fitting42to the adapter50. In such examples, at least part of a ring56(e.g., a top surface and/or an inner wall of the ring56) or other part of the body52forming a substantially fluid-tight seal with a corresponding surface of the fitting42may be made from a second relatively flexible material different from the first material to assist in forming such a substantially fluid-tight seal. In any of the examples described herein, the body52of the adapter50may also include a central opening60at least partially formed by an inner wall59of the body52. For example, the inner wall59may comprise a substantially cylindrical inner wall, and the inner wall59may define a central fluid passage of the adapter50configured to accept air or other fluids delivered to the cuff12via the fitting42. In such examples, the inner wall59may have any shape, size, diameter, and/or other configuration such that the inner wall59may accept at least part of the channel68therein.
For example, a substantially cylindrical inner wall59may define a substantially circular opening60through which at least part of the channel68may pass when the fitting42is removably attached to the adapter50. In this way, at least part of the channel68may extend into and/or may otherwise be disposed within the inner wall59, via the opening, when the fitting42is removably attached to the adapter50. In any of the examples described herein, the body52may further include a longitudinal axis X, and in such examples, the longitudinal axis X may extend substantially centrally through the opening60and/or through the substantially cylindrical inner wall59. As will be described in greater detail below, in any of the examples described herein the ring56formed by the body52may comprise a substantially annular ring, flange, and/or other portion of the adapter50, and the ring56may include a top surface disposed opposite the proximal portion of the body52. In such examples, the ridge58may be disposed opposite the top surface of the ring56and, in some examples, the ridge58may comprise at least part of a bottom surface of the ring56. At least part of the ridge58and/or at least another part of the bottom surface of the ring56may be configured to mate with the shelves64a,64bof the fitting42to assist in retaining the fitting42and/or otherwise removably attaching the fitting42to the adapter. In some examples, at least part of the ridge58and/or at least another part of the bottom surface of the ring56may extend substantially perpendicular to the longitudinal axis X of the body52.

As shown inFIG.3, an example system300or other environment may include a fitting302and/or a blood pressure cuff adapter304, and in such systems, the example fitting302may be removably attachable to such an adapter304. The fitting302and/or the adapter304may include various structures and/or other components configured to assist in forming such a removable connection, and one or more such components may also assist in forming a substantially fluid-tight seal between the fitting302and the adapter304when the fitting302is removably attached to the adapter304. In example embodiments, any of the structures, functions, and/or other aspects of the fitting42described herein with respect toFIGS.1and2may be included in the fitting302and/or in any of the other example fittings described herein. Likewise, any of the structures, functions, and/or other aspects of the adapter50described herein with respect toFIGS.1and2may be included in the adapter304and/or in any of the other example blood pressure cuff adapters described herein. Further, one or more of the structures, functions, and/or features of the fitting302, and/or of the adapter304, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system300, the fitting302may include a substantially rigid body306, and one or more arms308a,308bextending from the body306. The first and second arms308a,308bmay also include respective shelves310a,310bformed at respective distal ends of the arms308a,308b. At least one of the arms308a,308band/or at least one of the shelves310a,310bmay be substantially similar to and/or the same as the arms62a,62band/or the shelves64a,64bdescribed above. For example, the arm308bmay be movably connected to the body306via at least one stand312extending from the body306. As can be seen fromFIG.3, the first and second shelves310a,310bmay extend substantially perpendicularly from the respective arms308a,308b.
In particular, such first and second shelves310a,310bmay include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly from the respective arms308a,308b, and such surfaces may be configured to mate with a corresponding surface of the adapter304in order to removably attach the fitting302to the adapter304. At least one of the body306, stands312, arms308a,308b, shelves310a,310b, and/or other components of the fitting302may be made from any of the materials described above with respect to the body48. In any such examples, the shape, size, materials, and/or other configuration of the stand312may provide for movement of the corresponding shelf310brelative to the body306when force is applied to a grip314associated with the stand312. Such movement may enable the fitting302to be removably attached to the blood pressure cuff adapter304, and may also enable the fitting302to be detached from the adapter304. For example, in the system300the arm308aand/or the shelf310amay remain substantially stationary relative to the body306when the fitting302is removably attached to and/or detached from the blood pressure cuff adapter304. The fitting302may also include one or more extensions, passages, and/or other like channels316extending from the body306. In some examples, the channel316may extend substantially along the longitudinal axis Z (FIG.2) of the fitting302, and the longitudinal axis Z may extend substantially centrally through the channel316. The channel316may form an opening318configured to permit the passage of air or other fluids into the cuff12via the fitting302, and/or to otherwise fluidly connect the fitting302with the cuff12, when the fitting302is removably attached to the adapter304. The fitting302may further include a central fluid passage319extending at least partially through the body306. For example, the channel316may form at least part of the central fluid passage319, and the longitudinal axis Z (FIG.2) may extend substantially centrally through at least part of the central passage319. In such examples, the opening318of the channel316may comprise an opening of the central passage319. As noted above with respect to the adapter50ofFIG.2, the adapter304may include a substantially rigid body320that is at least partly connected to the cuff12. For example, the body320may include a distal portion321extending outwardly from the outer surface54of the cuff12. The body320may also include a proximal portion336embedded within the cuff12and/or extending inwardly from the outer surface54. In some examples, a top surface340of the proximal portion336may extend at least partly along and/or may be connected to an inner surface of the cuff12disposed opposite the outer surface54. The body320of the adapter304may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter304may be made from more than one such material. For example, one or more components or other parts of the distal portion321may be made from a first material, and one or more components or other parts of the proximal portion336may be made from a second material different from the first material. As noted above with respect to the adapter50ofFIG.2, in such examples, the use of such first and second materials may result in the various components or other parts of the body320having different rigidities, durabilities, sealing characteristics, and/or other properties. 
The distal portion321of the adapter304may include an annular ring322having a top surface326and a ridge324disposed opposite the top surface326. The ridge324may comprise at least part of a bottom surface of the ring322. In such examples, at least part of the ridge324and/or at least another part of the bottom surface of the ring322may be configured to mate with the shelves310a,310bof the fitting302to assist in retaining the fitting302and/or otherwise removably attaching the fitting302to the adapter304. In some examples, at least part of the ridge324and/or at least another part of the bottom surface of the ring322may extend substantially perpendicular to a longitudinal axis X of the body320. Additionally, the adapter304may include a substantially cylindrical sidewall338extending from the ridge324to the top surface340of the proximal portion336. Such a sidewall338may space the ridge324from the top surface340such that the shelves310a,310bof the fitting302may have room to mate with the ridge324beneath the ring322. The top surface326of the ring322may be substantially convex, substantially concave, substantially curved, substantially tapered, and/or any other configuration in order to assist in removably attaching the fitting302to the adapter304. In some examples, the top surface326of the ring322may comprise a convex surface extending radially away from the longitudinal axis X of the body320from a distal end (e.g., a radially innermost end)330of the top surface326to a proximal end (e.g., a radially outermost end)332of the top surface326. In such examples, the curved top surface326may comprise a camming surface along which at least part of the arm308band/or other components of the fitting302may slide as the fitting302is removably attached to the adapter304. The system300may also include one or more O-rings, gaskets, and/or other seals328configured to form a substantially fluid-tight seal between the fitting302and the adapter304when the fitting302is removably attached to the adapter304. For example, in any of the example embodiments described herein, at least one seal328may be attached to, adhered to, embedded substantially within, formed integrally with, and/or otherwise connected to either an outer surface334of the fitting302or to the top surface326of the ring322to facilitate forming such a fluid-tight seal. In the example system300ofFIG.3, at least part (e.g., a base) of the seal328may be disposed within an annular groove346formed by the top surface326of the ring322. In such examples, the seal328may engage the outer surface334of the fitting302to form a substantially fluid-tight seal with the fitting302when the fitting302is removably attached to the adapter304. Alternatively, in any of the example embodiments described herein, the seal328may be attached to, adhered to, embedded substantially within, formed integrally with, and/or otherwise connected to the outer surface334of the fitting302, and may be configured to engage the top surface326to form such a substantially fluid-tight seal. In example embodiments in which the seal328is formed integrally with the adapter304, the seal328may comprise, for example, a relatively flexible and/or a relatively thin portion of the ring322. Alternatively, in example embodiments in which the seal328is formed integrally with the fitting302, the seal328may comprise a relatively flexible and/or a relatively thin portion of the outer surface334.
In still further example embodiments, the seal328may be separable from (e.g., removably attached to) either the fitting302or the adapter304. In such examples, the seal328may be press fit within, dove-tailed within, and/or otherwise at least partly disposed within a groove346, channel, and/or other structure formed by either the fitting302or the adapter304to facilitate such removable attachment thereto. Further, the body320of the adapter304may also include a central opening (as shown more clearly inFIG.2) at least partially formed by an inner wall344of the body320. For example, the inner wall344of the body320may comprise a substantially cylindrical inner wall, and the inner wall344may define a central fluid passage of the adapter304configured to accept air or other fluids delivered to the cuff12via the fitting302. In such examples, the inner wall344may have any shape, size, diameter, and/or other configuration such that the inner wall344may accept at least part of the channel316therein. For example, at least part of the channel316may pass through the central opening of the inner wall344, proximate the distal end330of the top surface326, when the fitting302is removably attached to the adapter304. The inner wall344may extend from the distal end330of the top surface326to a bottom surface342of the body320formed by the proximal portion336. In any of the examples described herein, the body320may further include a longitudinal axis X, and in such examples, the longitudinal axis X may extend substantially centrally through the central fluid passage of the adapter304formed by the substantially cylindrical inner wall344. As shown inFIGS.4and4a, an example system400or other environment of the present disclosure may include a fitting402that is substantially similar to the fitting302, and/or may include a blood pressure cuff adapter404that is substantially similar to the blood pressure cuff adapter304. In such systems400, the example fitting402may be removably attachable to such an adapter404, and unlike the fitting302illustrated inFIG.3, the fitting402may include a pair of stands412a,412bmovably connecting respective arms408a,408bof the fitting402to a body406of the fitting402. The fitting402and/or the adapter404may include various structures and/or other components configured to assist in forming a removable connection therebetween, and one or more such components may also assist in forming a substantially fluid-tight seal between the fitting402and the adapter404when the fitting402is removably attached to the adapter404. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting402. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter404. Further, one or more of the structures, functions, and/or features of the fitting402, and/or of the adapter404, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system400, various components of the fitting402may be substantially similar to corresponding components of the fitting302, and various components of the adapter404may be substantially similar to corresponding components of the adapter304. For example, the fitting402may include a substantially rigid body406, and one or more arms408a,408bextending from the body406.
The first and second arms408a,408bmay also include respective shelves410a,410bformed at respective distal ends of the arms408a,408b. In such examples, the arm408amay be movably connected to the body406via a stand412aextending from the body406, and the arm408bmay be movably connected to the body406via a stand412bextending from the body406. As can be seen fromFIG.4, the first and second shelves410a,410bmay extend substantially perpendicularly from the respective arms408a,408b. In particular, such first and second shelves410a,410bmay include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly from the respective arms408a,408b, and such surfaces may be configured to mate with a corresponding surface of the adapter404in order to removably attach the fitting402to the adapter404. At least one of the body406, stands412a,412b, arms408a,408b, shelves410a,410b, and/or other components of the fitting402may be made from any of the materials described above with respect to the body48. In any such examples, the shape, size, materials, and/or other configuration of the stands412a,412bmay provide for movement of the corresponding arms408a,408band/or shelves410a,410brelative to the body406when force is applied to respective grips414a,414bassociated with the stands412a,412b. Such movement may enable the fitting402to be removably attached to the blood pressure cuff adapter404, and may also enable the fitting402to be detached from the adapter404. The fitting402may also include one or more extensions, passages, and/or other like channels416extending from the body406. In some examples, the channel416may extend substantially along the longitudinal axis Z (FIG.2) of the fitting402, and the longitudinal axis Z may extend substantially centrally through the channel416. The channel416may form an opening418configured to permit the passage of air or other fluids into the cuff12via the fitting402, and/or to otherwise fluidly connect the fitting402with the cuff12, when the fitting402is removably attached to the adapter404. The fitting402may further include a central fluid passage419extending at least partially through the body406. For example, the channel416may form at least part of the central fluid passage419of the fitting402, and the longitudinal axis Z (FIG.2) may extend substantially centrally through at least part of the central passage419. In such examples, the opening418of the channel416may comprise an opening of the central passage419. As noted above with respect to the adapter50ofFIG.2, the adapter404may include a substantially rigid body420that is at least partly connected to the cuff12. For example, the body420may include a distal portion421extending outwardly from the outer surface54of the cuff12. The body420may also include a proximal portion436embedded within the cuff12and/or extending inwardly from the outer surface54. In some examples, a top surface of the proximal portion436may extend at least partly along and/or may be connected to an inner surface of the cuff12disposed opposite the outer surface54. The body420of the adapter404may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter404may be made from more than one such material. 
For example, one or more components or other parts of the distal portion421may be made from a first material, and one or more components or other parts of the proximal portion436may be made from a second material different from the first material. As noted above with respect to the adapter50ofFIG.2, in such examples, the use of such first and second materials may result in the various components or other parts of the body420having different rigidities, durabilities, sealing characteristics, and/or other properties. The distal portion421of the adapter404may include an annular ring422having a top surface426and a ridge424disposed opposite the top surface426. The ridge424may comprise at least part of a bottom surface of the ring422. In such examples, at least part of the ridge424and/or at least another part of the bottom surface of the ring422may be configured to mate with the shelves410a,410bof the fitting402to assist in retaining the fitting402and/or otherwise removably attaching the fitting402to the adapter404. In some examples, at least part of the ridge424and/or at least another part of the bottom surface of the ring422may extend substantially perpendicular to a longitudinal axis X of the body420. Additionally, the adapter404may include a substantially cylindrical sidewall extending from the ridge424to the top surface of the proximal portion436. Such a sidewall may space the ridge424from the top surface of the proximal portion436such that the shelves410a,410bof the fitting402may have room to mate with the ridge424beneath the ring422. The top surface426of the ring422may be substantially convex, substantially concave, substantially curved, substantially tapered, and/or any other configuration in order to assist in removably attaching the fitting402to the adapter404. In some examples, the top surface426of the ring422may comprise a convex surface extending radially away from the longitudinal axis X of the body420from a distal end430of the top surface426to a proximal end432of the top surface426. In such examples, the curved top surface426may comprise a camming surface along which at least part of the arms408a,408band/or other components of the fitting402may slide as the fitting402is removably attached to the adapter404. The system400may also include one or more O-rings, gaskets, and/or other seals configured to form a substantially fluid-tight seal between the fitting402and the adapter404when the fitting402is removably attached to the adapter404. For example, at least one seal428(FIG.4) may be attached to, adhered to, embedded substantially within, and/or otherwise connected to either an outer surface434of the fitting402or to the top surface426of the ring422to facilitate forming such a fluid-tight seal. In the example system400ofFIG.4, at least part (e.g., a base) of the seal428may be disposed within an annular groove formed by the top surface426of the ring422. In such examples, the seal428may engage the outer surface434of the fitting402proximate a perimeter and/or outer wall of the channel416to form a substantially fluid-tight seal with the fitting402when the fitting402is removably attached to the adapter404. Alternatively, the seal428may be attached to, adhered to, embedded substantially within, and/or otherwise connected to the outer surface434of the fitting402, and may be configured to engage the top surface426of the adapter404to form such a substantially fluid-tight seal. 
As shown in the partial cross-section ofFIG.4a, in an alternative example a first seal428amay be attached to, adhered to, embedded substantially within, formed integrally with, and/or otherwise connected to the outer surface434of the fitting402, and a second seal428bmay be attached to, adhered to, embedded substantially within, formed integrally with, and/or otherwise connected to the top surface426of the ring422. In such embodiments, the first seal428amay be configured to engage, contact, interlock, and/or otherwise mate with the second seal428bwhen the fitting402is removably attached to the adapter404, thereby forming such a fluid-tight seal. In the example system400ofFIG.4a, at least part (e.g., a base438) of the first seal428amay be disposed within an annular groove formed by the outer surface434of the fitting402, and at least part (e.g., a base440) of the second seal428bmay be disposed within a similar annular groove formed by the top surface426of the ring422. In such examples, the first seal428amay include a sealing surface442disposed opposite the base438, and the second seal428bmay include a sealing surface444opposite the base440. Accordingly, the sealing surface442of the first seal428amay engage, contact, interlock, and/or otherwise mate with the sealing surface444of the second seal428bwhen the fitting402is removably attached to the adapter404to form a fluid-tight seal therewith. Further, the body420of the adapter404may also include a central opening (as shown more clearly inFIG.2) at least partially formed by an inner wall446(FIG.4a) of the body420. In such examples, the longitudinal axis X of the body420may extend substantially centrally through the central fluid passage of the adapter404formed by the substantially cylindrical inner wall446of the adapter404. The central opening, inner wall446, and central fluid passage of the adapter404may be substantially similar to the central opening, inner wall344, and central fluid passage described above with respect toFIG.3. In such examples, a substantially cylindrical outer wall448of the channel416may be disposed adjacent and/or at least partly in contact with the inner wall446of the adapter404when the fitting402is removably attached to the adapter404. As shown inFIG.5, an example system500or other environment of the present disclosure may include a fitting502that is substantially similar to the fitting402, and/or may include a blood pressure cuff adapter504that is substantially similar to the blood pressure cuff adapter404. In such systems500, the example fitting502may be removably attachable to such an adapter504, and in such a system500, the stands412a,412bdescribed above with respect toFIG.4may be omitted. Instead, the fitting502may include a pair of arms extending from the body of the fitting502, and the fitting502may also include a relatively flexible diaphragm at a substantially central top portion of the body. Applying a downward force to the diaphragm while applying an upward force to one or more grips associated with the arms may assist in removably attaching the fitting502to the adapter504and/or detaching the fitting502from the adapter504. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting502. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter504.
Further, one or more of the structures, functions, and/or features of the fitting502, and/or of the adapter504, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system500, various components of the fitting502may be substantially similar to corresponding components of the fitting402, and various components of the adapter504may be substantially similar to corresponding components of the adapter404. For example, the fitting502may include a substantially rigid body506, and one or more arms508a,508bextending from the body506. The first and second arms508a,508bmay also include respective shelves510a,510bformed at respective distal ends of the arms508a,508b. In the embodiment ofFIG.5, the arms508a,508bmay be movably connected to the body506via a direct connection with the body506and/or via one or more posts512a,512bor other pieces of material extending substantially laterally from the body506to proximal portions of the respective arms508a,508b. As can be seen fromFIG.5, the first and second shelves510a,510bmay extend substantially perpendicularly from the respective arms508a,508b. In particular, such first and second shelves510a,510bmay include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly from the respective arms508a,508b, and such surfaces may be configured to mate with a corresponding surface of the adapter504in order to removably attach the fitting502to the adapter504. At least one of the body506, posts512a,512b, arms508a,508b, shelves510a,510b, and/or other components of the fitting502may be made from any of the materials described above with respect to the body48. In any such examples, the shape, size, materials, and/or other configuration of the posts512a,512bmay provide for movement of the corresponding arms508a,508band/or shelves510a,510brelative to the body506when force is applied to respective grips514a,514bconnected to the arms508a,508b. For example, the fitting502may include a diaphragm536disposed at a substantially central top portion538of the body506. In such examples, the diaphragm536may comprise a relatively thin portion of the body506, and the location, thickness, and/or other configurations of the diaphragm536may enable the body506to flex when a downward force in the direction of arrow542is applied to the diaphragm536. In such examples, the body506may also include a substantially rounded, substantially hollow internal portion540disposed opposite the top portion538. Such a substantially hollow internal portion540may be configured to increase the flexibility of the body506. In particular, the substantially hollow internal portion540may increase the distance and/or degree to which the body506flexes when a downward force in the direction of arrow542is applied to the diaphragm536. In use, a healthcare practitioner may apply an upward force in the direction of arrow544to one or both of the grips514a,514bwhile, at the same time, applying a downward force to the diaphragm536in the direction of arrow542. The application of one or more such forces may cause one or both of the shelves510a,510bto move laterally away from a central longitudinal axis Z (FIG.2) of the fitting502. Such movement may enable the fitting502to be removably attached to the blood pressure cuff adapter504, and may also enable the fitting502to be detached from the adapter504. 
It is understood that, in an alternate embodiment, the grips514a,514bmay be replaced with a substantially annular, substantially circular, and/or substantially disc-shaped structure substantially surrounding the diaphragm536and configured to receive the upward force described above in the direction of arrow544. The fitting502may also include one or more extensions, passages, and/or other like channels516extending from the body506. In some examples, the channel516may extend substantially along the longitudinal axis Z (FIG.2) of the fitting502, and the longitudinal axis Z may extend substantially centrally through the channel516. The channel516may form an opening518configured to permit the passage of air or other fluids into the cuff12via the fitting502, and/or to otherwise fluidly connect the fitting502with the cuff12, when the fitting502is removably attached to the adapter504. The fitting502may further include a central fluid passage519extending at least partially through the body506. For example, the channel516and/or the substantially hollow internal portion540may form at least part of the central fluid passage519of the fitting502, and the longitudinal axis Z (FIG.2) may extend substantially centrally through at least part of the central passage519. In such examples, the opening518of the channel516may comprise an opening of the central passage519. As noted above with respect to the adapter50ofFIG.2, the adapter504may include a substantially rigid body520that is at least partly connected to the cuff12. For example, the body520may include a distal portion521extending outwardly from the outer surface54of the cuff12. The body520may also include a proximal portion535embedded within the cuff12and/or extending inwardly from the outer surface54. In some examples, a top surface of the proximal portion535may extend at least partly along and/or may be connected to an inner surface of the cuff12disposed opposite the outer surface54. The body520of the adapter504may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter504may be made from more than one such material. For example, one or more components or other parts of the distal portion521may be made from a first material, and one or more components or other parts of the proximal portion535may be made from a second material different from the first material. As noted above with respect to the adapter50ofFIG.2, in such examples, the use of such first and second materials may result in the various components or other parts of the body520having different rigidities, durabilities, sealing characteristics, and/or other properties. The distal portion521of the adapter504may include an annular ring522having a top surface526and a ridge524disposed opposite the top surface526. The ridge524may comprise at least part of a bottom surface of the ring522. In such examples, at least part of the ridge524and/or at least another part of the bottom surface of the ring522may be configured to mate with the shelves510a,510bof the fitting502to assist in retaining the fitting502and/or otherwise removably attaching the fitting502to the adapter504. In some examples, at least part of the ridge524and/or at least another part of the bottom surface of the ring522may extend substantially perpendicular to a longitudinal axis X of the body520. Additionally, the adapter504may include a substantially cylindrical sidewall extending from the ridge524to the top surface of the proximal portion535. 
Such a sidewall may space the ridge524from the top surface of the proximal portion535such that the shelves510a,510bof the fitting502may have room to mate with the ridge524beneath the ring522. The top surface526of the ring522may be substantially convex, substantially concave, substantially curved, substantially tapered, and/or any other configuration in order to assist in removably attaching the fitting502to the adapter504. In some examples, the top surface526of the ring522may comprise a convex surface extending radially away from the longitudinal axis X of the body520from a distal end530of the top surface526to a proximal end532of the top surface526. In such examples, the curved top surface526may comprise a camming surface along which at least part of the arms508a,508band/or other components of the fitting502may slide as the fitting502is removably attached to the adapter504. The system500may also include one or more O-rings, gaskets, and/or other seals528configured to form a substantially fluid-tight seal between the fitting502and the adapter504when the fitting502is removably attached to the adapter504. For example, at least one seal528may be attached to, adhered to, embedded substantially within, and/or otherwise connected to either an outer surface534of the fitting502or to the top surface526of the ring522to facilitate forming such a fluid-tight seal. In the example system500ofFIG.5, at least part (e.g., a base) of the seal528may be disposed within an annular groove formed by the top surface526of the ring522. In such examples, the seal528may engage the outer surface534of the fitting502proximate a perimeter and/or outer wall of the channel516to form a substantially fluid-tight seal with the fitting502when the fitting502is removably attached to the adapter504. Alternatively, the seal528may be attached to, adhered to, embedded substantially within, and/or otherwise connected to the outer surface534of the fitting502, and may be configured to engage the top surface526of the adapter504to form such a substantially fluid-tight seal. Further, the body520of the adapter504may also include a central opening (as shown more clearly inFIG.2) at least partially formed by an inner wall of the body520. In such examples, the longitudinal axis X of the body520may extend substantially centrally through the central fluid passage of the adapter504formed by the substantially cylindrical inner wall of the adapter504. The central opening, inner wall, and central fluid passage of the adapter504may be substantially similar to the central opening, inner wall344, and central fluid passage described above with respect toFIG.3. As shown inFIGS.6aand6b, an example system600or other environment of the present disclosure may include a fitting602that is configured to be removably attached to a blood pressure cuff adapter604by movement of the fitting602in a first direction substantially parallel to the top surface54of the cuff12. In such examples, the fitting602may also be configured to be detached from the adapter604by moving the fitting602in a second direction substantially parallel to the top surface54of the cuff opposite the first direction. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting602. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter604. 
Further, one or more of the structures, functions, and/or features of the fitting602, and/or of the adapter604, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system600, various components of the fitting602may be substantially similar to corresponding components of, for example, the fitting402, and various components of the adapter604may be substantially similar to corresponding components of, for example, the adapter404. For example, the fitting602may include a substantially rigid body606, and one or more arms608a,608bextending from the body606. The first and second arms608a,608bmay also include respective shelves610a,610bformed at respective distal ends of the arms608a,608b. In the embodiment ofFIGS.6aand6b, the arms608a,608bmay be movably connected to the body606via a direct connection with the body606and/or via one or more posts612a,612bor other pieces of material extending substantially laterally from the body606to the respective arms608a,608b. As can be seen from at leastFIG.6a, the first and second shelves610a,610bmay extend substantially perpendicularly from the respective arms608a,608b. In particular, such first and second shelves610a,610bmay include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly from the respective arms608a,608b, and such surfaces may be configured to mate with a corresponding surface of the adapter604in order to removably attach the fitting602to the adapter604. At least one of the body606, posts612a,612b, arms608a,608b, shelves610a,610b, and/or other components of the fitting602may be made from any of the materials described above with respect to the body48. In any such examples, the shape, size, materials, location, and/or other configuration of the posts612a,612bmay provide for movement of the corresponding arms608a,608band/or shelves610a,610brelative to the body606when force is applied to respective grips614a,614bassociated with the arms608a,608b. For example, a healthcare practitioner may apply an inward force (e.g., in a direction toward the body606of the fitting602) to one or both of the grips614a,614b. The application of such an inward force to the grip614amay cause the arm608ato pivot about the post612a, thereby causing the shelf610ato move laterally away from the body606and/or a central longitudinal axis of the fitting602. Likewise, the application of such an inward force to the grip614bmay cause the arm608bto pivot about the post612b, thereby causing the shelf610bto move laterally away from the body606and/or a central longitudinal axis of the fitting602. Such movement of the shelves610a,610bmay enable the fitting602to be removably attached to the blood pressure cuff adapter604when the fitting602is moved in a first direction of arrow644substantially parallel to the top surface54of the cuff12. Similarly, such movement of the shelves610a,610bmay enable the fitting602to be detached from the adapter604when the fitting602is moved in a second direction of arrow644opposite the first direction and substantially parallel to the top surface54of the cuff12. The fitting602may also include one or more extensions, passages, and/or other like channels616extending from the body606. In some examples, the channel616may extend substantially along the longitudinal axis of the fitting602, and the longitudinal axis may extend substantially centrally through the channel616.
The channel616may form an opening618configured to permit the passage of air or other fluids into the cuff12via the fitting602, and/or to otherwise fluidly connect the fitting602with the cuff12, when the fitting602is removably attached to the adapter604. The fitting602may further include a central fluid passage619extending at least partially through the body606. For example, the channel616may form at least part of the central fluid passage619of the fitting602, and the longitudinal axis of the fitting602may extend substantially centrally through at least part of the central passage619. In such examples, the opening618of the channel616may comprise an opening of the central passage619. As noted above with respect to the adapter50ofFIG.2, the adapter604may include a substantially rigid body620that is at least partly connected to the cuff12. For example, the body620may include a distal portion621extending outwardly from the outer surface54of the cuff12. The body620may also include a proximal portion635embedded within the cuff12and/or extending inwardly from the outer surface54. In some examples, a top surface of the proximal portion635may extend at least partly along and/or may be connected to an inner surface of the cuff12disposed opposite the outer surface54. As shown inFIG.6b, the body620may include a first wall636extending substantially parallel to the top surface54of the cuff12and/or to the top surface of the proximal portion635. The body620may also include a second wall638opposite and extending substantially parallel to the first wall636. In such examples, the body620may further include a pair of sidewalls extending from the first wall636to the second wall638, and side surfaces626a,626b(e.g., outer surfaces626a,626bof the body620) of such sidewalls are illustrated in at leastFIG.6a. Additionally, the body620may include a third wall640extending substantially perpendicular to the top surface54of the cuff12and/or to the top surface of the proximal portion635. The body620may also include a fourth wall642opposite and extending substantially parallel to the third wall640. In some examples, the body620may further include a pair of sidewalls extending from the third wall640to the fourth wall642. In any of the examples described herein, the body620may also include a first longitudinal axis X extending substantially centrally through a first section of the distal portion621formed, at least in part, by the first and second walls636,638. For example, the first and second walls636,638may extend substantially parallel to the first longitudinal axis X, and the first and second walls636,638may define at least part of a first central fluid passage630of the distal portion621. The body620may also include a second longitudinal axis Y extending substantially centrally through a second section of the distal portion621formed, at least in part, by the third and fourth walls640,642. For example, the third and fourth walls640,642may extend substantially parallel to the second longitudinal axis Y, and the third and fourth walls640,642may define at least part of a second central fluid passage631of the distal portion621. In such examples, the first longitudinal axis X may extend substantially perpendicular to the second longitudinal axis Y. Further, the first fluid passage630of the adapter604may be fluidly connected to the second fluid passage631of the adapter604.
In such examples, the first longitudinal axis X may extend substantially parallel to the top surface54of the cuff12, and may extend substantially centrally through the first central passage630. Additionally, the second longitudinal axis Y may extend substantially perpendicular to the top surface54of the cuff12, and may extend substantially centrally through the second central passage631. The body620of the adapter604may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter604may be made from more than one such material. For example, one or more components or other parts of the distal portion621may be made from a first material, and one or more components or other parts of the proximal portion635may be made from a second material different from the first material. As noted above with respect to the adapter50ofFIG.2, in such examples, the use of such first and second materials may result in the various components or other parts of the body620having different rigidities, durabilities, sealing characteristics, and/or other properties. The distal portion621of the adapter604may include a ring622disposed at an open end of the distal portion621. In such examples, the ring622may form an opening of the first central passage630, and the ring622may be formed, at least in part, by distal ends of the first wall636, the second wall638, and the sidewalls extending from the first wall636to the second wall638. The ring622, and the opening formed thereby, may be shaped, sized, and/or otherwise configured to allow at least part of the channel616to pass therethrough when the fitting602is removably attached to the adapter604. In such examples, the channel616may pass through the ring622and/or the opening, and at least part of the channel616may be disposed within the first central passage630of the adapter604when the fitting602is removably attached to the adapter604. In such examples, the system600may also include one or more O-rings, gaskets, and/or other seals628configured to form a substantially fluid-tight seal between the fitting602and the adapter604when the fitting602is removably attached to the adapter604. For example, at least one seal628may be attached to, adhered to, embedded substantially within, and/or otherwise connected to either an outer surface634of the fitting602or to a corresponding distal and/or other outer surface of the ring622to facilitate forming such a fluid-tight seal. In the example system600, at least part (e.g., a base) of the seal628may be disposed within a groove formed by the distal and/or other outer surface of the ring622. In such examples, the seal628may engage the outer surface634of the fitting602proximate a perimeter and/or outer wall of the channel616to form a substantially fluid-tight seal with the fitting602when the fitting602is removably attached to the adapter604. Alternatively, the seal628may be attached to, adhered to, embedded substantially within, and/or otherwise connected to the outer surface634of the fitting602, and may be configured to engage the distal and/or other outer surface of the ring622to form such a substantially fluid-tight seal. 
The body620of the adapter604may also include a first ridge624aformed on the side surface626a(e.g., the outer surface of the first sidewall extending from the first wall636to the second wall638) of the distal portion621, and a second ridge624bformed on the side surface626b(e.g., the outer surface of the second sidewall extending from the first wall636to the second wall638) of the distal portion621. The first ridge624amay be configured to mate with the first shelf610aof the fitting602to assist in retaining the fitting602and/or otherwise removably attaching the fitting602to the adapter604. Likewise, the second ridge624bmay be configured to mate with the second shelf610bof the fitting602to assist in retaining the fitting602and/or otherwise removably attaching the fitting602to the adapter604. In some examples, at least part of the first ridge624amay extend substantially perpendicular to the side surface626aof the body620, and at least part of the second ridge624bmay extend substantially perpendicular to the side surface626bof the body620. As shown inFIG.6a, at least one of the ridges624a,624bmay include corresponding camming surfaces648a,648bextending at an angle (e.g., an acute angle) from the side surfaces626a,626bof the body620to ends of the respective ridges624a,624b. Such camming surfaces648a,648bmay be shaped, sized, located, and/or otherwise configured such that at least part of the arms608a,608band/or other components of the fitting602may slide along the camming surfaces648a,648bas the fitting602is removably attached to the adapter604. Further, in other embodiments such camming surfaces648a,648bmay be substantially convex, substantially concave, substantially curved, substantially tapered, and/or any other configuration in order to assist in removably attaching the fitting602to the adapter604. As shown inFIGS.7a-7e, in some example systems700or other embodiments of the present disclosure a fitting702and/or a blood pressure cuff adapter704may include one or more features or other structural components configured to assist a healthcare practitioner in aligning the fitting702with the adapter704as the fitting702is being removably attached to the adapter704. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting702. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter704. Further, one or more of the structures, functions, and/or features of the fitting702, and/or of the adapter704, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system700, various components of the fitting702may be substantially similar to corresponding components of, for example, the fitting402, and various components of the adapter704may be substantially similar to corresponding components of, for example, the adapter404. For example, the fitting702may include a substantially rigid body706, and one or more arms708a,708bextending from the body706. The first and second arms708a,708bmay also include respective shelves710a,710bformed at respective distal ends of the arms708a,708b. In such examples, the arm708amay be movably connected to the body706via a stand712aextending from the body706, and the arm708bmay be movably connected to the body706via a stand712bextending from the body706.
As can be seen in at leastFIG.7a, the first and second shelves710a,710bmay extend substantially perpendicularly from the respective arms708a,708b. In particular, such first and second shelves710a,710bmay include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly from the respective arms708a,708b, and such surfaces may be configured to mate with a corresponding surface of the adapter704in order to removably attach the fitting702to the adapter704. At least one of the body706, stands712a,712b, arms708a,708b, shelves710a,710b, and/or other components of the fitting702may be made from any of the materials described above with respect to the body48. In any such examples, the shape, size, materials, and/or other configuration of the stands712a,712bmay provide for movement of the corresponding arms708a,708band/or shelves710a,710brelative to the body706when force is applied to respective grips714a,714bassociated with the stands712a,712b. Such movement may enable the fitting702to be removably attached to the blood pressure cuff adapter704, and may also enable the fitting702to be detached from the adapter704. The fitting702may also include one or more extensions, passages, and/or other like channels716extending from the body706. In some examples, the channel716may extend substantially along the longitudinal axis Z (FIG.2) of the fitting702, and the longitudinal axis Z may extend substantially centrally through the channel716. The channel716may form an opening718configured to permit the passage of air or other fluids into the cuff12via the fitting702, and/or to otherwise fluidly connect the fitting702with the cuff12, when the fitting702is removably attached to the adapter704. The fitting702may further include a central fluid passage719extending at least partially through the body706. For example, the channel716may form at least part of the central fluid passage719of the fitting702, and the longitudinal axis Z (FIG.2) may extend substantially centrally through at least part of the central passage719. In such examples, the opening718of the channel716may comprise an opening of the central passage719. As noted above with respect to the adapter50ofFIG.2, the adapter704may include a substantially rigid body720that is at least partly connected to the cuff12. For example, the body720may include a distal portion721extending outwardly from the outer surface54of the cuff12. The body720may also include a proximal portion719embedded within the cuff12and/or extending inwardly from the outer surface54. In some examples, a top surface of the proximal portion719may extend at least partly along and/or may be connected to an inner surface of the cuff12disposed opposite the outer surface54. The body720of the adapter704may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter704may be made from more than one such material. For example, one or more components or other parts of the distal portion721may be made from a first material, and one or more components or other parts of the proximal portion719may be made from a second material different from the first material. As noted above with respect to the adapter50ofFIG.2, in such examples, the use of such first and second materials may result in the various components or other parts of the body720having different rigidities, durabilities, sealing characteristics, and/or other properties. 
The distal portion721of the adapter704may include an annular ring722having a top surface726and a ridge724disposed opposite the top surface726. The ridge724may comprise at least part of a bottom surface of the ring722. In such examples, at least part of the ridge724and/or at least another part of the bottom surface of the ring722may be configured to mate with the shelves710a,710bof the fitting702to assist in retaining the fitting702and/or otherwise removably attaching the fitting702to the adapter704. In some examples, at least part of the ridge724and/or at least another part of the bottom surface of the ring722may extend substantially perpendicular to a longitudinal axis X of the body720. Additionally, the adapter704may include a substantially cylindrical sidewall extending from the ridge724to the top surface of the proximal portion719. Such a sidewall may space the ridge724from the top surface of the proximal portion719such that the shelves710a,710bof the fitting702may have room to mate with the ridge724beneath the ring722. The top surface726of the ring722may be substantially convex, substantially concave, substantially curved, substantially tapered, and/or any other configuration in order to assist in removably attaching the fitting702to the adapter704. In some examples, the top surface726of the ring722may comprise a convex surface extending radially away from the longitudinal axis X of the body720from a distal end730of the top surface726to a proximal end732of the top surface726. In such examples, the curved top surface726may comprise a camming surface along which at least part of the arms708a,708band/or other components of the fitting702may slide as the fitting702is removably attached to the adapter704. In example embodiments, the ring722may also include a groove728extending at least partly (and, in some examples, completely) around the longitudinal axis X, and being shaped, sized, located, and/or otherwise configured to accept a member734and/or other structural feature of the fitting702. As shown in at leastFIG.7a, such a member734may extend (e.g., substantially perpendicularly) from an outer surface736of the fitting702disposed opposite and/or facing the top surface726when the fitting702is removably attached to the adapter704. The member734may comprise a shaft, pin, rod, rib, ring, flange, detent, and/or other extension of the body706, and the member734may be useful in laterally and/or otherwise aligning the fitting702with the adapter704when removably attaching the fitting702to the adapter704. For example, the groove728and the member734may be positioned and/or otherwise configured such that disposing at least part of the member734within the groove728when removably attaching the fitting702to the adapter704may cause the longitudinal axis Z of the fitting702(FIG.2) to be collinear with the longitudinal axis X of the adapter704. The member734may be substantially cylindrical, substantially cube-shaped, substantially V-shaped, substantially arcuate, substantially annular, and/or any other shape to assist in aligning the fitting702with the adapter704. Additionally, in any of the example embodiments described herein the member734may be shaped, sized, dimensioned, located, and/or otherwise configured to minimize and/or substantially eliminate lateral movement of the fitting702relative to the adapter704when the fitting702is removably attached to the adapter704.
In any such examples, the member734may be shaped, sized, dimensioned, located, and/or otherwise configured to provide additional rigidity, support, and/or stability to the removable connection between the fitting702and the adapter704. Further, in any of the example embodiments described herein having a member and a groove, the member (e.g., the member734) may alternatively comprise an extension and/or other component of the adapter (e.g., the adapter704), and in such embodiments the groove (e.g., the groove728) may be formed by and/or may comprise a component of the fitting (e.g., the fitting702). Additionally, in any of the embodiments described herein having a member and a groove, the member (e.g., the member734) may comprise a plurality of members (e.g., a plurality of arcuate segments, etc.) spaced substantially circumferentially about, for example, the longitudinal axis Z (FIG.2) of the fitting702. Further, whileFIG.7aillustrates the groove728being disposed on and/or formed by the top surface726, in other examples, the groove728may be disposed on, disposed proximate, and/or formed by a radially outermost portion of the ring722. In any of the embodiments herein, a first component of the fitting702may be configured to mechanically interlock with (e.g., a snap-fit, a friction fit, a meshing, and/or any other type of interlock) a corresponding second component of the adapter704. For example, in some embodiments a component of the member734may form a snap-fit and/or other interlocking fit with the groove728of the adapter704. In other embodiments, at least part of the channel716may interlock with at least part of the portion719of the adapter704when the fitting702is removably attached to the adapter704. The system700may also include one or more O-rings, gaskets, and/or other seals737configured to form a substantially fluid-tight seal between the fitting702and the adapter704when the fitting702is removably attached to the adapter704. For example, at least one seal737may be attached to, adhered to, embedded substantially within, disposed adjacent to, and/or otherwise connected to a substantially cylindrical inner wall738of the adapter704to facilitate forming such a fluid-tight seal. In such examples, the seal737may engage an outer wall740of the channel716(e.g., an outer surface of the outer wall740) to form a substantially fluid-tight seal with the fitting702when the fitting702is removably attached to the adapter704. As shown in at leastFIG.7b, the adapter704may also include a central opening742at least partially formed by a surface of the inner wall738. In such examples, the longitudinal axis X (FIG.7a) of the body720may extend substantially centrally through a central fluid passage of the adapter704formed by the substantially cylindrical inner wall738. The central opening742, inner wall738, and central fluid passage of the adapter704may be substantially similar to the central opening, inner wall344, and central fluid passage described above with respect toFIG.3. Further, as shown in the top view ofFIG.7b, in some examples the groove728may be disposed on the top surface726between (e.g., substantially centrally between) the proximal end732and the distal end730of the top surface726. Additionally, in some examples the groove728may include at least one detent744configured to contact the member734when the fitting702is removably attached to the adapter704. 
It is understood that in at least the systems described herein with respect toFIGS.1-5,7a-7e, and10, the fitting may be rotatable about the longitudinal axis X of the adapter704in the clockwise direction of arrow746and in the counterclockwise direction of arrow748. In such examples, the one or more detents744disposed within the groove728may be shaped, sized, located, and/or otherwise configured to contact the member734as the fitting702is rotated about the longitudinal axis X of the adapter704. The one or more detents744may be substantially dome-shaped, substantially ramp-shaped, and/or any other shape to assist in at least partially restricting rotation of the fitting702about the longitudinal axis X of the adapter704when the member734contacts the one or more detents744. As shown in at leastFIG.7c, in some examples the groove728may include at least one wall750extending substantially parallel to the longitudinal axis X. In such examples, the wall750may prohibit 360 degree rotation of the fitting702about the longitudinal axis X of the adapter704when the fitting702is removably attached to the adapter704. For example, the wall750may block rotation of the fitting702about the longitudinal axis X of the adapter704when the member734contacts the wall750. Although the example embodiment ofFIG.7cillustrates a single wall750disposed within the groove728, in further examples, two or more walls750may be disposed within the groove728. In any of the examples described herein, the one or more walls750may have a width and/or a height substantially equal to a corresponding width and/or height of the groove728. The cross-sectional view ofFIG.7dillustrates portions of the fitting702and the adapter704in more detail. For example, as noted above, the groove728may have any desired width W and/or height H in order to accommodate the member734. For example, the groove728may include a base752, a first sidewall754extending substantially perpendicularly from the base752, and a second sidewall756opposite the first sidewall754and extending substantially perpendicularly from the base752. In such examples, the first and second sidewalls754,756may extend from the base752to the top surface726, and the height H of the groove728may comprise either a height H of the first sidewall754or a height H of the second sidewall756. In any of the examples described herein, one or more of the detents744described above may be disposed on the base752, the first sidewall754, or the second sidewall756. In some examples, the height H of the groove728may be substantially constant. In other examples, on the other hand, the groove728may have a variable height H along at least a portion of the groove728. For example, as shown inFIG.7dthe base752of the groove728may have at least one trough757and at least one peak758. In such examples, the trough757may comprise a first portion of the groove728having the largest height H, and the peak758may comprise a second portion of the groove728having the smallest height H. For instance, in such examples the peak758formed by the base752may be disposed axially closer to the top surface726of the ring722than the trough757formed by the base752. In such examples, the variability in the height H of the groove728(e.g., the decrease in height H from the trough757to the peak758) may assist the user in detaching the fitting702from the adapter704. For example, the member734may slidably engage and/or otherwise move substantially along the base752as the fitting702is rotated about the longitudinal axis X of the adapter704.
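To make the detachment-assist geometry concrete, the following is a minimal sketch, not taken from the present disclosure, of the variable-height base752described above. The specific heights, angles, and the linear trough-to-peak profile are hypothetical stand-ins; the disclosure only requires that the base752rise from the trough757(largest height H) toward the peak758(smallest height H) so that rotating the fitting702lifts the member734, and thus the fitting702, away from the adapter704.

```python
import math

# Hypothetical values: the base (752) of the groove (728) rises linearly
# from a trough (757), where the groove height H is largest, to a peak
# (758), where H is smallest.
TROUGH_H_MM = 2.0            # largest groove height H, at the trough (757)
PEAK_H_MM = 0.5              # smallest groove height H, at the peak (758)
TROUGH_ANGLE = 0.0           # angular position of the trough, radians
PEAK_ANGLE = math.pi / 2     # angular position of the peak, radians


def groove_height(theta: float) -> float:
    """Groove height H at rotation angle theta, interpolated trough-to-peak."""
    span = PEAK_ANGLE - TROUGH_ANGLE
    frac = min(max((theta - TROUGH_ANGLE) / span, 0.0), 1.0)
    return TROUGH_H_MM + frac * (PEAK_H_MM - TROUGH_H_MM)


def axial_lift(theta: float) -> float:
    """Axial travel of the member (734), and hence the fitting (702), away
    from the adapter (704) relative to the seated position at the trough."""
    return TROUGH_H_MM - groove_height(theta)


# Under these assumed values, a quarter turn raises the base beneath the
# member (734) by 1.5 mm, assisting the practitioner in detaching the fitting.
print(round(axial_lift(math.pi / 2), 3))  # 1.5
```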
In such examples, engagement between the member734and the base752as the member734moves from a first location proximate the trough757to a second location proximate the peak758may assist in moving the fitting702in an upward direction away from the adapter704. Such movement may assist a healthcare practitioner in detaching the fitting702from the adapter704. In such examples, the member734may also be shaped, sized, and/or otherwise configured to assist a healthcare practitioner in detaching the fitting702from the adapter704. For example, the member734may include a base764, a first sidewall766extending substantially perpendicularly from the base764, and a second sidewall768opposite the first sidewall766and extending substantially perpendicularly from the base764. In such examples, the first and second sidewalls766,768may extend from the base764to the outer surface736. The member734may also have a height H′ substantially equal to the height H of the groove728, and a width W′ substantially equal to the width W of the groove728. It is understood that in some examples, the height H′ may be marginally less than the height H, and the width W′ may be marginally less than the width W to minimize and/or substantially eliminate friction and/or resistance caused by the member734contacting one or both of the sidewalls754,756. Further, the sidewall756of the groove728may be disposed a distance L from the inner wall738, and the sidewall766may be disposed a distance L′, substantially equal to the distance L, from the outer wall740of the channel716. As shown inFIG.7e, in further examples, the fitting702may include one or more sensors configured to determine a distance between at least part of the fitting702and a corresponding part of the adapter704. Additionally or alternatively, in some examples the one or more sensors may be configured to read information from a corresponding component of the adapter704and/or otherwise assist in identifying the adapter704. For example, the fitting702may include a capacitance sensor, a proximity sensor, a light sensor, an RFID sensor, a barcode reader, and/or any other sensor762connected thereto. In some examples, the sensor762may be disposed on, embedded substantially within, and/or otherwise connected to the member734, such as at a distal end of the member734. In such examples, the adapter704may include a corresponding conductor, layer of reflective paint or ink, RFID tag, barcode, and/or other feature760. In such examples, the feature760may be disposed anywhere on and/or within the adapter704such that the feature760is located at least partly within a field of view of the sensor762when the fitting702is removably attached to the adapter704. For example, in embodiments in which the sensor762is disposed on, embedded substantially within, and/or otherwise connected to the member734, such as at a distal end of the member734, the feature760may be disposed on, embedded substantially within, and/or otherwise connected to the base752of the groove728or at least one of the sidewalls754,756. Further, althoughFIG.7eillustrates the sensor762being connected to the fitting702and the feature760being connected to the adapter704, in other examples, this configuration may be reversed such that the sensor762is connected to the adapter704and the feature760is connected to the fitting702. In any of the examples described herein, the sensor762may be operably connected to and/or otherwise in communication with the controller20described above.
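As a concrete illustration of the dimensional relationships described above (the height H′ marginally less than the height H, the width W′ marginally less than the width W, and the distance L′ substantially equal to the distance L), the following is a minimal sketch under assumed values; the function name, the clearance budget, and the alignment tolerance are hypothetical and not taken from the disclosure.

```python
def member_seats_in_groove(groove_h, groove_w, groove_l,
                           member_h, member_w, member_l,
                           max_clearance=0.1, alignment_tol=0.05):
    """Check the fit of the member (734) in the groove (728).

    groove_h, groove_w: height H and width W of the groove (728).
    member_h, member_w: height H' and width W' of the member (734).
    groove_l, member_l: distances L and L' locating the groove and member.
    max_clearance: assumed largest acceptable gap (mm) that still limits
        friction without letting the parts rattle.
    alignment_tol: assumed allowable |L - L'| so the member registers radially.
    """
    height_ok = 0.0 <= groove_h - member_h <= max_clearance  # H' marginally < H
    width_ok = 0.0 <= groove_w - member_w <= max_clearance   # W' marginally < W
    aligned = abs(groove_l - member_l) <= alignment_tol      # L' ~= L
    return height_ok and width_ok and aligned


# Example: a member 0.05 mm under-sized in height and width, radially aligned.
print(member_seats_in_groove(2.0, 1.5, 4.0, 1.95, 1.45, 4.0))  # True
```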
For example, the sensor762may be configured to determine, among other things, a distance (e.g., a proximity) between the distal end of the member734and the base752of the groove728. In such examples, various cuffs12may include adapters704having grooves728of respective known heights H based on a desired application, desired intended use (e.g., child, adult, bariatric patient, elderly patient, etc.), and/or desired fitting702with which the particular cuff12should be used. For example, a cuff12manufactured by an example first manufacturer may be tuned and/or otherwise designed to be used with fittings702manufactured by the same first manufacturer. Accordingly, the groove728of an adapter704connected to such an example cuff12may have a known height H that is saved in the memory24associated with the controller20. In examples in which a corresponding fitting702manufactured by the same first manufacturer is removably attached to the adapter704, the sensor762may determine such a known height H and/or a corresponding known distance (e.g., a proximity) between the distal end of the member734and the base752of the groove728, and may send a signal including such information to the controller20. Upon receipt of the signal, the controller20may compare such information to one or more known values corresponding to approved fittings702manufactured by the first manufacturer, and the controller20may identify a match based on such a comparison. In response, the controller20may cause a flow control device770operably connected to the controller20to inflate and/or deflate the cuff12according to standard operating flow rates and/or cuff pressures. On the other hand, in examples in which a fitting702manufactured by a second manufacturer (different from the first manufacturer of the cuff12) is removably attached to the adapter704, the sensor762may determine a different height H and/or a different distance between the distal end of the member734and the base752of the groove728. The sensor762may send a signal including such information to the controller20. The controller20may compare such information to one or more known values corresponding to approved fittings702manufactured by the first manufacturer, and may determine that no match exists based on such a comparison. In response, the controller20may prohibit the flow control device770from inflating the cuff12or may cause the flow control device770to inflate and/or deflate the cuff12with significantly reduced (e.g., undesirably low) operating flow rates and/or cuff pressures. In this way, the sensor762and the feature760illustrated inFIG.7emay be useful in preventing the use of fittings702that do not match or correspond with a cuff having a particular adapter704. In such examples, the sensor762and the feature760illustrated inFIG.7emay also be useful in preventing the use of cuffs12having adapters704that do not match or correspond with the particular fittings702. In the same way, the height H described above, the width and/or inner diameter of the groove728, an outer diameter of the adapter704, and/or any other characteristics of the cuff12may be read, sensed, and/or otherwise determined by the sensor762, and such characteristics may be used to determine whether an age-appropriate cuff12is being used. As noted above, in some examples the sensor762may comprise an RFID reader, and in such examples, such characteristics may be determined by the sensor762reading a corresponding RFID tag or other feature760disposed within the groove728.
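The match-based gating just described is, in essence, a lookup-and-compare routine. The Python sketch below is a hypothetical illustration only: the approved height values, the tolerance, and the flow-control interface are invented for the example, and the disclosure does not specify any particular API for the controller20or the flow control device770.

    # Hypothetical sketch of the controller20's matching logic. The
    # approved heights, tolerance, and FlowControl interface are assumptions.

    APPROVED_GROOVE_HEIGHTS_MM = (4.0, 5.5, 7.0)  # stand-in for values in memory24
    TOLERANCE_MM = 0.2

    class FlowControl:
        """Minimal stand-in for the flow control device770."""
        def inflate(self, rate: str) -> None:
            print(f"inflating cuff at {rate} flow rate")

        def prohibit(self) -> None:
            print("inflation prohibited")

    def on_height_signal(sensed_height_mm: float, device: FlowControl) -> None:
        """Compare a sensed groove height H against the known approved
        values and gate the flow control device accordingly."""
        match_found = any(abs(sensed_height_mm - approved) <= TOLERANCE_MM
                          for approved in APPROVED_GROOVE_HEIGHTS_MM)
        if match_found:
            device.inflate(rate="standard")
        else:
            # Per the alternatives described above: prohibit inflation,
            # or fall back to a significantly reduced flow rate.
            device.prohibit()

    on_height_signal(5.4, FlowControl())  # within tolerance of 5.5 -> standard
    on_height_signal(6.3, FlowControl())  # no approved match -> prohibited

The same comparison could be keyed on any of the other sensed characteristics mentioned above, such as the groove width, the inner diameter of the groove728, or the outer diameter of the adapter704.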
For example, in situations in which an adult cuff is being used on an adolescent or child patient, the controller20may determine (based on one or more signals from the sensor762) that an age-inappropriate cuff12is currently being used. Such signals may indicate that, for example, a sensed height H, width, and/or inner diameter of the groove728does not match a known height H, width, and/or inner diameter of a groove corresponding to an acceptable cuff (e.g., an adolescent cuff) for the particular patient. In response, the controller20may prohibit the flow control device770from inflating the cuff12or may cause the flow control device770to inflate and/or deflate the cuff12with significantly reduced (e.g., undesirably low) operating flow rates and/or cuff pressures. It is also understood that the location of the groove728(e.g., the radial distance between the longitudinal axis X and one or more sidewalls of the groove728) may also be sensed by the sensor762in a similar manner, and such a location may also be used to determine whether the cuff12was manufactured by such a first manufacturer and/or whether the cuff12is appropriate for the age, demographics, or other characteristics of the patient. In any of the examples described herein, the fitting702and/or the adapter704may be made, at least in part, from a substantially transparent or substantially translucent urethane or other such polymer. For example, the fitting702may be made from a substantially translucent urethane, and may include one or more light-emitting diodes (LEDs) or other light sources operably connected to the controller20. In such examples, the sensor762may be configured to determine whether a particular adapter704of a cuff matches or corresponds with a particular fitting702. If such an appropriate match is found, the one or more LEDs may be controlled by the controller20to generate a first visual response (e.g., lighting up green or some other color) indicative of the match. If such an appropriate match is not found, the one or more LEDs may be controlled by the controller20to generate a second visual response (e.g., lighting up red or some other color) indicative that such a match was not found. Further, in some examples the sensor762and/or one or more additional sensors operably connected to the adapter704, the cuff12, and/or the fitting702may be configured to determine an inflation pressure of the cuff12, whether a leak is occurring, and/or other similar information. In any of the examples described herein, such sensors may provide one or more signals to the controller20indicating such information, and the controller20may control the one or more LEDs to provide a visual response (e.g., blink, remain on, illuminate a desired color, etc.) based at least in part on such information. For example, the one or more LEDs may be used to indicate that the cuff12has been inflated to a pressure above a minimum pressure threshold, that the cuff12has been inflated to a pressure above a maximum pressure threshold, that the cuff12has a leak, that the cuff12has been left unattended for longer than a maximum time threshold, that an improper cuff12is being used, and/or other conditions. In examples in which the controller20determines, based at least in part on such signals from the one or more sensors, that the cuff12has been inflated to an inflation pressure above a maximum threshold, the controller20may control a valve (e.g., a poppet valve, solenoid valve, etc.) fluidly connected to the cuff12to at least partly open in order to decrease the inflation pressure of the cuff.
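The status-indication behavior described above maps a handful of monitored conditions to visual responses. The Python sketch below is an illustrative assumption only: the threshold values, the responses other than the green/red match indication, and the function name are invented for the example.

    # Hypothetical mapping from monitored cuff conditions to an LED
    # response, in the spirit of the behavior attributed to the
    # controller20 above. Thresholds and non-green/red responses are
    # illustrative assumptions.

    MIN_PRESSURE_MMHG = 20.0
    MAX_PRESSURE_MMHG = 300.0
    MAX_UNATTENDED_S = 30 * 60

    def led_response(match_found: bool, pressure_mmhg: float,
                     leak_detected: bool, unattended_s: float) -> str:
        if not match_found:
            return "solid red"    # improper cuff/fitting pairing
        if leak_detected:
            return "blink red"    # cuff has a leak
        if pressure_mmhg > MAX_PRESSURE_MMHG:
            return "blink red"    # above maximum threshold; relief valve opens
        if unattended_s > MAX_UNATTENDED_S:
            return "blink"        # left unattended longer than the time threshold
        if pressure_mmhg >= MIN_PRESSURE_MMHG:
            return "solid green"  # inflated above the minimum threshold
        return "off"

    print(led_response(True, 120.0, False, 0.0))  # solid green
    print(led_response(True, 320.0, False, 0.0))  # blink red

In the over-pressure branch, the controller20would additionally open the valve (or command the flow control device770) to relieve pressure, as described in the surrounding text.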
In alternative embodiments, the controller20may control the flow control device770to reduce the inflation pressure of the cuff12to a desired level. As shown inFIGS.7fand7g, in still further examples at least part of the ring722may be configured to mate with at least a portion of the member734to assist in, among other things, stabilizing the removable connection between the fitting702and the adapter704. For example, as shown inFIG.7fa radially inner portion of the ring722may be removed to form a groove728. In such examples, the groove728may include a base752having a length L and extending substantially perpendicularly from the inner wall738of the adapter704. In particular, the base752may extend radially outwardly from the inner wall738. The groove728may also include a radially outermost sidewall754extending substantially perpendicularly from the base752. For example, the sidewall754may extend axially from the base752to the top surface726of the ring722. In such examples, the sidewall754may mate with and/or otherwise at least partially engage a radially outermost surface of the member734when the fitting702is removably attached to the adapter704. Such engagement between the sidewall754and the radially outermost surface of the member734may assist in minimizing and/or substantially eliminating lateral movement of the fitting702relative to the adapter704when the fitting702is removably attached to the adapter704. Alternatively, as shown inFIG.7ga radially outer portion of the ring722may be removed. In such examples, a groove formed in the ring722may include a base752extending substantially perpendicularly relative to the inner wall738of the adapter704. Such a groove may also include a radially innermost sidewall756extending substantially perpendicularly from the base752. For example, the sidewall756may extend axially from the base752to the top surface726of the ring722, and the sidewall756may be disposed a radial distance L from the inner wall738. In such examples, the sidewall756may mate with and/or otherwise at least partially engage a radially innermost surface of the member734when the fitting702is removably attached to the adapter704. Such engagement between the sidewall756and the radially innermost surface of the member734may assist in minimizing and/or substantially eliminating lateral movement of the fitting702relative to the adapter704when the fitting702is removably attached to the adapter704. In still further examples, at least part of the adapter may be rotatable relative to the cuff12. In some examples, such rotation may be about a rotational axis that extends substantially perpendicular to the top surface54of the cuff. Additionally or alternatively, such rotation may be about a rotational axis that extends substantially parallel to the top surface54of the cuff. For example, as shown inFIGS.8aand8b, an example system800or other environment of the present disclosure may include a fitting802that is configured to be removably attached to a blood pressure cuff adapter804by movement of the fitting802in a first direction substantially parallel to the top surface54of the cuff12. In such examples, the fitting802may also be configured to be detached from the adapter804by moving the fitting802in a second direction opposite the first direction and substantially parallel to the top surface54of the cuff.
As will be described below, such an adapter804may include one or more hinges configured to facilitate rotation of at least part of the adapter804, relative to the cuff12, while the fitting802is removably attached to the adapter804. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting802. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter804. Further, one or more of the structures, functions, and/or features of the fitting802, and/or of the adapter804, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system800, various components of the fitting802may be substantially similar to corresponding components of, for example, the fitting602, and various components of the adapter804may be substantially similar to corresponding components of, for example, the adapter604. For example, the fitting802may include a substantially rigid body806, and one or more arms808a,808bextending from the body806. The first and second arms808a,808bmay also include respective shelves810a,810bformed at respective distal ends of the arms808a,808b. In the embodiment ofFIGS.8aand8b, the arms808a,808bmay be movably connected to the body806via a direct connection with the body806and/or via one or more posts (not shown) or other pieces of material extending substantially laterally from the body806to the respective arms808a,808b. As can be seen from at leastFIG.8a, the first and second shelves810a,810bmay extend substantially perpendicularly from the respective arms808a,808b. In particular, such first and second shelves810a,810bmay include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly from the respective arms808a,808b, and such surfaces may be configured to mate with a corresponding surface of the adapter804in order to removably attach the fitting802to the adapter804. At least one of the body806, arms808a,808b, shelves810a,810b, and/or other components of the fitting802may be made from any of the materials described above with respect to the body48. In any such examples, the arms808a,808band/or shelves810a,810bmay be moveable relative to the body806when force is applied to respective grips814a,814bassociated with the arms808a,808b. For example, a healthcare practitioner may apply an inward force (e.g., in a direction toward the body806of the fitting802) to one or both of the grips814a,814b. The application of such an inward force to the grip814amay cause the arm808ato pivot relative to, for example, a first sidewall of the fitting802, thereby causing the shelf810ato move laterally away from the body806and/or a central longitudinal axis of the fitting802. Likewise, the application of such an inward force to the grip814bmay cause the arm808bto pivot relative to, for example, a second sidewall of the fitting802opposite the first sidewall, thereby causing the shelf810bto move laterally away from the body806and/or a central longitudinal axis of the fitting802. Such movement of the shelves810a,810bmay enable the fitting802to be removably attached to the blood pressure cuff adapter804when the fitting802is moved in a first direction of arrow842substantially parallel to the top surface54of the cuff12.
Similarly, such movement of the shelves810a,810bmay enable the fitting802to be detached from the adapter804when the fitting802is moved in a second direction of arrow844opposite the first direction and substantially parallel to the top surface54of the cuff12. The fitting802may also include one or more extensions, passages, and/or other like channels816extending from the body806. In some examples, the channel816may extend substantially along the longitudinal axis of the fitting802, and the longitudinal axis may extend substantially centrally through the channel816. The channel816may form an opening818configured to permit the passage of air or other fluids into the cuff12via the fitting802, and/or to otherwise fluidly connect the fitting802with the cuff12, when the fitting802is removably attached to the adapter804. The fitting802may further include a central fluid passage819extending at least partially through the body806. For example, the channel816may form at least part of the central fluid passage819of the fitting802, and the longitudinal axis of the fitting802may extend substantially centrally through at least part of the central passage819. In such examples, the opening818of the channel816may comprise an opening of the central passage819. As noted above with respect to the adapter50ofFIG.2, the adapter804may include a substantially rigid body820that is at least partly connected to the cuff12. For example, the body820may include a distal portion821extending outwardly from the outer surface54of the cuff12. The body820may also include a proximal portion836embedded within the cuff12and/or extending inwardly from the outer surface54. In some examples, a top surface of the proximal portion836may extend at least partly along and/or may be connected to an inner surface of the cuff12disposed opposite the outer surface54. As shown inFIG.8b, the body820may include a hinge825rotatably connecting the distal portion821to the proximal portion836. In particular, the hinge825may enable the distal portion821to be rotated about a rotational axis R of the hinge825and relative to the proximal portion836. In such examples, the distal portion821may include a central longitudinal axis X and the proximal portion836may include a central longitudinal axis Y extending substantially perpendicular to, and coplanar with, the longitudinal axis X. In such examples, the rotational axis R may extend substantially perpendicular to the plane in which the axes X and Y are disposed. For example, the hinge825may enable the distal portion821to rotate, about the rotational axis R, in the clockwise direction of arrow838and in the counterclockwise direction of arrow840. In any of the examples described herein, the longitudinal axis X may extend substantially centrally through a section of the distal portion821formed, at least in part, by first, second, third, and fourth walls846,848,850,852of the distal portion821. For example, the first wall846may extend substantially parallel to the second wall848, and the third wall850may extend substantially parallel to the fourth wall852. Together, the first, second, third, and fourth walls846,848,850,852may define at least part of a first central fluid passage831of the distal portion821.
The proximal portion836may also include one or more walls similar to the first, second, third, and fourth walls846,848,850,852(e.g., a substantially cylindrical wall854), and the longitudinal axis Y may extend substantially centrally through a section of the proximal portion836formed, at least in part, by the one or more walls of the proximal portion836. For example, the one or more walls of the proximal portion836may define at least part of a central fluid passage830of the proximal portion836. In such examples, the fluid passage831of the distal portion821may be fluidly connected to the fluid passage830of the proximal portion836via, for example, a central fluid passage829of the hinge825. In such examples, the first longitudinal axis X may extend substantially parallel to the top surface54of the cuff12, and may extend substantially centrally through the central passage831. Additionally, the longitudinal axis Y may extend substantially perpendicular to the top surface54of the cuff12, and may extend substantially centrally through the central passage830. The body820of the adapter804may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter804may be made from more than one such material. For example, one or more components or other parts of the distal portion821may be made from a first material, and one or more components or other parts of the proximal portion836may be made from a second material different from the first material. As noted above with respect to the adapter50ofFIG.2, in such examples, the use of such first and second materials may result in the various components or other parts of the body820having different rigidities, durabilities, sealing characteristics, and/or other properties. The distal portion821of the adapter804may include a ring822disposed at an open end of the distal portion821. In such examples, the ring822may form an opening823of the central passage831, and the ring822may be formed, at least in part, by distal ends of the first, second, third, and fourth walls846,848,850,852. The ring822, and the opening823formed thereby, may be shaped, sized, and/or otherwise configured to allow at least part of the channel816to pass therethrough when the fitting802is removably attached to the adapter804. In such examples, the channel816may pass through the ring822and/or the opening823, and at least part of the channel816may be disposed within the central passage831of the adapter804when the fitting802is removably attached to the adapter804. In such examples, the system800may also include one or more O-rings, gaskets, and/or other seals828configured to form a substantially fluid-tight seal between the fitting802and the adapter804when the fitting802is removably attached to the adapter804. For example, at least one seal828may be attached to, adhered to, embedded substantially within, and/or otherwise connected to either an outer surface834of the fitting802or to a corresponding distal and/or other outer surface of the ring822to facilitate forming such a fluid-tight seal. In the example system800, the seal828may be connected to the ring822, and the seal828may engage the outer surface834of the fitting802proximate a perimeter and/or outer wall of the channel816to form a substantially fluid-tight seal with the fitting802when the fitting802is removably attached to the adapter804.
Alternatively, the seal828may be attached to, adhered to, embedded substantially within, and/or otherwise connected to the outer surface834of the fitting802, and may be configured to engage the distal and/or other outer surface of the ring822to form such a substantially fluid-tight seal. The body820of the adapter804may also include a first ridge824aformed on an inner surface826of the wall852, and within the central passage831. The body820may also include a second ridge824bformed on an inner surface of the wall850opposite the inner surface826, and within the central passage831. The first ridge824amay be configured to mate with the first shelf810aof the fitting802, when the arm808aand/or the first shelf810aare disposed at least partly within the central passage831, to assist in retaining the fitting802and/or otherwise removably attaching the fitting802to the adapter804. Likewise, the second ridge824bmay be configured to mate with the second shelf810bof the fitting802, when the arm808band/or the second shelf810bare disposed at least partly within the central passage831, to assist in retaining the fitting802and/or otherwise removably attaching the fitting802to the adapter804. In some examples, at least part of the first ridge824amay extend substantially perpendicular to the inner surface826of the wall852, and at least part of the second ridge824bmay extend substantially perpendicular to the inner surface of the wall850opposite the inner surface826. As shown inFIG.8a, at least one of the ridges824a,824bmay include corresponding camming surfaces extending at an angle (e.g., an acute angle) from the inner surfaces of the walls850,852to ends of the respective ridges824a,824b. Such camming surfaces may be shaped, sized, located, and/or otherwise configured such that at least part of the arms808a,808band/or other components of the fitting802may slide along the camming surfaces as the fitting802is removably attached to the adapter804. For example, in other embodiments such camming surfaces may be substantially convex, substantially concave, substantially curved, substantially tapered, and/or any other configuration in order to assist in removably attaching the fitting802to the adapter804. FIG.9illustrates still another example system900of the present disclosure including a fitting902configured to facilitate transmission of pressurized air or other fluids to an adapter920fluidly connected to a blood pressure cuff12. The fitting902may include a distal end904configured to mate with the adapter920, and a proximal end906opposite the distal end904. The distal end904may include one or more shafts908,910configured to be inserted, at least in part, into corresponding openings formed by a distal end922of the adapter920. For example, the first shaft908may comprise a barb-like connector or any other such fluid connector extending from an outer wall930of the fitting902. The first shaft908may include a distal opening911, and the opening911may comprise a distal opening of a central fluid passage912of the first shaft908. Similarly, the second shaft910may include a distal opening913, and the opening913may comprise a distal opening of a central fluid passage914of the second shaft910. In such examples, one or more sections of tubing916may be connected to the fitting902at the proximal end906. For example, tubing916having first and second separate conduit sections918a,918bmay be fluidly connected to the fitting902at the proximal end906. 
The fitting902may comprise separate respective fluid passages configured to transfer fluid received from the separate conduit sections918a,918bto the cuff12and/or to the adapter920. In such examples, the adapter920may include a distal end922and a proximal end924opposite the distal end922. The fitting902may be removably attached to the adapter920by inserting at least part of the distal end904into corresponding opening(s) formed by the distal end922of the adapter920. For example, in some embodiments the adapter920may include a female port or other like opening (not shown) at the distal end922configured to accept and/or otherwise mate with the male first shaft908. Additionally, in such embodiments the adapter920may include a male shaft or other like member (not shown) at the distal end922configured to extend into the opening913and/or otherwise mate with the second shaft910. In some examples, the outer wall930of the fitting902may abut a corresponding outer wall of the distal end922when the fitting902is removably attached to the adapter920. The adapter920may also include a top surface926and a bottom surface928opposite the top surface926. In such examples, the bottom surface928may mate with and/or may be otherwise connected to a corresponding surface of the cuff12. Further, the adapter920may include one or more connection devices932operable to assist in removably attaching the fitting902to, and detaching the fitting902from, the adapter920. As shown inFIGS.10and10a, another example system1000or other environment may include a fitting1002and/or a blood pressure cuff adapter1004, and in such systems, the example fitting1002may include one or more arms or other connection devices configured to assist in removably attaching the fitting1002to the adapter1004. Such example systems may also include a groove and/or other structure configured to mate with an annular seal, and the interaction between the groove and the seal may assist in retaining the fitting1002removably attached to the adapter1004. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting1002. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter1004. Further, one or more of the structures, functions, and/or features of the fitting1002, and/or of the adapter1004, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system1000, the fitting1002may include a substantially rigid body1006, and a single arm1008extending from the body1006. The arm1008may also include a shelf1010formed at a distal end of the arm1008. The arm1008may be movably connected to the body1006via at least one stand1012extending from the body1006. As can be seen fromFIG.10, the shelf1010may extend substantially perpendicularly from the arm1008. In particular, the shelf1010may include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly from the arm1008, and such surfaces may be configured to mate with a corresponding surface of the adapter1004in order to removably attach the fitting1002to the adapter1004. At least one of the body1006, stand1012, arm1008, shelf1010, and/or other components of the fitting1002may be made from any of the materials described above with respect to the body48.
In any such examples, the shape, size, materials, and/or other configuration of the stand1012may provide for movement of the shelf1010relative to the body1006when force is applied to a grip1014associated with the stand1012. Such movement may enable the fitting1002to be removably attached to the blood pressure cuff adapter1004, and may also enable the fitting1002to be detached from the adapter1004. As can be understood from the partial cross-sectional view shown inFIG.10a, in further examples, the fitting1002may include more than one arm, shelf, stand, grip, and/or other components. For example, the fitting1002ofFIG.10amay include a first arm1008a(not shown) and a second arm1008b, a first shelf1010a(not shown) and a second shelf1010b, a first stand1012a(not shown) and a second stand1012b, a first grip1014a(not shown) and a second grip1014b, etc. Additionally, as will be described below, the system1000may also include a seal, and a groove configured to mate with the seal. Disposing the seal at least partially within the groove when removably attaching the fitting1002to the adapter1004may assist in removably attaching the fitting1002to the adapter1004, and may also form a substantially fluid-tight seal between the fitting1002and the adapter1004. The fitting1002may also include one or more extensions, passages, and/or other like channels1016extending from the body1006. In some examples, the channel1016may extend substantially along the longitudinal axis Z (FIG.2) of the fitting1002, and the longitudinal axis Z may extend substantially centrally through the channel1016. The channel1016may form an opening1018configured to permit the passage of air or other fluids into the cuff12via the fitting1002, and/or to otherwise fluidly connect the fitting1002with the cuff12, when the fitting1002is removably attached to the adapter1004. The fitting1002may further include a central fluid passage1019extending at least partially through the body1006. For example, the channel1016may form at least part of the central fluid passage1019, and the longitudinal axis Z (FIG.2) may extend substantially centrally through at least part of the central passage1019. In such examples, the opening1018of the channel1016may comprise an opening of the central passage1019. As noted above with respect to the adapter50ofFIG.2, the adapter1004may include a substantially rigid body1020that is at least partly connected to the cuff12. For example, the body1020may include a distal portion1021extending outwardly from the outer surface54of the cuff12. The body1020may also include a proximal portion1036embedded within the cuff12and/or extending inwardly from the outer surface54. In some examples, a top surface1040of the proximal portion1036may extend at least partly along and/or may be connected to an inner surface of the cuff12disposed opposite the outer surface54. The proximal portion1036may also include a bottom surface1042opposite the top surface1040and disposed substantially within, for example, an inflated portion of the cuff12. The body1020of the adapter1004may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter1004may be made from more than one such material.
For example, one or more components or other parts of the distal portion1021may be made from a first material, and one or more components or other parts of the proximal portion1036may be made from a second material different from the first material. As noted above with respect to the adapter50ofFIG.2, in such examples, the use of such first and second materials may result in the various components or other parts of the body1020having different rigidities, durabilities, sealing characteristics, and/or other properties. In any of the embodiments described herein, including the example embodiments shown inFIGS.10and10a, the fitting1002may also be made from more than one material. For example, one or more portions of the body1006may be made from a first material, and a seal, at least one stand1012, at least one arm1008, and/or other component of the fitting1002may be made from a second material different from the first material. The distal portion1021of the adapter1004may include an annular ring1022having a top surface1026and a ridge1024disposed opposite the top surface1026. The ridge1024may comprise at least part of a bottom surface of the ring1022. In such examples, at least part of the ridge1024and/or at least another part of the bottom surface of the ring1022may be configured to mate with the shelf1010of the fitting1002to assist in retaining the fitting1002and/or otherwise removably attaching the fitting1002to the adapter1004. In some examples, at least part of the ridge1024and/or at least another part of the bottom surface of the ring1022may extend substantially perpendicular to a longitudinal axis X of the body1020. Additionally, the adapter1004may include a substantially cylindrical sidewall1038extending from the ridge1024to the top surface1040of the proximal portion1036. Such a sidewall1038may space the ridge1024from the top surface1040such that the shelf1010of the fitting1002may have room to mate with the ridge1024beneath the ring1022. The top surface1026of the ring1022may be substantially convex, substantially concave, substantially curved, substantially tapered, and/or any other configuration in order to assist in removably attaching the fitting1002to the adapter1004. In some examples, the top surface1026of the ring1022may comprise a convex surface extending radially away from the longitudinal axis X of the body1020from a distal end1030of the top surface1026to a proximal end1032of the top surface1026. In such examples, the curved top surface1026may comprise a camming surface along which at least part of the arm1008and/or other components of the fitting1002may slide as the fitting1002is removably attached to the adapter1004. The system1000may also include one or more O-rings, gaskets, and/or other seals1028configured to form a substantially fluid-tight seal between the fitting1002and the adapter1004when the fitting1002is removably attached to the adapter1004. For example, at least one seal1028may be attached to, adhered to, embedded substantially within, formed integrally with, and/or otherwise connected to either an outer surface1034of the fitting1002or to one or more portions of an annular groove1043formed by the ring1022to facilitate forming such a fluid-tight seal. As shown inFIG.10, the seal1028may be connected to a portion of the outer surface1034, and in some examples, at least part of the seal1028may be recessed and/or otherwise disposed in a groove formed by the outer surface1034and substantially surrounding the channel1016.
Alternatively, as shown inFIG.10a, the seal1028may be embedded within and/or formed integrally with the body1006of the fitting1002. In such examples, the seal1028may comprise a portion of the body1006configured to mate with the groove1043so as to form a substantially fluid-tight seal with the groove1043and/or other portions of the adapter1004(e.g., with the ring1022) when the fitting1002is removably attached to the adapter1004. In such examples, the seal1028may be formed from a first material (e.g., a first urethane and/or other polymer having a relatively low durometer), and at least part of the body1006may be formed from a second material (e.g., a second urethane and/or other polymer having a durometer higher than the first material) different from the first material. In further example embodiments, the body1006shown inFIG.10amay be formed from a single material. In such examples, at least the ring1022of the adapter1004may be formed from a first material (e.g., a first urethane and/or other polymer having a relatively low durometer), and the body1006may be formed from a second material (e.g., a second urethane and/or other polymer having a durometer higher than the first material) different from the first material. In such examples, a portion of the body1006may be configured to mate with the groove1043so as to form a substantially fluid-tight seal with the groove1043and/or other portions of the adapter1004(e.g., with the ring1022) when the fitting1002is removably attached to the adapter1004. In the example system1000ofFIGS.10and10a, the groove1043may be formed, at least in part, by the top surface1026of the ring1022, and the groove1043may comprise a base1046extending substantially perpendicular to the longitudinal axis X, and a sidewall1048extending substantially parallel to the longitudinal axis X. In such examples, the sidewall1048may extend distally from the base1046to the distal end1030of the top surface1026. Further, as shown in at leastFIG.10, the adapter1004may include a substantially cylindrical inner wall1044having a proximal end1052at the bottom surface1042of the proximal portion1036, and a distal end1050at the base1046of the groove1043. In such examples, the base1046may extend radially from the distal end1050of the inner wall1044to the sidewall1048. In such examples, at least part (e.g., a base1056) of the seal1028may be disposed within the groove1043when the fitting1002is removably attached to the adapter1004. For example, in embodiments in which the seal1028is connected to the fitting1002(e.g., the embodiment ofFIG.10) or in which the seal1028is formed integrally with the fitting1002(FIG.10a), the base1056of the seal1028may mate with and/or otherwise contact at least part of the base1046when the fitting1002is removably attached to the adapter1004. Similarly, in such embodiments a radially outermost sidewall1058of the seal1028may mate with and/or otherwise contact at least part of the sidewall1048when the fitting1002is removably attached to the adapter1004. Alternatively, in embodiments in which the seal1028is formed integrally with and/or connected to the adapter1004, the base1056of the seal1028may be connected to the base1046of the groove1043. Further, in such embodiments at least part of the radially outermost sidewall1058of the seal1028may be connected to the sidewall1048of the groove1043. In such examples, the seal1028may engage the outer surface1034of the fitting1002and/or an outer surface1054of the channel1016to form a substantially fluid-tight seal with the fitting1002.
Such engagement may also assist in removably attaching the fitting1002to the adapter1004. For example, the flexible seal1028may have an inner diameter that is substantially equal to or nominally less than an outer diameter of the outer surface1054. Accordingly, the seal1028may apply a retention force to the outer surface1054in a radially inward direction when the channel1016is inserted within the adapter1004. Such a retention force may assist in removably attaching the fitting1002to the adapter1004. In still further examples, in the embodiment shown inFIG.10, the seal1028may be attached to, adhered to, embedded substantially within, and/or otherwise connected to the outer surface1034of the fitting1002(as described above) and/or to the outer surface1054of the channel1016. For example, as noted above with respect toFIG.10, a top portion of the seal1028may be connected to a groove formed by the outer surface1034of the fitting1002. Further, at least part of a sidewall of the seal1028(e.g., a radially inner sidewall of the seal1028shown inFIG.10) may be connected to and/or disposed on the outer surface1054of the channel1016. In such examples, the seal1028may engage the sidewall1048and/or the base1046of the groove1043formed by the adapter1004to form a substantially fluid-tight seal with the adapter1004. Such engagement may also assist in removably attaching the fitting1002to the adapter1004. For example, the flexible seal1028may have an outer diameter that is substantially equal to or nominally greater than a diameter of the groove1043formed by the sidewall1048. Accordingly, the seal1028may apply a retention force to the sidewall1048in a radially outward direction when the channel1016is inserted within the adapter1004. Such a retention force may assist in removably attaching the fitting1002to the adapter1004. In the embodiment shown inFIG.10, the seal1028may comprise a primary seal configured to mate with at least a portion of the fitting1002(e.g., the outer surface1034), and the annular ring1022may comprise a secondary seal that is also configured to mate with at least a portion of the fitting1002(e.g., the outer surface1054of the channel1016and/or the outer surface1034). In such examples, the seal1028and/or the ring1022may be shaped, sized, located, and/or otherwise configured to increase the stability of the removable connection between the adapter1004and the fitting1002. Further, the body1020of the adapter1004may also include a central opening (as shown more clearly inFIG.2) at least partially formed by the wall1044of the body1020. For example, the inner wall1044may define a central fluid passage of the adapter1004configured to accept air or other fluids delivered to the cuff12via the fitting1002. In such examples, the inner wall1044may have any shape, size, diameter, and/or other configuration such that the inner wall1044may accept at least part of the channel1016therein. For example, at least part of the channel1016may pass through the central opening of the inner wall1044, proximate the distal end1030of the top surface1026, when the fitting1002is removably attached to the adapter1004. In such examples, the longitudinal axis X may extend substantially centrally through the central fluid passage of the adapter1004formed by the substantially cylindrical inner wall1044.
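The retention behavior described above follows from a slight diametral interference between the flexible seal1028and the surface it engages. The short Python sketch below is a hypothetical illustration; the acceptable interference band is an assumed value, since the disclosure says only "substantially equal to or nominally less/greater than."

    # Illustrative classification of the diametral interference described
    # above. The maximum interference value is an assumption.

    def classify_fit(enclosed_od_mm: float, surrounding_id_mm: float,
                     max_interference_mm: float = 0.5) -> str:
        """Classify the fit between an enclosed part and the part
        surrounding it. Inward case: the outer surface1054 is enclosed
        by the seal1028. Outward case: the seal1028 is enclosed by the
        sidewall1048 of the groove1043."""
        interference = enclosed_od_mm - surrounding_id_mm
        if interference < 0.0:
            return "clearance: no retention force"
        if interference <= max_interference_mm:
            return "nominal interference: retention force assists attachment"
        return "excessive interference: attachment may be difficult"

    # A 10.2 mm channel outer surface in a 10.0 mm seal bore: retention fit.
    print(classify_fit(10.2, 10.0))

Either orientation of the interference (radially inward or radially outward) produces the retention force described above; the classification is symmetric in that respect.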
As shown inFIG.11, still another example system1100or other environment may include a fitting1102and/or a blood pressure cuff adapter1104, and in such systems, the example fitting1102may be configured to mate with the adapter1104by moving the fitting1102in a direction substantially perpendicular to a central longitudinal axis X of the adapter1104and/or in a direction substantially parallel to a top surface of the cuff12to which the adapter1104is connected. Such example fittings1102may include a groove and/or other structure configured to assist in securing the fitting1102to the adapter1104when the fitting1102is removably attached to the adapter1104. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting1102. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter1104. Further, one or more of the structures, functions, and/or features of the fitting1102, and/or of the adapter1104, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system1100, the fitting1102may include a substantially rigid body1106, and the body1106may be substantially cylindrical and/or any other shape configured to assist in mating the fitting1102with the adapter1104. The body1106may include, for example, a pocket1108shaped, sized, and/or otherwise configured to accept at least part of the adapter1104therein and to form a substantially fluid-tight seal with the adapter1104. As shown inFIG.11, the pocket1108may be formed, at least in part, by an inner sidewall1110of the body1106. The pocket1108may also be formed, at least in part, by a ceiling1112of the body1106extending substantially perpendicular to the inner sidewall1110. In such examples, the pocket1108may include an opening at a distal end of the body1106configured to accept at least part of the adapter1104when the fitting1102is removably attached to the adapter1104. In particular, the opening of the pocket1108may enable the fitting1102to be mated with the adapter1104by moving the fitting1102in a direction substantially perpendicular to the central longitudinal axis X of the adapter1104and/or in a direction substantially parallel to a top surface of the cuff12to which the adapter1104is connected. The body1106may also include an outer top surface1114opposite the ceiling1112, and an inner bottom surface1116opposite the top surface1114. In such examples, the bottom surface1116may be disposed proximate and/or adjacent to the top surface of the cuff12when the fitting1102is removably attached to the adapter1104. In some examples, the body1106may also include a substantially circumferential groove1118. For example, the groove1118may comprise a channel or other like structure formed by the inner sidewall1110and extending at least partially circumferentially around a central longitudinal axis (not shown) of the fitting1102. The groove1118may include any width, height, depth, and/or other configuration/structure to assist in removably attaching the fitting1102to the adapter1104. For example, as will be described below, the adapter1104may include a distal portion having a ring and a ridge, and in such embodiments the groove1118may be configured to accept at least part of the distal portion therein when the fitting1102is removably attached to the adapter1104.
Further, while the opening of the pocket1108may enable the fitting1102to be mated with the adapter1104by moving the fitting1102in a direction substantially perpendicular to the central longitudinal axis X of the adapter1104and/or in a direction substantially parallel to a top surface of the cuff12, the groove1118may be substantially annular and/or any other configuration so as to enable rotation of the fitting1102about the longitudinal axis X when the fitting1102is removably attached to the adapter1104. As shown inFIG.11, the adapter1104may include a substantially rigid body1120that is at least partly connected to the cuff12. For example, the body1120may include a distal portion1121extending outwardly from the outer surface of the cuff12. The body1120may also include a proximal portion1136embedded within the cuff12and/or extending inwardly from the outer surface of the cuff12. In some examples, a top surface of the proximal portion1136may extend at least partly along and/or may be connected to an inner surface of the cuff12. The body1120of the adapter1104may be made from any of the materials described above with respect to, for example, the fitting42. In some examples, the adapter1104may be made from more than one such material. For example, one or more components or other parts of the distal portion1121may be made from a first material, and one or more components or other parts of the proximal portion1136may be made from a second material different from the first material. The distal portion1121of the adapter1104may include an annular ring1122having a top surface1126and a ridge1124disposed opposite the top surface1126. The ridge1124may comprise at least part of a bottom surface of the ring1122. In such examples, at least part of the ridge1124and/or at least another part of the bottom surface of the ring1122may be configured to mate with the groove1118of the fitting1102(e.g., a bottom surface, bottom wall, bottom flange, bottom shelf, etc.) to assist in retaining the fitting1102and/or otherwise removably attaching the fitting1102to the adapter1104. In some examples, at least part of the ridge1124and/or at least another part of the bottom surface of the ring1122may extend substantially perpendicular to the longitudinal axis X of the body1120. Further, at least part of the ridge1124and/or at least part of the ring1122may be disposed substantially within the groove1118when the fitting1102is removably attached to the adapter1104. Additionally, the adapter1104may include a substantially cylindrical sidewall1138extending from the ridge1124to the top surface of the proximal portion1136. Such a sidewall1138may space the ridge1124from the top surface of the proximal portion1136such that the groove1118of the fitting1102may have room to mate with the ridge1124. In such examples, at least part of the body1106(e.g., the bottom surface1116) may be disposed beneath the ring1122when the fitting1102is removably attached to the adapter1104. The top surface1126of the ring1122may be substantially convex, substantially concave, substantially curved, substantially tapered, and/or any other configuration in order to assist in removably attaching the fitting1102to the adapter1104. In some examples, the top surface1126of the ring1122may comprise a convex surface extending radially away from the longitudinal axis X of the body1120from a distal end of the top surface1126to a proximal end of the top surface1126. 
The system1100may also include one or more O-rings, gaskets, and/or other seals1128configured to form a substantially fluid-tight seal between the fitting1102and the adapter1104when the fitting1102is removably attached to the adapter1104. For example, at least one seal1128may be attached to, adhered to, embedded substantially within, formed integrally with, and/or otherwise connected to either an outer surface of the fitting1102or to one or more portions of an annular groove1146formed by the ring1122to facilitate forming such a fluid-tight seal. In the example system1100ofFIG.11, the groove1146may be formed, at least in part, by the top surface1126of the ring1122. In still further embodiments, the seal1128may be omitted. In such example embodiments, the adapter1104(e.g., at least part of the top surface1126and/or other portions of the ring1122) may be made from a first material (e.g., an example urethane or other polymer) having a first durometer. The fitting1102, on the other hand, may be made from a second material (e.g., an example polymer) having a second durometer that is relatively higher than the first durometer of the first material. In such examples, the relatively lower durometer first material of the adapter1104may be configured to form a substantially fluid-tight seal with the relatively higher durometer second material when the fitting1102is releasably connected to the adapter1104. In still further embodiments, the fitting1102may be made from the relatively lower durometer first material described above and the adapter1104may be made from the relatively higher durometer second material described above. Further, in any of the embodiments of the system1100shown inFIG.11, the fitting1102may be shaped, sized, and/or otherwise configured to be installed directly over the top of the adapter1104in order to form a removable connection therewith and/or to form a substantially fluid-tight seal therewith. In such examples, the fitting1102(e.g., the body1106of the fitting1102) may be substantially cylindrical, and may be configured to form a substantially fluid-tight seal with the seal1128and/or with at least part of the ring1122when the fitting1102is removably connected to the adapter1104. Further, as shown inFIG.11, the adapter1104may include a substantially cylindrical inner wall1144having a proximal end at a bottom surface of the proximal portion1136, and a distal end at the top surface1126of the ring1122. In such examples, the inner wall1144may form a central opening1160of the adapter1104extending from the bottom surface of the proximal portion1136to the top surface1126of the ring1122. In such examples, the longitudinal axis X may extend substantially centrally through the central opening1160formed by the inner wall1144. In example embodiments, the body1106may also include one or more channels1150a,1150b. For example, such channels1150a,1150bmay be formed, at least in part, by the ceiling1112and may extend into the ceiling1112at any desired depth. Each channel1150a,1150bmay be fluidly connected to a respective fluid passage1152a,1152bof the body1106. For example, each fluid passage1152a,1152bmay be fluidly connected to a respective conduit section32a,32bof the tubing30. As a result, fluid delivered to the body1106by the conduit sections32a,32bmay be passed to the channels1150a,1150bby the respective fluid passages1152a,1152b, and the channels1150a,1150bmay pass such fluid to the central opening1160of the adapter1104when the fitting1102is removably attached to the adapter1104.
As shown inFIG.12, yet another example system1200or other environment may include a fitting1202and/or a blood pressure cuff adapter1204, and in such systems, the example fitting1202may be configured to mate with the adapter1204by moving the fitting1202in a direction substantially perpendicular to a rotational axis R defined by the adapter1204. In some examples, the rotational axis R may comprise a central longitudinal axis of the adapter1204, and the fitting1202may be configured to rotate about the rotational axis R when the fitting1202is removably attached to the adapter1204. As shown inFIG.12, the rotational axis R may extend substantially parallel to a top surface54of the cuff12to which the adapter1204is connected, and the fitting1202may have a range of motion about the rotational axis R equal to at least approximately 180 degrees. For example, the fitting1202may be rotatable about the rotational axis R in the clockwise direction of arrow1210and in the counterclockwise direction of arrow1212. It is understood that such an example adapter1204may have one or more grooves, channels, pockets, and/or other structures configured to mate with at least part of the fitting1202, and such structures may assist in removably attaching the fitting1202to the adapter1204. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting1202. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter1204. Further, one or more of the structures, functions, and/or features of the fitting1202, and/or of the adapter1204, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system1200, the fitting1202may include a substantially rigid body1206, and the body1206may be substantially cylindrical and/or any other shape configured to assist in mating the fitting1202with the adapter1204. The body1206may include, for example, one or more pockets, extensions, tabs, pins, channels, and/or other structures (not shown) shaped, sized, and/or otherwise configured to mate with at least part of the adapter1204and to form a substantially fluid-tight seal with the adapter1204. The adapter1204and/or the fitting1202may also include one or more seals (not shown) similar to one or more of the seals described above, to assist in forming such a substantially fluid-tight seal. In some examples, the adapter1204may include a first portion1208aand a second portion1208bdisposed opposite the first portion1208aon the top surface54. In such examples, the body1206of the fitting1202may be positioned between the first and second portions1208a,1208bwhen the fitting1202is removably attached to the adapter1204. Further, the first and second portions1208a,1208bmay include respective fluid passages configured to direct fluid into the cuff12. In such examples, a first fluid passage of the fitting1202may be fluidly connected to a respective fluid passage of the first portion1208a, and a second fluid passage of the fitting1202may be fluidly connected to a respective fluid passage of the second portion1208bwhen the fitting1202is removably attached to the adapter1204. As shown inFIGS.13aand13b, a further example system1300or other environment of the present disclosure may include a fitting1302and/or a blood pressure cuff adapter1304.
In such systems1300, the example fitting1302may be removably attachable to such an adapter1304, and in such a system1300, the fitting1302may comprise a body1306having an upper section1308that is relatively flexible and/or malleable. For example, the upper section1308may comprise a first portion1310aand a second portion1310bopposite the first portion1310a. As shown inFIG.13b, at least one of the first and second portions1310a,1310bmay be deformable to assist in securing the fitting1302to the adapter1304when the fitting1302is removably attached to the adapter1304. Such a malleable upper section1308may also form a substantially fluid-tight seal with at least part of the adapter1304when the fitting1302is removably attached to the adapter1304. In such examples, one or more of the seals described above, such as the seal1328shown inFIGS.13aand13b, may be omitted. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting1302. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter1304. Further, one or more of the structures, functions, and/or features of the fitting1302, and/or of the adapter1304, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system1300, various components of the fitting1302may be substantially similar to corresponding components of the fitting502, and various components of the adapter1304may be substantially similar to corresponding components of the adapter504. The body1306and/or other components of the fitting1302may be made from any of the materials described above with respect to the body48. In any such examples, the shape, size, materials, and/or other configuration of the upper section1308may enable at least one of the first portion1310aor the second portion1310bto mate with a surface of the adapter1304. Such flexibility and/or malleability of the upper section1308may enable the fitting1302to be removably attached to the blood pressure cuff adapter1304. The fitting1302may also include one or more extensions, passages, and/or other like channels1316extending from the body1306. In some examples, the channel1316may extend substantially along the longitudinal axis X of the fitting1302, and the longitudinal axis X may extend substantially centrally through the channel1316. The channel1316may form an opening1318configured to permit the passage of air or other fluids into the cuff12via the fitting1302, and/or to otherwise fluidly connect the fitting1302with the cuff12, when the fitting1302is removably attached to the adapter1304. The fitting1302may further include a central fluid passage1319extending at least partially through the body1306. For example, the channel1316may form at least part of the central fluid passage1319of the fitting1302, and the longitudinal axis X may extend substantially centrally through at least part of the central passage1319. In such examples, the opening1318of the channel1316may comprise an opening of the central passage1319. With further reference toFIGS.13aand13b, the adapter1304may include a substantially rigid body that is at least partly connected to the cuff12. For example, the body may include a distal portion having an annular ring1322. The ring1322may include a top surface1326and a ridge1324disposed opposite the top surface1326. The ridge1324may comprise at least part of a bottom surface of the ring1322.
In such examples, at least part of the ridge1324and/or at least another part of the bottom surface of the ring1322may be configured to mate with at least one of the first portion1310aor the second portion1310bof the fitting1302to assist in retaining the fitting1302and/or otherwise removably attaching the fitting1302to the adapter1304. In some examples, at least part of the ridge1324and/or at least another part of the bottom surface of the ring1322may extend substantially perpendicular to the longitudinal axis X. Additionally, the adapter1304may include a substantially cylindrical sidewall extending proximally from the ridge1324. Such a sidewall may space the ridge1324from a proximal portion of the adapter1304such that a healthcare professional may mold and/or otherwise form at least one of the first portion1310aor the second portion1310bof the fitting1302to mate with the ridge1324beneath the ring1322. The system1300may also include one or more O-rings, gaskets, and/or other seals1328configured to form a substantially fluid-tight seal between the fitting1302and the adapter1304when the fitting1302is removably attached to the adapter1304. For example, at least one seal1328may be attached to, adhered to, embedded substantially within, and/or otherwise connected to either an outer surface1334of the fitting1302or to the top surface1326of the ring1322to facilitate forming such a fluid-tight seal. In the example system1300, at least part (e.g., a base) of the seal1328may be disposed within an annular groove formed by the top surface1326of the ring1322. In such examples, the seal1328may engage the outer surface1334of the fitting1302proximate a perimeter and/or outer wall of the channel1316to form a substantially fluid-tight seal with the fitting1302when the fitting1302is removably attached to the adapter1304. Alternatively, the seal1328may be attached to, adhered to, embedded substantially within, and/or otherwise connected to the outer surface1334of the fitting1302, and may be configured to engage the top surface1326of the adapter1304to form such a substantially fluid-tight seal. In still further embodiments, the seal1328may be omitted. As shown inFIGS.14a-14f, another example system1400or other environment may include a fitting1402and/or a blood pressure cuff adapter1404, and in such systems, the example fitting1402may include one or more arms or other connection devices configured to assist in removably attaching the fitting1402to the adapter1404. As will be described below, at least some such example systems may also include a groove and/or other structure configured to mate with at least part of a member of the fitting1402, and the interaction between the groove and the member of the fitting1402may assist in aligning and/or stabilizing the fitting1402relative to the adapter1404when the fitting1402is removably attached to the adapter1404. In example embodiments, any of the structures, functions, and/or other aspects of the various fittings described herein may be included in the fitting1402. Likewise, any of the structures, functions, and/or other aspects of the various adapters described herein may be included in the adapter1404. Further, one or more of the structures, functions, and/or features of the fitting1402, and/or of the adapter1404, may be incorporated into any of the fittings or adapters of the present disclosure. In the example system1400, the fitting1402may include a substantially rigid body1406.
As can be understood from the partial cross-sectional views shown inFIGS.14a-14f, the fitting1402may also include one or more arms, shelves, stands, grips, and/or other components. For example, the fitting1402may include a first arm1408a(not shown) and a second arm1408b, a first shelf1410a(not shown) and a second shelf1410b, a first stand1412a(not shown) and a second stand1412b, a first grip1414a(not shown) and a second grip1414b, etc. In some examples, such components may be substantially similar to and/or the same as the corresponding components of the fitting702described above with respect to, for example,FIGS.7a-7g. For example, the arms1408a,1408bmay include respective shelves1410a,1410bformed at respective distal ends of the arms1408a,1408b. The arms1408a,1408bmay be movably connected to the body1406via respective stands1412a,1412bextending from the body1406. As can be seen fromFIGS.14a-14f, the shelves1410a,1410bmay extend substantially perpendicularly relative to the longitudinal axis X. In particular, the respective shelves1410a,1410bmay include one or more surfaces (e.g., a top surface, a bottom surface opposite the top surface, a side surface, etc.), extending substantially perpendicularly relative to a central longitudinal axis X of the adapter1404, and such surfaces may be configured to mate with respective corresponding surfaces of the adapter1404in order to removably attach the fitting1402to the adapter1404as described above. Additionally, the system1400may include one or more O-rings, gaskets, and/or other seals1437configured to form a substantially fluid-tight seal between the fitting1402and the adapter1404when the fitting1402is removably attached to the adapter1404. Such seals1437may be substantially similar to and/or the same as the one or more seals737described above with respect to, for example,FIGS.7a-7g. For example, the fitting1402may include a central channel1416defining a central fluid passage of the fitting1402, and when the fitting1402is removably attached to the adapter1404, a substantially cylindrical outer wall1440of the fitting1402disposed opposite the channel1416may be disposed along, adjacent, and/or at least partly in contact with a corresponding substantially cylindrical inner wall1438of the adapter1404. In such examples, the at least one seal1437may be attached to, adhered to, embedded substantially within, disposed adjacent to, and/or otherwise connected to the inner wall1438of the adapter1404to facilitate forming such a fluid-tight seal. In particular, the seal1437may engage the outer wall1440to form a substantially fluid-tight seal with the fitting1402when the fitting1402is removably attached to the adapter1404. With continued reference toFIGS.14a-14f, and as described above with respect to at least the adapter704ofFIGS.7a-7g, the adapter1404of the system1400may include a substantially rigid body that is at least partly connected to a cuff12. For example, the body of the adapter1404may include a distal portion1421extending outwardly from an outer surface of the cuff12. The body of the adapter1404may also include a proximal portion embedded within the cuff12and/or extending inwardly from the outer surface of the cuff12. The distal portion1421of the adapter1404may include an annular ring1422having a top surface1426and a ridge1424disposed opposite the top surface1426. The ridge1424may comprise at least part of a bottom surface of the ring1422.
In such examples, at least part of the ridge1424and/or at least another part of the bottom surface of the ring1422may be configured to mate with the shelves1410a,1410bof the fitting1402to assist in retaining the fitting1402and/or otherwise removably attaching the fitting1402to the adapter1404. In some examples, at least part of the ridge1424and/or at least another part of the bottom surface of the ring1422may extend substantially perpendicular to the longitudinal axis X of the adapter1404. Additionally, the adapter1404may include a substantially cylindrical sidewall extending proximally from the ridge1424. Such a sidewall may space the ridge1424from, for example, the top surface of the cuff12such that the shelves1410a,1410bof the fitting1402may have room to mate with the ridge1424beneath the ring1422. With reference to at leastFIG.14a, in some examples the system1400may include an adapter1404having an annular groove1428, and a fitting1402having one or more protruding members1434configured to at least partly engage the groove1428when the fitting1402is releasably attached to the adapter1404. For example, similar to the adapter704described with respect toFIGS.7a-7g, in example embodiments, the ring1422of the adapter1404may include a groove1428extending at least partly (and, in some examples, completely) around (e.g., concentrically) the longitudinal axis X, and being shaped, sized, located, and/or otherwise configured to accept the member1434and/or other structural feature of the fitting1402. As shown in at leastFIG.14a, such a member1434may extend distally (e.g., in a direction toward the cuff12when the fitting1402is removably attached to the adapter1404) from an outer surface1436of the fitting1402disposed opposite and/or facing the top surface1426when the fitting1402is removably attached to the adapter1404. In some examples, the member1434may comprise a shaft, pin, rod, tab, rib, ring, ridge, flange, and/or other extension of the body1406protruding from the outer surface1436, and the member1434may be useful in laterally and/or otherwise aligning the fitting1402with the adapter1404when removably attaching the fitting1402to the adapter1404. In particular, as shown inFIG.14a, in some examples the member1434may comprise one or more tabs, ribs, rings, ridges, flanges, and/or other structures extending distally from the outer surface1436, and having at least one surface that tapers radially inwardly from the outer surface1436toward the outer wall1440. For example, the member1434may comprise at least one rib, tab, or other such structure including a radially outermost sidewall1468. The sidewall1468may extend from a proximal end of the member1434(e.g., proximate or at the outer surface1436) to a distal end of the member1434, and the sidewall1468may taper radially inwardly from the proximal end of the member1434to the distal end of the member1434. The sidewall1468may be substantially planar, substantially concave, substantially convex, and/or any other shape or configuration, and such a configuration may match a configuration of a corresponding portion of the groove1428. For instance, the member1434may also include a base1464disposed at the distal end thereof, and the groove1428may include a base1452and a radially outermost sidewall1454extending proximally from the base1452. For example, the sidewall1454may extend proximally from the base1452to the top surface1426of the ring1422.
In such examples, the sidewall1454may mate with and/or otherwise at least partially engage the radially outermost sidewall1468of the member1434when the fitting1402is removably attached to the adapter1404. Similarly, in some examples the base1452may mate with and/or otherwise at least partially engage the base1464of the member1434when the fitting1402is removably attached to the adapter1404. Such engagement between the member1434and the groove1428may assist in minimizing and/or substantially eliminating lateral movement of the fitting1402relative to the adapter1404when the fitting1402is removably attached to the adapter1404. In any such examples, the member1434may be shaped, sized, dimensioned, located, and/or otherwise configured to provide additional rigidity, support, and/or stability to the removable connection between the fitting1402and the adapter1404, while still facilitating rotation of the fitting1402relative to the adapter1404. In such examples, the groove1428and the member1434may be positioned and/or otherwise configured such that disposing at least part of the member1434within the groove1428when removably attaching the fitting1402to the adapter1404may cause the longitudinal axis Z of the fitting1402(FIG.2) to be collinear with the longitudinal axis X of the adapter1404. Moreover, the sidewall1468of the member1434and the sidewall1454of the groove1428may be disposed at complementary angles relative to, for example, the longitudinal axis X to facilitate such a mating relationship. For instance, the sidewall1454of the groove1428may be disposed at an acute included angle Θ relative to the longitudinal axis X. In such examples, the sidewall1468of the member1434may be disposed at a complementary acute included angle A. Such an example relationship is shown inFIG.14awith respect to an axis1476extending perpendicular to the longitudinal axis X. With reference to at leastFIG.14b, in some examples the system1400may include an adapter1404in which the annular groove1428described above with respect toFIG.14ahas been omitted. For example, as shown inFIG.14b, an example system1400may include a fitting1402and an adapter1404that are substantially similar to the fitting1402and adapter1404described above with respect toFIG.14a. As shown inFIG.14b, however, in some embodiments the top surface1426of the ring1422may terminate at a substantially planar surface (e.g., a base1452). In such examples, the fitting1402may also include a corresponding substantially planar base1456configured to contact, engage, and/or otherwise mate with the base1452when the fitting1402is removably attached to the adapter1404. In such examples, the base1456of the fitting1402may comprise a substantially planar, substantially annular surface extending radially away from the outer wall1440. In some examples, the fitting1402may also include one or more additional sidewalls1458extending substantially parallel to the outer wall1440. In any such examples, the base1456may extend between the outer wall1440and the sidewall1458. In some examples, the base1456may extend from the outer wall1440to the sidewall1458. In any such examples, the base1452of the adapter1404may comprise a substantially planar annular surface, and the base1456may contact, engage, and/or otherwise mate with at least part of the base1452when the fitting1402is removably attached to the adapter1404.
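Referring back to the angled engagement ofFIG.14a, the relationship between the sidewall angles may be summarized as follows. This is an illustrative formulation only, assuming the angle conventions described above (the angle Θ measured from the longitudinal axis X, and the angle A measured from the axis1476extending perpendicular to the longitudinal axis X):

Θ + A = 90 degrees

Under this assumption, the sidewall1468of the member1434and the sidewall1454of the groove1428are parallel, and may therefore seat flush against one another when the fitting1402is removably attached to the adapter1404.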
With reference to at leastFIG.14c, in some examples the system1400may include an adapter1404having an annular groove1428, and a fitting1402having one or more protruding members1434configured to at least partly engage (e.g., at least partly extend into, at least partly contact, at least partly slidably engage, etc.) the groove1428when the fitting1402is releasably attached to the adapter1404. For example, similar to the adapter704described with respect toFIGS.7a-7g, in example embodiments, the ring1422of the adapter1404may include a groove1428extending at least partly (and, in some examples, completely) around (e.g., concentrically) the longitudinal axis X, and being shaped, sized, located, and/or otherwise configured to accept the member1434and/or other structural feature of the fitting1402. As shown in at leastFIG.14c, such a member1434may extend distally (e.g., in a direction toward the cuff12when the fitting1402is removably attached to the adapter1404) from an outer surface1436of the fitting1402disposed opposite and/or facing the top surface1426when the fitting1402is removably attached to the adapter1404. As described above with respect toFIG.14a, in some examples, the member1434may comprise a shaft, pin, rod, tab, rib, ring, ridge, flange, and/or other extension of the body1406protruding from the outer surface1436, and the engagement between the member1434and the groove1428may assist in laterally and/or otherwise aligning the fitting1402with the adapter1404when removably attaching the fitting1402to the adapter1404. WhileFIG.14cillustrates the groove1428and the member1434being spaced radially from, for example, a radially innermost end1430of the top surface1426, in other examples the member1434, groove1428, ring1422, and/or other components of the fitting1402and/or of the adapter1404may be disposed and/or configured substantially as described above with respect toFIGS.7f,10a,14a, and/or one or more other embodiments of the present disclosure. For example, in some embodiments the groove1428and the member1434shown inFIG.14cmay be configured substantially the same as the corresponding groove728and member734described above with respect toFIG.7f. Moreover, as shown inFIG.14c, in addition to the groove1428, the ring1422of the adapter1404may also include one or more surfaces, sections, and/or other modified portions1474configured to accept and/or otherwise engage with a corresponding additional member1472of the fitting1402. In some examples, such a modified portion1474of the ring1422may comprise an additional groove, channel, notch, indentation, hole, or other portion of the ring1422that, relative to the ring722and/or the substantially convex top surface726shown inFIG.7a, has been omitted (e.g., via a molding process) or removed (e.g., via a mechanical or other removal process) from the ring1422. In such examples, the additional groove, channel, notch, indentation, hole, or other modified portion1474of the ring1422may be substantially annular, may extend (at least partially) circumferentially around the longitudinal axis X, and/or may otherwise be configured similar to the groove1428to facilitate rotation of the fitting1402relative to the adapter1404. Further, in the embodiment ofFIG.14cthe modified portion1474of the ring1422may be formed and/or disposed between the groove1428and the radially innermost end1430of the top surface1426.
Additionally or alternatively, the modified portion1474of the ring1422may be formed by and/or disposed on a portion of the ring1422radially outward of the groove1428. For example, at least part of the modified portion1474may be formed and/or disposed between the groove1428and a radially outermost end of the top surface1426. As shown inFIG.14c, in some examples the modified portion1474of the ring1422may comprise a substantially planar portion of the top surface1426. Alternatively, in other embodiments the modified portion1474of the ring1422may comprise a substantially curved, substantially concave, and/or substantially convex surface or portion of the ring1422. Further, as shown inFIG.14c, in some examples at least a portion1490of the top surface1426may extend radially inwardly from the modified portion1474, and such a portion1490of the top surface1426may extend substantially perpendicular to the longitudinal axis X. For example, such a portion1490of the top surface1426may extend radially inward from a first end1484(e.g., a radially innermost end) of the modified portion1474to a radially outermost sidewall of the groove1428and/or to the radially innermost end1430of the top surface1426. Alternatively, as shown in the embodiment ofFIG.14e, in other examples the top surface1426may be substantially planar and may extend substantially linearly from a radially innermost end1430of the top surface1426to a radially outermost end1432of the top surface1426. In the example ofFIG.14e, the groove1428may be spaced radially from the radially innermost end1430by any desired distance. In still further embodiments, the groove1428and the member1434illustrated in at leastFIG.14emay be omitted. In any of the examples described herein, the groove1428may include a radially innermost sidewall radially spaced from the radially innermost end1430of the top surface1426by at least a portion1488of the top surface1426. In such examples, the substantially planar surface, curved surface, and/or other surface of the modified portion1474may include a first end1484(e.g., a radially innermost end) disposed at a location on the top surface1426radially spaced from the groove1428(e.g., radially spaced from a radially outermost sidewall of the groove1428disposed opposite the radially innermost sidewall of the groove1428). Additionally, in such examples the substantially planar surface, curved surface, and/or other surface of the modified portion1474may include a second end1486(e.g., a radially outermost end) disposed proximate a radially outermost end of the top surface1426. Additionally, in any of the examples described herein the shape, size, location, orientation, and/or other configuration of the modified portion1474may match a configuration of the additional member1472of the fitting1402. For instance, as shown inFIGS.14cand14e, the additional member1472of the fitting1402may comprise a substantially planar portion of the outer surface1436. Alternatively, in other embodiments the additional member1472may comprise a substantially curved, substantially concave, and/or substantially convex surface or portion of the outer surface1436opposite and facing the modified portion1474. In such examples, the additional member1472may mate with and/or otherwise at least partially engage the modified portion1474when the fitting1402is removably attached to the adapter1404. 
Such engagement between the additional member1472and the modified portion1474may assist in minimizing and/or substantially eliminating lateral movement of the fitting1402relative to the adapter1404when the fitting1402is removably attached to the adapter1404. Such engagement may also provide additional rigidity, support, and/or stability to the removable connection between the fitting1402and the adapter1404, while still facilitating rotation of the fitting1402relative to the adapter1404. Such engagement may also cause the longitudinal axis Z of the fitting1402(FIG.2) to be collinear with the longitudinal axis X of the adapter1404. Further, as shown inFIGS.14cand14e, the additional member1472and the modified portion1474may be disposed at complementary angles relative to, for example, the longitudinal axis X when the fitting1402is removably attached to the adapter1404. For instance, the portion of the top surface1426forming the modified portion1474may be disposed at an acute included angle Θ relative to an axis1478that extends parallel to the longitudinal axis X. In such examples, the portion of the outer surface1436forming the additional member1472may be disposed at a complementary acute included angle A relative to an axis1476that extends perpendicular to the axis1478and the longitudinal axis X. Further, it is understood that in some embodiments the additional member1472may include and/or comprise one or more pins, flanges, detents, and/or other extensions, and in such embodiments the portion of the top surface1426forming the modified portion1474may include one or more grooves, channels, dimples, indents, and/or other structures configured to accept such extensions. In still further embodiments, it is understood that the groove1428may be formed by the fitting1402and the member1434may be formed by the adapter1404. With reference to at leastFIG.14d, and substantially similar to the embodiment described above with respect toFIG.14c, in some examples the system1400may include an adapter1404having an annular groove1428, and a fitting1402having one or more protruding members1434configured to at least partly engage (e.g., at least partly extend into, at least partly contact, at least partly slidably engage, etc.) the groove1428when the fitting1402is releasably attached to the adapter1404. For example, similar to the adapter1404described with respect toFIG.14c, in example embodiments, the ring1422of the adapter1404shown inFIG.14dmay include a groove1428extending at least partly (and, in some examples, completely) around (e.g., concentrically) the longitudinal axis X, and being shaped, sized, located, and/or otherwise configured to accept the member1434and/or other structural feature of the fitting1402. In such embodiments, the member1434and/or other components of the fitting1402shown inFIG.14dmay be substantially similar to and/or the same as the corresponding components of the fitting1402described above with respect toFIG.14c, and the groove1428and/or other components of the adapter1404may be substantially similar to and/or the same as the corresponding components of the adapter1404described above with respect toFIG.14c. In some embodiments, as shown inFIG.14d, in addition to the groove1428, the ring1422of the adapter1404may also include one or more surfaces, sections, and/or other modified portions1480configured to accept and/or otherwise engage with a corresponding additional member of the fitting1402.
In the example shown in the partial cross-section ofFIG.14d, the fitting1402may include a first arm1408a(not shown) and a second arm1408bopposite the first arm1408a. In such examples, a shelf1410a(not shown) and/or other radially inwardly facing surface or portion of the first arm1408amay form a first additional member1482a(not shown) of the fitting1402, and a shelf1410band/or other radially inwardly facing surface or portion of the second arm1408bmay form a second additional member1482bof the fitting1402. As shown inFIG.14d, in such examples the modified portion1480of the ring1422may be formed by and/or may comprise a surface and/or other portion of the ring1422disposed opposite the top surface1426and/or disposed on a radially outwardly facing portion of the ring1422. In particular, in such examples, the modified portion1480may at least partly face the top surface of the cuff12. In some examples, the modified portion1480of the ring1422shown inFIG.14dmay comprise an additional groove, channel, notch, indentation, hole, or other portion of the ring1422that, relative to the ring722and/or the substantially planar bottom surface or ridge724shown inFIG.7a, has been omitted (e.g., via a molding process) or removed (e.g., via a mechanical or other removal process) from the ring1422. In such examples, the additional groove, channel, notch, indentation, hole, or other modified portion1480of the ring1422shown inFIG.14dmay be substantially annular, may extend (at least partially) circumferentially around the longitudinal axis X, and/or may otherwise be configured similar to the groove1428to facilitate rotation of the fitting1402relative to the adapter1404. As shown inFIG.14d, in some examples the modified portion1480of the ring1422may comprise a substantially planar surface and/or other portion of the ring1422extending distally and radially inwardly between and/or from the top surface1426to the ridge1424. Alternatively, in other embodiments the modified portion1480of the ring1422may comprise a substantially curved, substantially concave, and/or substantially convex surface or portion of the ring1422. In any of the examples described herein, the groove1428may include a radially innermost sidewall that is radially spaced from a radially innermost end (e.g., the radially innermost end1430described above with respect toFIG.14c) of the top surface1426by a portion1488of the top surface1426. In such examples, the substantially planar surface, curved surface, and/or other surface of the modified portion1480may include a first end1484(e.g., a radially outermost end) disposed proximate, adjacent, and/or at a radially outermost end of the top surface1426. Further, in such examples the substantially planar surface, curved surface, and/or other surface of the modified portion1480may include a second end1486(e.g., a radially innermost end) disposed proximate, adjacent, and/or at a radially outermost end of the ridge1424. In such examples, the first end1484of the modified portion1480may be radially spaced from a radially outermost sidewall of the groove1428by an additional portion1490of the top surface1426. Additionally, in any of the examples described herein the shape, size, location, orientation, and/or other configuration of the modified portion1480may match a configuration of the first and second additional members1482a,1482bof the fitting1402. For instance, the first and second additional members1482a,1482bmay comprise substantially planar surfaces of the respective arms1408a,1408b.
Alternatively, in other embodiments the first and second additional members1482a,1482bmay comprise substantially curved, substantially concave, and/or substantially convex surfaces or portions of the respective arms1408a,1408b. In such examples, the first and second additional members1482a,1482bmay mate with and/or otherwise at least partially engage the modified portion1480when the fitting1402is removably attached to the adapter1404. Such engagement may assist in minimizing and/or substantially eliminating lateral movement of the fitting1402relative to the adapter1404when the fitting1402is removably attached to the adapter1404. Such engagement may also provide additional rigidity, support, and/or stability to the removable connection between the fitting1402and the adapter1404, while still facilitating rotation of the fitting1402relative to the adapter1404. Such engagement may also cause the longitudinal axis Z of the fitting1402(FIG.2) to be collinear with the longitudinal axis X of the adapter1404. Further, the first and second additional members1482a,1482band the modified portion1480may be disposed at complementary angles relative to, for example, the longitudinal axis X when the fitting1402is removably attached to the adapter1404. For instance, the portion of the ring1422forming the modified portion1480may be disposed at an acute included angle Θ relative to the longitudinal axis X. In such examples, the portions of the respective arms1408a,1408bforming the first and second additional members1482a,1482bmay be disposed at respective complementary acute included angles A relative to the longitudinal axis X and an axis1476extending perpendicular to the longitudinal axis X. Further, it is understood that in some embodiments at least one of the first and second additional members1482a,1482bmay include and/or comprise one or more pins, flanges, detents, and/or other extensions, and in such embodiments the portion of the ring1422forming the modified portion1480may include one or more grooves, channels, dimples, indents, and/or other structures configured to accept such extensions. Moreover, with reference to at leastFIG.14f, in some examples the system1400may include an adapter1404having an annular groove1428, and a fitting1402having one or more protruding members1434configured to at least partly engage (e.g., at least partly extend into, at least partly contact, at least partly slidably engage, etc.) the groove1428when the fitting1402is releasably attached to the adapter1404. For example, similar to the adapter704described with respect toFIGS.7a-7g, in example embodiments, the ring1422of the adapter1404may include a groove1428extending at least partly (and, in some examples, completely) around (e.g., concentrically) the longitudinal axis X, and being shaped, sized, located, and/or otherwise configured to accept the member1434and/or other structural feature of the fitting1402. As shown in at leastFIG.14f, such a member1434may extend distally (e.g., in a direction toward the cuff12when the fitting1402is removably attached to the adapter1404) from an outer surface1436of the fitting1402disposed opposite and/or facing the top surface1426when the fitting1402is removably attached to the adapter1404.
As described above with respect toFIG.14a, in some examples, the member1434may comprise a shaft, pin, rod, tab, rib, ring, ridge, flange, and/or other extension of the body1406protruding from the outer surface1436, and the engagement between the member1434and the groove1428may assist in laterally and/or otherwise aligning the fitting1402with the adapter1404when removably attaching the fitting1402to the adapter1404. Example members1434of the present disclosure may have any shape, size, cross-sectional profile, and/or other configuration to facilitate such functionality. For example,FIG.14fillustrates an embodiment in which the member1434comprises a substantially V-shaped cross-section. In such examples, the member1434may include a radially outermost sidewall1466and a radially innermost sidewall1468, and the radially outermost sidewall1466may extend at an acute included angle relative to the radially innermost sidewall1468. Moreover, the configuration of the groove1428shown inFIG.14fmay substantially match and/or otherwise correspond to the shape, size, cross-sectional profile, location, and/or other configuration of the member1434. For example, the groove1428may include a radially innermost sidewall1454and a radially outermost sidewall1456, and the radially innermost sidewall1454may extend at an acute included angle relative to the radially outermost sidewall1456. In such examples, the radially innermost sidewall1454of the groove1428may extend substantially parallel to the radially innermost sidewall1468of the member1434when the fitting1402is removably attached to the adapter1404. Similarly, the radially outermost sidewall1456of the groove1428may extend substantially parallel to the radially outermost sidewall1466of the member1434when the fitting1402is removably attached to the adapter1404. Accordingly, in such embodiments at least part of the member1434may contact and/or slidably engage the groove1428when the fitting1402is removably attached to the adapter1404. Further, in the embodiment ofFIG.14f, at least one of the sidewalls1454,1456of the groove1428and/or at least one of the sidewalls1466,1468of the member1434may be substantially planar, substantially curved, substantially convex, substantially concave, substantially tapered, and/or any other configuration.
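As an illustrative aid only (and not a component of the disclosed systems), the flush-seating condition described above for the V-shaped member1434and the groove1428may be sketched in Python. The names VProfile and mates_flush, and the convention of measuring each sidewall angle in degrees from the longitudinal axis X, are hypothetical assumptions made for this sketch:

import math
from dataclasses import dataclass

@dataclass
class VProfile:
    """A V-shaped cross-section described by its two sidewall angles,
    measured in degrees from the longitudinal axis X (assumed convention)."""
    inner_sidewall_deg: float  # e.g., sidewall1468 (member) or sidewall1454 (groove)
    outer_sidewall_deg: float  # e.g., sidewall1466 (member) or sidewall1456 (groove)

    def included_angle_deg(self) -> float:
        # Acute included angle between the two sidewalls of the V.
        return abs(self.inner_sidewall_deg - self.outer_sidewall_deg)

def mates_flush(member: VProfile, groove: VProfile, tol_deg: float = 0.5) -> bool:
    """True when each sidewall of the member is substantially parallel to the
    corresponding sidewall of the groove, so the member may contact and
    slidably engage the groove while the fitting rotates about axis X."""
    return (math.isclose(member.inner_sidewall_deg, groove.inner_sidewall_deg, abs_tol=tol_deg)
            and math.isclose(member.outer_sidewall_deg, groove.outer_sidewall_deg, abs_tol=tol_deg))

# Example: a member and groove whose sidewalls diverge at a 30-degree included angle.
member = VProfile(inner_sidewall_deg=15.0, outer_sidewall_deg=-15.0)
groove = VProfile(inner_sidewall_deg=15.0, outer_sidewall_deg=-15.0)
assert member.included_angle_deg() == 30.0
assert mates_flush(member, groove)

In this sketch, parallel corresponding sidewalls are what permit the sliding engagement described above; an angular mismatch beyond the tolerance would prevent the member from seating flush in the groove.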
The following clauses describe, alone and/or in combination, example embodiments of the present disclosure:A: A blood pressure cuff adapter includes a substantially rigid body having a distal portion, a proximal portion, a substantially cylindrical inner wall forming a central opening of the body, the inner wall extending from the distal portion to the proximal portion, and a longitudinal axis extending substantially centrally through the opening, the distal portion including: an annular ring having a top surface and a groove, the groove extending at least partly around the longitudinal axis and being configured to accept a corresponding member of a fitting when the fitting is removably attached to the adapter, and a ridge disposed opposite the top surface, the ridge extending substantially perpendicular to the longitudinal axis; and a seal disposed adjacent to the inner wall, the seal configured to form a substantially fluid-tight seal with the fitting when the fitting is removably attached to the adapter.B: The blood pressure cuff adapter of clause A, wherein the top surface comprises a convex surface extending radially away from the longitudinal axis from a distal end of the top surface to a proximal end of the top surface, the groove being disposed between the distal end and the proximal end.C: The blood pressure cuff adapter of any of the above clauses, wherein the member comprises one of a pin, a ring, and an arcuate ring segment extending substantially perpendicularly from an outer surface of the fitting.D: The blood pressure cuff adapter of any of the above clauses, wherein the groove comprises at least one detent configured to contact the member as the fitting is rotated about the longitudinal axis of the adapter.E: The blood pressure cuff adapter of any of the above clauses, wherein the groove comprises a base, a first sidewall, and a second sidewall opposite the first sidewall, the at least one detent being disposed on the base.F: The blood pressure cuff adapter of any of the above clauses, wherein the groove comprises a base having a trough and a peak, the peak being disposed axially closer to the top surface of the ring than the trough.G: The blood pressure cuff adapter of any of the above clauses, wherein the groove includes a wall prohibiting 360 degree rotation of the fitting about the longitudinal axis when the fitting is removably attached to the adapter.H: The blood pressure cuff adapter of any of the above clauses, further comprising at least one of an RFID tag, a bar code, or a conductor disposed on a base of the groove.I:
The blood pressure cuff adapter of any of the above clauses, wherein the groove comprises a base extending radially from the inner wall, and a sidewall extending from the base to the top surface of the ring, the top surface comprising a curved surface extending from the sidewall of the groove.J: The blood pressure cuff adapter of any of the above clauses, wherein the top surface of the ring extends from the inner wall, and wherein the groove comprises a sidewall extending from the top surface to a base of the groove, the base extending radially from the sidewall to a radially outermost portion of the ring.K: The blood pressure cuff adapter of any of the above clauses, wherein the member includes a first sidewall disposed at an acute included angle relative to the longitudinal axis, and the groove includes a second sidewall disposed at the acute included angle, wherein the first sidewall slidably engages the second sidewall when the fitting is removably attached to the adapter.L: The blood pressure cuff adapter of any of the above clauses, wherein the top surface forms a modified portion of the ring disposed radially outward of the groove, the modified portion comprising a substantially planar surface extending at an acute included angle relative to the longitudinal axis, wherein an additional member of the fitting is configured to slidably engage the modified portion of the ring when the fitting is removably attached to the adapter.M: The blood pressure cuff adapter of any of the above clauses, wherein the modified portion of the ring includes an additional groove, and wherein the additional member of the fitting includes an extension configured to engage the additional groove when the fitting is removably attached to the adapter.N: The blood pressure cuff adapter of any of the above clauses, wherein the ring includes a modified portion disposed between the top surface and the ridge, the modified portion comprising a substantially planar surface extending at an acute included angle relative to an axis that is perpendicular to the longitudinal axis, wherein an additional member of the fitting is configured to slidably engage the modified portion of the ring when the fitting is removably attached to the adapter.O: The blood pressure cuff adapter of any of the above clauses, wherein: the top surface comprises a substantially planar surface extending from a radially innermost end disposed proximate the inner wall to a radially outermost end, the groove is disposed between the radially innermost end and the radially outermost end, and the top surface extends at an acute included angle relative to the longitudinal axis.P: A blood pressure cuff adapter includes a substantially rigid body having a distal portion, a proximal portion, a substantially cylindrical inner wall forming a central opening of the body, the inner wall extending from the distal portion to the proximal portion, and a longitudinal axis extending substantially centrally through the opening, the distal portion including: an annular ring having a top surface and a groove, the groove extending at least partly around the longitudinal axis and being configured to accept a corresponding member of a fitting when the fitting is removably attached to the adapter, and the top surface forming a modified portion of the ring disposed radially outward of the groove, the modified portion comprising a substantially planar surface extending at an acute included angle relative to the longitudinal axis, and a ridge disposed opposite the
top surface, the ridge extending substantially perpendicular to the longitudinal axis; and a seal disposed adjacent to the inner wall, the seal configured to form a substantially fluid-tight seal with the fitting when the fitting is removably attached to the adapter.Q: The blood pressure cuff adapter of any of the above clauses, the distal portion of the body further including: a substantially cylindrical sidewall extending from the ridge to a top surface of the proximal portion; and a feature disposed proximate the groove, the feature comprising at least one of a layer of reflective paint, a layer of reflective ink, an RFID tag, or a barcode.R: The blood pressure cuff adapter of any of the above clauses, wherein: the groove includes a radially innermost sidewall radially spaced from a radially innermost end of the top surface; the substantially planar surface of the modified portion includes a first end disposed at a location on the top surface radially spaced from the groove; and the substantially planar surface includes a second end disposed proximate a radially outermost end of the top surface.S: A blood pressure cuff adapter includes: a substantially rigid body having a distal portion, a proximal portion, a substantially cylindrical inner wall forming a central opening of the body, the inner wall extending from the distal portion to the proximal portion, and a longitudinal axis extending substantially centrally through the opening, the distal portion including: an annular ring having a top surface, a groove extending at least partly around the longitudinal axis and being configured to accept a corresponding member of a fitting when the fitting is removably attached to the adapter, a ridge disposed opposite the top surface, the ridge extending substantially perpendicular to the longitudinal axis, and a modified portion disposed between the top surface and the ridge, the modified portion comprising a substantially planar surface extending at an acute included angle relative to an axis that is perpendicular to the longitudinal axis; and a seal disposed adjacent to the inner wall, the seal configured to form a substantially fluid-tight seal with the fitting when the fitting is removably attached to the adapter.T: The blood pressure cuff adapter of any of the above clauses, wherein: the groove includes a radially innermost sidewall radially spaced from a radially innermost end of the top surface; the substantially planar surface of the modified portion includes a first end disposed proximate a radially outermost end of the top surface; and the substantially planar surface includes a second end disposed proximate a radially outermost end of the ridge. The example systems and methods of the present disclosure overcome various deficiencies of known prior art devices. Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure contained herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the present disclosure being indicated by the following claims. | 190,931
11857296 | DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure relates. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one embodiment may be combined with the features, components, and/or steps described with respect to other embodiments of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately. Referring toFIGS.1and2, shown therein is a vessel100having a stenosis according to an embodiment of the present disclosure. In that regard,FIG.1is a diagrammatic perspective view of the vessel100, whileFIG.2is a partial cross-sectional perspective view of a portion of the vessel100taken along section line2-2ofFIG.1. Referring more specifically toFIG.1, the vessel100includes a proximal portion102and a distal portion104. A lumen106extends along the length of the vessel100between the proximal portion102and the distal portion104. In that regard, the lumen106is configured to allow the flow of fluid through the vessel. In some instances, the vessel100is a blood vessel. In some particular instances, the vessel100is a coronary artery. In such instances, the lumen106is configured to facilitate the flow of blood through the vessel100. As shown, the vessel100includes a stenosis108between the proximal portion102and the distal portion104. Stenosis108is generally representative of any blockage or other structural arrangement that results in a restriction to the flow of fluid through the lumen106of the vessel100. Embodiments of the present disclosure are suitable for use in a wide variety of vascular applications, including without limitation coronary, peripheral (including but not limited to lower limb, carotid, and neurovascular), renal, and/or venous. Where the vessel100is a blood vessel, the stenosis108may be a result of plaque buildup, including without limitation plaque components such as fibrous, fibro-lipidic (fibro fatty), necrotic core, calcified (dense calcium), blood, fresh thrombus, and mature thrombus. Generally, the composition of the stenosis will depend on the type of vessel being evaluated. In that regard, it is understood that the concepts of the present disclosure are applicable to virtually any type of blockage or other narrowing of a vessel that results in decreased fluid flow. Referring more particularly toFIG.2, the lumen106of the vessel100has a diameter110proximal of the stenosis108and a diameter112distal of the stenosis. In some instances, the diameters110and112are substantially equal to one another. In that regard, the diameters110and112are intended to represent healthy portions, or at least healthier portions, of the lumen106in comparison to stenosis108. Accordingly, these healthier portions of the lumen106are illustrated as having a substantially constant cylindrical profile and, as a result, the height or width of the lumen has been referred to as a diameter. 
However, it is understood that in many instances these portions of the lumen106will also have plaque buildup, a non-symmetric profile, and/or other irregularities, but to a lesser extent than stenosis108and, therefore, will not have a cylindrical profile. In such instances, the diameters110and112are understood to be representative of a relative size or cross-sectional area of the lumen and do not imply a circular cross-sectional profile. As shown inFIG.2, stenosis108includes plaque buildup114that narrows the lumen106of the vessel100. In some instances, the plaque buildup114does not have a uniform or symmetrical profile, making angiographic evaluation of such a stenosis unreliable. In the illustrated embodiment, the plaque buildup114includes an upper portion116and an opposing lower portion118. In that regard, the lower portion118has an increased thickness relative to the upper portion116that results in a non-symmetrical and non-uniform profile relative to the portions of the lumen proximal and distal of the stenosis108. As shown, the plaque buildup114decreases the available space for fluid to flow through the lumen106. In particular, the cross-sectional area of the lumen106is decreased by the plaque buildup114. At the narrowest point between the upper and lower portions116,118, the lumen106has a height120, which is representative of a reduced size or cross-sectional area relative to the diameters110and112proximal and distal of the stenosis108. Note that the stenosis108, including the plaque buildup114, is exemplary in nature and should not be considered limiting in any way. In that regard, it is understood that the stenosis108has other shapes and/or compositions that limit the flow of fluid through the lumen106in other instances. While the vessel100is illustrated inFIGS.1and2as having a single stenosis108and the description of the embodiments below is primarily made in the context of a single stenosis, it is nevertheless understood that the devices, systems, and methods described herein have similar application for a vessel having multiple stenosis regions. Referring now toFIG.3, the vessel100is shown with instruments130and132positioned therein according to an embodiment of the present disclosure. In general, instruments130and132may be any form of device, instrument, or probe sized and shaped to be positioned within a vessel. In the illustrated embodiment, instrument130is generally representative of a guide wire, while instrument132is generally representative of a catheter. In that regard, instrument130extends through a central lumen of instrument132. However, in other embodiments, the instruments130and132take other forms. In that regard, the instruments130and132are of similar form in some embodiments. For example, in some instances, both instruments130and132are guide wires. In other instances, both instruments130and132are catheters. On the other hand, the instruments130and132are of different form in some embodiments, such as the illustrated embodiment, where one of the instruments is a catheter and the other is a guide wire. Further, in some instances, the instruments130and132are disposed coaxial with one another, as shown in the illustrated embodiment ofFIG.3. In other instances, one of the instruments extends through an off-center lumen of the other instrument. In yet other instances, the instruments130and132extend side-by-side. In some particular embodiments, at least one of the instruments is a rapid-exchange device, such as a rapid-exchange catheter.
In such embodiments, the other instrument is a buddy wire or other device configured to facilitate the introduction and removal of the rapid-exchange device. Further still, in other instances, instead of two separate instruments130and132, a single instrument is utilized. In some embodiments, the single instrument incorporates aspects of the functionalities (e.g., data acquisition) of both instruments130and132. Instrument130is configured to obtain diagnostic information about the vessel100. In that regard, the instrument130includes one or more sensors, transducers, and/or other monitoring elements configured to obtain the diagnostic information about the vessel. The diagnostic information includes one or more of pressure, flow (velocity), images (including images obtained using ultrasound (e.g., IVUS), OCT, thermal, and/or other imaging techniques), temperature, and/or combinations thereof. The one or more sensors, transducers, and/or other monitoring elements are positioned adjacent a distal portion of the instrument130in some instances. In that regard, the one or more sensors, transducers, and/or other monitoring elements are positioned less than 30 cm, less than 10 cm, less than 5 cm, less than 3 cm, less than 2 cm, and/or less than 1 cm from a distal tip134of the instrument130in some instances. In some instances, at least one of the one or more sensors, transducers, and/or other monitoring elements is positioned at the distal tip of the instrument130. The instrument130includes at least one element configured to monitor pressure within the vessel100. The pressure monitoring element can take the form of a piezo-resistive pressure sensor, a piezo-electric pressure sensor, a capacitive pressure sensor, an electromagnetic pressure sensor, a fluid column (the fluid column being in communication with a fluid column sensor that is separate from the instrument and/or positioned at a portion of the instrument proximal of the fluid column), an optical pressure sensor, and/or combinations thereof. In some instances, one or more features of the pressure monitoring element are implemented as a solid-state component manufactured using semiconductor and/or other suitable manufacturing techniques. Examples of commercially available guide wire products that include suitable pressure monitoring elements include, without limitation, the PrimeWire PRESTIGE® pressure guide wire, the PrimeWire® pressure guide wire, and the ComboWire® XT pressure and flow guide wire, each available from Volcano Corporation, as well as the PressureWire™ Certus guide wire and the PressureWire™ Aeris guide wire, each available from St. Jude Medical, Inc. Generally, the instrument130is sized such that it can be positioned through the stenosis108without significantly impacting fluid flow across the stenosis, which would impact the distal pressure reading. Accordingly, in some instances the instrument130has an outer diameter of 0.018″ or less. In some embodiments, the instrument130has an outer diameter of 0.014″ or less. Instrument132is also configured to obtain diagnostic information about the vessel100. In some instances, instrument132is configured to obtain the same diagnostic information as instrument130. In other instances, instrument132is configured to obtain different diagnostic information than instrument130, which may include additional diagnostic information, less diagnostic information, and/or alternative diagnostic information.
The diagnostic information obtained by instrument132includes one or more of pressure, flow (velocity), images (including images obtained using ultrasound (e.g., IVUS), OCT, thermal, and/or other imaging techniques), temperature, and/or combinations thereof. Instrument132includes one or more sensors, transducers, and/or other monitoring elements configured to obtain this diagnostic information. In that regard, the one or more sensors, transducers, and/or other monitoring elements are positioned adjacent a distal portion of the instrument132in some instances. In that regard, the one or more sensors, transducers, and/or other monitoring elements are positioned less than 30 cm, less than 10 cm, less than 5 cm, less than 3 cm, less than 2 cm, and/or less than 1 cm from a distal tip136of the instrument132in some instances. In some instances, at least one of the one or more sensors, transducers, and/or other monitoring elements is positioned at the distal tip of the instrument132. Similar to instrument130, instrument132also includes at least one element configured to monitor pressure within the vessel100. The pressure monitoring element can take the form of a piezo-resistive pressure sensor, a piezo-electric pressure sensor, a capacitive pressure sensor, an electromagnetic pressure sensor, a fluid column (the fluid column being in communication with a fluid column sensor that is separate from the instrument and/or positioned at a portion of the instrument proximal of the fluid column), an optical pressure sensor, and/or combinations thereof. In some instances, one or more features of the pressure monitoring element are implemented as a solid-state component manufactured using semiconductor and/or other suitable manufacturing techniques. Currently available catheter products that are suitable for use with one or more of Siemens AXIOM Sensis, Mennen Horizon XVu, and Philips Xper IM Physiomonitoring 5, and that include pressure monitoring elements, can be utilized for instrument132in some instances. In accordance with aspects of the present disclosure, at least one of the instruments130and132is configured to monitor a pressure within the vessel100distal of the stenosis108and at least one of the instruments130and132is configured to monitor a pressure within the vessel proximal of the stenosis. In that regard, the instruments130,132are sized and shaped to allow positioning of the at least one element configured to monitor pressure within the vessel100to be positioned proximal and/or distal of the stenosis108as necessary based on the configuration of the devices. In that regard,FIG.3illustrates a position138suitable for measuring pressure distal of the stenosis108. In that regard, the position138is less than 5 cm, less than 3 cm, less than 2 cm, less than 1 cm, less than 5 mm, and/or less than 2.5 mm from the distal end of the stenosis108(as shown inFIG.2) in some instances.FIG.3also illustrates a plurality of suitable positions for measuring pressure proximal of the stenosis108. In that regard, positions140,142,144,146, and148each represent a position that is suitable for monitoring the pressure proximal of the stenosis in some instances. In that regard, the positions140,142,144,146, and148are positioned at varying distances from the proximal end of the stenosis108ranging from more than 20 cm down to about 5 mm or less. Generally, the proximal pressure measurement will be spaced from the proximal end of the stenosis.
Accordingly, in some instances, the proximal pressure measurement is taken at a distance equal to or greater than an inner diameter of the lumen of the vessel from the proximal end of the stenosis. In the context of coronary artery pressure measurements, the proximal pressure measurement is generally taken at a position proximal of the stenosis and distal of the aorta, within a proximal portion of the vessel. However, in some particular instances of coronary artery pressure measurements, the proximal pressure measurement is taken from a location inside the aorta. In other instances, the proximal pressure measurement is taken at the root or ostium of the coronary artery. In some embodiments, at least one of the instruments130and132is configured to monitor pressure within the vessel100while being moved through the lumen106. In some instances, instrument130is configured to be moved through the lumen106and across the stenosis108. In that regard, the instrument130is positioned distal of the stenosis108and moved proximally (i.e., pulled back) across the stenosis to a position proximal of the stenosis in some instances. In other instances, the instrument130is positioned proximal of the stenosis108and moved distally across the stenosis to a position distal of the stenosis. Movement of the instrument130, either proximally or distally, is controlled manually by medical personnel (e.g., hand of a surgeon) in some embodiments. In other embodiments, movement of the instrument130, either proximally or distally, is controlled automatically by a movement control device (e.g., a pullback device, such as the Trak Back® II Device available from Volcano Corporation). In that regard, the movement control device controls the movement of the instrument130at a selectable and known speed (e.g., 2.0 mm/s, 1.0 mm/s, 0.5 mm/s, 0.2 mm/s, etc.) in some instances. Movement of the instrument130through the vessel is continuous for each pullback or push-through, in some instances. In other instances, the instrument130is moved step-wise through the vessel (i.e., repeatedly moved a fixed distance and/or for a fixed amount of time). Some aspects of the visual depictions discussed below are particularly suited for embodiments where at least one of the instruments130and132is moved through the lumen106. Further, in some particular instances, aspects of the visual depictions discussed below are particularly suited for embodiments where a single instrument is moved through the lumen106, with or without the presence of a second instrument. In some instances, use of a single instrument has a benefit in that it avoids issues associated with variations in pressure measurements of one instrument relative to another over time, which is commonly referred to as drift. In that regard, a major source of drift in traditional Fractional Flow Reserve (FFR) measurements is divergence in the pressure reading of a guidewire relative to the pressure reading of a guide catheter. In that regard, because FFR is calculated as the ratio of the pressure measurement obtained by the guidewire to the pressure measurement obtained by the catheter, this divergence has an impact on the resulting FFR value. In contrast, where a single instrument is utilized to obtain pressure measurements as it is moved through the vessel, drift is negligible or non-existent.
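The impact of such drift can be seen directly from the definition referenced above. As an illustrative restatement only (the symbols P_d and P_a are chosen here for clarity and are not used elsewhere in this disclosure):

FFR = P_d / P_a

where P_d is the pressure measurement obtained by the guidewire distal of the stenosis and P_a is the pressure measurement obtained by the guide catheter proximal of the stenosis. Any divergence between the two pressure readings therefore propagates directly into the computed FFR value, whereas measurements obtained with a single instrument are not subject to this relative divergence.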
For example, in some instances, the single instrument is utilized to obtain relative changes in pressures as it is moved through the vessel such that the time period between pressure measurements is short enough to prevent any impact from any changes in pressure sensitivity of the instrument (e.g., less than 500 ms, less than 100 ms, less than 50 ms, less than 10 ms, less than 5 ms, less than 1 ms, or otherwise). Referring now toFIG.4, shown therein is a system150according to an embodiment of the present disclosure. In that regard,FIG.4is a diagrammatic, schematic view of the system150. As shown, the system150includes an instrument152. In that regard, in some instances instrument152is suitable for use as at least one of instruments130and132discussed above. Accordingly, in some instances the instrument152includes features similar to those discussed above with respect to instruments130and132. In the illustrated embodiment, the instrument152is a guide wire having a distal portion154and a housing156positioned adjacent the distal portion. In that regard, the housing156is spaced approximately 3 cm from a distal tip of the instrument152. The housing156is configured to house one or more sensors, transducers, and/or other monitoring elements configured to obtain the diagnostic information about the vessel. In the illustrated embodiment, the housing156contains at least a pressure sensor configured to monitor a pressure within a lumen in which the instrument152is positioned. A shaft158extends proximally from the housing156. A torque device160is positioned over and coupled to a proximal portion of the shaft158. A proximal end portion162of the instrument152is coupled to a connector164. A cable166extends from connector164to a connector168. In some instances, connector168is configured to be plugged into an interface170. In that regard, interface170is a patient interface module (PIM) in some instances. In some instances, the cable166is replaced with a wireless connection. In that regard, it is understood that various communication pathways between the instrument152and the interface170may be utilized, including physical connections (including electrical, optical, and/or fluid connections), wireless connections, and/or combinations thereof. The interface170is communicatively coupled to a computing device172via a connection174. Computing device172is generally representative of any device suitable for performing the processing and analysis techniques discussed within the present disclosure. In some embodiments, the computing device172includes a processor, random access memory, and a storage medium. In that regard, in some particular instances the computing device172is programmed to execute steps associated with the data acquisition and analysis described herein. Accordingly, it is understood that any steps related to data acquisition, data processing, instrument control, and/or other processing or control aspects of the present disclosure may be implemented by the computing device using corresponding instructions stored on or in a non-transitory computer readable medium accessible by the computing device. In some instances, the computing device172is a console device. In some particular instances, the computing device172is similar to the s5™ Imaging System or the s5i™ Imaging System, each available from Volcano Corporation. In some instances, the computing device172is portable (e.g., handheld, on a rolling cart, etc.).
Further, it is understood that in some instances the computing device172comprises a plurality of computing devices. In that regard, it is particularly understood that the different processing and/or control aspects of the present disclosure may be implemented separately or within predefined groupings using a plurality of computing devices. Any divisions and/or combinations of the processing and/or control aspects described below across multiple computing devices are within the scope of the present disclosure. Together, connector164, cable166, connector168, interface170, and connection174facilitate communication between the one or more sensors, transducers, and/or other monitoring elements of the instrument152and the computing device172. However, this communication pathway is exemplary in nature and should not be considered limiting in any way. In that regard, it is understood that any communication pathway between the instrument152and the computing device172may be utilized, including physical connections (including electrical, optical, and/or fluid connections), wireless connections, and/or combinations thereof. In that regard, it is understood that the connection174is wireless in some instances. In some instances, the connection174includes a communication link over a network (e.g., intranet, internet, telecommunications network, and/or other network). In that regard, it is understood that the computing device172is positioned remote from an operating area where the instrument152is being used in some instances. Having the connection174include a connection over a network can facilitate communication between the instrument152and the remote computing device172regardless of whether the computing device is in an adjacent room, an adjacent building, or in a different state/country. Further, it is understood that the communication pathway between the instrument152and the computing device172is a secure connection in some instances. Further still, it is understood that, in some instances, the data communicated over one or more portions of the communication pathway between the instrument152and the computing device172is encrypted. The system150also includes an instrument175. In that regard, in some instances instrument175is suitable for use as at least one of instruments130and132discussed above. Accordingly, in some instances the instrument175includes features similar to those discussed above with respect to instruments130and132. In the illustrated embodiment, the instrument175is a catheter-type device. In that regard, the instrument175includes one or more sensors, transducers, and/or other monitoring elements adjacent a distal portion of the instrument configured to obtain the diagnostic information about the vessel. In the illustrated embodiment, the instrument175includes a pressure sensor configured to monitor a pressure within a lumen in which the instrument175is positioned. The instrument175is in communication with an interface176via connection177. In some instances, interface176is a hemodynamic monitoring system or other control device, such as Siemens AXIOM Sensis, Mennen Horizon XVu, and Philips Xper IM Physiomonitoring 5. In one particular embodiment, instrument175is a pressure-sensing catheter that includes a fluid column extending along its length.
In such an embodiment, interface176includes a hemostasis valve fluidly coupled to the fluid column of the catheter, a manifold fluidly coupled to the hemostasis valve, and tubing extending between the components as necessary to fluidly couple the components. In that regard, the fluid column of the catheter is in fluid communication with a pressure sensor via the valve, manifold, and tubing. In some instances, the pressure sensor is part of interface176. In other instances, the pressure sensor is a separate component positioned between the instrument175and the interface176. The interface176is communicatively coupled to the computing device172via a connection178. Similar to the connections between instrument152and the computing device172, interface176and connections177and178facilitate communication between the one or more sensors, transducers, and/or other monitoring elements of the instrument175and the computing device172. However, this communication pathway is exemplary in nature and should not be considered limiting in any way. In that regard, it is understood that any communication pathway between the instrument175and the computing device172may be utilized, including physical connections (including electrical, optical, and/or fluid connections), wireless connections, and/or combinations thereof. In that regard, it is understood that the connection178is wireless in some instances. In some instances, the connection178includes a communication link over a network (e.g., intranet, internet, telecommunications network, and/or other network). In that regard, it is understood that the computing device172is positioned remote from an operating area where the instrument175is being used in some instances. Having the connection178include a connection over a network can facilitate communication between the instrument175and the remote computing device172regardless of whether the computing device is in an adjacent room, an adjacent building, or in a different state/country. Further, it is understood that the communication pathway between the instrument175and the computing device172is a secure connection in some instances. Further still, it is understood that, in some instances, the data communicated over one or more portions of the communication pathway between the instrument175and the computing device172is encrypted. It is understood that one or more components of the system150are not included, are implemented in a different arrangement/order, and/or are replaced with an alternative device/mechanism in other embodiments of the present disclosure. For example, in some instances, the system150does not include interface170and/or interface176. In such instances, the connector168(or other similar connector in communication with instrument152or instrument175) may plug into a port associated with computing device172. Alternatively, the instruments152,175may communicate wirelessly with the computing device172. Generally speaking, the communication pathway between either or both of the instruments152,175and the computing device172may have no intermediate nodes (i.e., a direct connection), one intermediate node between the instrument and the computing device, or a plurality of intermediate nodes between the instrument and the computing device. Referring now toFIGS.5-8, shown therein are various visual depictions of a vessel profile based on pressure measurements according to embodiments of the present disclosure. Referring more specifically toFIG.5, shown therein is a visual representation180of a vessel. 
In that regard, visual representation180illustrates approximately a 112 mm segment of the vessel between points182and184. In that regard, point182is representative of a starting position of an instrument within the vessel while point184is representative of an ending position of the instrument within the vessel after movement of the instrument longitudinally along the lumen of the vessel. Accordingly, in the instance of a pullback of the instrument, point182is distal of point184within the vessel. On the other hand, in the instance where the instrument is pushed through the vessel, point182is proximal of the point184. Regardless of the direction of movement of the instrument, the instrument will cross one or more lesions and/or stenosis of the vessel between the point182and the point184. In that regard, each of the visual depictions ofFIGS.5-8is configured to identify the one or more lesions and/or stenosis based on pressure measurements obtained from the instrument as the instrument is moved through the vessel. Referring again toFIG.5, visual representation180is a heat map that illustrates changes in pressure measurements obtained as the instrument is moved through the vessel. In that regard, in some instances the pressure measurements shown in the heat map are representative of a pressure differential between a fixed location within the vessel and the moving position of the instrument as the instrument is moved through the vessel. For example, in some instances a proximal pressure measurement is obtained at a fixed location within the vessel while the instrument is pulled back through the vessel from a first position distal of the position where the proximal pressure measurement is obtained to a second position more proximal than the first position (i.e., closer to the fixed position where the proximal pressure measurement is obtained). For clarity in understanding the concepts of the present disclosure, this arrangement will be utilized to describe many of the embodiments of the present disclosure. However, it is understood that the concepts are equally applicable to other arrangements. For example, in some instances, the instrument is pushed through the vessel from a first position distal of the proximal pressure measurement location to a second position further distal (i.e., further away from the fixed position of the proximal pressure measurement). In other instances, a distal pressure measurement is obtained at a fixed location within the vessel and the instrument is pulled back through the vessel from a first position proximal of the fixed location of the distal pressure measurement to a second position more proximal than the first position (i.e., further away from the fixed position of the distal pressure measurement). In still other instances, a distal pressure measurement is obtained at a fixed location within the vessel and the instrument is pushed through the vessel from a first position proximal of the fixed location of the distal pressure measurement to a second position less proximal than the first position (i.e., closer to the fixed position of the distal pressure measurement). The pressure differential between the two pressure measurements within the vessel (e.g., a fixed location pressure measurement and a moving pressure measurement) is calculated as a ratio of the two pressure measurements (e.g., the moving pressure measurement divided by the fixed location pressure measurement), in some instances. In some instances, the pressure differential is calculated for each heartbeat cycle of the patient.
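By way of illustration only, this ratio calculation can be sketched in a few lines of Python; the function names, variable names, sample values, and the synchronous-sampling assumption are hypothetical and are not part of the disclosed embodiments.

```python
# Minimal sketch (hypothetical names and data): per-sample pressure
# ratio between a moving measurement and a fixed-location measurement.

def pressure_ratios(moving_mmhg, fixed_mmhg):
    """Return moving/fixed for each synchronized pair of samples."""
    if len(moving_mmhg) != len(fixed_mmhg):
        raise ValueError("pressure traces must be sampled synchronously")
    return [m / f for m, f in zip(moving_mmhg, fixed_mmhg)]

# During a pullback, the moving sensor reads low distal of a stenosis
# and converges toward the fixed proximal reading once it crosses it.
moving = [78.0, 80.0, 85.0, 96.0, 99.0, 100.0]
fixed = [100.0, 100.0, 100.0, 100.0, 100.0, 100.0]
print(pressure_ratios(moving, fixed))  # [0.78, 0.8, 0.85, 0.96, 0.99, 1.0]
```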
In that regard, the calculated pressure differential is the average pressure differential across a heartbeat cycle in some embodiments. For example, in some instances where a hyperemic agent is applied to the patient, the average pressure differential across the heartbeat cycle is utilized to calculate the pressure differential. In other embodiments, only a portion of the heartbeat cycle is utilized to calculate the pressure differential. The pressure differential is an average over the portion or diagnostic window of the heartbeat cycle, in some instances. In that regard, in some embodiments a diagnostic window is selected using one or more of the techniques described in U.S. patent application Ser. No. 13/460,296, filed Apr. 30, 2012 and titled “DEVICES, SYSTEMS, AND METHODS FOR ASSESSING A VESSEL,” which is hereby incorporated by reference in its entirety. As discussed therein, the diagnostic windows and associated techniques are particularly suitable for use without application of a hyperemic agent to the patient. In general, the diagnostic window for evaluating differential pressure across a stenosis without the use of a hyperemic agent is identified based on characteristics and/or components of one or more of proximal pressure measurements, distal pressure measurements, proximal velocity measurements, distal velocity measurements, ECG waveforms, and/or other identifiable and/or measurable aspects of vessel performance. In that regard, various signal processing and/or computational techniques can be applied to the characteristics and/or components of one or more of proximal pressure measurements, distal pressure measurements, proximal velocity measurements, distal velocity measurements, ECG waveforms, and/or other identifiable and/or measurable aspects of vessel performance to identify a suitable diagnostic window. In some embodiments, the determination of the diagnostic window and/or the calculation of the pressure differential are performed in approximately real time or live to identify the relevant section of the vessel and calculate the pressure differential. In that regard, calculating the pressure differential in “real time” or “live” within the context of the present disclosure is understood to encompass calculations that occur within 10 seconds of data acquisition. It is recognized, however, that often “real time” or “live” calculations are performed within 1 second of data acquisition. In some instances, the “real time” or “live” calculations are performed concurrent with data acquisition. In some instances the calculations are performed by a processor in the delays between data acquisitions. For example, if data is acquired from the pressure sensing devices for 1 ms every 5 ms, then in the 4 ms between data acquisitions the processor can perform the calculations. It is understood that these timings are for example only and that data acquisition rates, processing times, and/or other parameters surrounding the calculations will vary. In other embodiments, the pressure differential calculation is performed 10 or more seconds after data acquisition. For example, in some embodiments, the data utilized to identify the diagnostic window and/or calculate the pressure differential are stored for later analysis. By comparing the calculated pressure differential to a threshold or predetermined value, a physician or other treating medical personnel can determine what, if any, treatment should be administered.
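A schematic sketch of this per-beat, windowed averaging is shown below; the beat boundaries and the fixed window fractions are assumed inputs standing in for the window-selection techniques of the incorporated application, not a disclosed algorithm.

```python
# Minimal sketch (assumed inputs): average the pressure ratio over a
# diagnostic window within each heartbeat cycle. Beat boundaries are
# given as sample indices; the window is a fixed fraction of each
# cycle, standing in for the incorporated window-selection techniques.

def windowed_beat_averages(ratios, beat_bounds, win_start=0.25, win_end=0.85):
    """Return one averaged pressure ratio per heartbeat cycle."""
    averages = []
    for b0, b1 in zip(beat_bounds[:-1], beat_bounds[1:]):
        lo = b0 + int((b1 - b0) * win_start)   # window start within beat
        hi = b0 + int((b1 - b0) * win_end)     # window end within beat
        window = ratios[lo:hi]
        averages.append(sum(window) / len(window))
    return averages

ratios = [0.90, 0.88, 0.85, 0.86, 0.90, 0.91, 0.89, 0.87, 0.86, 0.90]
beats = [0, 5, 10]  # sample indices where successive beats begin
print(windowed_beat_averages(ratios, beats))  # one value per beat
```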
In that regard, in some instances, a calculated pressure differential above a threshold value (e.g., 0.80 on a scale of 0.00 to 1.00) is indicative of a first treatment mode (e.g., no treatment, drug therapy, etc.), while a calculated pressure differential below the threshold value is indicative of a second, more invasive treatment mode (e.g., angioplasty, stent, etc.). In some instances, the threshold value is a fixed, preset value. In other instances, the threshold value is selected for a particular patient and/or a particular stenosis of a patient. In that regard, the threshold value for a particular patient may be based on one or more of empirical data, patient characteristics, patient history, physician preference, available treatment options, and/or other parameters. In that regard, the coloring and/or other visually distinguishing aspects of the pressure differential measurements depicted in visual representation180ofFIG.5are configured based on the threshold value. For example, a first color (e.g., green, white, or otherwise) is utilized to represent values well above the threshold value (e.g., where the threshold value is 0.80 on a scale of 0.00 to 1.00, values above 0.90), a second color (e.g., yellow, gray, or otherwise) is utilized to represent values near but above the threshold value (e.g., where the threshold value is 0.80 on a scale of 0.00 to 1.00, values between 0.81 and 0.90), and a third color (e.g., red, black, or otherwise) is utilized to represent values equal to or below the threshold value (e.g., where the threshold value is 0.80 on a scale of 0.00 to 1.00, values of 0.80 and below). It is appreciated that any number of color combinations, scalings, categories, and/or other characteristics can be utilized to visually represent the relative value of the pressure differential to the threshold value. However, for the sake of brevity Applicants will not explicitly describe the numerous variations herein. As shown inFIG.5, the heat map of visual representation180utilizes a gray scale where lighter or whiter colors are representative of values above the threshold value, while darker or blacker colors are representative of values near or below the threshold value. In that regard, the heat map of visual representation180is based on a cumulative or total pressure differential, where the gray scale color selected for a particular point is determined based on the pressure differential between the instrument at that point being moved through the vessel and the stationary or fixed instrument. As shown, in the illustrated embodiment a transition point or area186of the vessel is positioned between a portion188of the vessel having pressure differential values above the threshold value and a portion190of the vessel having pressure differential values below the threshold value. In that regard, the transition point or area186is representative of a boundary of a lesion or stenosis of the vessel that results in an increased pressure differential, which is illustrated by the change in color of the visual representation180. As a result, the visual representation180can be utilized to both identify the location of the lesion or stenosis within the vessel and assess the severity of the lesion or stenosis. Referring now toFIG.6, shown therein is a visual representation200of a vessel profile based on the same pressure measurements as the visual representation180ofFIG.5.
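Before describing the localized variant ofFIG.6, the threshold-based banding described above can be illustrated with a short sketch; the cut-offs mirror the example values in the text (a 0.80 threshold with a 0.81-0.90 near-threshold band), and the color names are placeholders only.

```python
# Minimal sketch: classify a pressure differential value into a display
# band using the example 0.80 threshold and 0.81-0.90 caution band.

def band_color(value, threshold=0.80, caution_margin=0.10):
    """Pick a heat-map band for one cumulative pressure ratio value."""
    if value <= threshold:
        return "red"     # at or below threshold: second treatment mode
    if value <= threshold + caution_margin:
        return "yellow"  # near but above threshold
    return "green"       # well above threshold

print([band_color(v) for v in (0.95, 0.86, 0.78)])
# ['green', 'yellow', 'red']
```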
In that regard, the heat map of visual representation200also utilizes a gray scale where lighter or whiter colors are representative of values above a threshold value, while darker or blacker colors are representative of values near or below the threshold value. While the heat map of visual representation180was based on a cumulative or total pressure differential, the heat map of visual representation200is based on a localized pressure differential, where the gray scale color selected for a particular point is determined based on differences between the pressure differential of that point and one or more of the surrounding points. In that regard, the localized pressure differential is calculated as the difference between a given point and the immediately preceding point in some instances. For example, the localized pressure differential for point Pn is equal to the cumulative or total pressure differential for point Pn minus the total or cumulative pressure differential for point Pn−1. In other instances, the localized pressure differential is calculated as the difference between that point and a point a fixed amount of time (e.g., 10 ms, 5 ms, 2 ms, 1 ms, or otherwise) or distance (e.g., 10 mm, 5 mm, 2 mm, 1 mm, or otherwise) away from that point. By utilizing a localized pressure differential, the location of significant changes in pressure differential values, which are often associated with the presence of a lesion or stenosis, can be identified. For example, as shown in the illustrated embodiment ofFIG.6, a transition area202of the vessel having localized pressure differential values below the threshold is positioned between portions204and206of the vessel having pressure differential values above the threshold value. In that regard, the transition point or area202is representative of a lesion or stenosis of the vessel that results in a significant change in pressure differential, which is illustrated by the change in color of the visual representation200. As a result, the visual representation200can be utilized to both identify the location of the lesion or stenosis within the vessel and assess the severity of the lesion or stenosis. Referring now toFIG.7, shown therein is a visual representation210of a vessel profile based on the same pressure measurements as the visual representations180and200ofFIGS.5and6, respectively. In that regard,FIG.7illustrates a plot212of the cumulative or total pressure differential between the instrument being moved through the vessel and an instrument at a stationary or fixed position within the vessel. By analyzing the shape of the plot212and, in particular, such characteristics as the pressure differential value relative to the threshold value, changes in the slope of the plot, and/or combinations thereof, the visual representation210can be utilized to both identify the location of the lesion or stenosis within the vessel and assess the severity of the lesion or stenosis. Referring now toFIG.8, shown therein is a visual representation220of a vessel profile based on the same pressure measurements as the visual representations180,200, and210ofFIGS.5,6, and7, respectively. In that regard,FIG.8illustrates a plot222that is based on differences between the pressure differential of a point and one or more of the surrounding points. In that regard, the values utilized for plot222are calculated as the difference between adjacent points in some instances.
For example, the value for point Pn is equal to the cumulative or total pressure differential for point Pn minus the total or cumulative pressure differential for point Pn−1, in some instances. In other instances, the value utilized for a particular point of plot222is calculated as the difference between the pressure differential for that point and another point a fixed amount of time (e.g., 10 ms, 5 ms, 2 ms, 1 ms, or otherwise) or distance (e.g., 10 mm, 5 mm, 2 mm, 1 mm, or otherwise) away from that point. In the illustrated embodiment, plot222is based upon the differences in pressure differential between points 2 mm apart from one another. Utilizing these relative and localized calculations of pressure differential, the location of significant changes in pressure differential values that are associated with the presence of a lesion or stenosis can be identified. The plot222can be utilized to both identify the location of lesions or stenosis within the vessel as well as assess the severity of the identified lesions or stenosis. In the illustrated embodiment ofFIG.8, a region224of the plot222does not meet the threshold value indicated by line226. In that regard, it should be noted that inFIG.8, the y-axis values of the visual representation220go from 1.0 at the origin to 0.0 at the top of the illustrated y-axis. Accordingly, region224represents a lesion or stenosis of the vessel that is adversely impacting fluid flow to a degree that requires treatment. Analysis of the plot222provides information about the vessel and/or its lesions or stenosis. For example, the plot222provides an indication of the length of the lesion or stenosis associated with region224. In that regard, the length of the lesion or stenosis is indicated by the length of the vessel segment having values less than the threshold value226. In the illustrated embodiment, the length of the vessel segment having values less than the threshold value226is approximately 17 mm. The length of the lesion or stenosis as indicated by the plot222is based entirely on physiologic measurements that are independent of lesion composition. Further, the plot222provides an indication of the overall occlusive value of the vessel. In that regard, the total vessel occlusive value is determined by the cumulative area under the plot222in some instances. In the illustrated embodiment, the total vessel occlusive value or area under the plot222is approximately 1.38. Similarly, the plot222also provides an indication of the occlusive value attributable to individual lesions or stenosis of the vessel. In that regard, the occlusive value attributable to a particular lesion or stenosis can similarly be calculated by determining the area under the plot222for a length of the vessel associated with the lesion or stenosis. For example, in the illustrated embodiment the lesion or stenosis associated with region224has an occlusive value or area under the plot222of approximately 0.67. Based on the total vessel occlusive value and the occlusive value attributable to a particular lesion or stenosis, a percentage of the total vessel occlusive value attributable to that particular lesion or stenosis can be calculated. In that regard, the ratio of the occlusive value attributable to the particular lesion or stenosis to the total occlusive value of the vessel provides the percentage of vessel occlusion attributable to that lesion or stenosis.
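These localized-differential, lesion-length, and area calculations can be sketched as follows; the sample spacing, per-step threshold, and data are hypothetical, and the area is approximated by a simple sum rather than any particular integration scheme used in practice.

```python
# Minimal sketch (hypothetical data): localized differentials from a
# cumulative pressure-ratio trace, lesion length from the span of
# significant drops, and occlusive values as area under that plot.

def localized_diffs(cumulative):
    """Each point minus the immediately preceding point."""
    return [b - a for a, b in zip(cumulative, cumulative[1:])]

def occlusive_metrics(diffs, spacing_mm, step_threshold):
    drops = [max(-d, 0.0) for d in diffs]        # keep pressure drops only
    lesion = [d for d in drops if d >= step_threshold]
    lesion_length_mm = len(lesion) * spacing_mm
    total_occlusive = sum(drops) * spacing_mm    # area under the plot
    lesion_occlusive = sum(lesion) * spacing_mm
    return lesion_length_mm, total_occlusive, lesion_occlusive

cum = [1.00, 0.99, 0.95, 0.88, 0.82, 0.81, 0.80, 0.80]
length_mm, total, per_lesion = occlusive_metrics(
    localized_diffs(cum), spacing_mm=2.0, step_threshold=0.04)
print(length_mm, total, per_lesion, per_lesion / total)
```

The final printed ratio corresponds to the percentage of total vessel occlusion attributable to the lesion, as described above.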
The information regarding characteristics of the lesion or stenosis and/or the vessel as indicated by the plot222can be compared with or considered in addition to other representations of the lesion or stenosis and/or the vessel (e.g., IVUS (including virtual histology), OCT, ICE, Thermal, Infrared, flow, Doppler flow, and/or other vessel data-gathering modalities) to provide a more complete and/or accurate understanding of the vessel characteristics. For example, in some instances the information regarding characteristics of the lesion or stenosis and/or the vessel as indicated by the plot222is utilized to confirm information calculated or determined using one or more other vessel data-gathering modalities. While the visual representations180,200,210, and220ofFIGS.5,6,7, and8have been described separately, it is understood that a system may display any combination of these visual representations in series, simultaneously, and/or combinations thereof. In some instances, a system provides the user the ability to select which individual visual representation and/or combination of visual representations will be displayed. Referring now toFIGS.9-14, shown therein are aspects of evaluating a vessel according to embodiments of the present disclosure. In that regard,FIG.9provides an angiographic image300of a vessel302having a plurality of lesions or stenoses. In the illustrated embodiment, four lesions/stenoses are labeled as “A”, “B”, “C”, and “D”. Referring now toFIG.10, shown therein is a graph310mapping a pressure ratio value calculated using a diagnostic window in accordance with the present disclosure, which may be referred to as “iFR” in the drawings, relative to a distance as a first instrument is moved through a vessel relative to a second instrument, including across at least one stenosis of the vessel. In that regard, the second instrument is maintained in a position proximal of the at least one stenosis while the first instrument is moved from a position distal of the at least one stenosis to a position proximal of the at least one stenosis and adjacent the second instrument or vice versa (i.e., the first instrument is moved from a position proximal of the at least one stenosis and adjacent the second instrument to a position distal of the at least one stenosis). In the illustrated embodiment ofFIG.10, the relative position of the first instrument as depicted in plot312transitions from proximal to distal as the plot312extends from left to right. FIG.10also provides a bar graph320that depicts the change in pressure ratio values as depicted in graph310over distance. In that regard, the larger bars represent greater changes in pressure ratio value over that distance, which can be indicative of a severe lesion or stenosis. As shown, the bar graph320has been annotated to identify regions322,324,326, and328that have notable changes in pressure ratio values. More specifically, the regions322,324,326, and328correspond with lesions/stenoses A, B, C, and D of vessel302, respectively. Finally,FIG.10also provides an intensity map visual representation330(similar to visual representation180ofFIG.5) for the vessel302based on the pressure ratio values from graph310. More specifically, the intensity map330identifies a region332where the pressure ratio is below the threshold value. In the illustrated embodiment, the threshold pressure ratio value is 0.90.
Accordingly, the portions of the intensity map330left of region332are colored or otherwise visualized to indicate that the pressure ratio is above the threshold value, while the portions of the intensity map within region332are colored or otherwise visualized to indicate that the pressure ratio is below the threshold value. In the illustrated embodiment, a green color is utilized to represent values above the threshold value, while a red color is utilized to represent values near or below the threshold value. Referring now toFIG.11, shown therein is an annotated angiographic image340of the vessel302. In that regard, the angiographic image300ofFIG.9has been annotated based on the pressure measurements obtained for vessel302. More specifically, based on the changes in the pressure ratio along the length of the vessel (e.g., as depicted in the bar graph320ofFIG.10) corresponding visual indicators have been added to the angiographic image. In particular, in the illustrated embodiment colored circles have been added along the length of the vessel to provide a visual indication to the user of the amount of change in pressure ratio attributable to that portion of the vessel. In some implementations, the portions of the vessel having a change in pressure ratio less than a threshold value are colored or otherwise visualized to indicate that the change in pressure ratio is below the threshold value, while the portions of the vessel having a change in pressure ratio greater than the threshold value are colored or otherwise visualized to indicate that the change in pressure ratio is above the threshold value. In the illustrated embodiment, a green colored circle is utilized to represent values above the threshold value, while a red colored circle is utilized to represent values near or below the threshold value. Referring now toFIG.12, shown therein is an annotated angiographic image350of the vessel302. In that regard, the angiographic image300ofFIG.9has been annotated based on the pressure measurements obtained for vessel302. More specifically, based on the changes in the pressure ratio along the length of the vessel (e.g., as depicted in the bar graph320ofFIG.10) corresponding visual indicators have been added to the angiographic image. In particular, in the illustrated embodiment dots have been added along the length of the vessel to provide a visual indication to the user of the amount of change in pressure ratio attributable to that portion of the vessel. More specifically, the greater the number of dots adjacent a portion of the vessel, the greater the change in pressure attributable to that portion of the vessel. In that regard, in some implementations the number of dots is directly correlated to values in bar graph320ofFIG.10. It is understood that numerous other visualization techniques may be utilized to convey the information of the graphs310,320, and/or330ofFIG.10in the context of an angiographic image or other image of the vessel (including both intravascular and extravascular imaging techniques, such as IVUS, OCT, ICE, CTA, etc.) to help the user evaluate the vessel. In that regard, while the examples of the present disclosure are provided with respect to angiographic images, it is understood that the concepts are equally applicable to other types of vessel imaging techniques, including intravascular and extravascular imaging. However, for the sake of brevity the present disclosure will limit the examples to angiographic images.
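One plausible way to derive such per-segment indicators is to bin an evenly sampled pullback into fixed-length segments and score each bin by its drop in pressure ratio; the bin length, dot scaling, and even-spacing assumption below are illustrative only and are not taken from the disclosure.

```python
# Minimal sketch (assumed even sample spacing): score fixed-length
# vessel segments by their drop in pressure ratio, as a basis for
# bar heights, colored circles, or dot counts along a vessel image.

def segment_scores(ratios, spacing_mm, seg_len_mm):
    """Pressure-ratio drop across each fixed-length segment."""
    step = max(1, int(seg_len_mm / spacing_mm))
    return [ratios[i] - ratios[i + step]
            for i in range(0, len(ratios) - step, step)]

def dot_count(score, per_dot=0.02, max_dots=5):
    """More dots for larger drops, in the style of FIG. 12."""
    return min(max_dots, round(max(score, 0.0) / per_dot))

scores = segment_scores([1.00, 0.99, 0.93, 0.92, 0.85, 0.84], 2.0, 4.0)
print(scores, [dot_count(s) for s in scores])
```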
In some instances an intensity map (such as described in the context ofFIGS.5,6, and/or10) is overlaid onto or adjacent to the vessel302as depicted in the angiographic image such that the lesion-specific contributions and/or cumulative effects of the lesions can be visualized in the context of the vessel itself. In that regard, it is understood that for any of the visualization techniques the pressure data can be related to the corresponding portions of the vessel302using co-registration techniques (such as those disclosed in U.S. Pat. No. 7,930,014, titled “VASCULAR IMAGE CO-REGISTRATION,” which are hereby incorporated by reference in their entirety), based on the known pullback speed/distance, based on a known starting point, based on a known ending point, and/or combinations thereof. Further, in some embodiments the angiographic image is annotated with the numerical values associated with the pressure ratios and/or changes in pressure ratios. In some instances, a threshold value is set by the user or system (e.g., a default setting) such that values above or below the threshold, as the case may be, are identified. In that regard, the identified values can be presented to the user in chart form, added to the angiographic or other image of the vessel in the appropriate location, and/or combinations thereof. Further still, in some embodiments a graph similar to graph310and/or graph320is overlaid onto the angiographic or other image of the vessel. In that regard, the graph is scaled and oriented (i.e., positioned, rotated, and/or mirror imaged) to align with the general or average pathway of the vessel as depicted on the angiographic or other image of the vessel. In some implementations, diagnostic information and/or data is correlated to vessel images using techniques similar to those described in U.S. Provisional Patent Application No. 61/747,480, titled “SPATIAL CORRELATION OF INTRAVASCULAR IMAGES AND PHYSIOLOGICAL FEATURES” and filed Dec. 31, 2012, which is hereby incorporated by reference in its entirety. In some instances, a user is able to select what information should be included or excluded from the displayed image. In that regard, it should be noted that these visualization techniques related to conveying the pressure measurement data in the context of an angiographic or other image of the vessel can be utilized individually and in any combinations. For example, in some implementations a user is able to select what visualization mode(s) and/or portions thereof will be utilized and the system outputs the display accordingly. Further, in some implementations the user is able to manually annotate the displayed image to include notes and/or input one or more of the measured parameters. Referring now toFIG.13, shown therein is a graph360that includes a plot362representative of simulated pressure ratio value calculations for a proposed treatment option for vessel302along with the plot312of the original pressure ratio value calculations. In that regard, the plot362is based upon removing the effects of lesion/stenosis C of vessel302based on a percutaneous coronary intervention (PCI), which may include angioplasty, stenting, and/or other suitable intervention to treat lesion/stenosis C of vessel302. In that regard,FIG.13also provides a bar graph370that depicts the change in pressure ratio values that also removes the effects of lesion/stenosis C of vessel302.
In particular, bar graph370is similar to bar graph320ofFIG.10, but the region326associated with lesion/stenosis C of bar graph320has been replaced with region372representative of lesion/stenosis C being treated. In particular, treated lesion/stenosis C is shown to cause no change in the pressure ratio. Finally,FIG.13also provides an intensity map visual representation380for the vessel302based on the estimated pressure ratio values associated with the treatment of lesion/stenosis C. As shown, the proposed treatment of lesion/stenosis C causes the estimated pressure ratio along the full length of the vessel302to be above the threshold value of 0.90. Thus, the entire intensity map is colored or otherwise visualized to indicate that the pressure ratio is above the threshold value as there are no portions below the threshold value. In the illustrated embodiment, a green color is utilized to represent values above the threshold value. In addition to the graphical visualizations of the proposed treatment options as shown inFIG.13, the proposed treatment options can also be visualized on the angiographic or other image of the vessel. For example,FIG.14is an angiographic image of the vessel302annotated to include a proposed stent392across lesion/stenosis C. Further, it should be noted that the various other visualization techniques utilized to convey the information of the pressure measurements of the vessel302may also be applied to the estimated pressure measurements for the simulated treatment options, including various combinations of those visualization techniques as described above. In some instances, the pre-treatment pressure measurement information, such as shown inFIGS.10-12, is compared to corresponding post-treatment pressure measurement information. In that regard, the difference between the pre-treatment and post-treatment measurement information will indicate whether the treatment achieved the desired functional gain of allowing the blood to flow through the vessel. Likewise, the post-treatment pressure measurement information can be compared to the estimated pressure measurement information for the simulated treatment, such as shown inFIG.13. In that regard, the differences between the estimated measurement information and the actual post-treatment measurement information will provide an indication of the accuracy of the estimated measurement information for the simulated treatment. In that regard, in some instances the differences between estimated measurement information and actual post-treatment measurements are stored by the system and utilized to modify future estimated measurement information for similar treatment options. In some instances, the system is configured to automatically link or coordinate the pressure data and corresponding vessel locations between the pre-treatment, simulated treatment, and/or post-treatment measurement information. In that regard, it is understood that the pressure data can be related to the corresponding portions of the vessel using co-registration techniques (such as those disclosed in U.S. Pat. No. 7,930,014, titled “VASCULAR IMAGE CO-REGISTRATION,” which is hereby incorporated by reference in its entirety), based on the known pullback speed/distance, based on a known starting point, based on a known ending point, and/or combinations thereof.
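The simulated-treatment ("virtual PCI") comparison described above can be illustrated schematically: the treated lesion's localized drops are zeroed and the cumulative ratio is rebuilt. The lesion's sample range is assumed to be known here, for example from the co-registration just discussed; none of the names below come from the disclosure.

```python
# Minimal sketch (hypothetical segmentation): simulate treating one
# lesion by zeroing its localized pressure-ratio drops and rebuilding
# the cumulative trace, in the spirit of plot 362 and bar graph 370.

def simulate_treatment(cumulative, lesion_start, lesion_stop):
    diffs = [b - a for a, b in zip(cumulative, cumulative[1:])]
    for i in range(lesion_start, lesion_stop):
        diffs[i] = 0.0                      # treated lesion: no drop
    simulated = [cumulative[0]]
    for d in diffs:
        simulated.append(simulated[-1] + d)
    return simulated

before = [1.00, 0.98, 0.92, 0.86, 0.85, 0.84]
print(simulate_treatment(before, 1, 3))
# approximately [1.0, 0.98, 0.98, 0.98, 0.97, 0.96]; the distal values
# rise once the treated lesion's contribution is removed
```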
Referring now toFIGS.15-19, shown therein are aspects of evaluating a vessel according to embodiments of the present disclosure. In that regard,FIG.15provides an angiographic image400of a vessel402having a plurality of lesions or stenoses. In the illustrated embodiment, five lesions/stenoses are labeled as “E”, “F”, “G”, “H”, and “I”. Referring now toFIG.16, shown therein is a graph410mapping a pressure ratio value calculated using a diagnostic window in accordance with the present disclosure, which may be referred to as “iFR” in the drawings, relative to a distance as a first instrument is moved through a vessel relative to a second instrument, including across at least one stenosis of the vessel. In that regard, the second instrument is maintained in a position proximal of the at least one stenosis while the first instrument is moved from a position distal of the at least one stenosis to a position proximal of the at least one stenosis and adjacent the second instrument or vice versa (i.e., the first instrument is moved from a position proximal of the at least one stenosis and adjacent the second instrument to a position distal of the at least one stenosis). In the illustrated embodiment ofFIG.16, the relative position of the first instrument as depicted in plot412transitions from proximal to distal as the plot412extends from left to right.FIG.16also provides a bar graph420that depicts the change in pressure ratio values as depicted in graph410over distance. In that regard, the larger bars represent greater changes in pressure ratio value over that distance, which can be indicative of a severe lesion or stenosis. As shown, the bar graph420has been annotated to identify regions422,424,426,428, and430that have notable changes in pressure ratio values. More specifically, the regions422,424,426,428, and430correspond with lesions/stenoses E, F, G, H, and I of vessel402, respectively. Referring now toFIG.17, shown therein is a three dimensional model440of the vessel402based on the pressure measurement information ofFIG.16. In that regard, the three dimensional model440has been colored or otherwise visualized based on the pressure measurements obtained for vessel402. More specifically, based on the changes in the pressure ratio along the length of the vessel (e.g., as depicted in the bar graph420ofFIG.16) the corresponding portions of the three dimensional model440are colored accordingly. In that regard, in the illustrated embodiment the color of a particular portion of the three dimensional model440provides a visual indication to the user of the amount of change in pressure ratio attributable to that portion of the vessel. In some implementations, the portions of the vessel having a change in pressure ratio less than a threshold value are colored or otherwise visualized to indicate that the change in pressure ratio is below the threshold value, the portions of the vessel having a change in pressure ratio greater than the threshold value are colored or otherwise visualized to indicate that the change in pressure ratio is above the threshold value, and the portions of the vessel having a change in pressure ratio approximately equal to the threshold value are colored or otherwise visualized to indicate such. In the illustrated embodiment, blue colors are utilized to represent values above the threshold value, red colors are utilized to represent values below the threshold value, while orange, yellow, and green colors are utilized to represent values approximately equal to the threshold (e.g., orange for slightly below the threshold, yellow for equal to threshold, and green for slightly above the threshold).
Referring now toFIG.18, shown therein is a three dimensional model450of the vessel402based on the pressure measurement information ofFIG.16. In that regard, the three dimensional model450has been colored or otherwise visualized based on the pressure measurements obtained for vessel402. More specifically, based on the changes in the pressure ratio along the length of the vessel (e.g., as depicted in the bar graph420ofFIG.16) the corresponding portions of the three dimensional model450having significant changes in pressure are colored accordingly. In other words, hot spots indicative of functionally significant lesions/stenoses are highlighted by use of different colors or other visualization techniques. For example, in some instances the three dimensional model450is colored or visualized in a similar manner to the visual representation ofFIG.6. In the illustrated embodiment ofFIG.18, two significant lesions/stenoses452and454are highlighted by the visualization of three dimensional model450. It should be noted that the three dimensional models440and450of the vessel402may be based on data from one or more imaging techniques, including both intravascular and extravascular imaging techniques, such as angiography, CT scans, CTA, IVUS, OCT, ICE, and/or combinations thereof. Generally, any three dimensional modeling technique may be utilized and the corresponding measurement data applied thereto in accordance with the present disclosure. In that regard, in addition to and/or in lieu of the specific embodiments shown inFIGS.17and18, any of the visualization techniques described above with respect to other imaging modalities may also be applied to the three dimensional models. In that regard, in some instances a user is able to select what information should be included or excluded from the displayed three dimensional model. In some implementations, the three dimensional model is displayed adjacent to a corresponding two dimensional depiction of the vessel. In that regard, the user may select both the type of depiction(s) (two dimensional (including imaging modality type) and/or three dimensional) along with what visualization mode(s) and/or portions thereof will be utilized. The system will output a corresponding display based on the user's preferences/selections and/or system defaults. Referring now toFIG.19, shown therein is a graph460that includes a plot462representative of simulated pressure ratio value calculations for a proposed treatment option for vessel402along with the plot412of the original pressure ratio value calculations. In that regard, the plot462is based upon removing the effects of lesion/stenosis E of vessel402based on a percutaneous coronary intervention (PCI), which may include angioplasty, stenting, and/or other suitable intervention to treat lesion/stenosis E of vessel402. In that regard,FIG.19also provides a bar graph470that depicts the change in pressure ratio values that also removes the effects of lesion/stenosis E of vessel402. In particular, bar graph470is similar to bar graph420ofFIG.16, but the region422associated with lesion/stenosis E of bar graph420has been replaced with region472representative of lesion/stenosis E being treated. In particular, treated lesion/stenosis E is shown to cause no change in the pressure ratio. Finally,FIG.19also provides an intensity map visual representation480for the vessel402based on the estimated pressure ratio values associated with the treatment of lesion/stenosis E.
As shown, the proposed treatment of lesion/stenosis E causes the estimated pressure ratio along the full length of the vessel402to be above the threshold value of 0.90. Thus, the entire intensity map is colored or otherwise visualized to indicate that the pressure ratio is above the threshold value as there are no portions below the threshold value. In the illustrated embodiment ofFIG.19, a green color is utilized to represent values above the threshold value. Persons skilled in the art will also recognize that the apparatus, systems, and methods described above can be modified in various ways. Accordingly, persons of ordinary skill in the art will appreciate that the embodiments encompassed by the present disclosure are not limited to the particular exemplary embodiments described above. In that regard, although illustrative embodiments have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the foregoing without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure. | 67,041 |
11857297 | DETAILED DESCRIPTION Systems, methods, and apparatuses for optimizing a physiological measurement taken from a subject as disclosed herein will become better understood through a review of the following detailed description in conjunction with the figures. The detailed description and figures provide merely examples of the various embodiments of systems, methods, and apparatuses for maintaining a sensor at constant pressure against a subject. Many variations are contemplated for different applications and design considerations; however, for the sake of brevity and clarity, all the contemplated variations may not be individually described in this detailed description. Those skilled in the art will understand how the disclosed examples may be varied, modified, and altered and still fall within the scope of the examples described herein. Throughout this detailed description, examples of various systems, methods, and apparatuses for optimizing a physiological measurement taken from a subject are provided. Related elements in different examples may be identical, similar, or dissimilar in the examples. For the sake of brevity and clarity, the related elements may not be redundantly explained in the various examples. Instead, the use of a same, similar, and/or related element names and/or reference characters may cue the reader that an element in one example with a given name and/or associated reference character may be similar to another related element with the same, similar, and/or related element name and/or reference character in an example explained elsewhere herein. Elements specific to a given example may be described regarding that particular example. A person having ordinary skill in the art will understand that a given element need not be identical to the specific portrayal of a related element in any given figure or example to share features of the related element. As used herein “same” means sharing all features and “similar” means sharing a substantial number of features or sharing materially important features even if a substantial number of features are not shared. As used herein “may” should be interpreted in a permissive sense and should not be interpreted in an indefinite sense. Additionally, use of “is” regarding examples, elements, and/or features should be interpreted to be definite only regarding a specific example and should not be interpreted as required regarding every variation of the systems, methods, and/or apparatuses disclosed herein. Furthermore, references to “the disclosure” and/or “this disclosure” refer to the entirety of the writings of this document and the entirety of the accompanying illustrations, which extends to all the writings of each subsection of this document, including the Title, Background, Brief description of the Drawings, Detailed Description, Claims, Abstract, and any other document and/or resource incorporated herein by reference. As used herein regarding a list, “and” forms a group inclusive of all the listed elements. For example, an embodiment described as including A, B, C, and D is an embodiment that includes A, includes B, includes C, and also includes D. As used herein regarding a list, “or” forms a list of elements, any of which may be included. For example, an embodiment described as including A, B, C, or D is an embodiment that includes any of the elements A, B, C, and D but not necessarily all of the elements. 
Unless otherwise stated, an embodiment including a list of alternatively-inclusive elements does not preclude other embodiments that include various combinations of some or all of the alternatively-inclusive elements. An embodiment described using a list of alternatively-inclusive elements includes at least one element of the listed elements. However, an embodiment described using a list of alternatively-inclusive elements does not preclude another embodiment that includes all of the listed elements. And, an embodiment described using a list of alternatively-inclusive elements does not preclude another embodiment that includes a combination of some, but not necessarily all, of the listed elements. As used herein regarding a list, “and/or” forms a list of elements inclusive alone or in any combination. For example, an embodiment described as including A, B, C, and/or D is an embodiment that may include: A alone; A and B; A, B and C; A, B, C, and D; and so forth. The bounds of an “and/or” list are defined by the complete set of combinations and permutations for the list. Where multiples of a particular element are shown in a FIG., and where it is clear that the element is duplicated throughout the FIG., only one label may be provided for the element despite multiple instances of the element being present in the FIG. Accordingly, other instances in the FIG. of the element having identical or similar structure and/or function may not be redundantly labeled. A person having ordinary skill in the art will recognize, based on the disclosure herein, redundant and/or duplicated elements of the same FIG. Despite this, redundant labeling may be included where helpful in clarifying the structure of the depicted example embodiments. Conventional apparatuses for maintaining a sensor against a subject may include a device such as a smartwatch. A smartwatch may be worn by the subject as the subject goes about his or her every-day activities. In other cases, sensors may be used in clinical settings such as in a lab, doctor's office, a hospital, and so forth. In such a setting, a sensor may be held on the subject by, for example, taping the sensor to the subject. In some cases, the sensor may be clamped on to the subject, such as in a case where a pulse oximetry sensor is clamped to the subject's finger. However, various sensors may be inaccurate when held against a subject with the wrong amount of pressure. The wrong amount of pressure may lead to additional noise in a sensor signal, inaccurate readings, or failures to obtain readings altogether. Furthermore, in various cases, a sensor may be configured to take a measurement from a specific position on a subject. If the sensor is not aligned at the specific position on the subject, then the sensor will not accurately measure a physiological characteristic the sensor is designed to measure. Smartwatches may be insufficient to maintain the sensor at a reliable pressure against the subject or in a consistent position on the subject. Tape may be inconvenient, uncomfortable, and may not hold the sensor at a reliable pressure as the subject moves during the subject's daily activities. Tape may not be effective when the subject sweats or otherwise engages in an activity where the subject's skin may become damp and/or wet. Systems, methods, and apparatuses are described herein that address at least some of the problems described above. 
In various embodiments, systems, methods, and apparatuses are described herein for optimizing a measurement taken by a physiological sensor such as by maintaining a sensor against a subject at an approximately constant pressure and/or in an approximately constant position. An apparatus for maintaining the sensor at the constant pressure against a subject may include a wearable band. The wearable band may be secured on the subject by a mechanism that allows for fine-tuning of a pressure of the band on the subject. The sensor may be pressed against the subject by an elastic coupling mechanism that is attached to the wearable band or a housing. The housing may be attached to the band. The housing may be movably attached to the band such that, as the band remains secure and in a constant position and/or orientation on the subject, the housing can be moved relative to the subject and/or the band. A pressure sensor may detect a pressure of the wearable band on the subject or a pressure of the sensor against the subject. The band may include a slot through which the sensor extends. The sensor may be pressed against the subject through the slot. The sensor may be electronically coupled to a processing device. The processing device may be programmed to identify an optimal position of the sensor against the subject, such as in alignment with a physiological structure of the subject. The processing device may be programmed to identify an optimal pressure and/or pressure range of the sensor against the subject. The processor may be programmed to communicate information with the subject, such as via a user device and/or a user interface. The information may be associated with the position of the sensor, the pressure of the wearable band on the subject, and/or the pressure of the sensor against the subject. FIG.1Aillustrates a wearable device100with incorporated sensors112and/or114, according to an embodiment. Some of the features inFIG.1Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.1A. The wearable device100may be configured to take physiological measurements from a subject. The wearable device100may include a user device118and a band106that are configured to (e.g. in shape, size, material, and so forth) attach to a body of the subject. The wearable device100may, for example, be an electronic wrist-worn device such as a smartwatch that may be configured to attach to a wrist or arm of the subject. The wearable device100may be attached to a head of the subject using a headband, to a chest of the subject using a chest band, to an ankle of the subject using an ankle band, or otherwise attached to a body of the subject using a sweatband, bandage, band, watch, bracelet, ring, adherent, and/or other attachments and connections. As used herein, “subject” may be used to refer to an individual from whom a physiological measurement may be taken and/or who may wear the wearable device100and/or the other devices described herein. The subject may be another person who may use the wearable device100and/or the other devices described herein but from whom the measurements were not taken. 
For example, a healthcare professional such as a doctor, physician's assistant, and/or nurse may use information generated by the wearable device100and/or the other devices described herein. In general, the subject may be the person from whom the physiological measurements are taken and/or a person who may use the wearable device100, the other devices described herein, and/or the information related to the physiological measurements. The wearable device100may include a processing device102, a user interface104, the band106, a power source108, the first sensor112, and/or the second sensor114. The processing device102and the user interface104may be integrated into the user device118of the wearable device100. The power source108, the first sensor112, and/or the second sensor114may be integrated into the band106of the wearable device100. The band106may include one or more cavities in which the power source108, the first sensor112, and/or the second sensor114may be stored. The band106may be formed around, molded around, and/or overmolded around the power source108, the first sensor112, and/or the second sensor114. The power source108, the first sensor112, and/or the second sensor114may be connected to the processing device102by one or more electrical trace(s) or circuit(s)116(e.g. a flexible circuit board, copper traces, interconnects, and so forth). The processing device102may provide an output based on an input. The processing device102may, for example, be a central processing unit, a graphics processing unit, a vision processing unit, a tensor processing unit, a neural processing unit, a physics processing unit, a digital signal processor, an image signal processor, a synergistic processing element, a field-programmable gate array, a sound chip, a microprocessor, a multi-core processor, and so forth. The first sensor112may include a miniaturized spectrometer. The second sensor114may include a miniaturized impedance sensor. The first sensor112and/or the second sensor114may include a temperature sensor, a viscosity sensor, an ultrasonic sensor, a humidity sensor, a heart rate sensor, a dietary intake sensor, an electrocardiogram (EKG) sensor, a galvanic skin response sensor, a pulse oximeter, an optical sensor, and so forth. The wearable device100may include other sensors integrated into or attached to the band106or the user device118. The wearable device100may be communicatively coupled to one or more remote and/or external devices such as sensors of other devices or third-party devices. The first sensor112and/or the second sensor114may be configured to take measurements from a subject non-invasively, such as by electrical and/or optical interrogation, and so forth. The first sensor112and/or the second sensor114may be electronically and/or communicatively coupled to the processing device102. The processing device102may be configured to manage and/or control the first sensor112, the second sensor114, the power source108, the user interface104, and so forth. The processing device102may control a frequency or rate over time at which the first sensor112and/or the second sensor114take measurements, a wavelength or optical frequency at which the first sensor112and/or the second sensor114take measurements, a power consumption level of the first sensor112and/or the second sensor114, a sleep mode of the first sensor112and/or the second sensor114, and so forth.
The processing device102may control and/or adjust measurements taken by the first sensor112and/or the second sensor114to remove noise, increase a signal-to-noise ratio (SNR), dynamically adjust the number of measurements taken over time, enhance a signal amplitude, enhance one or more other signal qualities, and so forth. The power source108may include a battery, a solar panel, a kinetic energy device, a heat converter power device, a wireless power receiver, and so forth. The processing device102may be configured to (e.g. may be programmed to and/or include hardware to) transfer power from the power source108to the processing device102, the user interface104, the first sensor112, the second sensor114, other devices or units of the wearable device100, and so forth. The processing device102may be configured to regulate an amount of power provided from the power source108to the processing device102, the user interface104, the first sensor112, the second sensor114, and/or other devices or units of the wearable device100. In another embodiment, the wearable device100may include a power receiver to receive power to recharge the power source108. For example, the power receiver may include a wireless power coil, a universal serial bus (USB) connector, a Thunderbolt connector, a mini USB connector, a micro USB connector, a USB-C connector, and so forth. The power receiver may be coupled to the processing device102, the power source108, and so forth. The processing device102may be configured to regulate an amount of power provided from the power receiver to the power source108. The processing device102may include a power management unit configured to control battery management, voltage regulation, charging functions, alternating current to direct current conversion, voltage scaling, power conversion, dynamic frequency scaling, pulse-frequency modulation, pulse-width modulation, amplification, and so forth. The processing device102may be electronically and/or communicatively coupled to a communication device110. The communication device110may be configured to send and/or receive data via a cellular communication channel, a wireless communication channel, a Bluetooth® communication channel, a radio communication channel, a WiFi® communication channel, a USB communication channel, a fiber-optic communication channel, and so forth. The processing device102may include a data processor, a data storage device, a communication device, a graphics processor, and so forth. The processing device102may be configured to receive measurement data from the first sensor112and/or the second sensor114. The processing device102may be configured to process the measurement data and display information associated with the measurement data via the user interface104. The processing device102may be configured to communicate the measurement data to another device. The other device may process the measurement data and provide information associated with the measurement data to the subject or another individual. The other device may process the measurement data and provide results, analytic information, instructions, and/or notifications to the processing device102to provide to the subject. The wearable device100may communicate information associated with the measurement data or information related to the measurement data to a subject via the user interface104. The user interface104may include a visual display, an input mechanism, a buzzer, a vibrator, a speaker, a microphone, and so forth.
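For illustration only, one way the processing device102might implement the measurement scheduling and power management described above is sketched below in Python. The names (SensorChannel, scheduler_tick) and the rates shown are hypothetical and are not part of the described embodiments; the hardware read is replaced by a stub.

import random
import time

class SensorChannel:
    """Hypothetical handle for one sensor (e.g. the first sensor 112 or the second sensor 114)."""
    def __init__(self, name, rate_hz):
        self.name = name
        self.rate_hz = rate_hz  # measurement rate set by the processing device
        self.asleep = False     # sleep mode conserves the power source

    def measure(self):
        # Stand-in for a hardware read; returns a dummy value here.
        return random.random()

def scheduler_tick(channels, now, last_sample):
    """Sample each awake channel whose sampling period has elapsed."""
    readings = {}
    for channel in channels:
        if channel.asleep:
            continue  # skip sleeping sensors to reduce power consumption
        if now - last_sample.get(channel.name, 0.0) >= 1.0 / channel.rate_hz:
            readings[channel.name] = channel.measure()
            last_sample[channel.name] = now
    return readings

channels = [SensorChannel("spectrometer", rate_hz=1.0),
            SensorChannel("impedance", rate_hz=0.2)]
last_sample = {}
print(scheduler_tick(channels, time.monotonic(), last_sample))

Calling scheduler_tick periodically from a timer would trigger each sensor at its configured rate while leaving sleeping sensors untouched.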
The wearable device100may be part of a system connected to other devices. For example, the wearable device100may be configured to send and/or receive data with another device such as another measurement device, another user device, a remote server, a computer, a smartphone, and so forth. The wearable device100may be configured to receive data from another measurement device, aggregate the received data with measurement data from the first sensor112and/or the second sensor114, analyze the aggregated data, and provide information and/or notifications associated with the analyzed data. FIG.1Billustrates a side perspective exploded view of the first sensor112, according to an embodiment. Some of the features inFIG.1Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.1B. The first sensor112may include a miniaturized spectrometer. The first sensor112may include a filter112a, a collimator112b, and/or an optical sensor112c. The filter112amay include an optical filter, such as a variable filter, a linear variable filter, an absorptive filter, a dichroic filter, a monochromatic filter, an infrared filter, an ultraviolet filter, a neutral density filter, a long-pass filter, a band-pass filter, a short-pass filter, a guided-mode resonance filter, a metal mesh filter, a polarizer filter, an arc welding filter, a wedge filter, and so forth. The filter may include a Fabry-Perot Etalon filter. The filter112amay include a linear variable filter. The linear variable filter may allow for selecting which wavelengths strike the optical sensor112cat a specific position on the optical sensor112c. This may allow the processing device102to, in turn, distinguish the relative intensities of wavelengths reflected from a tissue to determine which wavelengths are most strongly reflected from the tissue relative to an initial intensity of those wavelengths as emitted from a light source. The processing device102may determine, based on the reflected wavelengths, one or more parameters, constituents, and/or conditions of the tissue. For example, light having a first wavelength may strike a first region of the optical sensor112ccorresponding to a first region of the filter112a. The first wavelength may correspond to a constituent of the subject's blood. The optical sensor112cmay communicate the intensity of the first wavelength to the processor. The processor may process the first wavelength based on an emitted intensity of the wavelength, an expected attenuation of the wavelength, and/or other attenuation factors to determine an amount of the constituent in the subject's blood. Different constituents of the subject's blood may transmit and/or reflect wavelengths of light at different intensities. The filter112amay pass different wavelengths to different positions on the optical sensor112c. The optical sensor112cmay pass the intensities of the corresponding wavelengths to the processor, and the processor may determine an amount of the blood constituent based on the relative intensities of the wavelengths. The filter112amay include an absorptive filter. The absorptive filter may be formed to have distinct cutoff edges between regions of the absorptive filter corresponding to different wavelength ranges. 
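For illustration only, the wavelength-to-position readout described above might be sketched as follows in Python. The calibration table, intensities, and function name are hypothetical; a real device would derive them from the particular filter112aand optical sensor112c.

import math

# Hypothetical calibration: pixel region index -> passband center wavelength (nm).
REGION_WAVELENGTH_NM = {0: 425, 1: 475, 2: 525, 3: 575}

def relative_attenuation(received, emitted):
    """Absorbance-style value per region: higher means the tissue attenuated
    that wavelength more strongly relative to the emitted intensity."""
    spectrum = {}
    for region, wavelength in REGION_WAVELENGTH_NM.items():
        ratio = max(received[region], 1e-9) / emitted[region]
        spectrum[wavelength] = -math.log10(ratio)
    return spectrum

emitted = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}       # source intensity per band
received = {0: 0.52, 1: 0.81, 2: 0.30, 3: 0.77}  # intensity striking each pixel region
print(relative_attenuation(received, emitted))

The per-wavelength values could then be compared against reference attenuation curves to estimate an amount of a blood constituent, along the lines described above.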
The absorptive filter may be manufactured of a durable and/or flexible material. The filter112amay include a dichroic filter (i.e. an interference filter). The dichroic filter may be variable. The dichroic filter may allow for precise selection of wavelengths to be passed through the filter112a. For example, the dichroic filter may have a transmission profile with a narrow peak, such as a full-width half-maximum (FWHM) wavelength range of 50 nm, 40 nm, 30 nm, 25 nm, 20 nm, 10 nm, 5 nm, and/or 1 nm. The dichroic filter may be implemented in embodiments where the filter112ais incorporated into a sensor for measuring sensitive phenomena. The sensitive phenomena may include various physiological parameters, conditions, and/or constituents for which small-percentage changes, such as less than or equal to a 50 percent change, result in dramatically different outcomes. For example, the sensitive phenomenon may include a blood acidity level. A healthy blood acidity may include a pH of 7.4. A blood pH less than or equal to 6.8 or greater than or equal to 7.8 may result in irreversible cell damage. As another example, the sensitive phenomenon may include bone density. The filter112amay include a grism. The filter112amay include a prism coupled to a diffraction grating. The grism and/or the coupled prism and diffraction grating may be referred to as the grism. The prism may include a dispersion prism and/or a prismatic sheet, such as a Fresnel prism. The diffraction grating may include a ruled grating, a holographic grating, a transmission grating, a reflective grating, a blazed holographic grating, a concave grating, an aberration-corrected concave grating, a constant deviation monochromator concave grating, a Rowland type concave grating, a blazed holographic concave grating, a sinusoidal holographic grating, a sinusoidal ruled grating, a pulse compression grating, and so forth. The diffraction grating may include a volume phase holographic grating. The diffraction grating may diffract impinging light along one dimension or along two dimensions. The collimator112bmay include a device that restricts beam(s) of particles or waves passing into the first sensor112, such as light in visible and/or non-visible wavelengths, to specific directions of motion, angles, or ranges of angles so that the beam(s) become more aligned in a specific direction as the beam(s) travel through the first sensor112. The collimator112bmay restrict a spatial cross-section of the beam(s). The collimator112bmay restrict the beam(s) along one dimension and/or along two dimensions. The collimator112bmay be formed in one or more of a variety of ways. The collimator112bmay be formed of one or more microtubes. The collimator112bmay include a plurality of microtubes, where a microtube of the plurality of microtubes is defined by one or more walls encircling a through-channel. A microtube of the plurality of microtubes may have a width ranging from 10 microns to 150 microns, and/or a height ranging from 30 microns to 500 microns. For example, the microtube may have a height less than the thickness of 4 pages of printer paper, and a width less than the thickness of 1 page of printer paper. The microtubes may be prepared separately and joined together, such as by a binder, or the microtubes may be prepared together. For example, the walls of the microtubes may be formed of carbon nanotubes (CNTs).
A catalyst layer may be patterned on a substrate forming an impression of the plurality of microtubes, and the CNTs may be grown on the catalyst layer, forming the walls encircling the through-channels to form the microtubes. The collimator112bmay include a volume of material through which pores and/or apertures are formed. The volume of material may, for example, include a photoresist material. The pores and/or apertures may be etched through the photoresist material, such as by photolithography or plasma etching. The collimator112bmay be positioned against the filter112aand/or the optical sensor112c. For example, the collimator112bmay be disposed between the filter112aand the optical sensor112c, or the filter112amay be disposed between the collimator112band the optical sensor112c. A wall forming a microtube of the collimator112bmay be aligned normal to a surface of the filter112aand/or a surface of the optical sensor112c. Light may pass through the filter112aand the collimator112bmay allow light within a range of normal incidence passing from the filter112ato impinge on the optical sensor112c. The collimator112bmay allow light to impinge on the filter112awithin a range of normal incidence. The collimator wall may be aligned at a non-normal angle relative to the surface of the filter112aand/or the surface of the optical sensor112c. The angle may correspond to an angle of separated light leaving the filter112a. The optical sensor112cmay be operable to convert light rays into electronic signals. For example, the optical sensor112cmay measure a physical quantity of light such as intensity and translate the measurement into a form that is readable by the processor such as an amount of current corresponding directly to the intensity of the light. The optical sensor112cmay include a semiconductor. The semiconductor may have one or more bandgaps corresponding to a wavelength and/or wavelength range. The semiconductor may be arranged into an array, such as an array of pixels, corresponding to specific regions of the filter112a. In another example, the optical sensor112cmay include a temperature sensor, a velocity sensor, a liquid level sensor, a pressure sensor, a displacement (position) sensor, a vibration sensor, a chemical sensor, a force sensor, a force radiation sensor, a pH-value sensor, a strain sensor, an acoustic field sensor, an electric field sensor, a photoconductive sensor, a photodiode sensor, a through-beam sensor, a retro-reflective sensor, a diffuse reflection sensor, and so forth. The optical sensor112cmay include a segment such as a pixel. The optical sensor112cmay include a plurality of the segments arranged in an array, such as an array of pixels. The sensor segment may be aligned with a region of the filter112a. The segment may have an identifier such that the processor may associate the segment with the region of the filter. The identifier may enable the processor to determine a wavelength of light detected by the segment of the optical sensor112c. For example, the optical sensor112cmay include a first sensor segment aligned with a first filter region, a second sensor segment aligned with a second filter region, and so forth. The first sensor segment may be identified by the processor as detecting a wavelength and/or range of wavelengths that may correspond to a passband of the first filter region. For example, wavelengths ranging from 400 nm to 449 nm may pass unfiltered through the first filter region.
The unfiltered light may strike the first sensor segment, and the first sensor segment may, in response, generate an electrical signal that may be transmitted to the processor. The processor may identify the electrical signal as being transmitted by the first sensor segment and may identify that signals transmitted by the first sensor segment may be generated by light having a wavelength ranging from 400 nm to 449 nm. The filter112a, the collimator112b, and the optical sensor112cmay be stacked together to form the first sensor112. The filter112a, the collimator112b, and the optical sensor112cmay be integrated together to form an integrated sensor body. The filter112a, the collimator112b, and the optical sensor112cmay be interconnected together. The filter112a, the collimator112b, and the optical sensor112cmay be stacked vertically on top of each other. The filter112amay be wedge-shaped, with a relatively thick end that tapers to a thinner edge. The collimator112band the optical sensor112cmay have relatively flat top surfaces and/or bottom surfaces. When the filter112ais wedge-shaped, a filling material112dmay be attached or affixed to the collimator112band/or the optical sensor112cso that the filter112amay rest or attach flush or level to the collimator112band/or the optical sensor112c. The filling material112dmay include an optically transparent material (such as clear glass or a clear plastic), an optically translucent material (such as polyurethane, colored or frosted glass, colored or frosted plastic, and so forth), or other material that does not interfere with defined wavelengths of light. The filling material112dmay be attached or affixed to the collimator112band/or the optical sensor112cby an adhesive, by welding, by friction, by a pressure fit, and so forth. FIG.1Cillustrates a perspective view of the second sensor114, according to an embodiment. Some of the features inFIG.1Cmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.1C. The second sensor114may include a miniaturized impedance sensor. The miniaturized impedance sensor may include a substrate114awhich may provide structural support for one or more microstructures. The microstructures may include various intermediate layers114b, a microelectrode114c, and/or an interstitial filler114d. The miniaturized impedance sensor may include the substrate114a, one or more of the intermediate layers114b, the microelectrode114c, and/or the interstitial filler114d. The miniaturized impedance sensor may include a plurality of microelectrodes114c. The substrate114amay provide a base support structure for deposition, growth, and/or etching of the microstructures. The substrate114amay provide a support structure for integrating the second sensor114into the wearable device100. The substrate114amay include a silicon and/or a tungsten wafer. The substrate114amay include glass, such as a glass fiber-reinforced resin. The substrate114amay be formed of a flexible material such as polyimide. The substrate114amay include one or more conductors, such as an electrical trace or a through-surface via. The conductors may electrically couple the microelectrodes114cto electronics external to the second sensor114, such as the processing device102.
The intermediate layers114bmay include a conductive layer, one or more insulating layers, and/or a catalyst layer. The conductive layer may electrically couple the microelectrode114cto the substrate114aconductor. The catalyst layer may catalyze growth of the microelectrode114c. In an embodiment, the intermediate layers114bmay include one or more ceramic insulating layers, such as alumina, which may be rendered conductive by a preparation process of the miniaturized impedance sensor. The microelectrode114cmay include a bundle of nanotubes. The bundle may be infiltrated with a bolstering material, where “bolster” may refer to a property of a material that increases the resistance of that material, and/or of another material with which it is incorporated, against an applied force. Accordingly, the bolstering material may increase the rigidity of the bundle relative to similarly structured bundles not including the bolstering material. The bolstering material may reduce the brittleness of the bundle relative to similarly structured bundles not including the bolstering material. For example, the nanotubes may include CNTs grown on an iron catalyst. The bolstering material may include carbon, a metal, and/or a conductive polymer. The microelectrode114cmay include CNTs infiltrated with carbon. The microelectrode114cmay include CNTs infiltrated with a conductive polymer. The microelectrode114cmay include a polymer coated with a conductive film. The conductive film may include a thin film. The thin film may include metal and/or carbon. The polymer may be formed into a pillar. The interstitial filler114dmay be positioned between rows and/or columns of microstructures on the substrate114a. The interstitial filler114dmay fill a region between separate microelectrodes114c. The interstitial filler114dmay include a polymer. The interstitial filler114dmay include a photoresist material. The interstitial filler114dmay include polyimide. The interstitial filler114dmay include bisphenol A novolac epoxy. The interstitial filler114dmay be deposited on the substrate114aand/or around the intermediate layers114band microelectrodes114cby sputtering and/or spin-coating. The first sensor112and/or the second sensor114may be referred to as the physiological sensor(s). The physiological sensor may be pressure-sensitive such that a pressure of the physiological sensor against the subject directly correlates with noise in an electronic signal generated by the physiological sensor or an accuracy of the physiological measurement generated from the electronic signal. For example, the pressure with which the first sensor112is pressed against the subject may affect an amount of light received by the first sensor112and where the light is received. If the first sensor112is not pressed against the subject with sufficient pressure, light from outside the subject's body may strike the first sensor112, adding significant noise to the signal generated by the first sensor112. If the light source is not pressed against the subject with enough pressure, light may scatter outside the subject's body. If the scattered light is received by the first sensor112, the signal generated by the first sensor112may include a significant amount of noise.
Similarly, if the second sensor114is not pressed against the subject's body with sufficient pressure, an impedance measured by the second sensor114may be significantly higher than the impedance due to a physiological characteristic of the subject, thus introducing noise into the signal generated by the second sensor114. Ensuring the physiological sensor is pressed against the subject with the correct pressure may, therefore, minimize the amount of noise in the signal generated by the physiological sensor. The first sensor112, the second sensor114, a light source, and/or other measurement electronics may be incorporated together into a sensor module. The sensor module may be referred to unitarily as a physiological sensor throughout this disclosure. The first sensor112may take a first measurement of a physiological state of the subject via a first physical mechanism, such as by optical spectroscopy. The second sensor114may take a second measurement of the physiological state via a second physical mechanism that may be different from the first physical mechanism, such as by impedance spectroscopy. The processing device102may be configured to (e.g. may include programming instructions that, when executed, perform a function that) filter noise from the first measurement and the second measurement by comparing the first measurement and the second measurement. FIG.2Aillustrates the wearable device100on a wrist202of the subject, according to an embodiment. Some of the features inFIG.2Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.2A. The wrist202may include a type of physiological structure204(e.g. a muscular-walled tube, a vein, an artery, a skeletal structure, a muscular structure, an organ, and so forth). The physiological structure204may be, in an embodiment, a vein or an artery. The wearable device100may have an integrated physiological sensor206. The physiological sensor206may be, for example, the first sensor112and/or the second sensor114. For example, the physiological sensor206may include the miniaturized impedance sensor and/or the miniaturized spectrometer. The wearable device100may be positioned on the wrist202so that the physiological sensor206may be positioned over the physiological structure204. In an embodiment, the physiological structure204may be positioned in the wrist202approximate to an underside of the wrist202. For example, the physiological structure204may be positioned in the wrist202between a dermal layer of the wrist202and one or more bones in the wrist202. The physiological sensor206may be positioned against the underside of the wrist202. This may optimize an accuracy and/or a precision of a measurement taken by the physiological sensor206from the physiological structure204. The wearable device100may use the measurements to determine a physiological condition of the subject. Positioning the physiological sensor206against the underside of the wrist may also reduce a chance of the physiological sensor206being struck or otherwise damaged in a way that may affect the accuracy and/or precision of the measurement taken by the physiological sensor206. 
For example, an outside of the wrist202may be exposed to other surfaces against which the wearable device100may be struck, whereas an underside of the wrist202may be less likely to strike other surfaces because it faces towards a body of the subject. FIG.2Billustrates the wearable device100on an arm208of the subject, according to an embodiment. Some of the features inFIG.2Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.2B. The arm208, or more generally the body part of the subject, may include a type of the physiological structure204. The wearable device100may be positioned on the arm208so that the physiological sensor206may be positioned over the physiological structure204. The wearable device100may be worn by the subject on another body part such as a hand of the subject, a forearm of the subject, an elbow of the subject, a chest of the subject, a neck of the subject, a head of the subject, a torso of the subject, a waist of the subject, a thigh of the subject, a calf of the subject, a knee of the subject, an ankle of the subject, a foot of the subject, and so forth. The body part may include the type of the physiological structure such as a muscular-walled tube, an ulnar artery, a radial artery, a brachial artery, a basilic vein, a cephalic vein, an axillary artery, an axillary vein, a carotid artery, a jugular vein, an iliac artery, a femoral artery, a femoral vein, a tibial artery, a great saphenous vein, a dorsalis pedis artery, an arch of foot artery, a temporal artery, and so forth. The physiological structure may include an organ, a tissue, a skeletal structure, a muscle, a tendon, a ligament, the subject's skin, and so forth. The physiological sensor206may be pressed against a skin surface of the body part. The physiological sensor206and/or wearable device100may be positioned on the body part over a region of the body part where the muscular-walled tube may be closest to the skin surface for the body part. The physiological sensor206may be positioned against the body part where the muscular-walled tube may be positioned between the physiological sensor206and a skeletal structure of the body part. This may minimize a distance between the physiological sensor206and the muscular-walled tube, which in turn may optimize one or more biometric measurements taken by the physiological sensor206from the muscular-walled tube. The physiological sensor206and/or the wearable device100may be positioned on the body part over a region of the body part where the skeletal structure is positioned between the skin surface and the muscular-walled tube. This may maximize the distance between the physiological sensor206and the muscular-walled tube, which in turn may minimize effects of the muscular-walled tube on measurements taken by the physiological sensor206. For example, the subject may desire to measure a relatively static physiological condition, physiological parameter, and/or physiological constituent such as a bone density of the subject and/or a body fat percentage of the subject. The physiological structure204may be a dynamic structure, such as a muscular-walled tube that changes shape with the subject's heartbeat, and may interfere with measuring the static physiological condition, physiological parameter, and/or physiological constituent.
Accordingly, maximizing the distance between the physiological sensor206and the physiological structure204may result in more accurate and/or precise measurements of the static physiological condition, physiological parameter, and/or physiological constituent. The physiological sensor206and/or the wearable device100may be positioned on the body part such that the physiological sensor206may be approximate to the physiological structure204and the skeletal structure such that the physiological structure204is not between the skeletal structure and the physiological sensor206and the skeletal structure is not between the physiological structure204and the physiological sensor206. FIG.3Aillustrates a first perspective view of an adjustable measurement device300attached to the band106of the wearable device100, according to an embodiment. Some of the features inFIG.3Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.3A. The adjustable measurement device300may include a housing302configured to attach to the band106. The housing302may house various electronic components. As such, the housing302may be formed to protect those components. The housing may be formed of a material, and may have corresponding dimensions, that resist flexing under the types of conditions the wearable device100may be subjected to as the subject wears the wearable device100. For example, the housing may be formed of aluminum, polyvinyl chloride, polycarbonate, and so forth. A thickness of the walls of the housing may vary depending on the type of material used to make the housing but may generally range from approximately 1/64 of an inch to approximately ⅛ of an inch. The range may be referred to as “approximate” because various manufacturing limitations may result in, for example, variation of the thickness of the walls from 1/64 of an inch, such as plus or minus 5 thousandths of an inch (mils), and so forth. The band106may be wearable by the subject, such as on a wrist, arm, neck, head, leg, and/or ankle of the subject, and so forth. The housing302may be hollow, rigid, and/or shaped to be complementary to a body part of the subject against which the housing302is pressed by the band106as the subject wears the band106. For example, various surfaces of the housing302may be substantially planar (i.e. planar to within a manufacturing tolerance) or may be curved. The curve of the housing302may be complementary to the body part of the subject the housing302is designed to be worn against. A radius of a curve of the housing302may, therefore, be approximately equal to a radius of the subject's wrist, a radius of the subject's forearm, a radius of the subject's bicep, a radius of the subject's neck, a radius of the subject's head, a radius of the subject's ankle, and so forth. The radius of the curve of the housing302may accordingly range from ⅛ of an inch to ¾ of an inch when the housing302is designed for a finger-worn implementation, from ½ of an inch to 4 inches when the housing302is designed for a wrist-worn implementation, from 3 inches to 15 inches when the housing302is designed for a chest-worn implementation, and so forth. A shape of the body part to which the housing302is complementary and is designed to be held against may be curvilinear and/or non-uniform.
For example, a cross-section of the subject's wrist may not be perfectly circular or a perfect oval. Rather, a curvature of a first portion of the subject's wrist may have a different radius than the curvature of a second portion of the subject's wrist, and so forth. The same may be true of various other body parts of the subject. Accordingly, the housing302may have a cross-sectional curvature with a first portion and a second portion. The first portion of the curvature of the housing302may have a first radius. The second portion of the curvature of the housing302may have a second radius. The curvature and/or general shape of the housing302may be designed for a specific subject or may be generalized. For example, a mold may be made of a specific subject's body part and the mold of the body part may be used to form a mold for the housing302. In another example, a generalized shape may be determined by overlaying cross-sections of a large sample of subjects (e.g. 100 subjects, 500 subjects, 1000 subjects, and so forth). The cross-sections may be sub-divided into size groupings such as a small-size grouping, a medium-size grouping, a large-size grouping, and so forth. An average shape of the cross-sections, collectively and/or within the size groupings, may be calculated by segmenting the cross-sections and determining average radii for the segments. The average shape of the cross-sections may be used to create a mold for the housing302. The band106may include an inward-facing surface106aand an outward-facing surface106b. The inward-facing surface106amay face towards the subject's body part as the subject wears the wearable device100. The outward-facing surface106bmay face away from the body part on which the subject is wearing the wearable device100. Similarly, an inward-facing portion302aof the housing302may face inwards towards the body part of the subject as the subject wears the wearable device100and an outward-facing portion302bof the housing302may face outwards from the body part as the subject wears the wearable device100. The inward-facing portion302aof the housing302may be shaped to conform to the subject's body part. The outward-facing portion302bmay be shaped complementarily to the inward-facing portion302a. The outward-facing portion302bmay have a different shape than the inward-facing portion302a. For example, the inward-facing portion302amay be curvilinear and the outward-facing portion302bmay be approximately flat. The inward-facing portion302amay be rectangular relative to a plane (i.e. may create a rectangular projection on the plane) and the outward-facing portion302bmay be circular relative to the same plane (i.e. may create a circular projection on the plane). FIG.3Billustrates a front view of the user interface104of the wearable device100, according to an embodiment. Some of the features inFIG.3Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.3B. The user device118, which may be coupled to the band106, may include the user interface104. The user interface104may be electronically coupled to the processing device102. The processing device102may be configured to (i.e.
may store and/or execute computer program code that, when executed, generates an output based on stored data and/or an input) compare a current pressure for the band106on the subject to an optimal pressure for the band106on the subject. The pressure may be measured by a pressure sensor in the band106, the user device118, the adjustable measurement device300, and/or the physiological sensor206. The user interface104may be configured to generate an indicator304that indicates a difference between a current pressure and an optimal pressure of the band106, the user device118, the adjustable measurement device300, and/or the physiological sensor206against the subject's body part. The optimal pressure may be a single pressure value or any of various pressure values within a range of optimal pressure values. To minimize noise in a measurement taken by the physiological sensor206, the physiological sensor206may be pressed against the subject's body part at a pressure within a range of optimal pressures. If the pressure is too light, factors external to the subject's body part may influence the measurement. If the pressure is too great, the physiological sensor206, the band106, the user device118, and/or the adjustable measurement device300may deform the subject's body part or otherwise affect the subject's body part in a way that produces an inaccurate measurement. For example, pressing the physiological sensor206against the subject's skin too hard may squeeze blood out of capillaries under the skin. Pressing the physiological sensor206against the subject's skin too hard may burst the capillaries. Having the band106too tight around the subject's wrist may restrict blood flow, which may affect measurement of constituents in the subject's blood. Thus, it may be beneficial to ensure the physiological sensor206, the band106, the user device118, and/or the adjustable measurement device300are pressed against the subject within a range of pressures that prevents outside influence on the measurement and does not distort the physiological characteristic being measured. The indicator304may communicate to the subject or other individual operating the physiological sensor206whether the physiological sensor206, the band106, the user device118, and/or the adjustable measurement device300is pressed against the subject in the optimal range of pressures. The indicator304may include an audible indication or a visual indication. For example, the user interface104may include a speaker, a touch screen display, an output-only display, and so forth. The user interface104may be electronically coupled to the processing device102. The processing device102may store and/or execute instructions to generate outputs and/or communicate the outputs via the user interface104. When the processing device102executes a function that outputs the indicator, the user interface104may respond by displaying the indicator via the display, emitting a sound, and so forth. Accordingly, the user interface104may be configured to generate and/or communicate the indicator. The indicator304may notify the subject whether the pressure measurement value is above a maximum pressure value or below a minimum pressure value. The indicator304may notify the subject whether the pressure measurement value is outside the range of optimal pressures. The range of optimal pressures may have an upper limit equal to a maximum optimal pressure and a lower limit equal to a minimum optimal pressure.
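For illustration only, the comparison described above might be sketched as follows in Python; the function name and threshold values are hypothetical.

def pressure_indicator(measured_kpa, min_kpa, max_kpa):
    """Compare a measured pressure to the range of optimal pressures and
    return an instruction plus the amount (kPa) outside the range, if any."""
    if measured_kpa < min_kpa:
        return ("tighten the band", min_kpa - measured_kpa)
    if measured_kpa > max_kpa:
        return ("loosen the band", measured_kpa - max_kpa)
    return ("pressure OK", 0.0)

print(pressure_indicator(1.0, min_kpa=1.5, max_kpa=2.0))  # ('tighten the band', 0.5)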
The indicator304may notify the subject of an amount by which the pressure measurement value may be outside the range of pressure values. The indicator304may instruct the subject to adjust the pressure. For example, the indicator may instruct the subject to increase the pressure on the physiological sensor206or decrease the pressure on the physiological sensor206. As another example, the indicator may instruct the subject to tighten the band106or loosen the band106. FIG.4illustrates a perspective view of the adjustable measurement device300attached to the band106where the band106includes a pressure sensor400, according to an embodiment. Some of the features inFIG.4may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.4. The wearable device100may include the pressure sensor400coupled to the band106. The pressure sensor400may be configured to measure a pressure of the band106on the subject or a pressure of the physiological sensor206against the subject as the band106may be attached to the subject. The pressure sensor400may include a strain gauge, a piezoresistive strain gauge, a capacitive pressure sensor, an electromagnetic pressure sensor, a piezoelectric strain gauge, an optical pressure sensor, a potentiometric pressure sensor, a force balancing pressure sensor, and so forth. A type of the pressure sensor400incorporated with the wearable device100and/or the adjustable measurement device300may depend on how the pressure sensor400is incorporated and/or what the pressure sensor400is designed to directly measure. For example, the pressure sensor400may measure a tightness of the band106on the subject. In such an example, a strain gauge may be integrated into the band106. As another example, the pressure sensor400may measure a pressure of the physiological sensor206against the subject. In such an example, a capacitive pressure sensor, electromagnetic pressure sensor, and/or potentiometric pressure sensor may be positioned between the physiological sensor206and the band106, the housing302, and/or the user device118, and so forth. The pressure sensor400may include a strain gauge. The strain gauge may include conductive tracings embedded in the band106. The conductive tracings may be formed of copper, silver, gold, tungsten, graphite, graphene, and/or carbon nanotubes, and so forth. The conductive tracings may be electronically coupled to the processing device102. The processing device102may be configured to measure a change in resistance of the conductive tracings. The change in resistance may reflect a strain on the band106. The strain on the band106may be a direct indicator of the amount of pressure the band106is placing on the subject's body part. The pressure sensor400may be coupled to the band106and electronically coupled to the processing device102. For example, the pressure sensor400may couple two ends of the band106together. In another example, the pressure sensor400may be coupled to the band106and the user device118between the band106and the user device118. The pressure sensor400may generate an electronic signal corresponding to a pressure of the band106on the subject as the subject wears the band106. The processing device102may convert the electronic signal into a pressure measurement.
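For illustration only, one way the strain-gauge signal might be converted into a pressure measurement is sketched below in Python; the gauge factor and calibration constant are hypothetical and would be determined for the particular band106and tracings.

def strain_from_resistance(r_measured, r_unstrained, gauge_factor=2.0):
    """Strain from the fractional resistance change: GF = (dR/R) / strain."""
    return (r_measured - r_unstrained) / (r_unstrained * gauge_factor)

def pressure_from_strain(strain, kpa_per_unit_strain=400.0):
    """Hypothetical linear calibration mapping band strain to contact pressure (kPa)."""
    return kpa_per_unit_strain * strain

strain = strain_from_resistance(r_measured=121.2, r_unstrained=120.0)
print(round(pressure_from_strain(strain), 2), "kPa")  # a 1 percent resistance rise gives 2.0 kPa here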
The pressure measurement may have a corresponding pressure measurement value. The pressure measurement value may represent a tightness of the band106on the subject. It may be beneficial to measure the tightness of the band106on the subject. For example, the physiological sensor206may be embedded in the band106. As stated above, the physiological sensor206may provide optimal signal quality when pressed against the subject within a range of pressures. The strain in the band106may indicate how tightly the band106is pressed against the subject and, therefore, how tightly the physiological sensor206is pressed against the subject. In another example, the physiological sensor206may be coupled to the band106, the adjustable measurement device300, and/or the user device118by an elastic coupling mechanism such as a spring. In such an embodiment, the pressure sensor400may be positioned between the physiological sensor206and the elastic coupling mechanism, between the elastic coupling mechanism and the band106, and so forth. The pressure sensor400may thereby directly measure the pressure of the physiological sensor206against the subject. FIG.5Aillustrates a perspective view of the wearable device100where the band106is incrementally tightenable, according to an embodiment. Some of the features inFIG.5Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5A. The band106may include a first side106cextending from the user device118and/or the housing302of the adjustable measurement device300to a first end106dof the band106. The first side106cof the band106may include a set of teeth502along the first side106cof the band106between the user device118(and/or the housing302, as the case may be) and the first end106d. The set of teeth502may include a first tooth, a second tooth, and so forth. The band106may include a second side106eextending from the user device118and/or the housing302of the adjustable measurement device300to a second end106fof the band106. The second side106emay include a keeper loop504affixed to the second side106eof the band106approximate to the second end106fof the band106. The first and/or second tooth, and so forth, may engage with a securing mechanism that secures the first end106dof the band106to the second end106fof the band106. The securing mechanism may include a pawl, a cantilevered pawl, a gear, one or more teeth of a second set of teeth on the second end106fof the band106, and so forth. A distance between the teeth of the set of teeth502, such as between the first tooth and the second tooth, may be such that tightening the band106on the subject, such as around the subject's body part, from engagement of the securing mechanism with the first tooth to engagement of the securing mechanism with the second tooth increases the pressure on the subject by the band106in a range from 0.1 kPa to 1 kPa. For example, the securing mechanism may be a cantilevered pawl. Moving the cantilevered pawl from engagement with the first tooth to engagement with the second tooth may increase the pressure on the subject by the band106in a range from 0.1 kPa to 0.5 kPa, in a range from 0.2 kPa to 0.5 kPa, in a range from 0.1 kPa to 0.3 kPa, in a range from 0.2 kPa to 0.3 kPa, or by approximately 0.2 kPa.
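For illustration only, the per-tooth increment arithmetic might look as follows in Python, assuming the approximately 0.2 kPa per tooth mentioned above; the values are hypothetical.

import math

def teeth_to_advance(current_kpa, target_min_kpa, kpa_per_tooth=0.2):
    """Estimate how many additional tooth engagements bring the band
    pressure up to the lower limit of the optimal range."""
    deficit = target_min_kpa - current_kpa
    return max(0, math.ceil(deficit / kpa_per_tooth))

print(teeth_to_advance(current_kpa=1.0, target_min_kpa=1.5))  # 3 teeth, about 0.6 kPa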
The spacing of the first tooth, second tooth, and so forth of the set of teeth502may be designed such that at least one increment of change in the pressure from engagement of the securing mechanism with the first tooth to engagement of the securing mechanism with the second tooth is less than the width of the range of optimal pressures for the physiological sensor206. The spacing may be such that a sum of at least two increments of change in the pressure is less than the width of the range of optimal pressures. The spacing may be such that a sum of up to five increments of change in the pressure is less than the width of the range of optimal pressures. The spacing may be such that a sum of up to ten increments of change in the pressure is less than the width of the range of optimal pressures. The band106may be pliable enough that the weight of the band106is sufficient to bend the band106. The band106may be rigid enough that the weight of the band106is not sufficient to bend the band106. The band106may be formed in an arc as the band106is attached to the subject and/or via a manufacturing process of the band that renders the band rigid enough to retain the arc shape against its own weight. FIG.5Billustrates a zoomed-in side view of the band106of the wearable device100where the band106is incrementally tightenable, according to an embodiment. Some of the features inFIG.5Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5B. The set of teeth502may be disposed on an outside face (i.e. the outward-facing surface106b) of the band106. A cantilevered pawl506may be attached to the keeper loop504. The set of teeth502may include a first tooth502aand/or a second tooth502b. The first tooth502aand/or the second tooth502bmay include a catch face502cthat may engage with a catch surface506aof the cantilevered pawl506. The catch face502cmay form a non-normal angle with the outside face of the band106as the band106is formed in the arc such that at least a portion of the cantilevered pawl506(e.g. an engagement end506band/or the catch surface506a) is disposed under the catch face502cbetween the catch face502cand the first side106cof the band106as the cantilevered pawl506engages with the catch face502c. The catch face502cmay form a non-normal angle with the outside face of the band106to ensure that, even as the band106bends, the first tooth502aand/or the second tooth502bremains engaged with the cantilevered pawl506. Such a structure may also enable the cantilevered pawl506to remain engaged with the first tooth502aand/or the second tooth502bas the subject moves and/or the band106changes shape. The set of teeth502may be disposed on an inside face of the band106, e.g. the inward-facing surface106a. The catch face502cof the first tooth502aand/or the second tooth502bmay form a non-normal angle with the inside face of the band106as the band106is formed in an arc and attached to the subject. At least a portion of the cantilevered pawl506may be positioned under the catch face502cbetween the catch face502cand the inside face of the band106as the catch surface506aengages with the catch face502c. FIG.5Cillustrates a first perspective view of the cantilevered pawl506, according to an embodiment. Some of the features inFIG.5Cmay be the same as or similar to some of the features in the other FIGS.
described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5C. The cantilevered pawl506may be coupled to the keeper loop504. The cantilevered pawl506may be positioned on the keeper loop504such that, as the first side106cof the band106is passed through the keeper loop504, the cantilevered pawl506engages the first tooth502aor the second tooth502b. The cantilevered pawl506may prevent the first side106cof the band106from pulling out of the keeper loop504. The cantilevered pawl506may be monolithically integrated with the keeper loop504. For example, the cantilevered pawl506and keeper loop504may be 3D printed together or may be manufactured from a single mold of an injection molding system. A distance between an engagement end506bof the cantilevered pawl506and an inside surface504aof the keeper loop504may be less than a thickness of the band106along the set of teeth502. As the first end106dof the band106and the set of teeth502are positioned in the keeper loop504between the cantilevered pawl506and the inside surface504a, and/or as the cantilevered pawl506engages the first tooth502aor the second tooth502b, a torsional force between the keeper loop504and the cantilevered pawl506may force the cantilevered pawl506against the first tooth502aor the second tooth502b. The cantilevered pawl506may thereby be configured to prevent the first end106dof the band106from withdrawing from the keeper loop504. FIG.5Dillustrates a second perspective view of the cantilevered pawl506and includes the band106, according to an embodiment. Some of the features inFIG.5Dmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5D. The engagement end506bof the cantilevered pawl506may include a first instance of the catch surface506athat engages the first tooth502aand/or a second instance of the catch surface506athat engages the second tooth502bsimultaneously as the first instance of the catch surface506aengages the first tooth502a. The cantilevered pawl506may include three instances of the catch surface506a, four instances of the catch surface506a, five instances of the catch surface506a, and so forth. A set of teeth502having a shallower depth may be more effectively engaged by a plurality of instances of the catch surface506a, increasing a maximum amount of resistive force the cantilevered pawl506and set of teeth502can exert counter to one or more forces that may pull the first end106dof the band106from the keeper loop504. FIG.5Eillustrates a first perspective view of a second type of the cantilevered pawl506, according to an embodiment. Some of the features inFIG.5Emay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5E. The band106may include an outside face, e.g. the outward-facing surface106b, that faces away from the subject as the band106is attached to the subject. An inside face, e.g. the inward-facing surface106a, faces towards and/or is pressed against the subject as the band106is attached to the subject.
The band106may include a side face106g. The side face106gmay extend between the outside face and the inside face. The side face106gmay be approximately perpendicular to the outside face and the inside face. A plane formed in part by the side face106gmay intersect with a plane formed in part by the outside face and/or a plane formed in part by the inside face. The side face106gmay be planar or may be curved. The outside face and/or the inside face may be curved. The set of teeth502may be disposed on the side face106gof the band106. The keeper loop504may wrap around the width of the band106and the cantilevered pawl506may be on a side of the keeper loop504that is approximately coplanar with, parallel to, and/or otherwise aligned with the side face106gof the band106. The first tooth502aand/or the second tooth502bmay include the catch face502c. The cantilevered pawl506may include the catch surface506athat engages with the catch face502cof the first tooth502aor the second tooth502bsuch that, as the band106is formed in an arc, and/or as the first end106dof the band106passes through the keeper loop504, the catch surface506aof the cantilevered pawl506is flush with the catch face502cof the first tooth502aand/or the second tooth502b. The user interface104may be coupled to the keeper loop504. For example, the user interface104may be incorporated into a top surface504bof the keeper loop504. The power source108may be coupled to and/or incorporated with the keeper loop504. The keeper loop504may include electrical contacts which may electrically couple the user interface104to the electrical trace or circuit116in the band106. The electrical trace or circuit116may electronically couple the user interface104to the processing device102, the power source108, the communication device110, and so forth. The keeper loop504may incorporate one or more of the structural elements of the housing. The keeper loop504and the housing302may be incorporated and/or integrated together. The keeper loop504and the housing302may be a single unit. FIG.5Fillustrates a second perspective view of the second type of the cantilevered pawl506with a portion of the keeper loop504removed to show the cantilevered pawl506engaged with the first tooth502aand the second tooth502b, according to an embodiment. Some of the features inFIG.5Fmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5F. The keeper loop504may include two instances of the cantilevered pawl506. The first instance of the cantilevered pawl506may be on one side of the keeper loop504and the second instance of the cantilevered pawl506may be on a side of the keeper loop504opposite the first instance of the cantilevered pawl506. A spring mechanism508may couple the instances of the cantilevered pawl506to the keeper loop504. The engagement end506bof the cantilevered pawl506may pass through an opening in the keeper loop504. An inner width of the keeper loop504may be equal to the width of the band106plus a clearance between the band106and the keeper loop504. The clearance may range from 0.1 mm to 2 mm. FIG.5Gillustrates a perspective view of a third type of the cantilevered pawl506, according to an embodiment. Some of the features inFIG.5Gmay be the same as or similar to some of the features in the other FIGS. 
described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5G. The keeper loop504may be hollow similar to the housing302of the adjustable measurement device300. Electronics such as the physiological sensor206, the processing device102, the power source108, and/or the communication device110may be disposed within the hollow keeper loop504. The keeper loop504may include a window504c. The physiological sensor206may be positioned in the window504c, aligned with the window504c, and/or may extend through the window504c. The window504cmay be positioned on an underside504dof the keeper loop504. The underside504dof the keeper loop504may be positioned against the subject's body part as the subject wears the band106with the keeper loop504. FIG.5Hillustrates a top cross-section view of a motorized band tightening mechanism510, according to an embodiment. Some of the features inFIG.5Hmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5H. The wearable device100may include the motorized band tightening mechanism510. The motorized band tightening mechanism510may be integrated into the second end106fof the band106. The motorized band tightening mechanism510may include a gear510acoupled to a motor510b. The motor510bmay drive the gear510a. The motor510bmay be an electric motor. The motor510bmay be electrically coupled to the processing device102and/or the power source108via the electrical trace or circuit116in the band106. The processing device102may be configured to control the motor510b. For example, the processing device102may include instructions to output a control signal to the motor510bto tighten the band106when the subject inputs an instruction to tighten the band106via the user interface104. As the motor510bdrives the gear510a, the gear510amay engage with the set of teeth502to tighten and/or loosen the band106, such as when the subject wears the wearable device100. FIG.5Iillustrates a side cross-section view of the motorized band tightening mechanism510, according to an embodiment. Some of the features inFIG.5Imay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5I. FIG.5Jillustrates a perspective view of the third type of the cantilevered pawl506used with a type of the band106that includes the set of teeth502inset into the side face106gof the band106, according to an embodiment. Some of the features inFIG.5Jmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5J. FIG.5Killustrates an accordion mechanism512integrated into the band106, according to an embodiment. Some of the features inFIG.5Kmay be the same as or similar to some of the features in the other FIGS.
described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5K. The band106may include the accordion mechanism512. The entire band106may be an accordion or only a portion of the band106may be an accordion. The accordion mechanism512may extend and/or collapse passively (e.g. as the subject moves and the subject's wrist changes diameter, the accordion mechanism512may expand and/or collapse with the changes in the diameter of the subject's wrist). The accordion mechanism512may maintain the band106and/or the physiological sensor206in constant contact with the subject's body part (e.g. the subject's wrist202). The accordion mechanism512may maintain the band106and/or the physiological sensor206at an approximately constant pressure against the subject's body part. The accordion mechanism512may provide a durable means for ensuring constant pressure and/or constant contact when compared with more mechanically complicated mechanisms. The accordion mechanism512may enable fine adjustment of the size of the band106to enable constant pressure and/or constant contact. FIG.5Lillustrates a coil mechanism514integrated into the band106, according to an embodiment. Some of the features inFIG.5Lmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5L. The band106may include the coil mechanism514. The coil mechanism514may enable rolling and/or unrolling of the band106. The coil mechanism514may operate by a spring such as a coil spring. The coil mechanism514may enable passive extension and/or retraction of portions of the band106to accommodate changes in the diameter of the subject's body part. The coil mechanism514may be motorized. The coil mechanism514may automatically extend and/or retract portions of the band106. The coil mechanism514may be manually operated to extend and/or retract portions of the band106. Spring-loaded coiling and/or automatic coiling of the band106may enable fine-tuning of the pressure of the band106on the subject's body part. Spring-loaded coiling and/or automatic coiling of the band106may enable fine-tuning of the pressure of the physiological sensor206against the subject's body part. The coil mechanism514may maintain the band106and/or the physiological sensor206in constant contact with the subject's body part (e.g. the subject's wrist202). The coil mechanism514may maintain the band106and/or the physiological sensor206at an approximately constant pressure against the subject's body part. FIG.5Millustrates a fold516in the band106, according to an embodiment. Some of the features inFIG.5Mmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5M. The band106may include the fold516. The fold516may enable dynamic adjustment of the size and/or shape of the band106as the size and/or shape of the subject's body part changes (e.g. as the subject moves and/or engages in activity).
The fold516may maintain the band106and/or the physiological sensor206in constant contact with the subject's body part. The fold516in the band106may be formed with a passive elastic memory. The fold516may expand into a more linear form as a strain is exerted on the band106. The fold516may retract into a more folded form as the strain on the band106is lessened. The fold516may enable passive extension and/or retraction of the band106to accommodate changes in the diameter of the subject's body part. The fold516in the band106may enable fine-tuning of the pressure of the band106on the subject's body part. The fold516may enable fine-tuning of the pressure of the physiological sensor206against the subject's body part. The fold516may maintain the band106and/or the physiological sensor206at an approximately constant pressure against the subject's body part. FIG.5Nillustrates the band106formed with a set of e-links518, according to an embodiment. Some of the features inFIG.5Nmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5N. The band106may include one or more of the e-link518. An individual e-link518may be connected to a neighboring e-link518inelastically or elastically. The e-link518may include conductive tracing (e.g. the electrical trace or circuit116) and electrical contacts that electrically couple the conductive tracing to electrical contacts and/or conductive tracing in neighboring e-links518. The physiological sensor206may be integrated into one of the e-links518. The band106may be made entirely of e-links518. A segment of the band106may be made of one or more e-links518and another segment of the band106may be made of another material and/or structure. The e-links518may have a same length. E-links518of differing lengths may be provided to enable fine-tuning of a fit of the band106on the subject's body part. Elastic coupling of the e-links518may enable dynamic adjustment of the size and/or shape of the band106as the size and/or shape of the subject's body part changes (e.g. as the subject moves and/or engages in activity). Elastic coupling of the e-links518may maintain the band106and/or the physiological sensor206in constant contact with the subject's body part. Elastic coupling of the e-links518may maintain the band106and/or the physiological sensor206at a constant pressure against the subject's body part. Elastic coupling of the e-links518may enable passive extension and/or retraction of the band106to accommodate changes in the diameter of the subject's body part. The interchangeability, variable sizing, and elastic coupling of the e-links518in the band106may enable fine-tuning of the pressure of the band106on the subject's body part. The interchangeability, variable sizing, and elastic coupling of the e-links518in the band106may enable fine-tuning of the pressure of the physiological sensor206against the subject's body part. The band106with the e-links518may maintain the band106and/or the physiological sensor206in constant contact with the subject's body part (e.g. the subject's wrist202). The band106with the e-links518may maintain the band106and/or the physiological sensor206at an approximately constant pressure against the subject's body part. FIG.5Oillustrates a loopback520in the band106, according to an embodiment.
Some of the features inFIG.5Omay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5O. The band106may include the loopback520. The loopback520may enable dynamic adjustment of the size and/or shape of the band106as the size and/or shape of the subject's body part changes (e.g. as the subject moves and/or engages in activity). The loopback520may maintain the band106and/or the physiological sensor206in constant contact with the subject's body part. The loopback520may have a passive elastic memory. The loopback520may shrink as a strain is exerted on the band106. The loopback520may expand to an equilibrium size as the strain on the band106is lessened. The loopback520may enable passive extension and/or retraction of the band106to accommodate changes in the diameter of the subject's body part. The loopback520may enable fine-tuning of the pressure of the band106on the subject's body part. The loopback520may enable fine-tuning of the pressure of the physiological sensor206against the subject's body part. The loopback520may maintain the band106and/or the physiological sensor206at an approximately constant pressure against the subject's body part. FIG.5Pillustrates a perspective view of the band106including a buckling beam mechanism522, according to an embodiment. Some of the features inFIG.5Pmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5P. The band106of the wearable device100may include the buckling beam mechanism522as a mechanism that maintains the physiological sensor206in constant contact with the subject and/or at a constant pressure against the subject. The buckling mechanism522may include a buckling mechanism housing522a, a stabilizer522b, a stem522c, and a buckling column522d. The buckling mechanism housing522amay be hollow and may have an internal chamber that houses one or more components of the buckling mechanism522within the chamber. As shown, a wall of the buckling mechanism housing522ais removed to show the internal components. The stabilizer522bmay prevent various components of the buckling mechanism522from being pulled out of the buckling mechanism housing522aand/or may prevent the first end106dof the band106from twisting. The stem522cmay couple the stabilizer522bto the first end106dof the band106. The buckling column522dmay be coupled to a wall of the buckling mechanism housing522a. The buckling column522dmay be coupled to the stabilizer522b. The buckling column522dmay be made of a material that springs back into an extended shape when compressed. As a force is exerted on the first end106dof the band106away from the buckling mechanism522, the buckling column522dmay buckle. The buckling column522dmay exert a counter-force that resists the force exerted on the first end106dof the band106away from the buckling mechanism522.
The counter-force by the buckling column522dmay enable a constant pressure of the band106on the subject and/or of the physiological sensor206against the subject. FIG.5Qillustrates a perspective view of the band106including a tri-folding spring mechanism524, according to an embodiment. Some of the features inFIG.5Qmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5Q. The band106of the wearable device100may include the tri-folding spring mechanism524as a mechanism that maintains the physiological sensor206in constant contact with the subject and/or at a constant pressure against the subject. The tri-folding spring mechanism524may include a leaf524a, a hollow leaf524b, and a tension mechanism524c. The leaf524aand the hollow leaf524bmay fold over each other and/or may latch to each other to narrow a diameter of the band106. The tension mechanism524cmay be attached to the first end106dof the band106and to an interior of the hollow leaf524b. The tension mechanism524cmay include, for example, a z-spring that is attached to an inner wall of the hollow leaf524bat one end and at another end to the first end106dof the band106. As a force is exerted on the first end106dof the band106away from the tri-folding spring mechanism524, the tension mechanism524cmay exert a counter-force that resists the force exerted on the first end106dof the band106away from the tri-folding spring mechanism524. The counter-force by the tension mechanism524cmay enable a constant pressure of the band106on the subject and/or of the physiological sensor206against the subject. FIG.5Rillustrates a perspective view of the band106including a tape spring mechanism526, according to an embodiment. Some of the features inFIG.5Rmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.5R. The band106of the wearable device100may include the tape spring mechanism526as a mechanism that maintains the physiological sensor206in constant contact with the subject and/or at a constant pressure against the subject. The tape spring mechanism526may include a spring housing526a, a tape spring526b, and/or a coupling mechanism526c. The spring housing526amay house various components of the tape spring mechanism526. The coupling mechanism526cmay couple the tape spring526bto the band106. The coupling mechanism526cmay also enable expansion of the tape spring mechanism526while protecting the tape spring526bby allowing the tape spring526bto stay within the spring housing526awhen the tape spring526bis extended. As a force is exerted on the band106away from the tape spring mechanism526, the tape spring526bmay exert a counter-force that resists the force exerted on the band106away from the tape spring mechanism526. The counter-force by the tape spring526bmay enable a constant pressure of the band106on the subject and/or of the physiological sensor206against the subject. FIG.6Aillustrates a perspective view of the wearable device100having a moveable sensor602attached to the housing302and positioned in a slot604in the band106, according to an embodiment.
Some of the features inFIG.6Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.6A. The band106may include the slot604. The moveable sensor602, which may include the first sensor112, the second sensor114, and/or generally the physiological sensor206, may be aligned with the slot604and/or positioned in the slot604. As the moveable sensor602slides along the slot604, the band106may remain fixed relative to the subject's body part. The moveable sensor602may be attached to the housing302of the adjustable measurement device300. The moveable sensor602may slide along the slot604as the housing302is adjusted on the band106. The moveable sensor602may be attached to the housing302, and the housing302and moveable sensor602may together form the adjustable measurement device300. The housing302may form a c-shape and the band106may pass through a slot of the c-shape. The moveable sensor602may be electronically coupled to the processing device102. The processing device102may be positioned in the housing302and the moveable sensor602and the processing device102may be interconnected via a printed circuit board. The processing device102may be positioned in the user device118and the moveable sensor602and the processing device102may be interconnected via electrical traces embedded in the band106. FIG.6Billustrates a perspective view of the wearable device100having a moveable sensor602in a slot604in the band106, according to an embodiment. Some of the features inFIG.6Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.6B. The band106may include the slot604. The moveable sensor602, which may include the first sensor112, the second sensor114, and/or generally the physiological sensor206, may be aligned with the slot604and/or positioned in the slot604. As the moveable sensor602slides along the slot604, the band106may remain fixed relative to the subject's body part. The moveable sensor602may be moveably attached to the band106. For example, a width of the slot604and/or the moveable sensor602at the surfaces of the band106may be less than a width of the slot604and/or the moveable sensor602between the surfaces of the band106. As another example, the moveable sensor602may include tabs or the slot604may include ridges. The slot604may include tracks along inside walls of the slot604corresponding to the tabs in the moveable sensor602. The moveable sensor602may include tracks corresponding to the ridges of the slot604. The moveable sensor602may be attached to the housing302, and the housing302and moveable sensor602may together form the adjustable measurement device300. The housing302may form a c-shape and the band106may pass through a slot of the c-shape. The moveable sensor602may be electronically coupled to the processing device102. The processing device102may be positioned in the housing302and the moveable sensor602and the processing device102may be interconnected via a printed circuit board. 
The processing device102may be positioned in the user device118and the moveable sensor602and the processing device102may be interconnected via electrical traces embedded in the band106. FIG.7illustrates the wearable device100with the adjustable measurement device300relative to a cross-section of the subject's wrist202, according to an embodiment. Some of the features inFIG.7may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.7. The housing302may include an underside302cand a topside302d. The underside302cof the housing302may be positioned between the wrist202and the band106as the subject wears the band106and the adjustable measurement device300is attached to the band106. The topside302dof the housing302may be positioned on an opposite side of the band106from the underside302cof the housing302as the adjustable measurement device300is attached to the band106and as the subject wears the band106. As discussed regarding other embodiments, the housing302may be c-shaped (and/or u-shaped, as the case may be) such that the band106passes through a slot302eof the housing302as the housing302is attached to the band106. The underside302cof the housing302may include the inward-facing portion302aof the housing302. The topside302dof the housing302may include the outward-facing portion302bof the housing302. At least a portion of a wall of the housing302along the inward-facing portion302aof the housing302forms an arc that is complementary to a curvature of a body part of the subject (e.g. the wrist202, the arm208, and so forth). The inward-facing portion may be configured to be approximately flush with the body part as the subject wears the band106and the housing302is coupled to the band106. The body part may include an underside202aof the wrist202of the subject that includes a radial artery202bor an ulnar artery202cof the subject, i.e. the radial artery202band/or ulnar artery202cof the subject may be closest to the surface of the wrist202along the underside202aof the wrist202. The housing302may be configured to (e.g. designed in shape) conform to the subject's body part to ensure optimal contact between the physiological sensor206and the subject's skin. Optimal contact may mean that sensor surfaces and surfaces of the housing surrounding the sensor surfaces are in complete contact with the subject's skin without having to compress the subject's skin and/or otherwise press the physiological sensor206into the subject's skin. This may reduce the optimal range for the pressure of the physiological sensor206against the subject's skin while still ensuring the signal produced by the physiological sensor206has a maximized signal-to-noise ratio (SNR) and/or a maximized amplitude. The conformity of the housing302to the subject's body part may improve the comfort of the wearable device100and/or the adjustable measurement device300to the subject while still enabling the physiological sensor206to take sufficient readings from the subject to determine one or more physiological characteristics of the subject. FIG.8illustrates a diagram of a system800that includes the user device118and the adjustable measurement device300, according to an embodiment. Some of the features inFIG.8may be the same as or similar to some of the features in the other FIGS. 
described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.8. The system800may include the user device118, the adjustable measurement device300, and one or more external devices and/or components802. The external devices and/or components802may include the pressure sensor400, the power source108, and/or another peripheral electronic device802a. The user device118may include the processing device102, the communication device110, and/or the user interface104. The processing device102may include control programming and/or logic102a(i.e. control/logic102a) and memory102b. The control/logic102amay execute one or more instructions stored in the memory102bbased on one or more inputs received from the communication device110, the user interface104, the pressure sensor400, the other peripheral electronic device802a, and so forth. The control/logic102amay generate one or more outputs based on the input and the instructions. The output may be transmitted to the user interface104and/or the communication device110. The adjustable measurement device300may include the physiological sensor206, control programming and/or logic300a(i.e. control/logic300a), and/or an internal communication device300b. The control/logic300amay execute one or more functions based on one or more inputs received by the control/logic300afrom the physiological sensor206, and/or the communication device300b. For example, the physiological sensor206may generate a signal corresponding to a measurement of a physiological characteristic of the subject. The signal may be communicated from the physiological sensor206to the control/logic300a. The control/logic300amay filter noise out of the signal and pass the filtered signal to the communication device300b. The communication device300bmay communicate the filtered signal to the communication device110of the user device118. As another example, the control/logic300amay receive programming via the communication device300bof the adjustable measurement device300. The programming may include a schedule for taking measurements by the first sensor112. The control/logic300amay trigger the physiological sensor206to take a measurement according to the schedule. The pressure sensor400may be hardwired by conductive tracing, such as the electrical trace or circuit116, to the control/logic102aof the processing device102. For example, the pressure sensor400may be a strain gauge in the band106and may be connected to the processing device102via the electrical trace or circuit116, which may be embedded in the band106and a printed circuit board. The power source108may also be hardwired via the conductive tracing to the user device118and/or the adjustable measurement device300. The power source108may be embedded in the band106separate from the user device118and/or the adjustable measurement device300. The electrical trace or circuit116in the band106may connect the power source108to an electronic interconnect such as a printed circuit board (PCB) in the user device118or a PCB in the adjustable measurement device300. The PCB may include a power control module that regulates delivery of power to the electronic components of the user device118and/or the adjustable measurement device300. 
For example, the power source108may deliver power to the processing device102in the user device118and the physiological sensor206in the adjustable measurement device300. The PCB may include electrical interconnects that interconnect electronic components in the user device118and/or the adjustable measurement device300. The electronic components may include the processing device102, the communication device110, the user interface104, the physiological sensor206, the control/logic300a, the communication device300b, and so forth. The electronic components in the user device118and/or the adjustable measurement device300may be electronically coupled by wiring in the user device118and/or the adjustable measurement device300. The communication device110of the user device118and the communication device300bof the adjustable measurement device300may be networked together (e.g. communicatively coupled) via a wired connection, such as the electrical trace or circuit116, and/or a wireless connection804. For example, the communication device110and the communication device300bmay be networked over a Bluetooth® network. The other peripheral electronic device802amay be wirelessly connected to the communication device110and/or the communication device300b. The other peripheral electronic device802amay be hardwired to the adjustable measurement device300and/or the user device118. The user interface104may be integrated into the user device118. The user interface104may be integrated into the band106separate from the user device118and/or the adjustable measurement device300. The user interface104may be integrated into the adjustable measurement device300. The user interface104may be integrated into another user device such as a smartphone, a smartwatch, a tablet, a computer, and so forth, that is separate from the band106, the adjustable measurement device300, and/or the wearable device100. FIG.9illustrates a diagram of a system900that includes the adjustable measurement device300networked to the user device118, according to an embodiment. Some of the features inFIG.9may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.9. The user device118and the adjustable measurement device300may not be electrically coupled and may be communicatively coupled, such as via the wireless connection804. The system900may thereby be configured to incorporate the band106where the band106does not include conductive tracing such as the electrical tracing or circuit116. For example, the user device118may be a smartwatch not having conductive elements integrated into the band106. The smartwatch and the adjustable measurement device300may be paired via a Bluetooth® network. The adjustable measurement device300may be attached to the band106as the subject wears the smartwatch. The band106may be configured to squeeze the subject's wrist202with enough pressure to ensure accurate measurement of the physiological characteristic by the adjustable measurement device300. The processing device102may be configured to take a physiological measurement from the subject using the physiological sensor206. The processing device102may send, via the communication device110in the user device118and the communication device300bin the adjustable measurement device300, instructions to take the physiological measurement.
The control/logic300aof the adjustable measurement device300may automatically trigger the physiological sensor206to take the physiological measurement and transmit the signal to the processing device102. The processing device102may cause a value associated with the physiological measurement to be displayed on the user interface104. The user device118may be attached to the band106. The user interface104may be configured to display the physiological measurement taken from the subject. For example, the user interface104may include an LED display, a capacitive touch screen, a resistive touch screen, an augmented reality interface, and so forth. The adjustable measurement device300may be configured to take the physiological measurement and communicate the physiological measurement to the user device118. The processing device102may be communicatively coupled to the physiological sensor206, such as via the communication devices110and300b. The processing device102may be configured to receive an electronic signal from the physiological sensor206. For example, the processing device102may be electronically coupled to the communication device110and may store instructions to process the electronic signal from the physiological sensor206. The processing device102may be configured to generate a value corresponding to a measurement of a physiological state of the subject. For example, the processing device102may be programmed with instructions to compare the electronic signal to a table of signals and corresponding glucose levels. The processing device102may be programmed with more complex data analytics to extract one or more measurement values from the electronic signal. The user interface104may be electronically coupled to the processing device102. The user interface104may be remote from the processing device102and/or may be communicatively coupled to the processing device102. For example, the user interface104may be integrated into another user device and may be communicatively coupled to the processing device102via the communication device110. The user interface104may be communicatively and/or electronically coupled to the physiological sensor206. The physiological sensor206may include processing logic that outputs a measurement value. The measurement value may be output directly to the user interface104. The user interface104may include a dedicated display processor with logic that receives the measurement value as an input and outputs a visual display of the measurement value. The physiological sensor206may output the measurement value to the processing device102. The user interface104may be configured to receive the measurement value from the processing device102. The user interface104may generate the indicator304in a way that the measurement value may be discernable by the subject or another user. The indicator304may be a visual indicator such as words, numbers, symbols, icons, graphics, and/or graphs. A networking device such as the communication device300bmay be electronically coupled to the physiological sensor206and coupled to the band106. The networking device and the physiological sensor206may be embedded in the band106. The processing device102and the user interface104may be integrated into the user device118separate from the band106. For example, the user device118may be a smartphone. The networking device may communicatively couple the physiological sensor206to the processing device102and/or the user interface104.
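By way of non-limiting illustration only, the scheduled triggering and signal-to-value processing described above may be sketched in Python as follows. The names read_sensor, display, and SIGNAL_TO_GLUCOSE, the table entries, and the fixed one-minute schedule are hypothetical assumptions for illustration and are not part of the disclosed embodiments.

import time

# Hypothetical table of signal features (arbitrary units) and
# corresponding glucose levels (mg/dL), standing in for the "table of
# signals and corresponding glucose levels" described above.
SIGNAL_TO_GLUCOSE = [(0.10, 70), (0.25, 90), (0.40, 110), (0.60, 140)]

def signal_to_value(feature):
    # Compare the electronic signal feature to the table and return the
    # glucose level of the nearest table entry.
    return min(SIGNAL_TO_GLUCOSE, key=lambda row: abs(row[0] - feature))[1]

def run_schedule(read_sensor, display, period_s=60.0):
    # Trigger a measurement on a fixed schedule, mimicking the
    # control/logic300a triggering the physiological sensor206, then
    # display the resulting value, e.g. on the user interface104.
    while True:
        display(signal_to_value(read_sensor()))
        time.sleep(period_s)

In practice the mapping could instead use the more complex data analytics mentioned above; the nearest-entry table lookup is merely the simplest concrete instance of the comparison.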
Another communication device may be communicatively coupled to the processing device102. For example, the additional communication device may be integrated into the band106and the processing device102may be incorporated into the user device118where the user device118is remote from, e.g. not physically coupled to, the band106. The adjustable measurement device300may be removably attached to the band106. The communication device300bin the adjustable measurement device300may be a short-range wireless communication device. The additional communication device in the band106may have short-range communication capabilities (e.g. may include a Bluetooth® communication chip, a near-field communication chip, and so forth) and long-range communication capabilities (e.g. may include a Wi-Fi communication chip, a cellular communication chip, and so forth). The additional communication device may include a network router. The communication device300bof the adjustable measurement device300may communicate with the processing device102via the additional communication device. Thus, the physiological sensor206may be communicatively coupled to the communication device110and/or the processing device102. The user interface104may be configured to present the information communicated between the physiological sensor206and the processing device102to the subject, such as by presenting an indicator of the physiological measurement to the subject. The system900may enable the adjustable measurement device300to be easily incorporated into the subject's daily routine and/or habits. The adjustable measurement device300may be obtained separately by the subject from the user device118. For example, the user device118may include a remote server. The subject may obtain access to the remote server via a subscription service. The subject may enroll the adjustable measurement device300with the subscription service. The communication device300bof the adjustable measurement device300may include a cellular communication chip that may communicate with the remote server. The user device118may include a smartwatch, a mobile phone, a personal computer, and so forth. The modes of remote communication between the adjustable measurement device300and the user device118may depend on the type of the user device118. As such, the communication devices110and/or300bmay include short-range communication devices, long-range communication devices, Bluetooth® communication devices, Wi-Fi communication devices, cellular communication devices, and so forth. The subject may wear the adjustable measurement device300using one or more of a variety of types of the band106. The band106may, for example, include an elastic band. FIG.10illustrates a diagram of a system1000including the adjustable measurement device300with the user interface104and the power source108positioned internally in the adjustable measurement device300, according to an embodiment. Some of the features inFIG.10may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.10. The processing device102, the user interface104, and/or the power source108may be integrated and/or incorporated into the adjustable measurement device300. The processing device102may be electronically coupled to the physiological sensor206.
The adjustable measurement device300may not include the communication device300b. Electronic signals generated by the physiological sensor206in response to measurement of a physiological characteristic of the subject may be communicated to the processing device102. The processing device102may generate measurement values based on the electronic signals and may output the measurement values to the user interface104. The user interface104may display and/or otherwise communicate the measurement values to the subject as the subject wears the adjustable measurement device300. The adjustable measurement device300may include the communication device300b. The communication device300bmay be electronically coupled to the processing device102. The communication device300bmay be electronically coupled to the physiological sensor206. The processing device102may transmit, via the communication device300b, the physiological measurement to another user device configured to display the physiological measurement to the subject. For example, the other user device may include an electronic watch and/or a smartphone. The other user device may include a user application installed on the other user device that interfaces wirelessly with the processing device102via the communication device300b. FIG.11illustrates a diagram of a system1100that includes the adjustable measurement device300with the communication device300band connected to the power source108, which may be external to the adjustable measurement device300, according to an embodiment. Some of the features inFIG.11may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.11. The adjustable measurement device300may include internal sensing, processing, and communication components powered by an external power supply. For example, internal components of the adjustable measurement device300may include the control/logic300a, the communication device300b, and the physiological sensor206. The internal components may be electronically coupled via a PCB and/or internal wiring of the adjustable measurement device300. The internal components may be electrically coupled to the power source108by a conductive element such as wiring and/or electrical trace or circuit116. For example, the power source108may be integrated into the band106and electrically coupled to the internal components of the adjustable measurement device300. The adjustable measurement device300may include electrical contacts and a portion of the electrical trace or circuit116in the band106may be exposed. The exposed portion of the electrical trace or circuit116may have a length greater than a length of the electrical contacts. The length of the exposed electrical trace or circuit116may determine an amount of adjustability of the adjustable measurement device300on the band106. The adjustable measurement device300may have a minimalistic design that limits the components integrated into the adjustable measurement device300to only those components necessary to acquire the electronic signal corresponding to measurement of the physiological characteristic of the subject. This may reduce the size of the housing302. Subjects, including subjects that have chronic health conditions like diabetes, are more likely to wear a monitoring device with a minimalistic design. 
A monitoring device with a minimalistic design is less likely to interfere with the subject's day-to-day activities and the subject is, therefore, more likely to wear the monitoring device. The adjustable measurement device300is such a minimalistic monitoring device. The limited number of internal components of the adjustable measurement device300allows for a smaller volume of the housing302compared to other monitoring devices. Integration of the power source108into the band106further minimizes the size of the housing302. FIG.12illustrates a perspective view of the adjustable measurement device300including the user interface104, according to an embodiment. Some of the features inFIG.12may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.12. The adjustable measurement device300may include the user interface104. The user interface104may be integrated into a first side of the housing302(e.g. the topside302dof the housing302). The user interface104may be electronically coupled to the power source108, the processing device102, the communication device(s)110/300b, and/or the physiological sensor206. The user interface104may display to the subject and/or another person viewing the user interface104a physiological measurement1202that is taken by the physiological sensor206and that is processed by the processing device102. The user interface104may display to the subject and/or another person viewing the user interface104a value1204associated with the physiological measurement1202. The user interface104may include a speaker104athat emits sounds audible to the subject and/or a person within an audible range of the speaker104a. The sound may provide an indication of one or more measurements taken by the physiological sensor206and/or the pressure sensor400. For example, the speaker104amay emit a series of beeps that increase in frequency (either in the time between beeps or in the pitch of each beep) as the physiological sensor206is brought into closer alignment with the physiological structure204of the subject such as a vein and/or artery. The user interface104may include a touch screen104bthat receives touch-based inputs from the subject and visually displays information to the subject and/or a person within viewing range of the touch screen104b. For example, the touch screen104bmay display an icon to the subject. When the subject touches the icon, the user interface104may communicate the touch as an input to the processing device102. The processing device102may be configured to detect a proximity of the physiological sensor206to the physiological structure204(e.g. a vein and/or artery) of the subject as the subject wears the housing302. For example, the processing device102may include programming that takes an input such as a signal of a heartbeat waveform of the subject and calculates an SNR for the signal. The processing device102may compare the SNR to a range of SNRs, where the range of SNRs is associated with a proximity of the physiological sensor206to the physiological structure204. The processing device102may compute the proximity based on an algorithm for the proximity as a function of the SNR. The processing device102may output the proximity of the physiological sensor206to the physiological structure204.
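By way of non-limiting illustration only, the SNR calculation and the proximity-as-a-function-of-SNR algorithm described above may be sketched in Python as follows. The 0.5-4 Hz heartbeat band, the 20 dB reference SNR, and the linear mapping to a 0..1 score are hypothetical assumptions, not the disclosed algorithm.

import numpy as np

def estimate_snr_db(waveform, fs):
    # Treat spectral power in a nominal heartbeat band (0.5-4 Hz,
    # roughly 30-240 beats per minute) as signal and the remaining
    # power as noise; return the ratio in decibels.
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 4.0)
    signal_power = spectrum[band].sum() + 1e-12   # guard against log(0)
    noise_power = spectrum[~band].sum() + 1e-12   # guard against /0
    return 10.0 * np.log10(signal_power / noise_power)

def snr_to_proximity(snr_db, snr_at_structure_db=20.0):
    # Map the SNR to a 0..1 proximity score, where 1.0 represents the
    # physiological sensor206 sitting directly over the physiological
    # structure204; the linear map and 20 dB reference are assumptions.
    return float(np.clip(snr_db / snr_at_structure_db, 0.0, 1.0))

Any monotonic mapping would suffice for this purpose, since the indicator discussed next only needs to reflect whether the proximity is improving or worsening as the device is moved.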
Based on the output, the processing device102and/or the user interface104may generate the indicator304, which may indicate the proximity of the physiological sensor206to the physiological structure204. The processing device102may determine the proximity iteratively, such as every second, ten times per second, one hundred times per second, and so forth. As a position of the adjustable measurement device300changes on the subject, the proximity of the physiological sensor206to the physiological structure204may change. As the proximity changes, the indicator304may change. The user interface104and/or the processing device102may be configured to dynamically update the indicator304as the adjustable measurement device300is moved on the band106and/or moved relative to the physiological structure204of the subject, such as to increase the proximity of the physiological sensor206to the subject's vein and/or artery. For example, the user interface104may display an arrow pointing in a direction of the physiological structure204(e.g. the vein and/or artery) relative to the physiological sensor206, i.e. a direction the subject should move the adjustable measurement device300to bring the physiological sensor206into closer alignment with the physiological structure204. The arrow may decrease in size and/or change color as the physiological sensor206gets closer to the physiological structure204. The arrow may grow in size and/or change color as the physiological sensor206gets further away from the physiological structure204. As another example, the user interface104may show a virtual representation of the physiological structure204relative to a virtual representation of the physiological sensor206. The position on the user interface104of the physiological structure204relative to the physiological sensor206may change as the adjustable measurement device300moves on the subject. As yet another example, the user interface104may emit a sound that changes as the physiological sensor206changes proximity to the physiological structure204. The housing302may be rigid and may be shaped to be complementary to a body part of the subject against which the housing302is pressed by the band106as the subject wears the band106. The slot302emay be complementary in shape to the body part of the subject, such as by being formed in an arc-shape. The underside302cof the housing302may be complementary in shape to the body part of the subject, such as by being formed in an arc-shape. The topside302dmay be complementary in shape to the body part of the subject, such as by being formed in an arc-shape. The arc formed by the housing302, the underside302c, the slot302e, and/or the topside302dmay include an arc length ranging from half an inch to three inches and/or an arc angle ranging from ten degrees to one hundred degrees. The arc length may range from half an inch to one inch, from one inch to one-and-a-half inches, from one-and-a-half inches to two inches, from two inches to two-and-a-half inches, from two-and-a-half inches to three inches, from one inch to two inches, from two inches to three inches, from one inch to three inches, and so forth. The arc angle may range from ten degrees to fifty degrees, from fifty degrees to one hundred degrees, from ten degrees to twenty-five degrees, from twenty-five degrees to fifty degrees, from fifty degrees to seventy-five degrees, from seventy-five degrees to one hundred degrees, and so forth. The topside302dof the housing302may have a different shape than the underside302c.
For example, the topside302dmay be parallel to a plane that is tangential to an arc formed by the underside302c. The slot302emay have the same shape as the underside302cor the same shape as the topside302d. The slot302emay have a different shape than both the underside302cand the topside302d. For example, the slot302emay have an arc angle that is greater than the arc angle of the underside302cand is less than the arc angle of the topside302d. The adjustable measurement device300may include components such that the user device118is effectively integrated with the adjustable measurement device300. The user device118may include components such that the adjustable measurement device300is integrated with the user device118. The user device118and the adjustable measurement device300may be integrated into the same housing, e.g. housing302, and may, therefore, be considered a single device. For example, the adjustable measurement device300may include the user interface104, the power source108, the communication device110, the processing device102, the first sensor112, the second sensor114, the moveable sensor602, the underside302cwith the inward-facing portion302a, the topside302dwith the outward-facing portion302b, the slot302e, the pressure sensor400, and so forth. The user interface104may be configured to notify the subject of a pressure and/or a change in the pressure of the physiological sensor206against the subject. The user interface104may be configured to notify the subject of a strain on the band106. For example, the user interface104may be configured to generate a succession of audible beeps that correspond to a difference between the current pressure/strain and an optimal pressure/strain. As another example, the user interface104may be configured to display a set of colors along a color spectrum. An individual color in the set of colors may correspond to a difference between the current pressure and the optimal pressure or a range of optimal pressures. Incorporating the user interface104into the adjustable measurement device300may allow a subject to incorporate the adjustable measurement device300as an accessory in a minimalistic way without adding additional burden to the subject. For example, the subject may already wear a wristwatch and/or wrist jewelry. The adjustable measurement device300may be attached to a band and worn by the subject in place of the wristwatch. The adjustable measurement device300may include features of the wristwatch, such as displaying the time and date, and may additionally provide physiological measurement information to the subject such as a measurement of the subject's glucose levels. The adjustable measurement device300may pair with the subject's smartphone and show phone call data, message data, and so forth. FIG.13Aillustrates the underside302cof the housing302of the adjustable measurement device300, according to an embodiment. Some of the features inFIG.13Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.13A. The housing302may include a first opening1302, a second opening1304, and/or a third opening1306. The housing302may include one or more instances of the first opening1302, the second opening1304, and/or the third opening1306.
For example, the housing302may include one instance of the first opening1302, four instances of the second opening1304, and/or two instances of the third opening1306. The housing302may include one instance of the first opening1302and zero instances of the second opening1304and the third opening1306. The housing302may include one instance of the second opening1304and zero instances of the first opening1302and the third opening1306. The housing302may include one instance of the third opening1306and zero instances of the first opening1302and the second opening1304. The housing may include one instance of the first opening1302, one instance of the second opening1304, and zero instances of the third opening1306. The housing302may include four instances of the second opening1304and zero instances of the first opening1302and the third opening1306. The housing302may include two instances of the third opening1306and zero instances of the first opening1302and the second opening1304. The housing302may include other combinations of the openings and/or additional instances of the openings. The first opening1302may be shaped to fit a light source (e.g. a set of light-emitting diodes (LEDs)). For example, a substrate on which the LEDs are mounted may be circular. The first opening1302may be circular and may have a same size as the substrate. The second opening1304may be shaped to fit a first type of the physiological sensor206such as the first sensor112. The third opening1306may be shaped to fit a second type of the physiological sensor206such as the second sensor114. The LEDs may be disposed within the housing302, aligned with the first opening1302, and/or may extend through the first opening1302from the housing302. The first sensor112may be disposed within the housing302, aligned with the second opening1304, and/or may extend through the second opening1304from the housing. The first opening1302, and therefore the LEDs, may be aligned with the second opening1304such that light emitted from the LEDs passes through the second opening1304and/or to the first sensor112after passing through the body part of the subject. The LEDs may be tuned to interrogate the body part of the subject, such as by emitting light within a range of wavelengths and/or frequencies. The first sensor112may be an optical sensor and/or may detect light passing through the second opening1304from the body part of the subject. The first opening1302, the second opening1304, and/or the third opening1306may be formed in and/or through an outer wall302f, e.g. a first wall, of the housing302. Instances of the first opening1302, the second opening1304, and/or the third opening1306may be referred to separately as a first window, a second window, and so forth. For example, the housing302may include two instances of the second opening1304, including a first window1304aand a second window1304b. The first window1304aand the second window1304bmay be aligned with each other parallel to a depth302jof the slot302e. The first window1304aand the second window1304bmay be aligned with each other parallel to a length302kof the slot302e. The first window1304amay be separated from the second window1304bby a distance ranging from one-sixteenth of an inch to half an inch. The first window1304amay be separated from the second window1304bby a distance corresponding to a diameter of a human vein or artery. 
A first instance of the second sensor114may be positioned in the housing302, aligned with the first window1304a, and/or may extend through the first window1304a. A second instance of the second sensor114may be positioned in the housing302, aligned with the second window1304b, and/or may extend through the second window1304b. Instead of being openings that pass through the outer wall302fof the housing302, the first opening1302, the second opening1304, and/or the third opening1306may be a recess with a backing inset into the housing302above a plane of the inward-facing portion302a. The backing may include electrical interconnects that electronically couple electronic components, such as the first sensor112, the second sensor114, and/or the light source, to electronic components housed within the housing302. Accordingly, the recesses may be formed in the housing302on the underside302cof the housing302. The slot302eformed through the housing302may be formed between the recesses and the topside302dof the housing302. The first sensor112may be positioned in the first window1304a. The second sensor114may be positioned in the second window1304b. Spacing between the first window1304aand the second window1304b, and therefore a position of the first sensor112in the housing302relative to the second sensor114, may be such that, as the first sensor112is aligned with a vein and/or artery of the subject as the subject wears the band106, the second sensor114may be positioned within a threshold distance of alignment with the vein and/or artery. For example, the first window1304aand the second window1304bmay be spaced apart, center-to-center, by an amount ranging from 1 mm to 5 cm, from 1 mm to 10 mm, from 5 mm to 5 cm, from 5 mm to 2 cm, from 5 mm to 10 mm, and so forth. The adjustable measurement device300may include a set of three or more sensors. The set of sensors may include a set of the same sensor type, e.g. three instances of the first sensor112, and so forth. The set of sensors may include one or more instances of different sensor types, e.g. one instance of the first sensor112and two instances of the second sensor114, and so forth. Having multiple instances of the same sensor, and/or combining multiple instances of the same sensor with one or more instances of another sensor type, may enable the adjustable measurement device300to identify a position of the physiological structure204of the subject and identify how the adjustable measurement device300should be moved to align one or more of the sensors with the physiological structure204. The sensors may be fixed relative to each other such that a shift of the adjustable measurement device300shifts all the sensors. An indicator (e.g. the indicator304) that instructs the subject to shift the adjustable measurement device300may indicate a shift of the first sensor instance, the second sensor instance, the third sensor instance, and so forth. The adjustable measurement device300may include the moveable sensor602, and a shift instruction may indicate a shift of the moveable sensor602and not the other sensor instances. Incorporating multiple sensors at different positions in the adjustable measurement device300may enable the adjustable measurement device300to measure multiple physiological characteristics of the subject.
The adjustable measurement device300may be enabled by the multiple sensors to determine a position of one or more of the sensors relative to the physiological structure of the subject. One sensor and information about an optimal SNR of the sensor may also be used to determine the position of the sensor relative to the physiological structure. The adjustable measurement device300may be enabled by the multiple sensors near each other to take measurements from the same physiological structure of the subject by different types of sensors. Using different types of sensors, the processing device102may be able to differentiate between portions of the signals that reflect different physiological characteristics, such as by multi-variate analysis. FIG.13Billustrates another arrangement of the underside302cof the housing302of the adjustable measurement device300, according to an embodiment. Some of the features inFIG.13Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.13B. On the outer wall302fof the housing302and/or on the inward-facing portion302aof the underside302cof the housing302, the second opening1304may be segmented into a first portion, e.g. the first window1304a, and a second portion, e.g. the second window1304b. The first window1304amay be segmented from the second window1304bby a divider1308. The divider1308may have a width ranging from one sixty-fourth of an inch to one thirty-second of an inch. The first window1304aand the second window1304bmay be configured to (e.g. may have length, width, and/or depth dimensions, may have mounting surfaces, may include mounting hardware, and so forth) receive one or more sensors. Similarly, the openings in general (e.g. the first opening1302, the second opening1304, the third opening1306, and so forth) may be configured to receive one or more sensors. For example, the first window1304amay be configured to receive a first optical sensor and the second window1304bmay be configured to receive a second optical sensor. The proximity of the first window1304ato the second window1304bmay be such that light emitted by a light source through the first opening1302and traveling through a body part of the subject travels substantially the same distance to the first window1304aas to the second window1304b. FIG.13Cillustrates a third arrangement of the underside302cof the housing302of the adjustable measurement device300, according to an embodiment. Some of the features inFIG.13Cmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.13C. On the outer wall302fof the housing302and/or on the inward-facing portion302aof the underside302cof the housing302, the second opening1304may be aligned with the first opening1302. A light source may be positioned in the first opening1302and a photosensor such as the first sensor112may be positioned in the second opening1304. Two instances of the third opening1306may straddle the first opening1302and/or the second opening1304. Impedance sensors such as the second sensor114may be positioned in the instances of the third opening1306.
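The multi-variate analysis noted above is not specified in detail in the embodiments described herein. As one illustrative possibility only, the following Python sketch separates two physiological components from two sensor channels by solving a linear mixing model with least squares; the mixing matrix, signal frequencies, and noise level are all assumed placeholder values, and a calibrated mixing matrix is itself an assumption.

    import numpy as np

    # Assumed calibration: each row maps (pulse, respiration) contributions
    # to one sensor channel (e.g. optical, impedance). Illustrative only.
    mixing = np.array([[1.0, 0.3],
                       [0.2, 1.0]])

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 500)
    pulse = np.sin(2 * np.pi * 1.2 * t)        # ~72 beats/min component
    resp = 0.5 * np.sin(2 * np.pi * 0.25 * t)  # ~15 breaths/min component
    sources = np.vstack([pulse, resp])

    # Observed channels = mixing @ sources + noise
    observed = mixing @ sources + 0.05 * rng.standard_normal((2, t.size))

    # Recover the two physiological components by solving the linear system.
    recovered, *_ = np.linalg.lstsq(mixing, observed, rcond=None)
    print("pulse recovery error:", np.max(np.abs(recovered[0] - pulse)))

This is a sketch under stated assumptions, not a definitive implementation; a deployed device could use any of a variety of multi-variate techniques.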
FIG.13Dillustrates a fourth arrangement of the underside302cof the housing302of the adjustable measurement device300, according to an embodiment. Some of the features inFIG.13Dmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.13D. On the outer wall302fof the housing302and/or on the inward-facing portion302aof the underside302cof the housing302, the second opening1304may be aligned with the first opening1302. A light source may be positioned in the first opening1302and a photosensor such as the first sensor112may be positioned in the second opening1304. Two instances of the third opening1306may be positioned adjacent to the first opening1302and the second opening1304and may be similarly aligned with each other. Impedance sensors such as the second sensor114may be positioned in the instances of the third opening1306. FIG.14illustrates a clamping mechanism1400for the housing302of the adjustable measurement device300, according to an embodiment. Some of the features inFIG.14may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.14. The housing302may include a clamp, e.g. the clamping mechanism1400. The clamping mechanism1400may be coupled to the housing302adjacent to an open end302lof the slot302e. A thickness302mof the slot302ewith the clamping mechanism1400in a closed position may be less than the thickness302mof the slot302ewith the clamping mechanism1400in an opened position. The slot302emay receive the band106. As the slot receives the band106and the clamping mechanism1400is in the closed position, the slot302emay narrow such that an inner wall302gof the housing302forms a frictional engagement with the band106. As the inner wall302gforms the frictional engagement with the band106, the housing302may be affixed to the band106such that the housing302becomes immovable relative to the band106. For example, as the subject wears the adjustable measurement device300and the band106and goes about activities such as exercise, walking, jogging, running, playing sports, sitting at a desk, and so forth, the housing302may not move relative to the band106. As the clamping mechanism1400is in the open position, the housing302may be moveable on the band106such as by the subject sliding the housing302along the band106and/or by the subject removing the housing302from the band106while the band106remains on the body part of the subject. The clamping mechanism1400may be a c-clamp that engages with a first slot302hin the underside302cof the housing302and a second slot302iin the topside302dof the housing302. The slots may be adjacent to the open end302lof the slot302ein the housing302between the underside302cand the topside302d. The clamping mechanism1400may be fixed to one side of the housing302, e.g. the underside302c, and may include a catch at the other side. The other side of the housing302, e.g. the topside302d, may include a catch complementary to the catch of the clamping mechanism. When the catches engage, the slot302emay narrow and/or may squeeze the band106when the band is positioned in the slot302e.
The clamping mechanism1400may include a hinge at one side of the housing302, e.g. the underside302c, and the catch at the opposite side of the clamping mechanism. The clamping mechanism1400may include a magnet. The clamping mechanism1400may enable the adjustable measurement device300to be attached to the band106, removed from the band106, and/or affixed in a position on the subject's body part. The band106may retain the adjustable measurement device300in a fixed position relative to the subject's body part. FIG.15Aillustrates a cross-section of the adjustable measurement device300showing electronic components of the adjustable measurement device300, according to an embodiment. Some of the features inFIG.15Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.15A. The housing302may include a first chamber302obordered by a first wall, e.g. the outer wall302f, and a second wall302q. The second wall302qmay be a portion of the inner wall302galong the underside302cof the housing302. The first wall may include a sensor window, e.g. the first opening1302. The sensor window may be the first opening1302, the second opening1304, the third opening1306, and so forth. The housing302may include a second chamber302pbordered by the outer wall302fand a third wall302r. The third wall302rmay be a portion of the inner wall302galong the topside302dof the housing302. The slot302ein the housing302may be positioned between the first chamber302oand the second chamber302p, e.g. the first chamber302omay be positioned on the opposite side of the slot302efrom the second chamber302p. The slot302emay be bordered by the second wall302qand the third wall302r. The slot302emay be separated from the first chamber302oby the second wall302q. The slot302emay be separated from the second chamber302pby the third wall302r. The depth302jof the slot302emay range from one-quarter of an inch to two inches. The thickness302mof the slot302emay range from one thirty-second of an inch to one-quarter of an inch. The slot302emay include the open end302land a closed end302nopposite the open end302l. The housing302may include a third chamber302sbetween and/or adjacent to the first chamber302oand/or the second chamber302p. The third chamber302smay be at least partially enclosed by the outer wall302fof the housing and the inner wall302gof the housing. A fourth wall302tmay partially enclose the third chamber302s. At least a portion of the fourth wall302tmay be a portion of the inner wall302gextending between the second wall302qand the third wall302r. The fourth wall302tmay be perpendicular to the second wall302qand/or the third wall302r. The closed end302nof the slot302emay be defined by the fourth wall302t. The fourth wall302tmay separate the slot302efrom the third chamber302s. Boundaries between the chambers (e.g. the first chamber302o, the second chamber302p, and/or the third chamber302s) and the slot302emay be formed by the second wall302q, the third wall302r, and/or the fourth wall302t. The outer wall302fof the housing302and/or the inner wall302gof the housing302may be c-shaped, u-shaped, and so forth (i.e. c/u-shaped). The inner wall302gmay be nested in the outer wall302f. The slot302emay be defined by the c/u-shape of the inner wall302g. 
The slot may thereby extend into the housing302, where the closed end302nof the slot302eis defined by the c/u-shaped inner wall302g. The shape of the inner wall302gand/or the outer wall302fmay be configured to extend at least partially around a width of the band106, where the width of the band106may be along the same direction as the depth302jof the slot302e. The first chamber302o, the second chamber302p, and/or the third chamber302smay be at least partially enclosed by the c/u-shaped inner wall302gand the c/u-shaped outer wall302f. The physiological sensor206may be positioned in the first opening1302(i.e. the sensor window). An elastic coupling member1502may be disposed in the first chamber302oand aligned with the first opening1302. The elastic coupling member1502may be positioned against the second wall302q. The elastic coupling member1502may be attached to, coupled to, and/or integrated with the second wall302q. For example, the elastic coupling member1502may be adhered to the second wall302qby glue. The elastic coupling member1502and the second wall302qmay be formed of the same material and may form a unitary piece of the housing302. For example, the elastic coupling member1502and the inner wall302gmay be 3D-printed or manufactured by a plastic injection molding process. The first sensor112may be attached to the elastic coupling member1502at an end of the elastic coupling member1502opposite where the elastic coupling member1502is attached to the second wall302q. The elastic coupling member1502and/or the first sensor112may be aligned with the first opening1302. The elastic coupling member1502may have a spring property such that the elastic coupling member1502may respond with a reactionary force directed through the first opening1302away from the housing302when a causal force on the elastic coupling member1502is directed towards the second wall302q. A force exerted by the elastic coupling member1502on the first sensor112may be in a direction through the first opening1302and/or away from the housing302. Electronic components of the adjustable measurement device300may be disposed in various locations throughout the first chamber302o, the second chamber302p, and/or the third chamber302s. For example, the power source108may be positioned in the third chamber302s, a PCB1504may be positioned in the second chamber302p, and the elastic coupling member1502and first sensor112may be positioned in and/or adjacent to the first chamber302o. The processing device102and communication device110may be positioned in the second chamber302pand may be electronically interconnected to each other, the power source108, and/or the first sensor112by the PCB1504and/or the electrical trace or circuit116. The control/logic300aand communication device300bof the adjustable measurement device300may be interconnected on the PCB1504. The power source108may be positioned in the first chamber302oand/or the second chamber302p. The power source108may include a cellular lithium-ion battery unit formed in the same shape as the inner wall302gand/or the outer wall302fof the housing302and may be attached to the inner wall302gand/or the outer wall302f. The power source108may be positioned in multiple chambers, e.g. may extend from the first chamber302othrough the third chamber302sto the second chamber302p. The physiological sensor206may be electronically coupled to the power source108. The physiological sensor206may have an integrated power management circuit.
The physiological sensor206may be directly electronically coupled to the power source108. The power management circuit for the physiological sensor206may be integrated into the control/logic300aof the adjustable measurement device300. The power management circuit for the physiological sensor206may be integrated into the processing device102. The control/logic300aand/or the processing device102may regulate the provision of power to the physiological sensor206. The processing device102, the control/logic300a, the power source108, and/or the physiological sensor206may be electronically interconnected via the PCB1504and/or the electrical trace and circuit116. The underside302cof the housing302may be configured to be positioned between the band106and the subject's body part. For example, the underside302cmay be shaped to conform to the subject's body part. The adjustable measurement device300may include an indicator on the housing302of an orientation of the housing as the subject wears the band106and the adjustable measurement device300. The first opening1302(and/or the second opening1304, the third opening1306, and so forth) may be adjacent to and/or may contact the subject, e.g. the subject's body part, as the subject wears the band106and the housing302is coupled to the band106. The physiological sensor206may be adjacent to and/or may contact the subject, e.g. the subject's body part, as the subject wears the band106and the housing302is coupled to the band106. The housing302may be hollow. The electronic components of the adjustable measurement device300may be positioned within the hollow housing302in any of a variety of ways that enable efficient use of space within the housing302to minimize a volume and/or “footprint,” e.g. surface area, of the housing302. The sensing electronics such as the first sensor112, the second sensor114, the moveable sensor602, and so forth may be positioned in the first chamber302oto be aligned with the first opening1302, the second opening1304, the third opening1306, and so forth. Room permitting, other electronic components of the adjustable measurement device300, such as the processing device102, the control/logic300a, the communication device300b, the power source108, the PCB1504, and so forth, may be positioned in the first chamber302o. The other electronic components of the adjustable measurement device300may be spread out throughout the interior of the hollow housing302such as within the second chamber302pand/or the third chamber302s. The communication device300bmay communicatively couple the internal electronic components of the adjustable measurement device300to the user device118. For example, the electronic components of the adjustable measurement device300may include the control/logic300a, the communication device300b, the power source108, and the physiological sensor206. A measurement taken by the first sensor112may be processed by the control/logic300ato increase the SNR of the signal associated with the measurement. The improved signal may be communicated by the communication device300bto the user device118, such as via the communication device110. The processing device102may determine a measurement value based on the improved signal. The user device118may include the user interface104, and the user interface104may present the measurement value to the subject. 
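The embodiments above do not specify how the control/logic300aincreases the SNR of a measurement before it is communicated. As one minimal, illustrative possibility, the following Python sketch applies a moving-average filter to a noisy trace and reports the SNR before and after; all signal parameters, the window size, and the function names are placeholder assumptions.

    import numpy as np

    def smooth(signal, window=9):
        # Simple moving-average filter; one illustrative way control logic
        # might raise the SNR of a raw sensor trace before transmitting it.
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="same")

    def snr_db(sig, reference):
        # SNR relative to a known reference component, in decibels.
        noise = sig - reference
        return 10 * np.log10(np.mean(reference**2) / np.mean(noise**2))

    rng = np.random.default_rng(1)
    t = np.linspace(0, 4, 400)
    clean = np.sin(2 * np.pi * 1.2 * t)  # idealized heartbeat component
    raw = clean + 0.4 * rng.standard_normal(t.size)

    print("raw SNR (dB):     ", round(snr_db(raw, clean), 1))
    print("smoothed SNR (dB):", round(snr_db(smooth(raw), clean), 1))

In practice the control/logic300acould use any filtering or averaging scheme; the moving average here simply stands in for that unspecified processing.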
The shape of the housing302and the positioning and/or shape of the chambers in the housing302may minimize the footprint of the housing302and optimize the measurement-taking capabilities of the adjustable measurement device300. The slot302epassing through the housing302, instead of having a separate structure to attach the adjustable measurement device300to the band106, may reduce the overall volume of the housing302. The band106passing through the housing302and over the sensing electronics may allow for a constant downward force of the sensors against the subject's body part. The elastic coupling member1502may counter-balance the force by the band106to ensure the sensor is pressed against the subject with the correct amount of pressure in cases where the band may be too tight or too loose. The elastic coupling member1502may also ensure constant force of the sensor against the subject as the subject moves and engages in activity such as exercise, playing sports, and so forth. The band106may loosen or tighten on the subject as the subject's body part changes shape and/or volume due to movement of the subject. The elastic coupling member1502may maintain a constant force of the sensor against the subject as these changes to the subject's body part occur. FIG.15Billustrates a zoomed-in view of the cross-section illustrated inFIG.15A, according to an embodiment. Some of the features inFIG.15Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.15B. The pressure sensor400may be coupled to the physiological sensor206, the elastic coupling member1502, housing302, and/or the band106. For example, the pressure sensor400may be disposed within the first chamber302oagainst the second wall302qbetween the second wall302qand the elastic coupling member1502. As another example, the pressure sensor400may be disposed between the elastic coupling member1502and the physiological sensor206. As another example, the pressure sensor400may be disposed in a recess in the band106between the elastic coupling member1502and the band106and/or between the physiological sensor206and the band106. The pressure sensor400may thereby be configured to measure a pressure of the band106on the subject and/or a pressure of the physiological sensor206against the subject as the band106may be attached to the subject. The pressure sensor400may be electronically coupled to the processing device102, such as via the electrical trace or circuit116, the PCB1504, and so forth. The pressure sensor400may generate an electronic signal corresponding to a pressure of the physiological sensor206on the subject. The pressure sensor400may generate an electronic signal corresponding to a pressure of the physiological sensor206on the subject as the subject wears the band106. The processing device102may convert the electronic signal into a pressure measurement. The pressure measurement may have a corresponding pressure measurement value. The pressure measurement value may be an absolute pressure measured by the pressure sensor400and may have units such as pounds per square inch. The pressure measurement value may be relative to a range of pressures. For example, the pressure measurement value may be represented as “within range,” “good,” “out of range,” “high,” “low,” and so forth. 
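As a non-limiting sketch of the categorical representation just described, and of the normalized scalar form described next, the following Python fragment converts a pressure reading into a label, a normalized value, and an alert flag. The thresholds and the optimal pressure are hypothetical placeholders, since the embodiments above do not specify an optimal range.

    def classify_pressure(psi, low=0.5, high=2.0):
        # Hypothetical bounds, in pounds per square inch (psi).
        if psi < low:
            return "low"
        if psi > high:
            return "high"
        return "within range"

    def normalize_pressure(psi, optimal=1.2, low=0.5, high=2.0):
        # Scalar form: 1.0 at the assumed optimum, 0.0 at or beyond a limit.
        span = (optimal - low) if psi <= optimal else (high - optimal)
        return max(0.0, 1.0 - abs(psi - optimal) / span)

    for reading in (0.3, 1.2, 2.4):
        label = classify_pressure(reading)
        alert = label != "within range"  # e.g. prompt the subject to adjust
        print(reading, label, round(normalize_pressure(reading), 2), alert)

The alert flag here corresponds to the out-of-range handling described in this and the following passages, such as prompting a repeat measurement or tagging a physiological measurement as possibly inaccurate.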
The pressure measurement value may be a scalar, such as a normalized value that is normalized relative to an optimal pressure and/or an optimal range for the pressure. The optimal pressure range may have a minimum pressure and no maximum pressure. The optimal pressure range may have a maximum pressure and no minimum pressure. The pressure sensor400may enable the subject to adjust the pressure of the band106and/or the adjustable measurement device300on the subject to an optimal pressure for the physiological sensor206. The pressure sensor400may also enable the processing device102, or another processing device, to determine how likely a physiological measurement value is to be accurate. If the physiological measurement is taken when the physiological sensor206is pressed against the subject with a pressure outside the range of optimal pressures, the physiological measurement may be tagged as being possibly inaccurate, the processing device102may prompt the physiological sensor206to take another measurement, the processing device102may prompt the physiological sensor206to take another measurement when the pressure is within the optimal range, the processing device102may discard the physiological measurement, the processing device102may adjust the value of the physiological measurement according to the pressure measurement value, and so forth. The physiological measurement value may vary as a function of the pressure with which the physiological sensor206is pressed against the subject. FIG.15Cillustrates a zoomed-in view of the cross-section illustrated inFIG.15Aincluding light piping1508, according to an embodiment. Some of the features inFIG.15Cmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.15C. The PCB1504may be positioned in the first chamber302o. A light source1506may be mounted to the PCB1504. The light source1506may be aligned with one of the openings in the outer wall302f, such as the first opening1302. The light piping1508may be coupled to the first opening1302. The light piping1508may extend into the first chamber302o. The light source1506may be tuned to interrogate a body part of the subject. For example, the light source1506may include LEDs that emit light including a range of wavelengths. The range of wavelengths may include individual wavelengths that are either strongly absorbed by the physiological structure204of the subject and/or strongly reflected by the physiological structure. The light piping1508may isolate the light source1506from internal components of the adjustable measurement device300including sensors. The light piping may extend from the first opening1302to the PCB1504. The light piping may contact the PCB1504and may form an optical seal with the PCB1504. The light piping1508may direct light emitted by the light source1506towards the subject. The adjustable measurement device300may include optical sensors that interrogate the physiological structure of the subject by detecting wavelengths of light reflected from the physiological structure. Light entering the optical sensor from outside the subject's body may distort measurements of the subject's physiological condition because the light may wash out light received from the subject's body, increase noise in light received by the optical sensors, and so forth. 
The first opening1302may be open such that the light source1506is directly exposed to an ambient environment outside the housing302. The first opening1302may include a transparent covering between the light source1506and the ambient environment outside the housing302. The transparent covering may be transparent to light emitted by the light source1506. The physiological sensor206, such as the first sensor112, may be similarly situated such that the light piping1508sequesters the physiological sensor206from other internal components of the adjustable measurement device300. For example, the physiological sensor206may be a photodiode and/or another photo detector. The light piping1508may prevent light noise, such as light emitted from other electronic components within the housing302, from reaching the physiological sensor206. FIG.16Aillustrates a side view of the adjustable measurement device300on the band106of the wearable device100, according to an embodiment. Some of the features inFIG.16Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.16A. The adjustable measurement device300may include electrical contacts1602disposed on the inner wall302gwithin the slot302eof the housing302. For example, the electrical contacts1602may be disposed on the second wall302qand/or on the third wall302r. The band106may include exposed conductive tracing1604(e.g. electrical contact surfaces). The electrical contacts1602of the adjustable measurement device300may be electrically interconnected to the electronic components of the adjustable measurement device300. For example, the electrical contacts1602may be electrically coupled to the physiological sensor206, the power source108(when, for example, the power source108is disposed in the housing302), and/or the processing device102, and so forth via the PCB1504and/or the electrical trace or circuit116. The exposed conductive tracing1604of the band106may be electrically coupled to the user device118, the power source108(when, for example, the power source108is disposed outside the housing302), and/or an inductor in the band106, and so forth. The inductor may be an inductive charging device. The electrical contacts1602and the exposed conductive tracing1604may transfer power between the adjustable measurement device300and the band106. For example, the power source108may be positioned in the band106and/or the user device118which may be attached to the band106. Power may be delivered to internal electronic components of the adjustable measurement device300, such as the control/logic300a, the communication device300b, the physiological sensor206, and so forth, from the power source108outside the housing302via the electrical contacts1602and the exposed conductive tracing1604. The exposed conductive tracing1604may be electrically coupled to the power source108. The exposed conductive tracing1604may form electrical contact with the electrical contacts1602of the adjustable measurement device300. As another example, the processing device102may be positioned in the band106.
The processing device102may communicate instructions to the physiological sensor206in the adjustable measurement device300via the electrical contact between the electrical contacts1602of the adjustable measurement device300and the exposed conductive tracing1604of the band106. As another example, the power source108may be positioned in the housing302. The power source108may be a battery. A charging mechanism for the battery, such as an inductor, may be positioned in and/or integrated with the band106and/or the user device118. The battery may be charged using the inductor via the electrical contacts1602of the adjustable measurement device300and the exposed conductive tracing1604of the band106. It may be beneficial to spread electronic components of the wearable device100, including those of the adjustable measurement device300, to as many areas of the wearable device100as possible to minimize the footprint of the wearable device100and/or the adjustable measurement device300. This may include putting the processing device102and power source108outside the user device118and adjustable measurement device300and in the band106. The arrangement of electrical contacts1602aligned with the exposed conductive tracing1604of the band may enable the adjustable measurement device300to be adjustable on the band106while still delivering power and/or control instructions from the processing device102and/or power source108to the adjustable measurement device. The exposed conductive tracing1604may have a length of exposure from the band106that may correspond to, e.g. may determine, an adjustable range for the position of the adjustable measurement device300on the band106. FIG.16Billustrates a side view of the adjustable measurement device300on the band106of the wearable device100and includes a wireless charging system1606, according to an embodiment. Some of the features inFIG.16Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.16B. The wireless charging system1606may include a first wireless charging device1606apositioned in the band106and a second wireless charging device1606bpositioned in the housing302of the adjustable measurement device300proximate to the slot302e. The second wireless charging device1606bmay be electronically coupled to various electronic components of the adjustable measurement device300, and the first wireless charging device1606amay be electronically coupled to various electronic components outside the adjustable measurement device300. For example, the second wireless charging device1606bmay be electronically coupled to the processing device102and/or the power source108in the housing302. The user interface104may be integrated with the band106. The first wireless charging device1606aand the second wireless charging device1606bmay be configured to transfer power and/or data between each other. For example, the first wireless charging device1606aand/or the second wireless charging device1606bmay include inductors. The processing device102and/or the power source108may be electronically coupled to the user interface via the wireless charging system1606. The wireless charging circuitry of the adjustable measurement device, e.g.
the second wireless charging device1606b, may be disposed in the first chamber302o, the second chamber302p, and/or the third chamber302s. For example, the second wireless charging device1606bmay be positioned in the first chamber302oadjacent to the second wall302q. As another example, the second wireless charging device1606bmay be positioned in the second chamber302padjacent to the third wall302r. As yet another example, the second wireless charging device1606bmay be positioned in the third chamber302sadjacent to the fourth wall302t. The first wireless charging device1606amay be integrated into the band106. The first wireless charging device1606amay be incorporated into the band106to be flush with a surface of the band106. The first wireless charging device1606amay be integrated into the band106and may be positioned below the surface of the band106within the band106. The wireless charging system1606may enable the adjustable measurement device300to be removable from the band106while still being configured to be electronically coupled to electronic components of the band106and/or the user device118. The band106may include several instances of the first wireless charging device1606aso that the adjustable measurement device300may be adjusted in position relative to the band106while still being electronically coupled to the electronic components of the band106and/or the user device118. For example, the band106may include two instances of the first wireless charging device1606a, three instances of the first wireless charging device1606a, four instances of the first wireless charging device1606a, and so forth. FIG.17illustrates the physiological sensor206and the elastic coupling member1502embedded in the band106, according to an embodiment. Some of the features inFIG.17may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.17. The physiological sensor206may be embedded in and/or coupled to the band106. The band106may include a recess106hin the inward-facing surface106aof the band106. The elastic coupling member1502may be disposed within the recess106hand/or may be attached to the band106. A first end1502aof the elastic coupling member1502may be attached to the band106. A second end1502bof the elastic coupling member1502may be attached to the physiological sensor206. A flexible seal1702may be attached to the inward-facing surface106aof the band106and the physiological sensor206. The flexible seal1702may be attached to a surface within the recess106h. The flexible seal1702and the physiological sensor206may cover the recess. The flexible seal1702and the physiological sensor206may seal off the recess from an ambient environment of the band106. For example, the flexible seal1702may form a flexible hermetic and/or watertight seal with the sensor. The flexible seal1702and the physiological sensor206may prevent sweat, dirt, and/or oil from the subject's skin from accumulating in the recess106h. The processing device102may be coupled to the band106, incorporated with a device attached to the band106, and/or integrated into the band106. The processing device102may be electronically coupled to the physiological sensor206, such as via the electrical trace or circuit116, the PCB1504, and so forth. 
The physiological sensor206may be configured to generate an electronic signal corresponding to a physiological state of a subject as the band106is attached to the subject. The processing device102may be configured to receive the electronic signal and convert the electronic signal to a physiological measurement corresponding to the physiological state of the subject. The physiological sensor206may be integrated into the band106and/or embedded in the recess106hof the band106. A detection surface206aof the physiological sensor206may be exposed on an underside of the band106, e.g. the inward-facing surface106a, that rests against a body part of the subject as the band106is attached to the subject. The detection surface206amay be flush with the inward-facing surface106a, e.g. the detection surface206amay be coplanar with a plane of the inward-facing surface106a. The detection surface206amay be non-coplanar with the plane of the inward-facing surface106a. For example, the detection surface206amay be recessed within the band106, or the detection surface206amay extend outside of the band106. If the physiological sensor206is fully recessed within the band106, the physiological sensor206may be directly coupled to the band106without the elastic coupling member1502. The open end of the recess106hmay press against the subject as the subject wears the band106to isolate the physiological sensor206from possible noise. The recess106hmay extend into the band106from the inward-facing surface106atowards the outward-facing surface106b. As the subject wears the band106, the elastic coupling member1502may press the physiological sensor206against the subject and may cause constant contact between the physiological sensor206and the subject. The housing302and/or the user device118may be attached to the band106. The processing device102may be electronically coupled to the physiological sensor206and positioned in the housing302or the user device118. The electrical trace or circuit116embedded in the band may extend from the physiological sensor206to the housing302or the user device118. The electrical trace or circuit116may electronically couple the physiological sensor206to the processing device102and/or other electronic components of the wearable device100such as the user interface104. The recess106hmay include a closed end and an open end opposite the closed end. The closed end and the open end may have the same shape or a different shape. For example, the closed end and the open end may both be circular, rectangular, polygonal, and so forth. As another example, the closed end may be a first shape and the open end may be a second shape that is different from the first shape. The closed end may be circular, and the open end may be rectangular. A base of the elastic coupling member1502may be circular and may be the same size as the closed end of the recess106h. The physiological sensor206may be rectangular and may fit within the open end of the recess106h. The physiological sensor206may be tiltable on the elastic coupling member1502. The physiological sensor206may be tiltable relative to the band106. The physiological sensor206may be tiltable relative to the inward-facing surface106aof the band106. The physiological sensor206may be tiltable from a plane parallel with the band106and/or the inward-facing surface106aof the band106by up to 30 degrees. 
The physiological sensor206may be tiltable on the elastic coupling member1502from the plane that is coplanar with the band106and/or the inward-facing surface106aat 360 degrees around the physiological sensor206, i.e. pressure on an edge of the physiological sensor206at any point around the physiological sensor206may cause the physiological sensor206to tilt on the elastic coupling member by up to 30 degrees. The recess106hmay be configured such that the physiological sensor206fits snugly within the recess106h. For example, the physiological sensor206may have a clearance fit within the recess106hwithin a range of tolerance. The clearance fit may be with respect to a width of the physiological sensor206and a width of the recess106h. The range of tolerance of the clearance fit may range from 0.25 mm to 2 mm on each side of the physiological sensor206between the physiological sensor206and the walls of the recess106h. The elastic coupling member1502may maintain the physiological sensor206in approximately constant contact with a body part of the subject as the subject wears the band106and as a pressure of the band against the body part changes. For example, the subject may wear the band106on the subject's wrist. The cross-sectional diameter of the subject's wrist may change as the subject moves, which may cause a change in the pressure of the band106against the subject's wrist. The elastic coupling member1502may compress as the pressure of the band106on the subject's wrist increases. The elastic coupling member1502may expand as the pressure of the band106on the subject's wrist decreases. The elastic coupling member may similarly maintain the physiological sensor206approximately coplanar with the body part as the subject wears the band106and as an alignment of the wearable band with the body part changes. For example, a plane of the inward-facing surface106aof the band may be parallel to a plane of the subject's body part. As the subject engages in an activity, the plane of the inward-facing surface106amay become non-parallel (e.g. intersecting) with the plane of the subject's body part. The elastic coupling member1502may press the physiological sensor206against the subject's body part and the pressure of the subject's body part on the physiological sensor206may cause the physiological sensor206to tilt relative to the plane of the inward-facing surface106aof the band106. The elastic coupling member1502may enable such tilting while still maintaining the physiological sensor206in constant contact with the subject's body part. A plane of the detection surface206amay remain parallel with the plane of the body part as the plane of the inward-facing surface106ais non-parallel with the plane of the body part. The elastic coupling member1502may be an electrical conductor. The elastic coupling member1502may electronically couple the physiological sensor206to other electronic components of the wearable device100such as the processing device102. For example, the elastic coupling member1502may be made of steel. The elastic coupling member1502may be electrically coupled to the electronic components of the physiological sensor206and may be electrically coupled to the electrical trace or circuit116. The physiological sensor206may be positioned adjacent to the inward-facing surface106aof the band. For example, the physiological sensor206may be embedded within the band106. The strain in the band106may indicate a pressure of the physiological sensor206against the subject. 
The strain may be measured by a strain gauge in the band106, e.g. the pressure sensor400may be embedded in the band106. The physiological sensor206may be set in the recess106h. The pressure sensor400may also be set in the recess106h. The pressure sensor400may be coupled to the band106. The pressure sensor400may be positioned between the band106and the elastic coupling member1502. The pressure sensor400may be positioned between the elastic coupling member1502and the physiological sensor206. The pressure sensor400may be positioned to measure a pressure with which the physiological sensor206is pressed into the band106. The processing device102may be configured to receive an electronic signal from the pressure sensor400corresponding to the pressure on the physiological sensor206or the strain in the band106. The processing device102may be configured to generate a pressure measurement value representative of the pressure or the strain. The processing device102may be configured to compare the pressure measurement value to a first range of pressure values from a minimum pressure value to a maximum pressure value. The processing device102may be configured to generate an alert when the pressure measurement value is outside the first range of pressure values. The user interface104may be configured to receive the alert from the processing device102and generate an indicator for the alert. The user interface104may present the indicator to the subject. Elements and/or features of how the physiological sensor206is incorporated into the band106may also be employed to incorporate the physiological sensor206into the housing302of the adjustable measurement device300. The recess106hmay be positioned in the inward-facing portion302aof the housing302. The elastic coupling member1502may be mounted in the recess106hon the inward-facing portion302aof the housing302. The physiological sensor206may be coupled to the elastic coupling member1502in the recess106h. The elastic coupling member1502may press the physiological sensor206against the subject and cause constant contact between the physiological sensor206and the subject as the subject wears the adjustable measurement device300on the band106. The flexible seal1702may be disposed within one or more of the openings through the housing302, such as the first opening1302, and so forth. The flexible seal1702may be disposed between the second wall302qand the physiological sensor206. The flexible seal1702may form a watertight or hermetic seal between the second wall302qand the physiological sensor206. The flexible seal1702may permit the physiological sensor206to move in and/or through the opening as the elastic coupling member1502or skin of the subject presses against the physiological sensor206. FIG.18illustrates a perspective view of a first type of the elastic coupling member1502, according to an embodiment. Some of the features inFIG.18may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.18. The elastic coupling member1502may be a spring. The elastic coupling member1502may be a coil spring. The elastic coupling member1502may be a wave spring. The elastic coupling member1502may be formed of a material formed in a shape that gives the material an elastic property.
The elastic property may include the material having an equilibrium form, an extended form, and a compressed form. When the material is in the equilibrium form, the elastic coupling member1502is static. When the material is in the extended form, the elastic coupling member1502exerts a contracting force. When the material is in the compressed form, the elastic coupling member1502exerts an expanding force. The material may include a metal such as steel and/or a plastic material. FIG.19illustrates a perspective view of a second type of the elastic coupling member1502, according to an embodiment. Some of the features inFIG.19may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.19. The elastic coupling member1502may be an ortho-conical spring. The ortho-conical spring may include a base end1902. The base end1902may be configured to be coupled to the band106and/or the pressure sensor400in the recess106h. For example, the base end1902may include an adhesive, hooks, a magnet, and so forth. The closed end of the recess may include complementary attachment mechanisms. The base end1902may have a size and/or shape that matches the size and/or shape of the closed end of the recess106h. The ortho-conical spring may have a mounting end1904. The mounting end1904may be configured to be coupled to the physiological sensor206. For example, the mounting end1904may include an adhesive, hooks, a magnet, and so forth. An underside of the physiological sensor206may include a complementary attachment mechanism. The mounting end1904may be smaller in length, width, and/or diameter than the base end1902. The ortho-conical spring may include a leg1906that couples the base end1902to the mounting end1904. A spring constant of the ortho-conical spring may be proportional to an inverse cube of a length of the leg1906. The ortho-conical spring may have a height in an uncompressed equilibrium state of the ortho-conical spring that may be less than or equal to three-quarters of the length of the leg1906. In a compressed state, the base end1902, the mounting end1904, and the leg1906may be coplanar. For example, the leg1906may be nested in the base end1902and the mounting end1904may be nested in the base end1902and the leg1906. In an uncompressed equilibrium state, the ortho-conical spring may be conical. The base end1902may be a recess end, where the base end1902is positioned in the recess106hand attaches to the band106. The mounting end1904may be a sensor end, where the mounting end1904attaches to the physiological sensor206. The sensor end may be smaller than the recess end. For example, the recess end and the sensor end may be circular, and the sensor end may have a smaller diameter than the recess end. The sensor end may have a smaller diameter and/or surface area than the physiological sensor206. The sensor end may be attached to the physiological sensor206at a center of a surface of the physiological sensor206.
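The inverse-cube proportionality stated above may be written, for illustration, as the following relation, where k is the spring constant, L is the length of the leg1906, x is the compression, F is the resulting restoring force, and c is a constant of proportionality that is not recited in the embodiments described herein and would depend on the material and cross-section of the leg1906:

    k \propto \frac{1}{L^{3}} \quad\Longrightarrow\quad k = \frac{c}{L^{3}}, \qquad F = k\,x = \frac{c}{L^{3}}\,x

Under this relation, doubling the length of the leg1906reduces the spring constant k, and thus the restoring force for a given compression x, by a factor of eight (2^3 = 8), which is one way the stiffness of the ortho-conical spring may be tuned.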
FIG.20illustrates the band106with embedded instances of the physiological sensor206and the moveable sensor602in the slot604of the band106, according to an embodiment. Some of the features inFIG.20may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.20. The band106may include the slot604and the moveable sensor602may be positioned in the slot604. The slot604may be positioned in the band106along the length of the band106(i.e. the longer dimension of the band106). The adjustable measurement device300may be attached to the band106. The housing302may include the second chamber302pand not the first chamber302o. The underside302cof the housing302may include the outer wall302fand not the inner wall302g. The outer wall302fmay be c-shaped and may wrap partially around the width of the band106so that the outer wall302fextends part-way across the inward-facing surface106aof the band106. The outer wall302fmay not intersect with the slot604. The housing302may be positioned against the outward-facing surface106bof the band106over the slot604. The openings in the housing, such as the first opening1302, the second opening1304, and/or the third opening1306, may be through the third wall302rportion of the inner wall302g. At least one of the openings, such as the first opening1302, may be aligned with the slot604as the housing302is attached to the band106. The moveable sensor602may extend through, for example, the first opening1302. As the housing302is attached to the band106, the moveable sensor602may extend into and/or through the slot604. The moveable sensor602may be slidable in the slot604along the length of the slot604and/or the length of the band106as the adjustable measurement device300is attached to the band106. The moveable sensor602may be fixed relative to the housing302and the housing302may be adjustable position-wise on the band106. As the position of the housing302on the band106is adjusted along the length of the band106and/or the slot604, the moveable sensor602may slide in the slot604. The housing302may be fixed to the band106. The first opening1302may have a length greater than a width of the first opening1302. The length may extend parallel to the length of the slot604. The position of the moveable sensor602in the first opening1302and the slot604may be adjustable relative to the band106and the housing302. A lever extending from the housing302may be attached to the moveable sensor602. The position of the moveable sensor602may be adjusted by moving the lever. One or more instances of the physiological sensor206may be embedded in the band106. Two instances of the physiological sensor206may be embedded in the band106. Three instances of the physiological sensor206may be embedded in the band106. Four instances of the physiological sensor206may be embedded in the band106, and so forth. A first instance206bof the physiological sensor206may be positioned at a first end604aof the slot604. A second instance206cof the physiological sensor206or a first instance of another type of the physiological sensor206may be positioned at a second end604bof the slot604. The first end604aand the second end604bmay be opposite length-wise ends of the slot604or opposite width-wise ends of the slot604, e.g. sides of the slot604. The first instance206band the second instance206cmay be fixed relative to each other and/or the band106. The position of the moveable sensor602may be adjustable relative to the fixed positions of the first instance206band/or the second instance206c.
The processing device102may be configured to determine a shift of the moveable sensor602relative to the first instance206band/or the second instance206cof the physiological sensor206. For example, the processing device102may receive signals from the first instance206band the second instance206cand may determine, based on the respective SNRs of the signals, a position of the physiological structure204between the first instance206band the second instance206c. Based on the position of the physiological structure, the processing device102may output a recommended amount of shift of the moveable sensor602towards the first instance206bor the second instance206cof the physiological sensor206. The arrangement of the slot604and the moveable sensor602may enable fine-tuning of the sensing capabilities of the adjustable measurement device300. The first instance206band the second instance206cmay identify the position of the physiological structure. The moveable sensor602may be placed in direct alignment with the physiological structure to maximize the SNR of the signal generated by the moveable sensor602. The first instance206bof the physiological sensor206may be a photo sensor. The second instance206cof the physiological sensor206may be a photo sensor. The moveable sensor602may be a light source (the light source being termed a sensor here because it is part of a sensing system that includes the light source and the photo sensor). The photo sensors may straddle the light source. The first instance206band the second instance206cmay be light sources. The moveable sensor602may be a photo sensor. FIG.21illustrates a method2100of determining a sensor's proximity to a subject's vein and/or artery, according to an embodiment. Some of the features inFIG.21may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.21. Elements of the method2100may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2100may include measuring a heartbeat waveform of a subject (block2102). The heartbeat waveform may be measured, for example, by a physiological sensor (e.g. the physiological sensor206). The method2100may include determining a proximity of the physiological sensor to a physiological structure (e.g. the physiological structure204) of the subject (block2104). The physiological structure204may be a blood vessel, an organ, a muscle, a skeletal body, a muscular-walled tube, a vein, an artery, and so forth. The proximity may be determined by, for example, calculating a quality of the signal generated by the physiological sensor (e.g. a shape of the signal, an amplitude of the signal, an SNR of the signal, and so forth). The proximity may be determined by comparing the signal to a sample signal. The sample signal may have an ideal signal quality. The sample signal may be a signal generated when the proximity of the sensor to the physiological structure204is known to the subject and/or when the proximity is minimized. The sample signal may, for example, represent a best-case signal SNR, amplitude, shape, and so forth, averaged over a population of subjects.
The population of subjects may have one or more physiological traits in common with the subject. For example, the population of subjects may have the same age as the subject, may be in the same age range, may have the same gender, may have a similar gender, may have the same or a similar ethnicity, and so forth. The method2100may include generating an indicator (e.g. the indicator304) that signals to the subject (e.g. informs the subject of) the proximity of the physiological sensor to the physiological structure204(block2106). The proximity may be determined by an amplitude of the signal that indicates the heartbeat waveform of the subject. An increasing amplitude may indicate increasing proximity of the physiological sensor to the physiological structure204. The proximity may be determined by an SNR of the signal that indicates the heartbeat waveform. An increasing SNR may indicate increasing proximity of the physiological sensor to the physiological structure204. The proximity may be determined by the shape of the signal over time, where the shape matches a shape of a previously measured heartbeat waveform. Better conformity of the shape with the shape of the previously measured heartbeat waveform may indicate increasing proximity of the physiological sensor to the physiological structure204. The indicator may include a sound audible by the subject and/or a visual cue visible to the subject. The indicator may change as the proximity of the physiological sensor to the physiological structure204changes. The indicator may be output by the adjustable measurement device300and/or the user device118communicatively coupled to the adjustable measurement device300. FIG.22illustrates a method2200of positioning the adjustable measurement device on the subject, according to an embodiment. Some of the features inFIG.22may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.22. Elements of the method2200may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2200may include attaching a measurement device (e.g. the wearable device100and/or the adjustable measurement device300) to a wearable band that is wearable by a subject (e.g. the band106) such that the measurement device is moveable on the wearable band (block2202). The measurement device may include a housing formed in a shape that is complementary to a shape of a width of the wearable band (e.g. the housing302). The housing may include an opening through a wall of the housing (e.g. the first opening1302, and so forth). The measurement device may include a processing device disposed within the housing (e.g. the processing device102, the control/logic300a, and so forth). The measurement device may include an elastic coupling member (e.g. the elastic coupling member1502) and/or a physiological sensor coupled to the elastic coupling member (e.g. the physiological sensor206). The physiological sensor may be electronically coupled to the processing device and/or aligned with the opening.
A force exerted by the elastic coupling member on the physiological sensor may be in a direction through the opening and away from the housing. The measurement device may include an attachment mechanism configured to attach the housing to the wearable band (e.g. the clamping mechanism1400). When the subject wears the wearable band with the housing attached to the wearable band, the physiological sensor and/or the opening may be adjacent to the subject's skin. The method2200may include placing the wearable band on a body part of the subject such that the physiological sensor is pressed against the skin of the subject (block2204). The physiological sensor may perform best, e.g. may generate the highest-quality signal, when the physiological sensor is pressed against the subject in an optimal range of pressures. The method2200may include aligning the physiological sensor with the physiological structure of the subject, such as a muscular-walled tube within the subject's body part (block2206). The physiological sensor may be aligned with the subject's physiological structure when the signal quality output by the physiological sensor is maximized, such as by a maximum SNR and/or a maximum amplitude, and so forth (a sketch of such a search appears below). The method2200may include affixing the measurement device to the wearable band by the attachment mechanism (e.g. the clamping mechanism1400) when the physiological sensor is aligned with the physiological structure (block2208). The physiological sensor may be retained in alignment with the physiological structure of the subject by the wearable band. The method2200may include communicatively coupling (e.g. networking) the measurement device with the user interface (block2210). For example, the measurement device may be wirelessly networked to a user device such as a smartphone, a smartwatch, and so forth, via internal communication devices such as the communication device110and the communication device300b. The measurement device may be hardwired to a user interface (e.g. the user interface104) and/or a user device incorporating the user interface (e.g. the user device118) via a circuit (e.g. the electrical trace and circuit116). The user interface may be coupled to the wearable band or may be uncoupled from the wearable band. The user interface may be coupled to the wearable band separately from the measurement device, such as in a smartwatch on the wearable band. The user interface may be integrated with the measurement device or integrated with a user device remote from the wearable band. FIG.23illustrates a method2300for repositioning the adjustable measurement device on the subject, according to an embodiment. Some of the features inFIG.23may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.23. Elements of the method2300may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2300may include attaching the measurement device to the wearable band (block2302). The method2300may include placing the wearable band on the subject, such as on and/or around the body part of the subject (block2304).
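As referenced above, the alignment at block2206may amount to a search for the band position that maximizes signal quality. The following sketch, which assumes hypothetical names (best_alignment, measure_snr) not taken from the disclosure, illustrates one way such a sweep could be expressed:

```python
def best_alignment(positions_mm, measure_snr):
    """Sweep the measurement device along the band and return the
    offset at which the physiological sensor's SNR is maximized,
    together with that SNR. `measure_snr` is a callable that takes
    a candidate offset and returns the SNR measured there.
    """
    readings = {pos: measure_snr(pos) for pos in positions_mm}
    best = max(readings, key=readings.get)
    return best, readings[best]

# Example with a synthetic SNR profile peaking at 6 mm, standing in
# for a sweep of the sensor across a muscular-walled tube.
def synthetic_profile(pos):
    return 10.0 - abs(pos - 6.0)

offset, snr = best_alignment(range(0, 13, 2), synthetic_profile)
print(offset, snr)  # 6 10.0
```

In practice the sweep may be performed manually by the subject, with the processing device reporting the signal quality at each trial position.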
The method2300may include aligning a physiological sensor of the measurement device (e.g. the physiological sensor206) with a physiological structure of the subject, such as the muscular-walled tube, and so forth (block2306). The wearable band may be stationary on the subject as the physiological sensor is aligned with the physiological structure and the measurement device may be repositioned on the wearable band. The method2300may include affixing the measurement device to the wearable band when the physiological sensor is aligned with the physiological structure (block2308). The method2300may include decoupling the measurement device from the wearable band (block2310). The method2300may include attaching the measurement device to a second wearable band (block2312). The second wearable band may include, for example, a second instance of the band106or another type of the band106. The method2300may include placing the second wearable band on the same body part of the subject that the first wearable band was placed on or a different body part of the subject (block2314). The method2300may include aligning the physiological sensor with the same physiological structure as the physiological sensor was previously aligned with (see, e.g., block2306), a different physiological structure on the same body part of the subject, the same physiological structure on a different body part of the subject, or a different physiological structure of the different body part (block2316). FIG.24illustrates a method2400of transmitting data between the adjustable measurement device and the user interface via the wearable device, according to an embodiment. Some of the features inFIG.24may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.24. Elements of the method2400may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2400may include attaching the measurement device to the wearable band (block2402). The method2400may include placing the wearable band and the measurement device on a body part of the subject (block2404). The method2400may include aligning a physiological sensor in the wearable band and/or the measurement device with a physiological structure of the subject such as a vein and/or artery of the subject (block2406). The method2400may include affixing the measurement device to the wearable band when the physiological sensor is aligned with the physiological structure (block2408). The wearable band may retain the physiological sensor in alignment with the physiological structure. The method2400may include transmitting data via the wearable band between the measurement device and a user interface coupled to the wearable band (block2410). The measurement device may include a slot configured to extend at least partially around a width of the wearable band (e.g. the slot302e). The wearable band may include a data line electronically coupled to the user interface (e.g. the electrical trace or circuit116). The slot may include an electrical contact electronically coupled to the processing device and/or the sensor (e.g.
electrical contact surfaces of the conductive tracing1604). The data line may electronically couple to the electrical contact as the wearable band is positioned in the slot. Data may be transmitted via the data line. FIG.25illustrates a method2500of measuring a pressure of the physiological sensor against the subject, according to an embodiment. Some of the features inFIG.25may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.25. Elements of the method2500may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2500may include receiving a signal from a pressure sensor (e.g. the pressure sensor400) (block2502). The signal may correspond to a pressure value of the physiological sensor against the subject. The signal may correspond to a strain value of the wearable band on the subject. The method2500may include generating a value for the pressure/strain (block2504). For example, the signal may indicate a resistivity of a strain gauge embedded in the wearable band. A processing device such as the processing device102may store and/or execute instructions to calculate the strain based on the resistivity. The processing device may output the strain value. The processing device may store and/or execute instructions to calculate a pressure value based on a capacitance of the pressure sensor. The processing device may be programmed with an algorithm that includes strain as a function of resistivity. The processing device may be programmed with an algorithm that includes pressure as a function of capacitance. The signal from the pressure sensor may vary over time while the actual pressure of the physiological sensor against the subject remains constant. The variation in the signal may arise because the volume of the body part to which the wearable device is attached may change. The volume change may be due to blood being pumped periodically through arteries in the body part of the subject. The periodic variation may be reflected in a periodic variation of the signal from the pressure sensor. The periodic variation of the signal from the pressure sensor may be translated by a processing device into a heartbeat waveform of the subject. The method2500may include comparing the pressure and/or strain value to a range of pressure and/or strain values (block2506). The range of pressure and/or strain values may be the optimal range within which the physiological sensor takes the best measurements, e.g. the range within which the SNR and/or amplitude of the signal produced by the physiological sensor is maximized when the physiological sensor is properly aligned with the physiological structure being interrogated by the physiological sensor. The method2500may include generating an alert when the pressure and/or strain falls outside the range for optimized measurement by the physiological sensor (block2508). For example, the processing device may calculate a difference between the measured pressure and/or strain and a minimum pressure and/or strain. If the difference is positive, i.e.
if the measured value is greater than the minimum value, the pressure may be determined to be greater than the minimum value. The processing device may calculate a difference between the measured value and a maximum value. If the difference is negative, i.e. if the measured value is less than the maximum value, the pressure may be determined to be less than the maximum value. If the measured value is determined to be less than the maximum value and greater than the minimum value, the pressure and/or strain may be determined to be in-range. If the measured value is either greater than the maximum value or less than the minimum value, the pressure and/or strain may be determined to be out-of-range. The pressure and/or strain may be in-range if it is equal to the minimum value or the maximum value. The alert may include notification content generated by the processing device, such as how far out of range the pressure and/or strain is. The alert may include a notification type, such as the pressure and/or strain is too high or too low. The method2500may include generating an indicator for the alert (block2510). The indicator may include one or more combined graphics. For example, the indicator may include a color-coded arrow that points up if the pressure and/or strain is too high or down if the pressure and/or strain is too low. The indicator may include an icon that indicates the measured value relative to the optimal range. The icon may be color-coded. The indicator may include characters and/or symbols that indicate an action to be performed by the subject, such as increasing the tightness of the wearable band, and so forth. The indicator may flash. The indicator may include sounds audible to the subject. The method2500may include presenting the indicator to the subject (block2512). For example, the processing device may be electronically coupled to a user interface (e.g. the user interface104). The user interface may be configured to emit the alert by presenting the indicator to the subject. For example, the user interface may include a visual display and/or a speaker. FIG.26illustrates a method2600of generating an alert when the physiological sensor is not pressed against the subject with enough pressure, according to an embodiment. Some of the features inFIG.26may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.26. Elements of the method2600may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2600may include receiving a signal from a pressure sensor (e.g. the pressure sensor400) (block2602). The method2600may include generating a value for the pressure and/or strain (block2604). The method2600may include comparing the pressure and/or strain value to a range of pressure and/or strain values (block2606). The method2600may include determining whether the pressure and/or strain value is outside the range of pressure and/or strain values (block2608). The method2600may include taking a physiological measurement if the pressure and/or strain value is within the range of pressure and/or strain values (block2610). For example, a processing device (e.g. 
the processing device102) may be configured to take the physiological measurement by a physiological sensor when the measurement value of the pressure and/or strain is within the optimal range of pressure and/or strain values. The measurement device that includes the physiological sensor may thereby be enabled to interrogate a body part of the subject without distorting a physiological measurement generated by the measurement device. The method2600may include generating an alert if the measured pressure and/or strain is outside the optimal range (block2612). The method2600may include, in response to the measurement value of the pressure and/or strain being outside the range of optimal pressure and/or strain values, skipping taking the physiological measurement (block2614). For example, the processing device may be programmed with a schedule for taking physiological measurements. The processing device may determine the measured pressure and/or strain is out-of-range and may cancel a scheduled measurement. The subject may request a measurement via a user interface electronically coupled to the processing device (e.g. the user interface104). The processing device may be programmed to measure the pressure and/or strain after receiving instructions from the subject to take a physiological measurement. The processing device may be programmed to generate the alert and skip the requested physiological measurement if the measured pressure and/or strain is outside the optimal range. The processing device may be programmed to generate an indicator that indicates the physiological measurement was skipped and present the indicator to the subject. The method2600may include removing the measured pressure and/or strain value from a data set of pressure and/or strain measurement values in response to the measured value being outside the optimal range (block2616). For example, the processing device may be programmed to store measured pressure and/or strain values and correlate the measured values to physiological measurements taken at roughly the same time as the pressure and/or strain measurement. To prevent an inaccurate measurement from skewing statistical analysis of physiological measurements taken from the subject, the physiological measurement may be skipped, and the pressure and/or strain measurement may be removed from the data set. FIG.27illustrates a method2700of tagging a measurement as possibly inaccurate, according to an embodiment. Some of the features inFIG.27may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.27. Elements of the method2700may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2700may include receiving a signal from a pressure sensor (block2702). The method2700may include generating a value for the pressure and/or strain (block2704). The method2700may include comparing the pressure and/or strain value to an optimal range of pressure and/or strain values (block2706). The method2700may include taking a physiological measurement (block2708). 
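Before continuing with method2700, the range check and the skip-and-remove behavior described for methods2500and2600may be summarized in a minimal sketch. The class and method names here (PressureGate, handle) are illustrative and are not taken from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class PressureGate:
    """Gate scheduled physiological measurements on a pressure (or
    strain) reading, in the manner of methods 2500 and 2600."""
    minimum: float
    maximum: float
    history: list = field(default_factory=list)

    def in_range(self, value: float) -> bool:
        # Boundary values count as in-range, as described above.
        return self.minimum <= value <= self.maximum

    def handle(self, value: float, take_measurement):
        if self.in_range(value):
            self.history.append(value)
            return take_measurement()
        # Out-of-range: alert, skip the measurement, and keep the
        # out-of-range pressure value out of the stored data set.
        print(f"alert: pressure {value} outside "
              f"[{self.minimum}, {self.maximum}]; measurement skipped")
        return None

gate = PressureGate(minimum=1.0, maximum=3.0)
gate.handle(2.2, take_measurement=lambda: "reading taken")  # taken
gate.handle(4.1, take_measurement=lambda: "reading taken")  # skipped
```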
In method2700, the physiological measurement may be taken regardless of whether the measured pressure and/or strain falls within the optimal range. The method2700may include determining whether the measured pressure and/or strain is within the optimal range (block2710). The method2700may include tagging the physiological measurement as possibly inaccurate if the measured pressure and/or strain is outside the optimal range (block2712). For example, the processing device may generate a tag stored with the physiological measurement in a database. The tag may indicate whether the pressure and/or strain at the time the physiological measurement was taken fell within the optimal range. The tag may simply act as a flag that the physiological measurement may be inaccurate, where physiological measurements taken when the pressure and/or strain was in the optimal range do not have a tag. The method2700may include determining whether the quality of the signal (e.g. the SNR and/or the amplitude of the signal) corresponding to the physiological measurement is above a threshold value for the signal quality (block2714). The physiological measurement may be tagged as possibly inaccurate in response to the signal quality falling below the threshold value (e.g. block2712). For example, the physiological measurement may be tagged as "bad." The bad physiological measurement may correspond to the signal generated by the measurement device (e.g. the physiological sensor in the adjustable measurement device300) having a signal quality below the threshold level. The method2700may include tagging the measurement as accurate when the pressure value is within the optimal range and/or the signal quality is above the threshold value for the signal quality (block2716). Tagging the physiological measurements may enable training and/or updating of an algorithm for determining the optimal pressure and/or strain range. For example, a data set including physiological measurements may include tagged and untagged measurements. Statistical analysis of the combined tagged-untagged data set may yield a variation and distribution of the physiological measurements. Statistical analysis of the untagged data set alone may yield a second variation and distribution of the untagged physiological measurements. If the two statistical analyses yield the same results, the optimal range for pressure and/or strain may be over-broad or not broad enough. If the two statistical analyses yield different results, e.g. if removing the tagged measurements eliminates skewing of the data by inaccurate measurements, the range may be confirmed as being optimal. Tagging the physiological measurements may enable training and/or updating of an algorithm for determining an optimal range of SNRs and/or amplitudes in a similar fashion. Tagging the physiological measurements may enable the processing device to discern between a bad SNR and/or low signal amplitude due to improper alignment and a bad SNR and/or low signal amplitude due to the physiological sensor being pressed against the subject too hard or too softly. FIG.28illustrates a method2800of correlating a change in the pressure on the physiological sensor to a change in the physiological measurement value, according to an embodiment. Some of the features inFIG.28may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise.
Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.28. Elements of the method2800may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2800may include taking a physiological measurement and a pressure measurement concurrently (block2802). The method2800may include taking the physiological measurement and a strain measurement concurrently. For example, a processing device (e.g. the processing device102) may be configured to record a pressure measurement value concurrently with a physiological measurement value. The processing device may include an internal clock. The processing device may store and/or execute instructions to take the physiological measurement according to a schedule. The processing device may store and/or execute instructions to take pressure measurements according to the same schedule as the physiological measurements. The processing device may store and/or execute instructions that trigger taking the pressure measurement within an amount of time before and/or after the physiological measurement is taken. The amount of time may be less than or equal to half the amount of time between successive physiological measurements. A pressure measurement taken within the amount of time of the physiological measurement may be considered taken "concurrently" with the physiological measurement although not taken at the same clock time as the physiological measurement. The method2800may include correlating a change in the pressure measurement value from one or more of a set of previous pressure measurement values with a change in the physiological measurement value from one or more of a set of previous physiological measurement values (block2804). For example, the processing device may store an algorithm for the physiological measurement as a function of the pressure measurement. The algorithm may be determined by a curve-fitting process that fits a curve to past physiological measurement data as a function of pressure measurement data. The algorithm may be determined by a regression analysis of the physiological measurement data and the pressure measurement data. The change in the pressure measurement value from a previous value to a current value may be input into the algorithm. The algorithm may output a change in the physiological measurement value as measured by the physiological sensor. The output physiological measurement value may, for example, be an amount of deviation of a measurement value from an actual value for the physiological characteristic. The measurement value may deviate from the actual value due to the change in pressure of the sensor against the subject. The change in the physiological measurement value may, therefore, be correlated to the change in the pressure measurement value. The method2800may include predicting a future change between the physiological measurement value and one or more of a set of future physiological measurement values correlated to one or more of a set of possible future pressure measurement values (block2806). For example, the prediction may be determined by the algorithm that correlates pressure measurement data to physiological measurement data. The prediction may be done as a function of time.
For example, past pressure measurement values may vary cyclically such that changes in future pressure measurement values may be predicted according to the cyclic variation of past pressure measurement values. Deviations of future physiological measurement values from actual values may be predicted based on the cyclic variation of the pressure measurement values. Deviation data may be combined with cyclic variations in the actual values for the physiological characteristic such that the measured value may be predicted. FIG.29illustrates a method2900of validating a physiological measurement, according to an embodiment. Some of the features inFIG.29may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.29. Elements of the method2900may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method2900may include taking a physiological measurement and a pressure measurement concurrently (block2902). The method2900may include receiving a validation measurement of the physiological measurement (block2904). The validation measurement may include a measurement of the same physiological characteristic taken by a separate measurement device concurrently or approximately concurrently with the physiological measurement. For example, the physiological measurement may be a non-invasive glucose measurement taken by the adjustable measurement device300. The validation measurement may be an invasive glucose measurement taken by an invasive glucometer. The non-invasive glucose measurement and the invasive glucose measurement may be taken within an amount of time of each other. The amount of time may be a few seconds, a few minutes, three minutes, five minutes, ten minutes, and so forth. The method2900may include determining whether the pressure and/or strain measurement value is within the optimal pressure and/or strain range (block2906). The method2900may also include determining whether the validation measurement validates the physiological measurement value (block2908). In response to the pressure and/or strain measurement value falling outside the optimal range and the physiological measurement being validated by the validation measurement, the method2900may include adjusting the optimal range of values to include the pressure and/or strain measurement value (block2910). In response to the pressure and/or strain measurement value falling within the optimal range and the physiological measurement not being validated by the validation measurement, the method2900may include adjusting the optimal range of pressure values to exclude the pressure measurement value (block2912). The method2900may be executed to identify constituent pressures of the optimal range for the pressure and/or strain measurement values, including a maximum optimal pressure and/or a minimum optimal pressure. The physiological sensor may be pressed against the subject with a variety of pressures when the physiological sensor is optimally aligned with a physiological structure to be measured (e.g. the physiological structure204). 
At each pressure level, the physiological measurement and the validation measurement may be taken. The validation measurement may be taken once and compared against a set of physiological measurements taken within a time frame of the validation measurement such as one minute, two minutes, five minutes, ten minutes, and so forth. The time frame may correspond to the physiological characteristic being measured. For example, the physiological characteristic may be the subject's resting heart rate. The heart rate may be measured twenty times at twenty different pressures over ten minutes. Validation of the heart rate may be taken every time the heart rate is measured by the measurement device or may be taken every minute. Validation of the subject's heart rate may be performed once during the ten minutes. The method2900may include indicating the measurement is valid when the pressure value is outside the optimal range and the physiological measurement is validated (block2914). The method2900may include indicating the measurement is not valid when the pressure value is outside the optimal range and the physiological measurement is not validated (block2916). FIG.30illustrates a method3000of taking measurements with different physiological sensors that have different pressure ranges, according to an embodiment. Some of the features inFIG.30may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.30. Elements of the method3000may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3000may include taking a physiological measurement and a pressure measurement concurrently (block3002). The method3000may include taking the physiological measurement and a strain measurement concurrently (block3002). The physiological measurement may be taken by a first type of physiological sensor (e.g. the first sensor112, the second sensor114, and so forth). The method3000may include comparing the pressure and/or strain measurement value to a first range of optimal pressure and/or strain measurement values (block3004). The first range may include pressure and/or strain values that correspond to the first type of sensor, e.g. that correspond to the amount of pressure that is optimal for the first type of physiological sensor against the subject. The method3000may include generating an alert when the pressure and/or strain measurement value is outside the first range (block3006a). The alert may, for example, include notification content that notifies the subject the physiological measurement may be inaccurate. The method3000may include indicating the first physiological measurement is accurate when the pressure and/or strain measurement value is within the first range (block3006b). The method3000may include taking a second physiological measurement and a second pressure measurement (or a first pressure measurement if the previous measurement was a strain measurement) concurrently (block3008). 
The method3000may include taking the second physiological measurement and a second strain measurement (or a first strain measurement if the previous measurement was a pressure measurement) concurrently (block3008). The second physiological measurement may be taken by a second type of physiological sensor (e.g. if the first type is the first sensor112, the second type would be the second sensor114, and so forth). The method3000may include comparing the pressure and/or strain measurement value taken concurrently with the second physiological measurement to a second range of optimal pressure and/or strain measurement values (block3010). The second range may be different than the first range. The second range may include pressure and/or strain values that correspond to the second type of sensor, e.g. that correspond to the amount of pressure that is optimal for the second type of physiological sensor against the subject. The method3000may include generating an alert when the pressure and/or strain measurement value measured concurrently with the second physiological measurement is outside the second range (block3012a). The alert may, for example, include notification content that notifies the subject the physiological measurement may be inaccurate. The method3000may include indicating the second physiological measurement is accurate when the pressure and/or strain measurement value measured concurrently with the second physiological measurement is within the second range (block3012b). A measurement device such as the adjustable measurement device300may include the two different types of physiological sensors. The first type may have an optimal pressure and/or strain range that is different than an optimal pressure and/or strain range for the second type of physiological sensor. The two optimal ranges may overlap or may be non-overlapping. The processing device may be configured to notify the subject to adjust the pressure of the measurement device against the subject according to which type of physiological sensor is going to take the next physiological measurement. The processing device may be configured to notify the subject to adjust the pressure when the optimal ranges do not overlap. The processing device may be configured to notify the subject when the current pressure of the measurement device on the subject is outside the overlap between the two optimal ranges. A first range of pressure and/or strain measurement values may be an overlap between a first subrange and a second subrange. The first subrange may correspond to the first type of physiological sensor and the second subrange may correspond to the second type of physiological sensor. A maximum pressure and/or strain measurement value for the first range may fall within the first subrange. A minimum pressure and/or strain measurement value for the first range may fall within the second subrange. FIG.31Aillustrates a method3102of enabling the physiological sensor to take a measurement when it is pressed against the subject at a correct pressure within a range of pressures, according to an embodiment. Some of the features inFIG.31Amay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.31A.
Elements of the method3102may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3102may include receiving a pressure measurement value of the tightness of a wearable device on a subject (block3102a). The pressure measurement value may be a measure of the strain in the band of the wearable device (e.g. the band106of the wearable device100). The pressure measurement value may be a pressure measurement by a pressure sensor (e.g. the pressure sensor400). For example, the pressure measurement value may be determined by a processing device (e.g. the processing device102) based on an electronic signal indicating a capacitance of the pressure sensor. The method3102may include determining whether the pressure measurement value falls within an optimal range of pressure values ranging from a minimum optimal pressure value to a maximum optimal pressure value (block3102b). The method3102may include enabling a measurement device to take a physiological measurement from the subject without distorting the physiological measurement (block3102c). The measurement device may be enabled to take an undistorted physiological measurement when the pressure measurement value falls within the optimal range. For example, the processing device (e.g. the processing device102) may be programmed to take the physiological measurement by the measurement device (e.g. the adjustable measurement device300) at times when the adjustable measurement device300is pressed against the subject at a pressure in the optimal range. The method3102may include generating an alert when the pressure measurement value is outside the optimal range of pressure values (block3102d). The method3102may include generating an adjustment recommendation that recommends an adjustment to the tightness of the wearable device on the subject (block3102e). For example, the processing device may be programmed to calculate how far outside the optimal range the current pressure is and/or whether the current pressure is above or below the optimal range. The processing device may be programmed to recommend tightening or loosening the band of the wearable device. The method3102may include generating an indicator of the alert or the adjustment recommendation (block3102f). The processing device may be configured to generate the indicator or to communicate the alert and/or the recommendation to a user device such as the user device118. The method3102may include presenting the indicator to the subject (block3102g). FIG.31Billustrates a method3104of automatically adjusting a tightness of the wearable band, according to an embodiment. Some of the features inFIG.31Bmay be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.31B. Elements of the method3104may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3104may include receiving a pressure measurement value of the tightness of a wearable device on a subject (block3104a).
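A minimal sketch of the recommendation logic of blocks3102d and3102e, using an illustrative helper name (tightness_recommendation) and message strings that are not taken from the disclosure, might look as follows:

```python
def tightness_recommendation(pressure: float, low: float, high: float) -> str:
    """Translate a pressure reading into an alert and adjustment
    recommendation, reporting how far outside the optimal range
    the current pressure is and in which direction."""
    if pressure < low:
        return f"tighten the band (pressure {low - pressure:.2f} below range)"
    if pressure > high:
        return f"loosen the band (pressure {pressure - high:.2f} above range)"
    return "no adjustment needed; measurement enabled"

print(tightness_recommendation(0.7, low=1.0, high=3.0))
# tighten the band (pressure 0.30 below range)
```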
The method3104may include determining whether the pressure measurement value falls within an optimal range of pressure values ranging from a minimum optimal pressure value to a maximum optimal pressure value (block3104b). The method3104may include enabling a measurement device to take a physiological measurement from the subject without distorting the physiological measurement (block3104c). The method3104may include automatically adjusting the tightness of the wearable device on the subject (block3104d). The tightness may be automatically adjusted by an electromechanical device (e.g. the motorized band tightening mechanism510). The tightness may be increased when the pressure of the measurement device against the subject is below the optimal range. The tightness may be decreased automatically when the pressure of the measurement device against the subject is above the optimal range. For example, a processing device (e.g. the processing device102) may be coupled to a pressure sensor (e.g. the pressure sensor400) and an electromechanical tightening mechanism (e.g. the motorized band tightening mechanism510). The processing device may be programmed to activate the electromechanical device upon a trigger event. The trigger event may be the pressure sensor measuring a pressure outside the optimal range. The wearable device may include a second measurement device that corresponds to a second range of pressure values (e.g. the adjustable measurement device300may include the first sensor112and the second sensor114). The optimal range of pressure values for the first measurement device may be different from the optimal range of pressure values for the second measurement device. The tightness of the wearable device may be automatically adjusted to fall within the optimal range for the first measurement device in response to the first measurement device taking a measurement. The tightness of the wearable device may be automatically adjusted to fall within the optimal range of the second measurement device in response to the second measurement device taking a measurement. For example, measurements by the first and second measurement devices may be scheduled, and the processing device may be triggered to activate the electromechanical tightening mechanism according to the schedule of measurements. FIG.32illustrates a method3200of determining an adjustment for a pressure of a measurement device and/or physiological sensor against a subject, according to an embodiment. Some of the features inFIG.32may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.32. Elements of the method3200may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3200may include receiving a signal or a set of signals corresponding to a set of physiological measurements taken by a measurement device (e.g. by the physiological sensor206of the adjustable measurement device300) (block3202a). The method3200may include receiving a pressure value for the tightness of the wearable device (e.g. of the band106of the wearable device100) on the subject (block3202b). 
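A minimal sketch of the automatic adjustment of method3104, assuming a hypothetical motor interface (drive_motor) and per-sensor target ranges that are not taken from the disclosure, might look as follows:

```python
def auto_adjust(pressure: float, target: tuple, drive_motor) -> None:
    """Drive an electromechanical tightening mechanism one step at a
    time until the measured pressure falls inside the target range
    for the sensor about to take a measurement. `drive_motor`
    accepts a signed step, positive to tighten."""
    low, high = target
    if pressure < low:
        drive_motor(+1)   # below range: tighten
    elif pressure > high:
        drive_motor(-1)   # above range: loosen
    # In range: leave the band as-is and allow the measurement.

# Before a scheduled measurement, select the range for whichever
# sensor type is next, then adjust toward it.
ranges = {"first_sensor": (1.0, 3.0), "second_sensor": (2.5, 4.0)}
auto_adjust(pressure=2.1, target=ranges["second_sensor"],
            drive_motor=lambda step: print("motor step", step))
```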
The method3200may include determining whether an SNR for the signal or SNRs for at least a subset of the set of signals (e.g. at least one SNR of at least one signal) is above a threshold SNR (block3204a). The method3200may include determining, generally, whether the signal quality for the signal or a subset of the set of signals is above a threshold signal quality. For example, the subject may select, via a user interface (e.g. the user interface104), a type of physiological characteristic for the measurement device to measure. The processing device may be programmed with a signal processing algorithm. The processing device may apply the algorithm to the signals received from the measurement device to determine the SNRs. The method3200may include determining whether the pressure value is within the optimal range (block3204b). The method3200may include a response to the pressure value being out-of-range and/or the SNR being below the threshold SNR. The method3200may include generating an alert when the pressure measurement value is outside the optimal range of pressure values and/or when the SNR is below the threshold SNR (block3206). The method3200may include generating an adjustment recommendation that recommends an adjustment to the tightness of the wearable device on the subject (block3208). For example, the set of signals from the measurement device may be from consecutive measurements taken during a length of time. In response to the set of consecutive signals or a subset of consecutive signals having SNRs or an average SNR below the threshold SNR, the processing device may generate the alert and/or the adjustment recommendation. The alert and/or the adjustment recommendation may be based on the pressure measurement value relative to the optimal range. For example, if the SNR is below the threshold SNR and the pressure measurement value is outside the optimal range, the processing device may recommend adjusting the tightness of the band. If the SNR is below the threshold SNR and the pressure measurement value is within the optimal range, the processing device may recommend moving the measurement device into better alignment with the physiological structure being measured. The method3200may include generating an indicator of the alert or the adjustment recommendation (block3210). The processing device may be configured to generate the indicator or to communicate the alert and/or the recommendation to a user device such as the user device118. The method3200may include presenting the indicator to the subject (block3212). The SNR and/or other signal qualities may be improved by the measurement device being pressed against the subject within the optimal pressure range. The SNR and/or other signal qualities may be improved by aligning the measurement device as closely as possible with the physiological structure the measurement device is to measure. The method3200may be executed to enable optimization of the SNR. The method3200may be executed to enable optimization of another signal quality such as amplitude. The method3200may include taking a physiological measurement when the pressure value is within the optimal pressure range and/or when the SNR of the subset is above the threshold SNR (block3214). FIG.33illustrates a method3300of skipping physiological measurements when the physiological sensor is not pressed against the subject in the correct pressure range, according to an embodiment. Some of the features inFIG.33may be the same as or similar to some of the features in the other FIGS.
described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.33. Elements of the method3300may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3300may include receiving a pressure measurement value of the tightness of a wearable device on a subject (block3302). The method3300may include determining whether the pressure measurement value falls within an optimal range of pressure values (block3304). The method3300may include enabling a measurement device to take a physiological measurement from the subject when the pressure measurement value is within the optimal range (block3306). The method3300may include generating an alert when the pressure measurement value is outside the optimal range of pressure values (block3308). If the pressure measurement value is outside the optimal range, a scheduled measurement by the measurement device may be skipped. For example, the processing device may be programmed with a default schedule by which to take physiological measurements by the measurement device (e.g. the processing device102and the physiological sensor206on the adjustable measurement device300). The default setting of the processing device may be to trigger the physiological sensor to take the measurement on schedule. The processing device may be programmed to, in response to receiving a pressure measurement value outside the optimal range, skip the next scheduled physiological measurement, such as by not triggering the physiological sensor to take the measurement. The method3300may include determining whether the number of skipped measurements is above a threshold number (block3310). For example, the processing device may keep a running tally of the number of skipped physiological measurements previous to the next scheduled physiological measurement. The processing device may store the number skipped within a certain time frame before the next scheduled physiological measurement. The processing device may store the number of consecutively skipped physiological measurements before the next scheduled physiological measurement. The tally may only include the most recent "streak" of consecutive skipped physiological measurements. The method3300may include skipping the next scheduled physiological measurement if the number of skipped measurements is below the threshold number for skipped measurements (block3312). The method3300may include generating an alert in response to the threshold number or more of physiological measurements being skipped (block3314). The alert may indicate the number of skipped measurements meets and/or exceeds the threshold. The threshold may be programmed into the processing device by a manufacturer of the measurement device. The threshold may be programmed into the processing device by the subject, such as via a user interface. The threshold may be time-dependent such that one skipped measurement for a type of measurement that is taken once a day generates an alert, while 100 skipped measurements for a type of measurement that is taken continuously or multiple times per minute may trigger the alert to be generated.
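A minimal sketch of this tally, using illustrative names (SkipTally, record) not taken from the disclosure, might track only the most recent streak of consecutive skips:

```python
class SkipTally:
    """Track consecutively skipped measurements and signal an alert
    once the current streak reaches a threshold, in the manner of
    method 3300."""
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.streak = 0

    def record(self, skipped: bool) -> bool:
        """Return True when an alert should be generated."""
        self.streak = self.streak + 1 if skipped else 0
        return self.streak >= self.threshold

tally = SkipTally(threshold=3)
for skipped in [True, True, False, True, True, True]:
    if tally.record(skipped):
        print("alert: skipped-measurement streak of", tally.streak)
# a single alert fires once the streak of consecutive skips reaches 3
```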
The alert may indicate generally that measurements are being skipped without indicating how many have been skipped. The alert may indicate how many measurements have been skipped. The method3300may include generating an adjustment recommendation (block3316). The adjustment recommendation may be an adjustment of the tightness of the band on the subject. The adjustment recommendation may be an adjustment of the pressure of the measurement device against the subject. The adjustment recommendation may be an adjustment of the position of the measurement device on the subject. FIG.34illustrates a method3400of generating a graphical display of physiological measurements and pressure values, according to an embodiment. Some of the features inFIG.34may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.34. Elements of the method3400may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3400may include generating a graphical display showing a set of physiological measurement values taken over a length of time (block3402). The graphical display may, for example, include data points distributed on a grid with a time axis and a measurement value axis. The units of time and/or the measurement values may be shown or may not be shown. Gridlines may be shown or may not be shown. A curve may connect the data points. Each data point may have a measurement value coordinate that corresponds to a time coordinate. The method3400may include generating an overlay on the graphical display showing a set of pressure measurement values taken over the length of time (block3404). The overlay may have a different visual feature from the curve and/or data points of the physiological measurement. For example, physiological measurements may be indicated by a first color and pressure measurements may be indicated by a second color. The method3400may include presenting the graphical display with the overlay to the subject (block3406). The overlay may enable the subject to visually correlate the pressure of the physiological sensor against the subject with specific physiological measurements to visually identify which measurements may be inaccurate. Without the overlay, the subject may be unaware of whether a specific physiological measurement is accurate. The overlay may show a set of pressure measurement values relative to the optimal range for the physiological sensor that generates the physiological measurements. A subset of pressure measurement values may be shown on the graphical display by a first visual cue that indicates the subset of pressure measurement values is outside the range of pressure values. A subset of physiological measurement values may be shown on the graphical display by a second visual cue that indicates the subset of physiological measurement values corresponds to the subset of pressure measurement values that is outside the optimal range. FIG.35illustrates a method3500of determining a relative position of a physiological sensor to a physiological structure of a subject, according to an embodiment. 
Some of the features inFIG.35may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.35. Elements of the method3500may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3500may include receiving a first signal from the physiological sensor when the physiological sensor is at a first position on the subject (block3502). The first position may have an unknown position relative to the physiological structure to be interrogated by the physiological sensor. The first position may have an approximately known position relative to the physiological structure. For example, the physiological structure may include a vein and/or artery. A general area of the vein and/or artery may be known, but a precise position may be unknown. The method3500may include adjusting the physiological sensor to a second position and receiving a second signal from the physiological sensor when the physiological sensor is at the second position on the subject (block3504). The second position may have an unknown position relative to the physiological structure. The second position may have an approximately known position relative to the physiological structure. The method3500may include determining a difference between the first signal and the second signal (block3506). The difference may be calculated based on one or more signal qualities. For example, the difference may be calculated based on a first SNR of the first signal and a second SNR of the second signal. The difference may be calculated based on a first amplitude of the first signal and a second amplitude of the second signal. The difference may be calculated based on a combination of the first SNR and the first amplitude and a combination of the second SNR and the second amplitude. The method3500may include determining whether the first position or the second position is closer to the physiological structure (block3508). For example, the physiological sensor may yield a higher SNR and/or signal amplitude at the first position than at the second position. This may be because the first position is closer to the physiological structure than the second position. This may be because there is less interference from other physiological structures and/or other physiological phenomena at the first position than at the second position, even if the second position is spatially closer to the physiological structure. The method3500may include, once it is determined whether the first position or the second position is closer, generating an indicator showing which position is closer and presenting the indicator to the subject (block3510). The subject may wear a smartwatch with a band (e.g. the band106), a smartwatch face (e.g. the user interface104), and a measurement device attached to the band (e.g. the adjustable measurement device300). The subject may slide the measurement device on the band to approximately align the physiological sensor in the measurement device (e.g. the first sensor112, the second sensor114, and so forth) with a radial artery in the subject's wrist.
The subject may press a graphically-displayed button on the touch screen of the smartwatch face and the processor in the smartwatch (e.g. the processing device102) may communicate with the physiological sensor, triggering a measurement by the physiological sensor. The measurement may be communicated to the processor and the processor may generate a prompt, displayed to the subject on the smartwatch face, to adjust the position of the measurement device. The subject may adjust the position of the measurement device and then press a graphical button on the touch screen indicating the measurement device has been moved. The processor may trigger another measurement. The processor may compare the SNR and/or the signal shape of the first measurement to the SNR and/or the signal shape of the second measurement. The processor may determine the second position is closer and may generate a prompt, displayed to the subject via the touchscreen, that the measurement device is aligned or to continue moving the measurement device in the same direction on the band and relative to the radial artery. FIG.36illustrates a method3600of using various signal characteristics to determine the relative position of a physiological structure in the subject's body, according to an embodiment. Some of the features inFIG.36may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.36. Elements of the method3600may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3600may include receiving a first signal from a physiological sensor at a first position on the subject (block3602). The method3600may include receiving a second signal from the physiological sensor at a second position on the subject (block3604). The second signal may be received from a second physiological sensor integrated into the measurement device at a different position relative to the first physiological sensor. The first signal may be generated at the same time as the second signal. The measurement device may remain stationary on the subject as the first signal is generated by the first physiological sensor and the second signal is generated by the second physiological sensor. The first signal may be generated at a first time and the second signal may be generated at a second time after the first time. The position of the measurement device may be adjusted on the subject between the first time and the second time or the measurement device may remain stationary on the subject between the first time and the second time. The method3600may include determining a sum of the SNR of the first signal (i.e. a first SNR) and the SNR of the second signal (i.e. a second SNR) (block3606a). The method3600may include determining a sum of the amplitude of the first signal (i.e. a first amplitude) and the amplitude of the second signal (i.e. a second amplitude) (block3606b). The method3600may include determining whether the sum of the first SNR and the second SNR meets a threshold SNR level (block3608a). 
The method3600may include determining whether the sum of the first amplitude and the second amplitude meets a threshold amplitude level (block3608b). The method3600may include, in response to the sum of the SNRs and/or the sum of the amplitudes being less than the respective thresholds, generating a warning that the first and second positions are outside a range of a physiological structure to be interrogated by the measurement device (e.g. a vein and/or artery of the subject) (block3610). Perfect alignment of the physiological sensor with the physiological structure may not be necessary to obtain a useful signal (e.g. a signal from which a measurement value can be obtained). For example, the measurement device may include two physiological sensors at different positions in the measurement device (see, e.g.,FIGS.1A and/or13A-B). The two physiological sensors may be of different types from each other, e.g. the first sensor112and the second sensor114. The two physiological sensors may be employed simultaneously to measure the same physiological characteristic of the subject. For example, the two different physiological sensors may be used to measure the subject's glucose levels. Because of the shape of the subject's vein and/or artery, it may be impossible to perfectly align both sensors with the vein and/or artery. For example, the subject's artery may be curved such that one physiological sensor can be aligned over the artery and the other physiological sensor is positioned to a side of the artery. The signal quality of the first physiological sensor may meet a threshold for an individual sensor and the signal quality of the second physiological sensor may not meet the threshold. However, the processing device may be programmed to use the first signal to clean up the second signal, e.g. by removing noise using a bandpass filter set according to the first signal. A minimum total signal quality of both signals together, e.g. the threshold SNR level or the threshold amplitude level, may allow the processing device to clean up one signal based on the other. Thus, summing the signal qualities may indicate whether the relative positions of both sensors are close enough to the physiological structure to enable accurate measurement by both sensors. The subject may not know a relative position of the physiological structure, or may know only approximately where the physiological structure is, which may not be precise enough to position the measurement device in alignment with the physiological structure. Positioning the measurement device, taking a measurement, repositioning, taking a second measurement, and summing the signal qualities may allow the subject to determine if the measurement device is within a range of the physiological structure to begin a guided alignment process. If the sum is below the threshold level, the measurements may be too noisy to tell the subject a direction to move the measurement device to be in better alignment. If the sum is above the threshold level, the processing device may automatically generate an indicator of which direction the subject should move the physiological sensor based on which signal has a higher quality. The method3600may include generating a notification that the first position and/or the second position are within range of the vein and/or artery when the sum of the SNRs is above the threshold and/or when the sum of the amplitudes is above the threshold (block3612).
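A minimal sketch of this summed-quality gate (blocks3606a-3612) follows; the threshold values, and the policy of warning when either sum falls short, are illustrative assumptions.

    def check_within_range(first_snr, second_snr, first_amp, second_amp,
                           snr_threshold=10.0, amp_threshold=2.0):
        # Blocks 3606a and 3606b: sum the per-sensor signal qualities.
        snr_sum = first_snr + second_snr
        amp_sum = first_amp + second_amp
        # Blocks 3608a and 3608b: compare the sums against minimum levels.
        if snr_sum < snr_threshold or amp_sum < amp_threshold:
            # Block 3610: both positions appear to be outside the range of
            # the physiological structure; guidance would be too noisy.
            return "warning: outside range of physiological structure"
        # Block 3612: close enough to begin the guided alignment process.
        return "notification: within range of physiological structure"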
FIG.37illustrates a method3700of measuring a physiological characteristic, according to an embodiment. Some of the features inFIG.37may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.37. Elements of the method3700may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3700may include receiving a signal from a physiological sensor (block3702). The method3700may include determining whether a quality of the signal meets a minimum threshold, such as an SNR of the signal (block3704a) and/or an amplitude of the signal (block3704b). The method3700may include, in response to the signal quality meeting and/or exceeding the minimum threshold, measuring a physiological characteristic of the subject (block3706). The method3700may include recording the measurement, recording a value associated with the measurement, recording other data regarding the measurement, and/or presenting data regarding the measurement to the subject (block3708). For example, the time of the measurement and the value associated with the measurement may be sent to a remote database and stored in the database. The value associated with the measurement may be presented to the subject via a user device and/or a user interface integrated into the measurement device. The method3700may include skipping the measurement when the signal quality does not meet the minimum threshold and/or when the amplitude is below the threshold (block3710).
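A minimal sketch of this quality gate (blocks3702-3710), with illustrative threshold values and a hypothetical take_measurement callable, follows.

    import time

    def measure_if_signal_ok(signal_snr, signal_amplitude, take_measurement,
                             min_snr=10.0, min_amplitude=2.0):
        # Blocks 3704a and 3704b: check SNR and amplitude against minimums.
        if signal_snr < min_snr or signal_amplitude < min_amplitude:
            return None  # block 3710: skip the measurement
        value = take_measurement()  # block 3706
        # Block 3708: record the measurement value with its time, e.g. for
        # storage in a remote database.
        return {"time": time.time(), "value": value}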
FIG.38illustrates a method3800of determining a relative position of a physiological structure of the subject using a signal strength value, according to an embodiment. Some of the features inFIG.38may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.38. Elements of the method3800may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3800may include receiving a first signal from a physiological sensor when the physiological sensor is at a first position on the subject (block3802). The method3800may include receiving a second signal from the physiological sensor or a second physiological sensor when the physiological sensor or the second physiological sensor is at a second position on the subject (block3804). The second position may be different than the first. The first and second physiological sensors may be the same type (e.g. both are instances of the first sensor112) or the first and second physiological sensors may be different types (e.g. one is the first sensor112and one is the second sensor114). The method3800may include calculating a first signal strength value (block3806). The first signal strength value may correspond to the first signal. The first signal strength value may be, for example, a linear combination of an SNR of the first signal and an amplitude of the first signal. The method3800may include calculating a second signal strength value (block3808). The second signal strength value may correspond to the second signal. The second signal strength value may be, for example, a linear combination of an SNR of the second signal and an amplitude of the second signal. The method3800may include determining a difference between the first signal strength value and the second signal strength value (block3810). For example, the first and second signal strength values may have the same units and the difference may be determined by subtracting the second signal strength value from the first signal strength value. The method3800may include determining whether the first position or the second position is closer to the physiological structure (block3812). For example, if the difference is negative, the difference may indicate the second signal is stronger than the first signal and thus the second position is closer to the physiological structure than the first position. The second position may be spatially closer, or the second position may have fewer intervening elements and/or structures that reduce the quality of the signal. The method3800may include generating an indicator showing which position is closer (block3814). For example, the processing device may cause the user interface to display a symbol representing the first position and a symbol representing the second position. Once it is determined which position is closer to the physiological structure, the processing device may cause the user interface to signify to the user which position is closer, such as by highlighting the symbol corresponding to the closer position. The signal strength value of the closer position (i.e. the first signal strength value if the first position is closer and the second signal strength value if the second position is closer) may be compared to a minimum signal strength value. The minimum signal strength value may be a minimum threshold below which it cannot be determined that one position is closer than the other. For example, if the signal strength value associated with the closer position is lower than the minimum threshold, then it may not be possible to determine if the position is closer. Similarly, if the difference between the signal strength values does not meet a minimum threshold, then it may not be possible to determine which position is closer. The minimum signal strength value may be a minimum threshold above which a measurement value can be determined from the signal. In response to the signal strength value being greater than the minimum signal strength value, a physiological characteristic of the subject may be measured by the physiological sensor at the closer position.
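The calculations of blocks3806-3812might look like the following; the weights of the linear combination and the two minimum thresholds are illustrative assumptions.

    def signal_strength(snr_value, amplitude, snr_weight=0.5, amp_weight=0.5):
        # Blocks 3806 and 3808: a linear combination of SNR and amplitude.
        return snr_weight * snr_value + amp_weight * amplitude

    def determine_closer_position(first_strength, second_strength,
                                  min_strength=5.0, min_difference=0.5):
        # Block 3810: the values share units, so simple subtraction works.
        difference = first_strength - second_strength
        stronger = max(first_strength, second_strength)
        # Below these minimums, no "closer" determination is possible.
        if stronger < min_strength or abs(difference) < min_difference:
            return None
        # Block 3812: a negative difference means the second signal is
        # stronger, so the second position is treated as closer.
        return "second" if difference < 0 else "first"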
FIG.39illustrates a method3900of determining a shift of the physiological sensor on the subject, according to an embodiment. Some of the features inFIG.39may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.39. Elements of the method3900may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method3900may include receiving a first signal generated by a first sensor (block3902). The first sensor may be configured to interrogate a body part of a subject. The first sensor may be positioned at a first position relative to the body part. The first signal may be characterized by a first signal quality (e.g. a first SNR and/or a first amplitude). The method3900may include receiving a second signal generated by a second sensor (block3904). The second sensor may be configured to interrogate the body part. The second sensor may be positioned at a second position relative to the body part. The second position may be different than the first position. The second signal may be characterized by a second signal quality (e.g. a second SNR and/or a second amplitude). The method3900may include determining a first position of a physiological structure within the body part (e.g. a vein and/or artery) relative to the first sensor and/or the second sensor based on the first signal quality and/or the second signal quality (block3906). The method3900may include determining a shift of the first sensor, the second sensor, and/or a third sensor based on the position of the physiological structure (block3908). The shift may be an amount the first, second, and/or third sensor should be moved to align the first, second, and/or third sensor with the physiological structure. The method3900may include generating a first indicator that indicates whether the first sensor or the second sensor is more closely aligned with the physiological structure (block3910). The first indicator may indicate a position of the third sensor relative to the physiological structure. The first indicator may indicate that the physiological structure is positioned evenly between the first sensor and the second sensor. The method3900may include generating a second indicator that indicates an amount and/or direction of the shift (block3912). For example, the second indicator may be an arrow, presented on a user interface integrated with the measurement device, with a number that indicates how much the subject should adjust the position of the measurement device in the direction of the arrow. The second indicator may include a symbol and/or icon that moves on the user interface as the user moves the measurement device. The user interface may display an icon that represents the physiological structure and the second indicator icon may represent the measurement device. As the subject moves the measurement device closer to alignment with the physiological structure, the second indicator may move on the user interface closer to the icon representing the physiological structure.
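One possible realization of blocks3906-3912treats positions as one-dimensional coordinates along the band. The quality-weighted localization, the sign convention, and the tolerance below are illustrative assumptions, not particulars of the disclosed method.

    def estimate_structure_position(pos_1, quality_1, pos_2, quality_2):
        # Block 3906: locate the structure between the sensors by weighting
        # each sensor coordinate by its (positive) signal quality.
        return (pos_1 * quality_1 + pos_2 * quality_2) / (quality_1 + quality_2)

    def first_indicator(structure_pos, pos_1, pos_2, tolerance=0.5):
        # Block 3910: report which sensor is more closely aligned, or that
        # the structure sits approximately evenly between them.
        d1, d2 = abs(structure_pos - pos_1), abs(structure_pos - pos_2)
        if abs(d1 - d2) <= tolerance:
            return "evenly between the first and second sensors"
        return "first sensor" if d1 < d2 else "second sensor"

    def second_indicator(structure_pos, sensor_pos, tolerance=0.5):
        # Blocks 3908 and 3912: the shift is the displacement, in band
        # coordinates, that would align the sensor with the structure.
        shift = structure_pos - sensor_pos
        return {"amount": abs(shift),
                "direction": "positive along the band" if shift > 0
                             else "negative along the band",
                "aligned": abs(shift) <= tolerance}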
FIG.40illustrates a method4000of determining a direction of a shift of a physiological sensor on a subject, according to an embodiment. Some of the features inFIG.40may be the same as or similar to some of the features in the other FIGS. described herein as noted by same and/or similar reference characters, unless expressly described otherwise. Additionally, reference may be made to features shown in any of the other FIGS. described herein and not shown inFIG.40. Elements of the method4000may be executed in one or more ways such as by a human, including the subject, by a processing device such as the processing device102, by mechanisms operating automatically or under human control such as the physiological sensor206, and so forth. The method4000may include receiving a first signal from a first physiological sensor and a second signal from a second physiological sensor when the subject's physiological structure is in a first position relative to the first physiological sensor and the second physiological sensor (block4002). The first physiological sensor and the second physiological sensor may be incorporated and/or integrated into a measurement device. The position of the measurement device may be adjusted on the subject by moving the measurement device on the subject. The method4000may include receiving a third signal from the first physiological sensor and a fourth signal from the second physiological sensor, the third and fourth signals received after receiving the first signal and the second signal (block4004). The third signal may be characterized by a third signal quality (e.g. a third SNR and/or a third amplitude). The fourth signal may be characterized by a fourth signal quality (e.g. a fourth SNR and/or a fourth amplitude). A second position of the physiological structure relative to the first sensor and/or the second sensor may be determined based on the third signal quality and/or the fourth signal quality. The method4000may include determining a direction to shift the measurement device to be in better alignment with the physiological structure based on a difference between the first position and the second position (block4006). The method4000may be executed to align the physiological structure between two physiological sensors in the measurement device. The first and second signals may indicate the first position is closer to the first physiological sensor than to the second physiological sensor but may not indicate whether the physiological structure is between the two physiological sensors. If the third signal and the fourth signal both decrease in quality when the measurement device is moved, the physiological structure may have been between the sensors at the first position and outside both sensors at the second position. If the third and the fourth signals both increase in quality when the measurement device is moved, the physiological structure may have been outside both sensors at the first position and between the sensors at the second position. If one of the third or the fourth signals increases and the other decreases when the measurement device is moved, the first and second positions of the physiological structure may both be between the first and second physiological sensors. A magnitude of the difference between the third and fourth signal qualities may indicate whether the physiological structure, when at the second position, is aligned more closely to the first physiological sensor, more closely to the second physiological sensor, or approximately evenly between the two physiological sensors.
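The trend interpretation behind block4006may be sketched compactly as follows; the quality deltas and the returned guidance strings are illustrative assumptions.

    def interpret_movement(q1, q2, q3, q4):
        # q1, q2: first and second sensor signal qualities before the move
        # (block 4002). q3, q4: the qualities after the move (block 4004).
        delta_first = q3 - q1
        delta_second = q4 - q2
        if delta_first < 0 and delta_second < 0:
            # Both qualities fell: the structure was between the sensors at
            # the first position and is outside both sensors now.
            return "reverse direction"
        if delta_first > 0 and delta_second > 0:
            # Both qualities rose: the structure was outside both sensors
            # and is now between them.
            return "continue in the same direction"
        # One rose and one fell: the structure was between the sensors at
        # both positions; the magnitude of q3 - q4 indicates which sensor
        # it is now nearer.
        return "between the sensors; fine-tune alignment"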
The above description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, to provide a good understanding of several implementations. It will be apparent to one skilled in the art, however, that at least some implementations may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in a simple block diagram format to avoid unnecessarily obscuring the present implementations. Thus, the specific details set forth above are merely exemplary. Particular implementations may vary from these exemplary details and still be contemplated to be within the scope of the present implementations. It is to be understood that the above description is intended to be illustrative and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the present implementations should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. The disclosure above encompasses multiple distinct embodiments with independent utility. While these embodiments have been disclosed in a particular form, the specific embodiments disclosed and illustrated above are not to be considered in a limiting sense as numerous variations are possible. The subject matter of the embodiments includes the novel and non-obvious combinations and sub-combinations of the various elements, features, functions and/or properties disclosed above and inherent to those skilled in the art pertaining to such embodiments. Where the disclosure or subsequently filed claims recite "a" element, "a first" element, or any such equivalent term, the disclosure or claims are to be understood to incorporate one or more such elements, neither requiring nor excluding two or more of such elements. Applicant(s) reserves the right to submit claims directed to combinations and sub-combinations of the disclosed embodiments that are believed to be novel and non-obvious. Embodiments embodied in other combinations and sub-combinations of features, functions, elements and/or properties may be claimed through amendment of those claims or presentation of new claims in the present application or in a related application. Such amended or new claims, whether they are directed to the same embodiment or a different embodiment and whether they are different, broader, narrower, or equal in scope to the original claims, are to be considered within the subject matter of the embodiments described herein. | 266,011 |
11857298 | The use of cross-hatching or shading in the accompanying figures is generally provided to clarify the boundaries between adjacent elements and also to facilitate legibility of the figures. Accordingly, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, element proportions, element dimensions, commonalities of similarly illustrated elements, or any other characteristic, attribute, or property for any element illustrated in the accompanying figures. Additionally, it should be understood that the proportions and dimensions (either relative or absolute) of the various features and elements (and collections and groupings thereof) and the boundaries, separations, and positional relationships presented therebetween, are provided in the accompanying figures merely to facilitate an understanding of the various embodiments described herein and, accordingly, may not necessarily be presented or illustrated to scale, and are not intended to indicate any preference or requirement for an illustrated embodiment to the exclusion of embodiments described with reference thereto. DETAILED DESCRIPTION Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following description is not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims. Some aspects of the following description relate to differentiating the types of matter that are proximate to a device. For example, in the context of a wearable device (e.g., electronic watches, smart watches, health monitors, audio devices, gaming devices, AR/VR devices, and so on, attached to a wrist, arm, thigh, neck or other body part of a user by one or more of bands, straps, cuffs, and so on), the systems, devices, methods, and apparatus described herein may be used to differentiate when the device is likely proximate to human tissue versus when the device is likely proximate to something else (e.g., a wood, polymer (e.g., plastic), glass, and/or ceramic material or surface). In some cases, the matter differentiation may be performed by emitting a first beam of electromagnetic radiation having a first IR wavelength through a back of the device, and emitting a second beam of electromagnetic radiation having a second IR wavelength through the back of the device. The first and second IR wavelengths may be selected such that the first IR wavelength has a first human tissue reflectance factor, and the second IR wavelength has a second human tissue reflectance factor. For example, the first IR wavelength may have a higher human tissue reflectance factor than the second IR wavelength, such that the first IR wavelength reflects from human tissue more readily and is absorbed by human tissue to a lesser degree than the second IR wavelength. The first and second IR wavelengths may also be selected such that the first and second IR wavelengths both have a high reflectance factor for other materials or surfaces, such as wood, polymer (e.g., plastic), glass, and/or ceramic materials or surfaces.
In some of the described embodiments, the first beam of electromagnetic radiation may be emitted through the back of the device, and an amount of electromagnetic radiation having the first IR wavelength, that is reflected or backscattered back toward the device, may be measured. The second beam of electromagnetic radiation may also be emitted through the back of the device, and an amount of electromagnetic radiation having the second IR wavelength, that is reflected or backscattered back toward the device, may be measured. A ratio of the first amount of electromagnetic radiation to the second amount of electromagnetic radiation, or difference between the first and second amounts of electromagnetic radiation may be determined, and the ratio or difference may be compared to a threshold, or to various ratios or differences that have been computed for different types of matter. The type of matter to which the device is likely proximate may then be determined using a result (or results) of the comparison(s). In some embodiments, a processor or other circuitry may be configured to perform (or not perform) various operations depending on whether the device is determined to likely be proximate human tissue. In the context of a seat, the matter differentiation described herein may be used, for example, to determine whether a person is likely sitting in the seat. In the context of a button, the matter differentiation described herein may be used, for example, to determine whether a user is pressing the button. In the context of an earbud, headphones, or a gaming device (e.g., a set of goggles or a glove), the matter differentiation described herein may be used, for example, to determine whether the earbud, headphones, or gaming device is being worn. Some aspects of the following description relate to determining the proximity of a device to an object using a first proximity sensor, when possible, and selectively turning on a second proximity sensor. In some cases, the second proximity sensor may be a proximity sensor capable of detecting the proximity of the device to more distant objects, but at the cost of greater power consumption. These and other techniques are described with reference toFIGS.1-19. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting. Directional terminology, such as "top", "bottom", "upper", "lower", "front", "back", "over", "under", "beneath", "left", "right", etc. may be used with reference to the orientation of some of the components in some of the figures described below. Because components in various embodiments can be positioned in a number of different orientations, directional terminology is used for purposes of illustration only and is in no way limiting. The directional terminology is intended to be construed broadly, and therefore should not be interpreted to preclude components being oriented in different ways. The use of alternative terminology, such as "or", is intended to indicate different combinations of the alternative elements. For example, A or B is intended to include A, or B, or A and B. FIG.1shows a functional block diagram of a device100. In some examples, the device100may be a wearable device, such as an electronic watch, smart watch, health monitoring device, or fitness monitoring device that is wearable on a wrist.
The device100may also or alternatively be wearable on an ankle, arm, forehead, waist, or other body part, or may be positionable or attachable to another device (e.g., a seat). The device100may include one or more input devices102, one or more output devices104, and a processor106(which processor may be a singular processor, a set of multiple processors, and/or a processor in combination with supporting circuitry). Broadly, the input device(s)102may detect various types of inputs or sense various types of parameters, and the output device(s)104may provide various types of outputs. In some cases, inputs detected and/or parameters sensed by the input device(s)102may be used to control one or more settings, functions, or other aspects of the device100. In some cases, one or more of the output devices104may be configured to provide outputs that are dependent on, or manipulated in response to, the inputs detected and/or parameters sensed by one or more of the input devices102. The outputs provided by one or more of the output devices104may also be responsive to, or initiated by, a program or application executed by the processor106and/or an associated companion device. The processor106may receive input signals from the input device(s)102, in response to inputs detected and/or parameters sensed by the input devices102. The processor106may interpret the input signals. In response to the interpreted signals, the processor106may maintain or alter one or more settings, functions, or aspects of the device100, and in some cases may transmit output signals to one or more of the output devices104. In some cases, the processor106may transmit output signals to one or more of the output devices104independently of any input signal. The output signals may cause the output device(s)104to provide one or more outputs. In various embodiments, the input device(s)102may include any suitable components for detecting inputs and/or sensing device, user, and/or environmental parameters. Examples of input devices102include audio sensors (e.g., microphones), optical or visual sensors (e.g., cameras and/or other electromagnetic radiation sensors (e.g., visible light or IR photodetectors)), proximity sensors, touch sensors, force sensors, pressure sensors, mechanical devices (e.g., crowns, switches, buttons, or keys), vibration sensors, thermal sensors, self-mixing interferometry sensors, orientation sensors, motion sensors (e.g., accelerometers or velocity sensors), location sensors (e.g., global positioning system (GPS) devices), magnetic sensors, communication devices (e.g., wired or wireless communication devices), electroactive polymers (EAPs), resistive sensors, strain gauges, capacitive sensors, electrodes, and so on, or some combination thereof. Each input device102may be configured to detect one or more particular types of input and provide one or more signals (e.g., input signals) corresponding to the detected input(s) and/or sensed parameter(s). The signal(s) may be provided, for example, to the processor106. The output devices104may include any suitable components for providing outputs. Examples of output devices104include audio output devices (e.g., speakers), visual output devices (e.g., lights, displays, or other electromagnetic radiation emitters), tactile output devices (e.g., haptic output devices), communication devices (e.g., wired or wireless communication devices), and so on, or some combination thereof.
Each output device104may be configured to receive one or more signals (e.g., output signals provided by the processor106) and provide an output corresponding to the signal. The processor106may be operably coupled to the input devices102and the output devices104. The processor106may be adapted to exchange signals with the input devices102and the output devices104. For example, the processor106may receive an input signal from an input device102that corresponds to an input detected by the input device102. The processor106may interpret the received input signal to determine whether to provide and/or change one or more outputs in response to the input signal. The processor106may then send an output signal to one or more of the output devices104, to provide and/or change outputs as appropriate. Examples of suitable processors are discussed in more detail below with respect toFIG.17. In some examples, the output devices104may include one or more electromagnetic radiation emitters, and the input devices102may include one or more photodetectors. At least two of the electromagnetic radiation emitters may emit beams of electromagnetic radiation having different IR wavelengths (e.g., at least a first IR wavelength and a second IR wavelength, but in some cases more than two IR wavelengths) into a matter detection space adjacent the device100. The one or more photodetectors may be configured (e.g., filtered) to detect a set of wavelengths including the first IR wavelength and the second IR wavelength (and in some cases, more than two IR wavelengths). The processor106may be operated, at least in part, as a matter differentiation circuit (or as part of a matter differentiation circuit) to indicate, at least partly in response to signals generated by the one or more photodetectors that indicate amounts of the first IR wavelength and the second IR wavelength received by the one or more photodetectors, whether the device is likely proximate to human tissue. In some examples, the input devices102, or the input devices102in combination with the output devices104, may include multiple proximity sensors. A first of the proximity sensors may be used to determine the proximity of the device100to an object, when possible, and a second of the proximity sensors may be selectively turned on when the proximity-sensing range of the first proximity sensor is exceeded (or, for example, when the second proximity sensor is more precise than the first proximity sensor and more precision is desired; or, for other reasons). In some cases, the second proximity sensor may be a proximity sensor capable of detecting the proximity of the device to more distant objects, but at the cost of greater power consumption. FIGS.2A and2Bshow an example of a device200that includes a set of sensors. The sensors may be used, for example, to determine whether the device200is likely proximate to human tissue, and/or to determine the proximity of the device200to an object (which object may in some cases be human tissue). The device's dimensions and form factor, and inclusion of a band204(e.g., a wrist band), suggest that the device200is an electronic watch. However, the device200could alternatively be any wearable electronic device.FIG.2Ashows a front isometric view of the device200, andFIG.2Bshows a back isometric view of the device200. The device200is an example of the device described with reference toFIG.1. The device200may include a body202(e.g., a watch body) and a band204. 
The body202may include an input or selection device, such as a crown218or a button220. The band204may be attached to a housing206of the body202, and may be used to attach the body202to a body part (e.g., an arm, wrist, leg, ankle, or waist) of a user. The body202may include a housing206that at least partially surrounds a display208. In some embodiments, the housing206may include a sidewall210, which sidewall210may support a front cover212(FIG.2A) and/or a back cover214(FIG.2B). The front cover212may be positioned over the display208, and may provide a window through which the display208may be viewed. In some embodiments, the display208may be attached to (or abut) the sidewall210and/or the front cover212. In alternative embodiments of the device200, the display208may not be included and/or the housing206may have an alternative configuration. The display208may include one or more light-emitting elements including, for example, light-emitting elements that define a light-emitting diode (LED) display, organic LED (OLED) display, liquid crystal display (LCD), electroluminescent (EL) display, or other type of display. In some embodiments, the display208may include, or be associated with, one or more touch and/or force sensors that are configured to detect a touch and/or a force applied to a surface of the front cover212. In some embodiments, the sidewall210of the housing206may be formed using one or more metals (e.g., aluminum or stainless steel), polymers (e.g., plastics), ceramics, or composites (e.g., carbon fiber). The front cover212may be formed, for example, using one or more of glass, a crystal (e.g., sapphire), or a transparent polymer (e.g., plastic) that enables a user to view the display208through the front cover212. In some cases, a portion of the front cover212(e.g., a perimeter portion of the front cover212) may be coated with an opaque ink to obscure components included within the housing206. In some cases, all of the exterior components of the housing206may be formed from a transparent material, and components within the device200may or may not be obscured by an opaque ink or opaque structure within the housing206. The back cover214may be formed using the same material(s) that are used to form the sidewall210or the front cover212. In some cases, the back cover214may be part of a monolithic element that also forms the sidewall210. In other cases, and as shown, the back cover214may be a multi-part back cover, such as a back cover having a first back cover portion214-1attached to the sidewall210and a second back cover portion214-2attached to the first back cover portion214-1. The second back cover portion214-2may in some cases have a circular perimeter and an arcuate exterior surface216(i.e., an exterior surface216having an arcuate profile). The front cover212, back cover214, or first back cover portion214-1may be mounted to the sidewall210using fasteners, adhesives, seals, gaskets, or other components. The second back cover portion214-2, when present, may be mounted to the first back cover portion214-1using fasteners, adhesives, seals, gaskets, or other components. A display stack or device stack (hereafter referred to as a “stack”) including the display208may be attached (or abutted) to an interior surface of the front cover212and extend into an interior volume of the device200. 
In some cases, the stack may include a touch sensor (e.g., a grid of capacitive, resistive, strain-based, ultrasonic, or other type of touch sensing elements), or other layers of optical, mechanical, electrical, or other types of components. In some cases, the touch sensor (or part of a touch sensor system) may be configured to detect a touch applied to an outer surface of the front cover212(e.g., to a display surface of the device200). In some cases, a force sensor (or part of a force sensor system) may be positioned within the interior volume below and/or to the side of the display208(and in some cases within the device stack). The force sensor (or force sensor system) may be triggered in response to the touch sensor detecting one or more touches on the front cover212(or a location or locations of one or more touches on the front cover212), and may determine an amount of force associated with each touch, or an amount of force associated with the collection of touches as a whole. The force sensor (or force sensor system) may alternatively trigger operation of the touch sensor (or touch sensor system), or may be used independently of the touch sensor (or touch sensor system). The device200may include various sensor systems (e.g., input devices, or input devices in combination with output devices), and in some embodiments may include some or all of the sensor systems included in the device described with reference toFIG.1. In some embodiments, the device200may have a port222(or set of ports) on a side of the housing206(or elsewhere), and an ambient pressure sensor, ambient temperature sensor, internal/external differential pressure sensor, gas sensor, particulate matter concentration sensor, or air quality sensor may be positioned in or near the port(s)222. In some cases, one or more skin-facing sensors may be included within the device200. The skin-facing sensor(s) may emit or transmit signals through the back cover214and/or receive signals or sense conditions through the back cover214. For example, in some embodiments, one or more such sensors may include a number of electromagnetic radiation emitters (e.g., visible light and/or IR emitters), and/or a number of proximity sensors (e.g., capacitive, resistive, optical, self-mixing interference (SMI), or other types of proximity sensors). The sensors may be used, for example, to determine whether the back of the housing206(e.g., the back cover214or the second back cover portion214-2) is likely proximate to human tissue, and/or to determine the proximity of the device200to an object. The sensors may also or alternatively be used as a device on/off wrist detector, a biometric sensor, a heart-rate monitor, a respiration-rate monitor, a blood pressure monitor, a blood oxygenation monitor, and/or a blood glucose monitor. The device200may include circuitry224(e.g., a processor and/or other components) configured to compute or extract, at least partly in response to signals received directly or indirectly from one or more of the device's sensors, an indication of whether the back of the housing206(e.g., the back cover214or the second back cover portion214-2) is likely proximate to human tissue. The circuitry224may also or alternatively be configured to determine the proximity of the device200to an object. Still further, the circuitry224may in some cases transition various of the device's sensors to an on state or an off state (e.g., a completely off state or a low power (or power conserving) state). 
In some embodiments, the circuitry224may convey the indication of whether the back of the housing206is likely proximate to human tissue, or an indication of the device's proximity to an object, via an output device of the device200. For example, the circuitry224may cause the indication(s) to be displayed on the display208, indicated via audio or haptic outputs, transmitted via a wireless communications interface or other communications interface, and so on. The circuitry224may also or alternatively maintain or alter one or more settings, functions, or aspects of the device200, including, in some cases, what is displayed on the display208. FIGS.3A and3Bshow an example of a skin-facing sensor (or sensor system300) that may be included in the device described with reference toFIG.1or2A-2B. By way of example, the sensor system300is shown to be positioned under a back or back cover of a housing (e.g., under the second back cover portion214-2described with reference toFIG.2B).FIG.3Ashows a plan view of the sensor system300, andFIG.3Bshows an elevation of the sensor system300. By way of example, and as shown inFIG.3A, the sensor system300may include first and second emitters302,304(electromagnetic radiation emitters) and a photodetector306. By way of example, each of the emitters302,304may include a vertical-cavity surface-emitting laser (VCSEL), a vertical external-cavity surface-emitting laser (VECSEL), a quantum-dot laser (QDL), a quantum cascade laser (QCL), a light-emitting diode (LED) (e.g., an organic LED (OLED), a resonant-cavity LED (RC-LED), a micro LED (mLED), a superluminescent LED (SLED), or an edge-emitting LED), or another type of light-emitting element. The first emitter302may be positioned within the housing described with reference toFIG.1or2A-2B, and may be configured to emit a first beam of electromagnetic radiation through a back of the housing (e.g., through the second back cover portion214-2). The first beam may have a first IR wavelength (or range of wavelengths including the first IR wavelength). The second emitter304may also be positioned within the housing, and may be configured to emit a second beam of electromagnetic radiation through the back of the housing (e.g., through the second back cover portion214-2). The second beam may have a second IR wavelength (or range of wavelengths including the second IR wavelength). The second IR wavelength is different from the first IR wavelength. As will be described in greater detail with reference toFIG.5, the first IR wavelength may have a first human tissue reflectance factor, and the second IR wavelength may have a second human tissue reflectance factor, with the first and second human tissue reflectance factors being different. In some embodiments, the sensor system300may include additional emitters, emitting the same or different wavelengths of electromagnetic radiation as the first and second emitters302,304. The photodetector306may also be positioned within the housing, and may receive and detect reflections or backscatters of the first beam and the second beam (and/or additional beams, when additional emitters are used). In some embodiments, the photodetector306may be an Indium Gallium Arsenide (InGaAs) photodetector. In some cases, the photodetector306may be filtered to detect a set of electromagnetic radiation wavelengths including the first IR wavelength and the second IR wavelength. In some cases, the photodetector306may be filtered to detect a single range of electromagnetic radiation wavelengths. 
In other cases, the photodetector306may be filtered to detect the first IR wavelength or a first notch of IR wavelengths including the first IR wavelength, and filtered to detect the second IR wavelength or a second notch of IR wavelengths including the second IR wavelength. The photodetector306may be filtered, for example, by one or more coatings applied to the photodetector306, by one or more optical filter elements disposed over the photodetector306, or by a coating (e.g., an ink) applied to an interior or exterior surface of the second back cover portion214-2. The first and/or second emitters302,304may also be filtered, in the same way as, or in a different way from, the photodetector306. For example, if an emitter emits electromagnetic radiation that is outside a desired or useful range of wavelengths usable for matter differentiation, the electromagnetic radiation emitted by the emitter may be filtered to thereby limit the range of wavelengths that ultimately illuminate an object. The first and/or second emitter302,304may be filtered, for example, by one or more coatings applied to an aperture of the first or second emitter302,304, by one or more optical filter elements disposed over an aperture of the first or second emitter302,304, or by a coating (e.g., an ink) applied to an interior or exterior surface of the second back cover portion214-2. In various embodiments, one or all of the first emitter302, the second emitter, or the photodetector306may be filtered. Optionally, a set of one or more IR wavelength-blocking walls may be disposed between the photodetector306and emitters302,304. By way of example, a singular wall322is shown positioned between the photodetector306and emitters302,304inFIG.3A(but not inFIG.3B). The wall322may extend from a substrate, to which the photodetector306and emitters302,304are attached, to an interior surface of the second back cover portion214-2. Alternatively, the wall322may extend further around, or entirely around, the photodetector306; or further around, or entirely around, the emitters302,304; or further around, or entirely around, each emitter302,304separately; or further around, or entirely around, each of the photodetector306, the first emitter302, and the second emitter304. In some embodiments, one or more of the walls may extend into or through the second back cover portion214-2. The wall(s), such as wall322, may be used to reduce the likelihood that electromagnetic radiation emitted by one or both of the emitters302,304will impinge on the photodetector306before entering and/or exiting the second back cover portion214-2. In some embodiments, and as shown, a group of sensing components including the first and second emitters302,304and photodetector306may be positioned on-axis with respect to a center axis324of a back cover of a device (e.g., a center axis324of the second back cover portion214-2), or the group of sensing components may be disposed around the center axis324. The center axis324is perpendicular to the exterior surface of the back cover. At least the photodetector306, and in some cases the first and second emitters302,304, may be directly or indirectly connected to circuitry308(e.g., a processor (e.g., a general purpose processor programmed by suitable machine-readable instructions or software) and/or other circuitry, which in some cases may include the processor described with reference toFIG.1or2A-2B) that includes, or is configured to operate as, a timing circuit.
The timing circuit may be configured to operate the first emitter302and the second emitter304. In some embodiments, the first emitter302and second emitter304may be operated to respectively emit the first beam of electromagnetic radiation or the second beam of electromagnetic radiation at different times (e.g., sequentially). The timing circuit may be configured to operate the photodetector306to cause the photodetector306to integrate a charge indicative of 1) a first amount of electromagnetic radiation received by the photodetector306after the first emitter302emits the first beam, and 2) a second amount of electromagnetic radiation received by the photodetector306after the second emitter304emits the second beam. The circuitry308may also include, or be configured to operate as, a matter differentiation circuit. The matter differentiation circuit may be configured to indicate, at least partly in response to signals indicating amounts of the first IR wavelength and the second IR wavelength received by the photodetector306(e.g., signals indicative of the integrated charges produced by reflections or backscatters of the respective first or second IR wavelengths), whether the back of the housing (e.g., the exterior surface of the second back cover portion214-2) is likely proximate to human tissue. In some cases, the indication of whether the back of the housing is likely proximate to human tissue may be based at least partly on a ratio of the first amount of electromagnetic radiation (i.e., the amount of the first IR wavelength) to the second amount of electromagnetic radiation (i.e., the amount of the second IR wavelength). For example, the matter differentiation circuit may determine a ratio of the first amount of electromagnetic radiation to the second amount of electromagnetic radiation, and compare the ratio to a threshold (or to a range). The matter differentiation circuit may then (only) indicate the back of the housing is likely proximate to human tissue when the ratio satisfies the threshold (or is within the range). In some cases, the indication of whether the back of the housing is likely proximate to human tissue may be based at least partly on a relationship between the first amount of electromagnetic radiation, the second amount of electromagnetic radiation, and a ratio of the first amount of electromagnetic radiation to the second amount of electromagnetic radiation. For example, the matter differentiation circuit may compare each of the first amount of electromagnetic radiation, the second amount of electromagnetic radiation, and the ratio of the first amount of electromagnetic radiation to the second amount of electromagnetic radiation to respective thresholds (or ranges), and then (only) indicate the back of the housing is likely proximate to human tissue when each of the amounts and ratio satisfy their respective thresholds (or are within their respective ranges).
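As a non-limiting illustration, this range-based check may be sketched as follows. The amounts are integrated photodetector responses in arbitrary units, and the specific ranges are illustrative assumptions; the disclosure does not prescribe particular values.

    def likely_human_tissue(first_amount, second_amount,
                            first_range=(0.2, 0.9),
                            second_range=(0.02, 0.3),
                            ratio_range=(4.0, 15.0)):
        # first_amount / second_amount: integrated photodetector responses
        # to the first and second IR wavelengths (arbitrary units).
        if second_amount <= 0:
            return False
        ratio = first_amount / second_amount
        # Indicate "likely human tissue" only when both amounts and their
        # ratio fall within their respective ranges.
        return (first_range[0] <= first_amount <= first_range[1]
                and second_range[0] <= second_amount <= second_range[1]
                and ratio_range[0] <= ratio <= ratio_range[1])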
In some cases, the indication of whether the back of the housing is likely proximate to human tissue may be based at least partly on a modeling of human tissue (e.g., a modeling of a collection of human tissue examples). The modeling may include, for example, a modeling of human tissue's reflectance, absorption, scattering, and so on for different wavelengths of electromagnetic radiation. The modeling may be performed for a cross-section of the population; for a cross-section of potential users; for users having different characteristics (e.g., more or less hair, tattoos, different colored skin, and so on); or for particular users. In these embodiments, amounts of electromagnetic radiation received by the photodetector306, or by different photodetectors (e.g., the photodetectors described with reference toFIGS.7B,7C, and8A-9B, or different numbers or arrangement of photodetectors) may be compared to model data, and a determination of whether the back of the housing is likely proximate to human tissue may be based at least in part on the comparison. In some embodiments, machine learning or other techniques may be used to adapt or calibrate a model of human tissue and/or models of other types of matter (e.g., wood, glass, cloth, and so on). The model (and comparisons, adaptations, calibrations, and so on) may be multispectral. In some cases, the indication of whether the back of the housing is likely proximate to human tissue may be based at least partly on a detected proximity (or distance) of an object. For example, the sensor system300may further include a proximity sensor (or distance sensor)326that is configured to detect a proximity (or distance) of an object to (or from) the back of the housing (e.g., to/from the second back cover portion214-2). The sensed proximity or distance may be used (e.g., by the circuitry308when operated as a matter differentiation circuit) to adjust one or more of the thresholds (or ranges) to which the first amount of electromagnetic radiation, the second amount of electromagnetic radiation, or the ratio of the first amount of electromagnetic radiation to the second amount of electromagnetic radiation is/are compared. The proximity sensor (or distance sensor)326may be a capacitive, optical, self-mixing interference (SMI), ultrasonic, or other type of proximity or distance sensor. In some cases, matter differentiation may be used to determine, for example, whether a device is being worn or is in a pocket, or is being worn or resting on a table or charging mat. Matter differentiation may also be used, in combination with accelerometer measurements obtained by a device, to determine whether a user of the device is falling or whether the device is falling independent of the user. Matter differentiation may also be used to prevent health or fitness-related data from being collected when a device is not proximate human tissue and is likely not being worn (or when the device is not proximate human tissue and its health or fitness-related sensors are unlikely to produce useful information because they are not proximate, or sufficiently proximate, human tissue). In some embodiments, the circuitry308may alert a user when their device is not sufficiently proximate their skin, and a sensor is unable to obtain useful health or fitness data. In some cases, the matter differentiation circuit may always be active, or may be activated periodically. In other cases, the matter differentiation circuit may be activated by particular device functions or applications. For example, a financial transaction application may activate the matter differentiation circuit to verify that a device is proximate human tissue before engaging in other password or biometric verifications. The electromagnetic radiation-emitting apertures of the emitters302,304may be equidistant from a centroid of the photodetector306as shown.
Alternatively, the apertures of the emitters302,304may be positioned at different distances from a centroid of the photodetector306. In some embodiments, the emitters302,304may emit beams of electromagnetic radiation having the same size (e.g., same size cross-sections or spread from a plane of emission) and/or same optical power. In some embodiments, the emitters302,304may emit beams of electromagnetic radiation having different sizes and/or different optical powers. By way of example, the emitters302,304are shown to occupy surface areas that are of equal size and smaller than a surface area occupied by the photodetector306. However, the emitters302,304may occupy the same or different size surface areas, and may occupy surface areas that are smaller, the same, or larger than the surface area occupied by the photodetector306. The parameters of the emitters302,304and photodetector306discussed in this paragraph, and/or other parameters of the emitters302,304and photodetector306, may be configured or adjusted in various ways to improve the matter differentiation circuit's ability to differentiate human tissue from other types of matter (or from particular types of matter). In some cases, an improvement in matter differentiation may be achieved by changing parameters of the emitters302,304or photodetector306that tend to change the ratio of an amount of electromagnetic radiation including the first IR wavelength received by the photodetector306to an amount of electromagnetic radiation including the second IR wavelength received by the photodetector306. In some embodiments, the circuitry308may also include, or be configured to operate as, a power conservation circuit. For example, the circuitry308may be configured to reduce power supplied to a component of a device by a power source, or halt, delay, or alter a processing, communication, or sensing function of the device, when the matter differentiation circuit indicates the device is not likely proximate to human tissue. As shown in the exploded view ofFIG.3B, the first and second emitters302,304and photodetector306may be attached to an interior surface of the second back cover portion214-2using an adhesive310. The emitters302,304and photodetector306may be attached to the interior surface of the second back cover portion214-2apart from other components of a device housing. In some embodiments, the emitters302,304and photodetector306may be attached directly to the interior surface (or to a lens or light control film or coating positioned between the interior surface and one or more of the emitters302,304or photodetector306), or one or more modules including the emitters302,304and photodetector306may be attached directly to the interior surface (or to a lens or light control film or coating positioned between the interior surface and one or more of the emitters302,304or photodetector306). Alternatively, the emitters302,304and/or photodetector306may be attached to a substrate or module that is attached directly to the interior surface of the second back cover portion214-2(or to a lens or light control film or coating positioned between the interior surface and the substrate or module). The second back cover portion214-2may similarly be attached to the first back cover portion214-1using an adhesive312. The adhesives310,312may be the same or different. The adhesive312may in some cases be a ring of adhesive disposed around the perimeter of the second back cover portion214-2.
The first and second emitters302,304and photodetector306may be electrically connected to the circuitry308(e.g., an integrated circuit (IC) or printed circuit board (PCB)). In some cases, the first and second emitters302,304and/or photodetector306may be electrically connected to the circuitry308via a set of fly wires314and/or a flex circuit316. The first and second emitters302,304may emit electromagnetic radiation through the second back cover portion214-2in various spot or flood illumination patterns, and in some cases may emit electromagnetic radiation into substantially overlapping elliptical cones318,320. In some embodiments, an IR transparent ink may be applied to the interior surface of the second back cover portion214-2, in at least a region or regions disposed between the first and second emitters302,304and photodetector306, on one side, and the second back cover portion214-2on the other side. The IR transparent ink may in some cases block visible light or ambient light. In some embodiments, one or more lenses (e.g., one or more Fresnel lenses) or filters (e.g., one or more light control films (LCFs), linear variable filters (LVFs), bandpass (BP) filters, or polarizers) may also or alternatively be positioned between the first and second emitters302,304and photodetector306, on one side, and the second back cover portion214-2on the other side. FIG.4Ashows the device200described with reference toFIGS.2A-2Bwhen worn on a user's wrist400, with the second back cover portion214-2of the device200positioned against the user's wrist400(i.e., human tissue).FIG.4Bshows the device200when sitting on a table402. When the sensor system described with reference toFIGS.3A-3Bis included in the device200, the matter differentiation circuit of the sensor system may operate the emitters and photodetector of the sensor system and indicate whether the back of the housing206(e.g., the second back cover portion214-2) is likely proximate to human tissue, as shown inFIG.4A, or likely proximate to an object that is not human tissue, as shown inFIG.4B. An example basis for the matter differentiation circuit making its indication is described with reference toFIGS.5and6. In the scenario shown inFIG.4A, electromagnetic radiation emitted by the emitters of the sensor system may propagate into the tissue of the user's wrist400and be absorbed into, or reflected or backscattered from, various structures within the user's wrist400, including, for example, blood, water, lipids, skin, ligaments, tendons, and bone. Some of the electromagnetic radiation may be reflected or backscattered and received/detected by the photodetector of the sensor system. In the scenario shown inFIG.4B, relatively little of the electromagnetic radiation emitted by the emitters of the sensor system may propagate into the table402, and most of the electromagnetic radiation may reflect or backscatter from the surface of the table402and be received/detected by the photodetector of the sensor system. FIG.5shows an example graph500of electromagnetic radiation wavelength (in nanometers (nm)) versus human tissue reflectance factor (a parameter without units). 
A reflectance factor may, in theory, range from a value of 0, indicating that an electromagnetic radiation wavelength is completely absorbed by an object, to a value of 1, indicating that an electromagnetic radiation wavelength is completely reflected by an object.FIG.5shows a mean human tissue reflectance factor502for each wavelength of electromagnetic radiation; a first sigma (1σ) spread of human tissue reflectance factors504for each wavelength of electromagnetic radiation (i.e., based on the variance of human tissue reflectance factor for different persons); and a range of human tissue reflectance factors506for each wavelength of electromagnetic radiation. For human tissue, the reflectance factors for various wavelengths of electromagnetic radiation range from about 0.05 to about 0.70. In the IR range—and particularly in the near-infrared (NIR) band, ranging from 750-1400 nm, and the shorter end of the short-wavelength infrared (SWIR) band (e.g., from about 1400-2000 nm)—there is great variability in the human tissue reflectance factors of different IR wavelengths. This enables the selection of first and second IR wavelengths with significant contrast508between their human tissue reflectance factors. For example, there is contrast between 1450 nm, which is highly absorbed by human tissue, and 1300 nm, which has a human tissue reflectance factor about eight times (8×) that of 1450 nm. Even more contrast is provided between 1450 nm and 1050 nm, which has a human tissue reflectance factor about eleven times (11×) that of 1450 nm. Contrast is also provided, to different degrees, at other electromagnetic radiation wavelengths. For objects other than human tissue on which a device such as the device described with reference toFIGS.2A-2Bmay be placed (e.g., objects having wood, polymer (e.g., plastic), glass, and/or ceramic materials or surfaces), as shown inFIG.4B, the object's reflectance factors for different wavelengths of electromagnetic radiation may have a much smaller variation. That is, the reflectance factors for different wavelengths of electromagnetic radiation may be spectrally flat or have little contrast. In some embodiments of the sensor system described with reference toFIGS.3A-3B, the first emitter302may be configured to emit a first IR wavelength of 1050 nm, 1200 nm, or 1300 nm, for example, and the second emitter304may be configured to emit a second IR wavelength of 1450 nm, 1550 nm, or 1650 nm, for example. In other embodiments, the first and second emitters302,304may be configured to emit other wavelengths that are useful in differentiating matter (e.g., differentiating human skin from other matter). In some cases, the pair of IR wavelengths may be selected not only because their human tissue reflectance factors have high contrast, but because their reflectance factors for other objects have low contrast. 
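By way of illustration only, the selection criterion just described (high tissue contrast, low non-tissue contrast) could be scored as in the following sketch. This is not part of the disclosed embodiments; the reflectance values are placeholders chosen only so that the approximate 8x and 11x tissue contrasts discussed above hold, and the table and function names are assumptions.

```python
from itertools import permutations

# Placeholder reflectance factors (hypothetical). Only the rough 8x
# (1300 nm vs 1450 nm) and 11x (1050 nm vs 1450 nm) tissue contrasts are
# taken from the discussion of FIG. 5; everything else is illustrative.
TISSUE = {1050: 0.55, 1200: 0.47, 1300: 0.40, 1450: 0.05}
OBJECT = {1050: 0.50, 1200: 0.50, 1300: 0.49, 1450: 0.48}  # spectrally flat

def pair_score(wl_a: int, wl_b: int) -> float:
    """Favor pairs whose tissue contrast is high and whose contrast on
    non-tissue objects is low."""
    return (TISSUE[wl_a] / TISSUE[wl_b]) / (OBJECT[wl_a] / OBJECT[wl_b])

best = max(permutations(TISSUE, 2), key=lambda pair: pair_score(*pair))
print(best)  # (1050, 1450) for these placeholder values
```

Under these placeholder values, the 1050 nm/1450 nm pair scores highest, consistent with the contrast discussion above.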
When the first and second emitters302,304sequentially emit electromagnetic radiation and the photodetector306is operated to detect a first amount of electromagnetic radiation received after the first emitter302emits the first IR wavelength, and a second amount of electromagnetic radiation received after the second emitter304emits the second IR wavelength, the matter differentiation circuit may indicate that the second back cover portion214-2is likely proximate to human tissue when a ratio of the first amount of electromagnetic radiation to the second amount of electromagnetic radiation satisfies a first threshold (or alternatively, is within a first range) associated with human tissue. When the ratio does not satisfy the first threshold (or alternatively, is not within the first range, or satisfies a second threshold, or is within a second range), the matter differentiation circuit may indicate that the second back cover portion214-2is likely not proximate to human tissue. FIG.6shows an example distribution600of the ratio discussed with reference toFIGS.3A-3B and5. The distribution600is for human tissue and non-human objects located at various distances from a back cover or back of a device housing. By way of example, the ratio corresponds to an amount of a first IR wavelength at 1050 nm versus an amount of a second IR wavelength at 1450 nm. As shown, the ratio is at or above about 1.9 for a variety of human tissue samples602(e.g., wrist tissue samples), which human tissue samples602are in a range of 0-20 millimeters (mm) from an exterior surface of a back cover. The ratio is generally about 1.0 for a variety of non-human object samples604, which non-human object samples604are in a range of 1-20 mm from the exterior surface of the back cover. The ratio moves somewhat above 1.0 for a couple of the non-human object samples at close range (e.g., at a distance of about 1 mm or less), but still remains well below the ratio for human tissue at close range. The margin606between the ratio for human tissue and non-human objects is what enables ratio thresholds or ranges to be identified so that a matter differentiation circuit may indicate whether a back cover or back of a housing is likely proximate to human tissue. The indication is an indication of whether the back cover or back of the housing is “likely proximate” to human tissue because some objects may have reflectance factors, for different electromagnetic radiation wavelengths, that are similar to human tissue reflectance factors. For example, a wet and wadded cloth or paper towel may in some cases have reflectance factors that are similar to those of human tissue. In some cases, a matter differentiation circuit may use the ratio discussed with reference toFIGS.3A-3B,5, and6, in combination with other parameters, to indicate whether a back cover or back of a housing is likely proximate to human tissue. For example, the matter differentiation circuit may use the amounts of electromagnetic radiation that are used to compute the ratio separately, in addition to using the amounts in combination (e.g., to compute the ratio). In some cases, each amount of electromagnetic radiation may be separately compared to a threshold or expected range, and the matter differentiation circuit may indicate the back cover or back of the housing is likely proximate to human tissue when each parameter satisfies its respective threshold or is within its respective range. 
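As a minimal sketch of the ratio test just described, assuming a fixed threshold drawn loosely from the distribution ofFIG.6(tissue ratios near or above about 1.9, non-tissue ratios near about 1.0), the decision could be written as follows; the function and parameter names are illustrative assumptions, not part of any disclosed firmware.

```python
def likely_human_tissue(first_amount: float, second_amount: float,
                        ratio_threshold: float = 1.5,
                        min_signal: float = 0.0) -> bool:
    """Indicate whether a sensed surface is likely human tissue.

    first_amount:  radiation detected after the first emitter (e.g., 1050 nm) fires
    second_amount: radiation detected after the second emitter (e.g., 1450 nm) fires
    """
    # Optionally gate each amount against its own threshold or expected
    # range, as described above, before combining the amounts into a ratio.
    if first_amount <= min_signal or second_amount <= min_signal:
        return False
    return (first_amount / second_amount) >= ratio_threshold
```

As the following paragraphs describe, the threshold passed to such a test need not be fixed; it may itself be adjusted as a function of a sensed proximity or distance.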
In some cases, a matter differentiation circuit may analyze a change in ratio as an object approaches or moves farther away. For example, for two types of matter that have similar ratios, a distance-dependent variance (or lack of variance) in the ratio described with reference toFIG.6may be used to distinguish one type of matter (e.g., human tissue) from another (e.g., wood). In some cases, a matter differentiation circuit may adjust a threshold (or thresholds) to which it compares a sensed amount of the first IR wavelength, a sensed amount of the second IR wavelength, or a ratio of a sensed amount of the first IR wavelength to a sensed amount of the second IR wavelength. For example, the matter differentiation circuit may adjust a threshold ratio608of the sensed amount of the first IR wavelength to a sensed amount of the second IR wavelength in response to a sensed proximity or distance of a back cover or back of a device housing to an object (e.g., a user's skin, a table top, and so on). As shown in the example ofFIG.6, when a proximity or distance sensor senses an object at a distance of about 2 mm or greater, the matter differentiation circuit may adjust the threshold ratio608to about 1.5 (or between about 1.2 and about 1.8). However, as the sensed distance to the object falls, and the object moves closer to the back cover or housing of the device, the matter differentiation circuit may adjust the threshold ratio608higher, to a value between about 3 and about 4 (or between about 1.2 and about 6). The particular value of the threshold ratio608, and adjustments thereof, will sometimes depend on the configurations of the various sensors, including their size, placement, spacing, emission power, emission/detection wavelengths, and so on. In some cases, the IR wavelengths of the first and second emitters described with reference toFIGS.3A-3Bmay be selected or adjusted to improve the differentiation of particular materials or surfaces from human tissue. FIGS.7A-7Dshow various alternative plan views of a skin-facing sensor (or sensor system) that may be included in the device described with reference toFIG.1,2A-2B, or4A-4B. Each of the sensor systems may be positioned under the second back cover portion214-2of the device described with reference toFIG.2, or under a skin-facing housing or cover of any device, or under a back cover or back of a housing of any wearable device. The sensor system700shown inFIG.7Aincludes three emitters (e.g., a first emitter702, a second emitter704, and a third emitter706) and a photodetector708. By way of example, the emitters702,704,706may include VCSELs, VECSELs, QDLs, QCLs, LEDs (e.g., OLEDs, RC-LEDs, mLEDs, SLEDs, or edge-emitting LEDs), or other types of light-emitting elements. At least two of the emitters may be IR emitters that emit beams of electromagnetic radiation having different IR wavelengths. The other emitter may be another IR emitter that emits a beam of electromagnetic radiation having the same IR wavelength as a beam of electromagnetic radiation emitted by another one of the emitters (e.g., to increase the optical power or improve the detectability of that wavelength). Alternatively, the other emitter may be another IR emitter that emits a beam of electromagnetic radiation having a different IR wavelength than the beams of electromagnetic radiation emitted by the other emitters (e.g., an IR wavelength that has the same or different reflectance factors, as the other emitted IR wavelengths, for human tissue and/or non-human objects). 
Alternatively, the other emitter may be a non-IR emitter (e.g., a visible light emitter) that emits a beam of electromagnetic radiation having a wavelength that has the same or different reflectance factors, as the emitted IR wavelengths, for human tissue and/or non-human objects. In some embodiments, a third emitter (or fourth emitter, and so on) may improve a matter differentiation circuit's ability to differentiate human tissue from a particular type or types of non-human objects. A third emitter (or fourth emitter, and so on) may also or alternatively enable the sensor system700to provide other kinds of sensing. The emitters702,704,706may be positioned around the photodetector708and have electromagnetic radiation-emitting apertures that are equidistant from a centroid of the photodetector708, as shown. Alternatively, the apertures of the emitters702,704,706may be positioned different distances from a centroid of the photodetector708. In some embodiments, the emitters702,704,706may emit beams of electromagnetic radiation having the same size (e.g., same size cross-sections or spread from a plane of emission) and/or same optical power. In some embodiments, the emitters702,704,706may emit beams of electromagnetic radiation having different sizes and/or different optical powers. By way of example, the emitters702,704,706are shown to occupy surface areas that are of equal size and smaller than a surface area occupied by the photodetector708. However, the emitters702,704,706may occupy the same or different size surface areas, and may occupy surface areas that are smaller, the same, or larger than the surface area occupied by the photodetector708. The parameters of the emitters702,704,706and photodetector708discussed in this paragraph, and/or other parameters of the emitters702,704,706and photodetector708, may be configured or adjusted in various ways to improve a matter differentiation circuit's ability to differentiate human tissue from other types of matter (or from particular types of matter). In some cases, an improvement in matter differentiation may be achieved by changing parameters of the emitters702,704,706or photodetector708that tend to change the ratio of an amount of electromagnetic radiation including the first IR wavelength received by the photodetector708to an amount of electromagnetic radiation including the second IR wavelength received by the photodetector708. The photodetector708may receive and detect reflections or backscatters of the first beam, the second beam, and the third beam. In some cases, the photodetector708may be filtered to detect a set of electromagnetic radiation wavelengths including a first IR wavelength emitted by the first emitter702, a second IR wavelength emitted by the second emitter704, and a third wavelength emitted by the third emitter706. The first IR wavelength and second IR wavelength may be different, and the third wavelength may be the same as (or different from) each of the first and second IR wavelengths. In some embodiments, the photodetector708may be an InGaAs photodetector. In some cases, the photodetector708may be filtered to detect a single range of electromagnetic radiation wavelengths. 
In other cases, the photodetector708may be filtered to detect the first IR wavelength or a first notch of IR wavelengths including the first IR wavelength; filtered to detect the second IR wavelength or a second notch of IR wavelengths including the second IR wavelength; and/or filtered to detect the third wavelength or a third notch of wavelengths including the third wavelength. The photodetector708may be filtered, for example, by one or more coatings applied to the photodetector708, by one or more optical filter elements disposed over the photodetector708, or by a coating (e.g., an ink) applied to an interior or exterior surface of a cover or housing portion (e.g., the second back cover portion described with reference toFIG.2B) through which the emitters702,704,706emit their beams of electromagnetic radiation. Optionally, a set of one or more IR wavelength-blocking walls may be disposed between the photodetector708and emitters702,704,706. By way of example, a single wall710is shown positioned between the photodetector708and emitters702,704,706inFIG.7A, but any number of walls may be used to reduce the likelihood that electromagnetic radiation emitted by one or more of the emitters702,704,706will impinge on the photodetector708before entering and/or exiting the second back cover portion214-2. Examples of various additional or alternative wall configurations are described with reference toFIG.3A. At least the photodetector708, and in some cases the emitters702,704,706, may be directly or indirectly connected to circuitry (e.g., a processor (e.g., a general purpose processor programmed by suitable machine-readable instructions or software) and/or other circuitry, which in some cases may include the processor or circuitry described with reference toFIG.1,2A-2B, or3A-3B) that includes, or is configured to operate as, a timing circuit and/or matter differentiation circuit, as described, for example, with reference toFIGS.3A-3B. The matter differentiation circuit may be configured to indicate whether the back of the housing is likely proximate to human tissue at least partly in response to signals indicating detected amounts of the wavelengths of the first beam of electromagnetic radiation received by the photodetector708, the second beam of electromagnetic radiation received by the photodetector708, and/or the third beam of electromagnetic radiation received by the photodetector708. For example, the matter differentiation circuit may be configured to indicate whether the back of the housing is likely proximate to human tissue at least partly in response to one or more of: a first signal indicating an amount of the first IR wavelength received by the photodetector708, a second signal indicating an amount of the second IR wavelength received by the photodetector708, or a third signal indicating an amount of the third wavelength received by the photodetector708. The signals may in some cases be generated sequentially, after the timing circuit sequentially turns on, and then off, one of the emitters702,704,706at a time. The sensor system720shown inFIG.7Bincludes two emitters (e.g., a first emitter722, and a second emitter724) and two photodetectors726,728. By way of example, the emitters722,724may include VCSELs, VECSELs, QDLs, QCLs, LEDs (e.g., OLEDs, RC-LEDs, mLEDs, SLEDs, or edge-emitting LEDs), or other types of light-emitting elements, and may be IR emitters that emit beams of electromagnetic radiation having different IR wavelengths. 
The emitters722,724may be positioned near and/or between the photodetectors726,728and have electromagnetic radiation-emitting apertures that are equidistant from centroids of each of the photodetectors726,728, as shown. Alternatively, the apertures of the emitters722,724may be positioned different distances from a centroid of a photodetector, or a single one (or both) of the emitters722,724may have an electromagnetic radiation-emitting aperture that is positioned different distances from the centroids of the different photodetectors726,728. In some embodiments, the emitters722,724may emit beams of electromagnetic radiation having the same size (e.g., same size cross-sections or spread from a plane of emission) and/or same optical power. In some embodiments, the emitters722,724may emit beams of electromagnetic radiation having different sizes and/or different optical powers. By way of example, the emitters722,724are shown to occupy surface areas that are of equal size and smaller than surface areas occupied by the photodetectors726,728. However, the emitters722,724may occupy the same or different size surface areas, and may occupy surface areas that are smaller, the same, or larger than the surface areas occupied by the photodetectors726,728. The photodetectors726,728may also occupy different surface areas. The parameters of the emitters722,724and photodetectors726,728discussed in this paragraph, and/or other parameters of the emitters722,724and photodetectors726,728, may be configured or adjusted in various ways to improve a matter differentiation circuit's ability to differentiate human tissue from other types of matter (or from particular types of matter). In some cases, an improvement in matter differentiation may be achieved by changing parameters of the emitters722,724or photodetectors726,728that tend to change the ratio of an amount of electromagnetic radiation including the first IR wavelength received by the photodetectors726,728to an amount of electromagnetic radiation including the second IR wavelength received by the photodetectors726,728. The photodetectors726,728may receive and detect reflections or backscatters of the first beam and the second beam. In some cases, each of the photodetectors726,728may be filtered to detect a set of electromagnetic radiation wavelengths including a first IR wavelength emitted by the first emitter722, and a second IR wavelength emitted by the second emitter724. The first IR wavelength and second IR wavelength may be different. In some embodiments, each photodetector726,728may be an InGaAs photodetector. In some cases, each photodetector726,728may be filtered to detect a single range of electromagnetic radiation wavelengths. In other cases, each photodetector726,728may be filtered to detect the first IR wavelength or a first notch of IR wavelengths including the first IR wavelength, and filtered to detect the second IR wavelength or a second notch of IR wavelengths including the second IR wavelength. The photodetectors726,728may be filtered, for example, by one or more coatings applied to the photodetectors726,728, by one or more optical filter elements disposed over the photodetectors726,728, or by a coating (e.g., an ink) applied to an interior or exterior surface of a cover or housing portion (e.g., the second back cover portion described with reference toFIG.2B) through which the emitters722,724emit their beams of electromagnetic radiation. In some cases, the photodetectors726,728may be filtered differently. 
For example, the first photodetector726may be filtered to receive the first IR wavelength, and the second photodetector728may be filtered to receive the second IR wavelength. Optionally, a set of one or more IR wavelength-blocking walls may be disposed between each photodetector726,728and the emitters722,724. By way of example, a first wall730is shown positioned between the first photodetector726and the emitters722,724, and a second wall732is shown positioned between the second photodetector728and the emitters722,724. Alternatively, any number of walls may be used to reduce the likelihood that electromagnetic radiation emitted by one or both of the emitters722,724will impinge on the first or second photodetector726,728before entering and/or exiting a back cover of a device. Examples of various additional or alternative wall configurations are described with reference toFIG.3A. At least the photodetectors726,728, and in some cases the emitters722,724, may be directly or indirectly connected to circuitry (e.g., a processor (e.g., a general purpose processor programmed by suitable machine-readable instructions or software) and/or other circuitry, which in some cases may include the processor or circuitry described with reference toFIG.1,2A-2B, or3A-3B) that includes, or is configured to operate as, a timing circuit and/or matter differentiation circuit, as described, for example, with reference toFIGS.3A-3B. The matter differentiation circuit may be configured to indicate whether the back of the housing is likely proximate to human tissue at least partly in response to signals indicating detected amounts of the wavelengths of the first beam of electromagnetic radiation and/or the second beam of electromagnetic radiation received by the photodetectors726,728. For example, the matter differentiation circuit may be configured to indicate whether the back of the housing is likely proximate to human tissue at least partly in response to one or more of a first signal indicating an amount of the first IR wavelength received by the first photodetector726, a second signal indicating an amount of the first IR wavelength received by the second photodetector728, a third signal indicating an amount of the second IR wavelength received by the first photodetector726, or a fourth signal indicating an amount of the second IR wavelength received by the second photodetector728. The signals may in some cases be generated in pairs (e.g., one signal from each photodetector726,728), after the timing circuit sequentially turns on, and then off, one of the emitters722,724at a time. The sensor system740shown inFIG.7Cincludes two groups of sensing components, with each group including two emitters and a photodetector (e.g., a first group742including a first emitter744, a second emitter746, and a first photodetector748; and a second group750including a third emitter752, a fourth emitter754, and a second photodetector756). By way of example, the emitters744,746,752,754may include VCSELs, VECSELs, QDLs, QCLs, LEDs (e.g., OLEDs, RC-LEDs, mLEDs, SLEDs, or edge-emitting LEDs), or other types of light-emitting elements, and may be IR emitters that emit beams of electromagnetic radiation having different IR wavelengths. 
For example, the first and third IR emitters744,752may be respectively configured to emit first and third beams of electromagnetic radiation having a first IR wavelength, and the second and fourth IR emitters746,754may be respectively configured to emit second and fourth beams of electromagnetic radiation having a second IR wavelength. In this manner, a beam having the first IR wavelength and a beam having the second IR wavelength are emitted by each group742,750of sensing components. The emitters and photodetectors of each group742,750may otherwise be configured and positioned as described with reference to any ofFIG.3A-3B,7A, or7B, and a set of one or more IR wavelength-blocking walls758,760may optionally be disposed between each photodetector748,756and the emitters744,746,752,754, as described, for example, with reference to any ofFIG.3A,7A, or7B. However, in contrast to a group of sensing components being positioned on-axis with respect to a center axis of a back cover, or being distributed around the center axis, the groups742,750of sensing components shown inFIG.7Cmay be distributed around a center axis of a back cover. In some cases, each sensing component within a group742,750may be positioned off-axis with respect to the center axis (e.g., generally to one side of, or within one range of angular extents about, the center axis). At least the photodetectors748,756, and in some cases the emitters744,746,752,754, may be directly or indirectly connected to circuitry (e.g., a processor (e.g., a general purpose processor programmed by suitable machine-readable instructions or software) and/or other circuitry, which in some cases may include the processor or circuitry described with reference toFIG.1,2A-2B, or3A-3B) that includes, or is configured to operate as, a timing circuit and/or matter differentiation circuit, as described, for example, with reference toFIGS.3A-3B. The matter differentiation circuit may be configured to indicate whether the back of the housing is likely proximate to human tissue at least partly in response to signals indicating detected amounts of the wavelengths of the first beam of electromagnetic radiation and/or the second beam of electromagnetic radiation received by the photodetectors748,756. In some embodiments, the matter differentiation circuit may select whether to use signals generated by one or the other or both of the photodetectors748,756when indicating whether the back of the housing is likely proximate to human tissue. In some cases, the matter differentiation circuit may determine which signals to use based, at least in part, on the strengths of the signals, the strengths of a subset of the signals, whether the signals satisfy particular thresholds or are within particular ranges, or signals generated by on/off wrist sensors, device tilt sensors, or device orientation sensors. The sensor system770shown inFIG.7Dincludes two emitters (e.g., a first emitter302, and a second emitter304) and a photodetector306, similarly to the sensor system described with reference toFIGS.3A and3B. By way of example, the emitters302,304may include VCSELs, VECSELs, QDLs, QCLs, LEDs (e.g., OLEDs, RC-LEDs, mLEDs, SLEDs, or edge-emitting LEDs), or other types of light-emitting elements, and may be IR emitters that emit beams of electromagnetic radiation having different IR wavelengths. The emitters302,304and photodetector306may be configured as described with reference toFIGS.3A and3B, and may be separated by a set of one or more IR wavelength-blocking walls, such as wall322. 
However, in contrast to the sensor system described with reference toFIGS.3A and3B, the apertures of the emitters302,304are positioned different distances from a centroid of the photodetector306(e.g., the aperture of the emitter302is closer to the centroid of the photodetector306than the aperture of the emitter304). The emitters302,304may be offset or staggered with respect to the photodetector306in various alternative ways. For example, and in some cases (not shown), the centroids of the emitters302,304and photodetector306may be aligned, with the emitter302being positioned between the emitter304and the photodetector306. Also or alternatively, and in some embodiments, the sizes or optical powers of the beams emitted by the emitters302,304may be varied with respect to each other, or other parameters of the emitters302,304may be varied, to improve a matter differentiation circuit's ability to differentiate human tissue from other types of matter (or from particular types of matter). In some cases, an improvement in matter differentiation may be achieved by changing parameters of the emitters302,304that tend to change the ratio of an amount of electromagnetic radiation including the first IR wavelength received by the photodetector306to an amount of electromagnetic radiation including the second IR wavelength received by the photodetector306. In some cases, varying the parameters or configurations of the emitters302,304may make a human tissue or non-human object ratio curve flatter with variations in distance, or raise a human tissue ratio curve, or lower a non-human object ratio curve. All of these changes can increase the margin between human tissue ratio curves and non-human object ratio curves. Variations in emitter parameters may be especially useful in lowering the ratio of received/reflected IR wavelengths for non-human objects, at or around the point of contact between an object and a device (especially in the case of non-human objects with some amount of volume scattering). FIGS.8A and8Bshow an example of a skin-facing sensor (or sensor system800) that may be included in the device described with reference toFIG.1,2A-2B, or4A-4B. By way of example, the sensor system800is shown to be positioned under a back or back cover of a housing (e.g., under the second back cover portion214-2described with reference toFIG.2B).FIG.8Ashows a plan view of the sensor system800, andFIG.8Bshows an elevation of the sensor system800. The sensor system800includes the groups742,750of sensing components described with reference toFIG.7C, but the groups742,750of sensing components are positioned differently with respect to each other than what is shown inFIG.7C. In particular, the sensing components of the groups742,750are positioned on opposite sides of a center axis324of the second back cover portion214-2(e.g., the groups742,750are positioned off-axis with respect to the center axis324), and are rotated 180 degrees with respect to each other along a diameter of the second back cover portion214-2. As shown in the elevation ofFIG.8B, the first and second groups742,750of sensing components may be attached to an interior surface of the second back cover portion214-2using an adhesive. The groups742,750of sensing components may be attached to the interior surface of the second back cover portion214-2apart from other components of a device housing. 
In some embodiments, the groups742,750of sensing components may be attached directly to the interior surface (or to a lens or light control film or coating positioned between the interior surface and one or more of the emitters744,746,752,754or photodetectors748,756), or one or more modules including the groups742,750of sensing components may be attached directly to the interior surface (or to a lens or light control film or coating positioned between the interior surface and one or more of the emitters744,746,752,754or photodetectors748,756). Alternatively, the groups742,750of sensing components may be attached to a substrate or module that is attached directly to the interior surface of the second back cover portion214-2(or to a lens or light control film or coating positioned between the interior surface and the substrate or module). The second back cover portion214-2may similarly be attached to the first back cover portion214-1using an adhesive. The adhesives may be the same or different. The off-axis positioning of the groups742,750of sensing components may enable the sensing components to avoid a liquid802(e.g., water or perspiration) that happens to be on the object804, which liquid may tend to be attracted toward an apex (e.g., the center axis324) of the second back cover portion214-2. The off-axis positioning of the groups742,750of sensing components may also enable a device to collect multiple sets of measurements (e.g., a set of measurements from each group742,750). In some cases, a tilt of the second back cover portion214-2with respect to the object804may make one or the other set of measurements more useful, or the sets of measurements may be averaged or otherwise combined or used when both sets of measurements are considered useful. Still further, measurements generated by the different groups742,750of sensing components may in some cases be used as stereo measurements, and may be used to determine a distance to an object. The groups742,750of sensing components may be directly or indirectly connected to circuitry808(e.g., a processor (e.g., a general purpose processor programmed by suitable machine-readable instructions or software) and/or other circuitry, which in some cases may include the processor described with reference toFIG.1or2A-2B) that includes, or is configured to operate as, a timing circuit, a matter differentiation circuit, and a proximity detection circuit. The proximity detection circuit may be configured to indicate a proximity of a device (e.g., the second back cover portion214-2) to an object804. The proximity indication may be based at least in part on a first amount of electromagnetic radiation received by the photodetector748after the emitter744emits a first beam of electromagnetic radiation, and a second amount of electromagnetic radiation received by the photodetector748after the emitter746emits a second beam of electromagnetic radiation. In some cases, the proximity indication may be a discrete value. In other cases, the proximity indication may identify one of at least two different ranges of proximities. The proximity indication may also be based on a third amount of electromagnetic radiation received by the photodetector756after the emitter752emits a third beam of electromagnetic radiation, and a fourth amount of electromagnetic radiation received by the photodetector756after the emitter754emits a fourth beam of electromagnetic radiation. 
For example, ratios of amounts of electromagnetic radiation of a first IR wavelength to amounts of electromagnetic radiation of a second IR wavelength may be computed for each group742,750of sensing components, and a comparison of the ratios may indicate an amount of tilt of the second back cover portion214-2with respect to the object804. FIG.9Ashows an example plan view of a skin-facing sensor (or sensor system900) that may be included in the device described with reference toFIG.1,2A-2B, or4A-4B. By way of example, the sensor system900is shown to be positioned under a back or back cover of a housing (e.g., under the second back cover portion214-2described with reference toFIG.2B). The sensor system900is shown to include a first pair of electromagnetic radiation emitters902,904that emit electromagnetic radiation through a first window906in the second back cover portion214-2, and a second pair of electromagnetic radiation emitters908,910that emit electromagnetic radiation through a second window912in the second back cover portion214-2. In some embodiments, the first emitter902may emit electromagnetic radiation at 1300 nm or 1650 nm; the second emitter904may emit electromagnetic radiation at 1200 nm; the third emitter908may emit electromagnetic radiation at 1050 nm; and the fourth emitter910may emit electromagnetic radiation at 1450 nm. In alternative arrangements, the emitters may emit other wavelengths of electromagnetic radiation, or some of the emitters may emit the same wavelength of electromagnetic radiation. The sensor system900may also include a first photodetector914that receives reflected or backscattered electromagnetic radiation through a third window916, and a second photodetector918that receives reflected or backscattered electromagnetic radiation through a fourth window920. In operation, the emitters902,904,908,910may be sequentially activated, and an amount of reflected or backscattered electromagnetic radiation of each emitted wavelength may be detected by each photodetector914,918. The different distances between each emitter and each photodetector914,918may assist in determining the accuracy (or validity) of the amounts of reflected or backscattered electromagnetic radiation received by the photodetectors914,918, and in some cases may improve matter differentiation decisions made by a matter differentiation circuit. FIG.9Bshows an alternative arrangement of the components described with reference toFIG.9A, in which each of the emitters902,904,908,910has been rotated by 45 degrees. Such a rotation may place the emitters of a pair of emitters at a same distance from one of the photodetectors914or918, and at different distances to the other of the photodetectors914or918. FIG.10Ashows an example plan view of a Fresnel lens1000positioned over a group of sensing components including the emitters302,304and photodetector306described with reference toFIGS.3A-3B. The Fresnel lens1000may include one or multiple Fresnel cells. Alternatively, a different type of lens, or a stack of lenses, may be positioned over the group of sensing components. The Fresnel lens1000(or other type(s) of lens(es)) may be positioned between a device housing and the group of sensing components (e.g., between the second back cover portion214-2and the emitters302,304and photodetector306). A set of one or more IR wavelength-blocking walls1002may be optionally disposed between the photodetector306and the emitters302,304, as described, for example, with reference toFIG.3A. 
The wall(s) may extend from a substrate, to which the photodetector306and emitters302,304are attached, to an interior surface of the lens1000, or may alternatively extend through the lens1000and/or second back cover portion214-2. FIG.10Bshows an example plan view of a set of Fresnel lenses1010,1012,1014, with each of the Fresnel lenses1010,1012,1014positioned over a respective sensing component of a group of sensing components. For example, a first Fresnel lens1010is positioned over the first emitter302described with reference toFIGS.3A-3B; a second Fresnel lens1012is positioned over the second emitter304; and a third Fresnel lens1014is positioned over the photodetector306. Alternatively, one or more of the Fresnel lenses1010,1012,1014may be replaced by a different type of lens or a stack of lenses. The Fresnel lenses1010,1012,1014(or other type(s) of lens(es)) may be positioned between a device housing and each of the sensing components (e.g., between the second back cover portion214-2and the first emitter302, second emitter304, or photodetector306). In other lens arrangements, a lens or lens stack may be disposed between a housing and any subset of sensing components, or no type of lens or lens stack may be disposed between the housing and one or more of the sensing components. FIG.11Ashows an example plan view of an LCF1100positioned over a group of sensing components including the emitters302,304and photodetector306described with reference toFIGS.3A-3B. The LCF1100may be positioned between a device housing and the group of sensing components (e.g., between the second back cover portion214-2and the emitters302,304and photodetector306). Alternatively, different segments or types of LCF may be positioned over different sensing components, or no LCF may be positioned over one or more sensing components, or different segments or types of LCF positioned over different sensing components may be oriented to guide or block electromagnetic radiation that is emitted, reflected, or backscattered at different incident angles with respect to a surface of the LCF1100. In some embodiments, the LCF1100may be replaced or supplemented with an LVF, BP filter, or polarizer. A set of one or more IR wavelength-blocking walls1102may be optionally disposed between the photodetector306and the emitters302,304, as described, for example, with reference toFIG.3A. The wall(s) may extend from a substrate, to which the photodetector306and emitters302,304are attached, to an interior surface of the LCF1100, or may alternatively extend through the LCF1100and/or second back cover portion214-2. FIG.11Bshows an example plan view of different LCFs1110,1112positioned over different sets of the sensing components described with reference toFIGS.3A-3B. For example, a first LCF1110may be positioned over the photodetector306, and a second LCF1112may be positioned over the emitters302,304. The first LCF1110may have the same or different properties as the second LCF1112. For example, and in some embodiments, the louvers of the first LCF1110may be rotated 90 degrees with respect to the louvers of the second LCF1112. In some embodiments, the LCFs described with reference toFIG.11A or11Bmay alternatively be polarizers. 
In some cases, a set of one or more LCFs or polarizers may be used to increase the received signal strength for reflected or backscattered electromagnetic radiation resulting from volume scattering and/or decrease the received signal strength for reflected or backscattered electromagnetic radiation resulting from surface scattering. For example, the LCF1112may have louvers that tilt emitted electromagnetic radiation away from the photodetector306, and/or the LCF1110may have louvers that limit the photodetector's receipt of electromagnetic radiation to incident angles that are oriented away from the emitters302,304. Alternatively, the LCFs1110,1112may be replaced with polarizers having different polarization directions. As another alternative, light pipes or electromagnetic radiation waveguides may be used to control the directions in which the emitters302,304emit and the photodetector306receives. The use of LCFs and/or polarizers can be especially useful for reducing optical crosstalk between emitters and a photodetector, and for weighting the effects of volume scattering of photons more heavily than the effects of surface scattering of photons. FIGS.12A-12Dshow various example elevations of first and second beams1200,1202of electromagnetic radiation having different IR wavelengths, as emitted by first and second emitters1204,1206. Reflections or backscatters of the beams1200,1202may be received by a photodetector1208. In some embodiments, the first and second emitters1204,1206may be the first and second emitters described with reference toFIGS.3A-3B, and the photodetector1208may be the photodetector described with reference toFIGS.3A-3B. The emitters1204,1206may be separated from the photodetector1208by a set of one or more IR wavelength-blocking walls disposed between the photodetector1208and each of the first and second emitters1204,1206. In some cases, the set of one or more IR wavelength-blocking walls may include the single light-blocking wall1210shown inFIGS.12A-12D. In other cases, an IR wavelength-blocking wall may be formed around the photodetector1208, or around one or both or each of the emitters1204,1206, or around each of the photodetector1208, the first emitter1204, and the second emitter1206. Each IR wavelength-blocking wall1210may extend from a substrate (or substrates), to which the emitters1204,1206and photodetector1208are attached, to an interior surface1212of a back or back cover1214of a housing (e.g., an interior surface of the second back cover portion described with reference toFIGS.2A-2B and3A-3B). Alternatively, one or more of the IR wavelength-blocking walls1210may extend through the back or back cover1214of the housing (e.g., to the exterior surface1216of the back or back cover1214). In other examples, there may be no IR wavelength-blocking walls1210. As shown inFIG.12A, the first and second emitters1204,1206may in some cases emit beams1200,1202of electromagnetic radiation along axes1218,1220that are perpendicular to electromagnetic radiation emission surfaces of the emitters1204,1206. In some cases, the beams1200,1202may fan out as they propagate along the axes1218,1220. In other cases, the beams1200,1202may be collimated or converge. 
In some embodiments, an electromagnetic radiation beam director1222or1224(e.g., a lens, lenses, LCF(s), polarizer(s), light guide(s), electromagnetic radiation waveguide(s), or other passive or active component) may be positioned in the path of one or both of the beams1200,1202, and may collimate or otherwise alter the direction or shape of the beam1200and/or1202. In some embodiments, one electromagnetic radiation beam director (or a common set) may alter the direction or shape of both beams1200,1202. As shown inFIG.12B, one of the first or second emitter1204or1206may emit a beam1200of electromagnetic radiation along an axis1218that is perpendicular to an electromagnetic radiation emission surface of the emitter, and the other emitter1204or1206may emit a beam1202of electromagnetic radiation along an axis1220that is tilted with respect to an electromagnetic radiation emission surface of the emitter. In some cases, the beams1200,1202may fan out as they propagate along the axes1218,1220. In other cases, the beams1200,1202may be collimated or converge. In some embodiments, an electromagnetic radiation beam director may be positioned in the path of one or both of the beams1200,1202, and may collimate or otherwise alter the direction or shape of the beam1200and/or1202. In some embodiments, one electromagnetic radiation beam director (or a common set) may alter the direction or shape of both beams1200,1202. By way of example, the second beam1202is shown to have an axis1220that is tilted toward the photodetector1208. Alternatively, the second beam1202may have an axis1220that is tilted away from the photodetector1208. Tilting the axis1220of the second beam1202may tend to decrease or increase (or just change) the propagation path of emitted electromagnetic radiation, which may tend to decrease or increase the likelihood or percentage of electromagnetic radiation that may be reflected or backscattered toward the photodetector1208. This may change the ratio of amounts of different wavelengths of electromagnetic radiation received/detected by the photodetector1208, which may improve a device's ability to differentiate different types of matter against which the back cover1214is placed. In some embodiments, the axes1218,1220of both beams1200,1202may be tilted toward the photodetector1208(as shown inFIG.12C), or away from the photodetector1208(as shown inFIG.12D). FIGS.13A and13Bshow an example of a skin-facing sensor (or sensor system1300) that may be included in the device described with reference toFIG.1,2A-2B,3A-3B, or4A-4B. By way of example, the sensor system1300is shown to be positioned under a back or back cover of a housing (e.g., under the second back cover portion214-2described with reference toFIG.2B).FIG.13Ashows a plan view of the sensor system1300, andFIG.13Bshows an elevation of the sensor system1300. By way of example, and as shown inFIG.13A, the sensor system1300may include first and second proximity sensors1302,1304. By way of example, the first proximity sensor1302may be a pressure sensor, a capacitive sensor, an optical sensor, or another type of proximity sensor. Also by way of example, the second proximity sensor1304may be a capacitive sensor, an optical sensor, or another type of proximity sensor. The first proximity sensor1302may be configured to detect an object within a first range of proximities to the back or back cover of the housing, such as a range of proximities that is closer to the second back cover portion214-2. 
The second proximity sensor1304may be configured to detect an object within a second range of proximities to the back or back cover of the housing, such as a range of proximities that extends farther from the second back cover portion214-2than the first range of proximities. The first and second ranges of proximities may be overlapping or non-overlapping (e.g., adjacent). In some embodiments, the first and second proximity sensors1302,1304may be connected to circuitry1306(e.g., a processor (e.g., a general purpose processor programmed by suitable machine-readable instructions or software) and/or other circuitry, which in some cases may include the processor described with reference toFIG.1or2A-2B) that includes, or is configured to operate as, a proximity sensor management circuit. The proximity sensor management circuit may be configured to activate the first proximity sensor1302repeatedly or continually over a period of time, to generate a series of measurements indicating whether an object (e.g., a wrist of a user) is within the first range of proximities. By default, the proximity sensor management circuit may maintain the second proximity sensor1304in an inactive state. The proximity sensor management circuit may selectively activate the second proximity sensor1304, during the period of time in which the first proximity sensor1302is active, when the series of measurements generated by the first proximity sensor1302satisfy a set of one or more conditions. Similarly, the proximity sensor management circuit may selectively deactivate the second proximity sensor1304, during the period of time in which the first proximity sensor1302is active, when the series of measurements generated by the first proximity sensor1302satisfy a second set of one or more conditions. Selective activation/deactivation of the second proximity sensor1304may be useful, for example, when the second proximity sensor1304consumes more power when activated (or in use) than the first proximity sensor1302consumes when activated (or in use). In some cases, the second proximity sensor1304may consume more power, at least in part, because it has a higher sample rate (e.g., acquires more measurements) than the first proximity sensor1302. In some cases, the first proximity sensor1302may be a lower cost and/or less accurate proximity sensor than the second proximity sensor1304. In some embodiments, the set of one or more conditions that need to be satisfied for the second proximity sensor1304to be activated may include a measurement, in the series of measurements, that indicates an object (e.g., a wrist) is outside the first range of proximities. In some embodiments, the set of one or more conditions may include a number of measurements, in the series of measurements generated by the first proximity sensor1302, that indicate the object is outside the first range of proximities. The number of measurements may exceed a threshold number greater than one. In some embodiments, the set of one or more conditions may include a change in value in the series of measurements, which change in value exceeds a threshold change. In some embodiments, the set of one or more conditions may include a rate of change in value in the series of measurements, which rate of change in value exceeds a threshold rate of change. A change in value that exceeds a threshold, or a rate of change in value that exceeds a threshold, may indicate, for example, that the object is moving toward or out of the usable range of the first proximity sensor1302. 
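Purely as an illustration, the activation conditions enumerated above might be evaluated along the following lines. The sketch treats the alternative embodiments as a disjunction, and the threshold values and names are assumptions rather than values from this description.

```python
def should_activate_second_sensor(series, in_first_range,
                                  count_threshold=3,
                                  change_threshold=0.2,
                                  rate_threshold=0.1) -> bool:
    """series: chronological measurements from the first proximity sensor;
    in_first_range: predicate indicating a measurement shows the object
    within the first range of proximities."""
    out_of_range = [not in_first_range(m) for m in series]
    if out_of_range and out_of_range[-1]:        # latest measurement out of range
        return True
    if sum(out_of_range) > count_threshold:      # count of out-of-range measurements
        return True
    if len(series) >= 2:
        if abs(series[-1] - series[0]) > change_threshold:  # change in value
            return True
        if abs(series[-1] - series[-2]) > rate_threshold:   # rate of change in value
            return True
    return False
```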
Similarly to the set of one or more conditions that need to be satisfied to activate the second proximity sensor1304, the second set of one or more conditions, that need to be satisfied for the second proximity sensor1304to be deactivated, may include a particular measurement, number of measurements, change in value of measurements, or rate of change in value of measurements. In some embodiments, the proximity sensor management circuit may be configured to activate both the first and second proximity sensors1302,1304repeatedly or continually over a period of time, to generate first and second respective series of measurements indicating whether an object (e.g., a wrist of a user) is within the first range of proximities. In some cases the first range of proximities may be a range that requires contact and/or near contact between a device (e.g., the second back cover portion214-2) and the object (e.g., a user's wrist). In these cases, a comparison of the measurements obtained from the first and second proximity sensors1302,1304(e.g., a ratio or difference of the measurements), or a comparison of the proximities indicated by the measurements (e.g., a ratio or difference of indicated proximities) may provide an additional check to confirm whether the device is, in fact, within the first proximity range (e.g., that the device and object are in contact). For example, the measurements obtained from the different proximity sensors1302,1304may approach a common asymptote within the first proximity range (e.g., when the object is in contact with the device), such that a ratio of the measurements is very high (e.g., near 1.0) when both measurements indicate that the object is within the first proximity range. However, the measurements may be fairly different, and their ratio may be significantly less than 1.0, when the measurements indicate that the object is outside the first proximity range. In some cases, both proximity sensors1302,1304may be activated in parallel, within the first proximity range (or regardless of whether an object is within the first proximity range) when a processor or application needs to know whether an object is in contact with a device for purposes of acquiring valid sensor measurements. Power savings may be achieved in these contexts by deactivating the sensor that requires contact between the device and the object until the first and second proximity sensors1302,1304indicate individually and in combination that the object is in contact with the device. As shown in the exploded view ofFIG.13B, the first and second proximity sensors1302,1304may be attached to an interior surface of the second back cover portion214-2using an adhesive1310. The proximity sensors1302,1304may be attached to the interior surface of the second back cover portion214-2apart from other components of a device housing. In some embodiments, the proximity sensors1302,1304, or components thereof, may be attached directly to the interior surface, or one or more modules including the proximity sensors1302,1304may be attached directly to the interior surface. Alternatively, the proximity sensors1302,1304may be attached to a substrate or module that is attached directly to the interior surface of the second back cover portion214-2. The second back cover portion214-2may similarly be attached to the first back cover portion214-1using an adhesive1312. The adhesives1310,1312may be the same or different. 
The adhesive1312may in some cases be a ring of adhesive disposed around the perimeter of the second back cover portion214-2. The first and second proximity sensors1302,1304may be electrically connected to the circuitry1306(e.g., to an integrated circuit (IC) or printed circuit board (PCB)). In some cases, the first and second emitters302,304and/or photodetector306may be electrically connected to the circuitry1306via a set of fly wires1314and/or a flex circuit1316. In some embodiments, a visibly opaque ink may be applied to the interior surface of the second back cover portion214-2, in at least a region or regions disposed between the first and second proximity sensors1302,1304, on one side, and the second back cover portion214-2on the other side. FIGS.14A-14Cshow an example of a device1400(e.g., an electronic watch or smart watch) having a housing1402, in which a back1404or back cover of the housing1402is positioned against or at varying distances from an object1406(e.g., a user's wrist). The device1400may be an example of the devices described with reference toFIG.1,2A-2B, or4A-4B.FIG.14Ashows the back1404of the housing1402positioned against or close to the object1406;FIG.14Bshows the back1404of the housing1402positioned farther away from the object1406than what is shown inFIG.14A; andFIG.14Cshows the back1404of the housing1402positioned farther away from the object1406than what is shown inFIG.14B. When the device1400is a wearable device, the position of the device1400inFIG.14Amay be consistent with the device1400being worn and positioned against a user's skin (e.g., a wrist). A wearable device, when worn, will typically spend most of its time in the position shown inFIG.14A. The position of the device1400inFIG.14Bmay be consistent with the device1400being somewhat loosely worn, such that it may occasionally tilt with respect to, or separate from, the user's skin. The position of the device1400inFIG.14Bmay also be consistent with the device1400being temporarily dislodged from the user's skin due to a shock or the user repositioning the device1400. The position of the device1400inFIG.14Cmay be consistent with the device1400being removed, or consistent with the device1400being more loosely worn than inFIG.14A or14B, such that the proximity of the device1400to the user's skin may only be detected using, for example, the second proximity sensor described with reference toFIG.13. For the description contained in the next few paragraphs, it will be assumed that the device1400includes the sensor system described with reference toFIG.13and, particularly, the first and second proximity sensors1302,1304and circuitry1306configured to operate as a proximity sensor management circuit. In some embodiments, the proximity sensor management circuit may be configured to determine whether the object1406(e.g., skin, or a user's wrist) is within a first range of proximities1408, using the first proximity sensor1302, while maintaining the second proximity sensor1304in an inactive state. The object1406is within the first range of proximities1408inFIG.14A. In some cases, the proximity sensor management circuit may determine, using the first proximity sensor1302, that the object1406has moved outside the first range of proximities1408, as shown inFIG.14B or14C. 
If the object1406moves outside the first range of proximities1408for less than a predetermined period of time (e.g., less than a short period of time), or for fewer than a threshold number of times within a predetermined evaluation period, the proximity sensor management circuit may continue to maintain the second proximity sensor in the inactive state. This state of operation is represented, for example, by one or more movements of the device1400between the state shown inFIG.14Aand the state shown inFIG.14B. After the device1400moves outside the first range of proximities1408, or after the device1400moves outside the first range of proximities1408for more than the predetermined period of time and/or more than the threshold number of times, the proximity sensor management circuit may transition the second proximity sensor1304from the inactive state to an active state and scan for the object within a second range of proximities1410using the second proximity sensor1304. In some embodiments, the proximity sensor management circuit may also scan for the object1406within the first range of proximities1408, using the first proximity sensor1302, while the second proximity sensor1304is used to scan for the object within the second range of proximities1410. In some embodiments, the proximity sensor management circuit may transition the first proximity sensor1302to an inactive state after the second proximity sensor1304is activated. The proximity sensor management circuit may be further configured to determine, while the second proximity sensor1304is active, that the object1406is outside the second range of proximities1410. After determining the object1406is outside the second range of proximities1410, the proximity sensor management circuit may transition the second proximity sensor1304from the active state to the inactive state. The proximity sensor management circuit may also or alternatively be configured to determine, while the second proximity sensor1304is active, that the object is within the first range of proximities1408. After determining the object1406is within the first range of proximities1408, the proximity sensor management circuit may transition the second proximity sensor1304from the active state to the inactive state. In some cases, the first and second ranges of proximities1408,1410may both be consistent with the device1400being worn. In these cases, the circuitry1306(e.g., a processor) may be configured to distinguish between proximity ranges of the back1404of the housing1402to the object1406using outputs of the first proximity sensor1302and the second proximity sensor1304. For example, when the output of the first proximity sensor1302indicates a detection of the object1406within the first range of proximities1408, the processor may be configured to generate a first indication that the back1404of the housing1402is in close proximity to the object1406; and when the output of the second proximity sensor1304indicates a detection of the object1406within the second range of proximities1410while the output of the first proximity sensor1302indicates no detection of the object1406within the first range of proximities1408, the processor may be configured to generate a second indication that the back1404of the housing1402is farther from the object1406than the close proximity.
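A minimal sketch of these state transitions, assuming a hypothetical sensor interface with an in_range() method and illustrative values for the grace period and excursion counts, might look as follows:

```python
import time

class ProximitySensorManager:
    """Sketch of the described activate/deactivate behavior. The sensors are
    assumed to expose an in_range() -> bool method; the grace period and
    excursion limits are illustrative assumptions."""

    def __init__(self, first_sensor, second_sensor,
                 grace_period_s=2.0, max_excursions=3, eval_period_s=30.0):
        self.first = first_sensor
        self.second = second_sensor
        self.grace_period_s = grace_period_s
        self.max_excursions = max_excursions
        self.eval_period_s = eval_period_s
        self.second_active = False
        self._outside_since = None
        self._excursions = []  # start times of excursions outside range 1

    def poll(self):
        now = time.monotonic()
        if self.first.in_range():
            # Object back within the first range: the second (higher power)
            # sensor is not needed.
            self._outside_since = None
            self.second_active = False
            return
        if self._outside_since is None:
            self._outside_since = now
            self._excursions.append(now)
        # Forget excursions older than the evaluation period.
        self._excursions = [t for t in self._excursions
                            if now - t <= self.eval_period_s]
        if not self.second_active:
            # Brief or infrequent excursions keep the second sensor inactive.
            if (now - self._outside_since > self.grace_period_s
                    or len(self._excursions) > self.max_excursions):
                self.second_active = True
        elif not self.second.in_range():
            # Outside the second range as well: deactivate to save power.
            self.second_active = False
```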
In some cases, the first range of proximities1408may be consistent with the device1400likely being on a user (e.g., worn by the user, or on-wrist), and the second range of proximities1410may be consistent with the device1400being off a user (e.g., not worn by the user, or off-wrist). In these cases, the circuitry1306(e.g., a processor) may be configured to distinguish between whether the wearable device is likely on or off of a user (e.g., between a likely on state and a likely off state) using outputs of the first proximity sensor1302and the second proximity sensor1304. For example, when the output of the first proximity sensor1302indicates a detection of the object1406within the first range of proximities1408, the processor may be configured to indicate an existence of the likely on state; and when the output of the second proximity sensor1304indicates a detection of the object1406within the second range of proximities1410while the output of the first proximity sensor1302indicates no detection of the object1406within the first range of proximities1408, the processor may be configured to indicate an existence of the likely off state. FIGS.15A and15Bshow example conditions that may be used to trigger the activation of the second proximity sensor1304described with reference toFIGS.13A-14B. As shown inFIG.15A, the first proximity sensor1302described with reference toFIGS.13A-14Bmay have a response1500(e.g., an output) that varies with a proximity of an object to the sensor (e.g., a distance z between the object and the sensor). Typically, the sensor's response1500will have a maximum value (or peak value) at some distance z, and trail off on either side of the maximum value. Often, but not always, the trailing off of the sensor's response1500will be most significant at greater distances (e.g., greater values of z). Above some distance of z, the response1500of the first proximity sensor1302may become unreliable. For example, above some value of z, the response1500of the first proximity sensor1302may be less accurate or be indistinguishable from noise. The response1500may have a value1502(i.e., a threshold), below which the response1500is considered unreliable. The value1502, or a value of the response1500that is somewhat higher than the value1502, may define a boundary1504between first and second proximity ranges. When the output of the first proximity sensor1302drops below the value1502, or drops below the value1502for a threshold number of times, the proximity sensor management circuit described with reference toFIGS.13A-14Bmay transition the second proximity sensor1304from its inactive state to its active state. FIG.15Bshows another example response1510(or output) of the first proximity sensor1302described with reference toFIGS.13A-14B. Similarly to the response described with reference toFIG.15A, the response1510may vary with a proximity of an object to the sensor (e.g., a distance z between the object and the sensor). Typically, the sensor's response1510will have a maximum value (or peak value) at some distance z, and trail off on either side of the maximum value. Often, but not always, the trailing off of the sensor's response1510will be most significant at greater distances (e.g., greater values of z). As discussed with reference toFIGS.13A-13B, and in some embodiments, the set of one or more conditions that need to be satisfied for the second proximity sensor1304to be activated may include a change in value in a series of measurements, which change in value exceeds a threshold change. 
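The described mapping of the two sensor outputs to wear-state indications can be summarized in a small helper; the returned labels are illustrative only:

```python
def wear_state(first_detects: bool, second_detects: bool) -> str:
    """Map the outputs of the two proximity sensors to the indications
    described above; the state names are assumptions of this sketch."""
    if first_detects:
        return "likely on"    # object within the first range of proximities
    if second_detects:
        return "likely off"   # detected only within the second range
    return "off"              # outside both ranges
```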
In some embodiments, the set of one or more conditions may include a rate of change in value in the series of measurements, which rate of change in value exceeds a threshold rate of change. Examples of such changes are represented inFIG.15Bby a threshold change1512in response values (i.e., measurements) and a threshold rate of change1514in response values. The threshold change1512may occur at various points along the response curve, but is unlikely to occur within a window of proximities about the maximum value of the response1510(e.g., because the response1510does not change this much near the maximum value). The threshold rate of change1514(or threshold slope of the response1510) can likewise be selected so that the threshold rate of change1514is unlikely to be met within a window of proximities about the maximum value of the response1510. In some cases, the conditions described with reference toFIG.15Amay be more suitable for activating the second proximity sensor1304when the first proximity sensor1302has a response with sharper roll off about the response's maximum value; and the conditions described with reference toFIG.15Bmay be more suitable for activating the second proximity sensor1304when the first proximity sensor1302has a response with slower roll off about the response's maximum value. In some embodiments, the conditions described with reference toFIG.15A or15Bmay be subjected to hysteresis (e.g., a time-varying average), to prevent activation of the second proximity sensor1304under conditions such as those described with reference toFIGS.14A and14B, where a device is temporarily struck, moved, or jostled, leading to intermittent or short-term measurements suggesting a back or back cover of a housing has moved farther away from an object (e.g., a user's wrist). FIGS.16A and16Bshow example relationships between the measurements of the proximity sensors described with reference toFIGS.13A-13B. As shown inFIG.16A, and by way of example, a first proximity sensor may have a response1602that tapers off more quickly as an object moves farther away from the first proximity sensor, and a second proximity sensor may have a response1604that tapers off more slowly as an object moves farther away from the second proximity sensor. The first proximity sensor may therefore be useful to detect when an object is in close proximity1606(e.g., within a first range of proximities) and the second proximity sensor may be useful to detect when an object is in further proximity1608(e.g., within a second range of proximities that is more distant than the first range of proximities), or within the close or further proximity1606,1608. However, if the second proximity sensor is able to detect an object within the further proximity only by consuming more power than the first proximity sensor, the second proximity sensor may be selectively enabled, as described with reference toFIG.15A or15B, thereby conserving power. A boundary1610between the close and further proximity1606,1608may be defined as described with reference toFIG.15A or15B. As shown inFIG.16A, the first and second proximity sensors may have responses1600that approach a common asymptote within a first range of proximities, such as when an object is in contact with (or in near contact with) a device.
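The hysteresis mentioned above (e.g., a time-varying average) can be approximated with a simple moving average; in this sketch the window size and the sample values are assumptions chosen to show how a brief jostle is attenuated before any threshold test is applied:

```python
def smoothed(series, window=5):
    """Moving average as a simple stand-in for the time-varying averaging
    (hysteresis) mentioned above; the window size is an assumption."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A brief jostle produces a short dip in the raw measurements; after
# smoothing, the dip is attenuated before any threshold comparison.
raw = [1.0, 1.0, 1.0, 0.2, 1.0, 1.0, 1.0]
print(max(raw) - min(raw))              # 0.8 in the raw series
s = smoothed(raw, window=3)
print(round(max(s) - min(s), 2))        # about 0.27 after smoothing
```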
A comparison of the measurements of the proximity sensors (e.g., a ratio or difference) may therefore provide an additional check that can be used to confirm whether the object is, in fact, within the first proximity range (e.g., that the object is in contact with the device). An example ratio1620of measurements of first and second proximity sensors is shown inFIG.16B. When the ratio1620is closer to 1.0, or above a threshold1622, the object may be considered in contact with the device (i.e., in a contact zone1624). When the ratio1620drops below the threshold1622, the object may be considered not in contact with the device (i.e., within a further proximity range1626). FIGS.17A-17Cshow examples of proximity sensors that may be used as the first proximity sensor in the systems and devices described with reference toFIGS.13A-14B. In some cases, the proximity sensors shown inFIGS.17B and17Cmay also or alternatively be used as the second proximity sensor in the systems and devices described with reference toFIGS.13A-14B. FIG.17Ashows an example of a pressure sensor1700(or load cell). The pressure sensor1700may be positioned between a back cover1702and frame1704of a housing1706, and in some embodiments may include a force-sensitive gasket including first and second electrodes that move closer to one another and generate a series of measurements (e.g., capacitive-based pressure measurements) as the back cover1702is moved toward the frame1704. For example, when a user fastens a device including the housing1706to their wrist using a band (e.g., a wrist band), their wrist may apply pressure to the back cover1702and press it toward the frame1704. In alternative embodiments, the pressure sensor1700may include a force-sensitive gasket having an air-filled pocket, fluid-filled pocket, or the like, and the pressure sensor1700may generate a series of pressure measurements indicating the pressure of the air or fluid within the pocket. In other alternative embodiments, the pressure sensor1700may be moved to a cavity within the frame1704and/or interior to a device that includes the housing1706. For example, an air or fluid-filled cavity may be positioned interior to the device, and pressure on the back cover1702may impart changes to the pressure of the air or fluid within the cavity. The pressure sensor1700described with reference toFIG.17Amay be considered a contact sensor, because an object needs to be in contact with the sensor before the sensor can detect a presence (or proximity) of the object. The range of object proximities that is detectable by a contact sensor corresponds to a range of movement of the contact sensor. Other types of contact sensor that may be used in place of, or in combination with, the pressure sensor1700include resistive sensors, bending beam sensors, and so on. FIG.17Bshows an example of a capacitive sensor1710. The capacitive sensor1710may be a self-capacitance sensor (having at least one sense electrode) or a mutual-capacitance sensor (having at least one sense electrode and at least one drive electrode). By way of example, a self-capacitance sensor is shown. In contrast to the pressure sensor described with reference toFIG.17A, the capacitive sensor1710may detect an object (e.g., a user's wrist) before the object contacts the capacitive sensor1710. In some embodiments, the capacitive sensor1710may generate a series of measurements (e.g., capacitance measurements) as it approaches the back cover1702and possibly comes into contact with the back cover1702. 
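A sketch of this ratio check, with an assumed numeric threshold standing in for the threshold1622, might be:

```python
def in_contact(first_measurement, second_measurement, ratio_threshold=0.9):
    """Ratio check sketched from the description of FIGS. 16A-16B; the
    threshold value 0.9 is an illustrative stand-in for the threshold1622.
    Near contact the two responses approach a common asymptote, so their
    ratio approaches 1.0."""
    hi = max(first_measurement, second_measurement)
    if hi == 0:
        return False
    return min(first_measurement, second_measurement) / hi >= ratio_threshold
```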
In some embodiments, the capacitive sensor1710may detect a user within a range of proximities extending from about 0-5 mm from the back cover1702. FIG.17Cshows an example of an optical sensor1720. The optical sensor1720may be disposed within the housing1706, and in some cases may be attached to an interior surface of the back cover1702(or to a module that is attached to the interior surface of the back cover1702). In some embodiments, one or more optic elements (e.g., a lens, lenses, LCF(s), polarizer(s), light guide(s), electromagnetic radiation waveguide(s), or other passive or active component) may be positioned between the optical sensor1720and the back cover1702, or formed into the back cover1702. In some embodiments, the optical sensor1720may generate a series of measurements (e.g., optical measurements) as it approaches the back cover1702and possibly comes into contact with the back cover1702. The optical sensor1720may in some cases have a greater proximity detection range than the pressure sensor or capacitive sensor described with reference toFIG.17A or17B. FIG.18shows an example plan view of a skin-facing sensor (or sensor system1800) that may be included in the device described with reference toFIG.1,2A-2B,4A-4B,13A-13B, or14A-14C. By way of example, the sensor system1800is shown to be positioned under a back or back cover1806of a housing (e.g., under the second back cover portion described with reference toFIG.2B). The sensor system1800includes multiple groups1802,1804of proximity sensors distributed about different locations under the back or back cover1806. In some embodiments, a first group1802of proximity sensors may include a first proximity sensor1808and a second proximity sensor1810, and a second group1804of proximity sensors may include a third proximity sensor1812and a fourth proximity sensor1814. The first and second proximity sensors1808,1810may be respectively configured similarly to the first and second proximity sensors described with reference toFIGS.13A-13B, but for their positions with respect to the exterior surface of the back cover1806. The third and fourth proximity sensors1812,1814may also be respectively configured similarly to the first and second proximity sensors described with reference toFIGS.13A-13B, but for their positions with respect to the exterior surface of the back cover1806. In some embodiments, the proximity sensors1808,1810,1812,1814may be connected to circuitry1816(e.g., a processor and/or other circuitry, which in some cases may include the processor described with reference toFIG.1or2A-2B, or the circuitry described with reference toFIG.3A-3B,4A-4B,13A-13B, or14A-14C) that includes, or is configured to operate as, a proximity sensor management circuit. The proximity sensor management circuit may be configured to activate each of the first and third proximity sensors1808,1812repeatedly or continually over a period of time, to generate first and third series of measurements indicating whether an object (e.g., a wrist of a user) is within the first range of proximities. By default, the proximity sensor management circuit may maintain the second and fourth proximity sensors1810,1814in an inactive state. The proximity sensor management circuit may selectively activate the second proximity sensor1810, during the period of time in which the first proximity sensor1808is active, when the series of measurements generated by the first proximity sensor1808satisfy a set of one or more conditions. 
Similarly, the proximity sensor management circuit may selectively deactivate the second proximity sensor1810, during the period of time in which the first proximity sensor1808is active, when the series of measurements generated by the first proximity sensor1808or the second proximity sensor1810satisfy a second or third set of one or more conditions. The proximity sensor management circuit may selectively activate the fourth proximity sensor1814, during the period of time in which the third proximity sensor1812is active, when the series of measurements generated by the third proximity sensor1812satisfy a set of one or more conditions. Similarly, the proximity sensor management circuit may selectively deactivate the fourth proximity sensor1814, during the period of time in which the third proximity sensor1812is active, when the series of measurements generated by the third proximity sensor1812or the fourth proximity sensor1814satisfy the second or third set of one or more conditions. Selective activation/deactivation of the second and fourth proximity sensors1810,1814may be useful, for example, when the second and fourth proximity sensors1810,1814consume more power when activated (or in use) than the first and third proximity sensors1808,1812consume when activated (or in use). In some cases, the circuitry1816(e.g., a processor of the circuitry1816) may be configured to indicate a tilt of the device (e.g., a watch body of an electronic watch) that includes the proximity sensors1808,1810,1812,1814. The tilt may in some cases be determined with respect to an object (e.g., a wrist to which the device is attached using a wrist band). In some embodiments, the sensor system1800may further include a set of electromagnetic radiation emitters and one or more photodetectors that are usable to determine whether the back or back cover1806is likely proximate to human tissue. For example, the sensor system1800may include the two groups of sensing components described with reference toFIG.7CorFIGS.9A-9B(e.g., a first group742including a first emitter744, a second emitter746, and a first photodetector748; and a second group750including a third emitter752, a fourth emitter754, and a second photodetector756). In some cases, depending on whether the groups1802,1804of proximity sensors indicate the back or back cover1806is positioned squarely above an object or tilted with respect to the object, a matter differentiation circuit provided by the circuitry1816may operate or not operate, may determine which of the first photodetector748and/or the second photodetector756is likely outputting valid signals, or may interpret the signals output by the first photodetector748and/or the second photodetector756accordingly. In some cases, the first and third proximity sensors1808,1812may not be activated until one or both of the groups742,750of sensing components indicate the back or back cover1806is likely proximate human tissue. The groups1802,1804,742,750of sensing components may also be used cooperatively, or separately, in other ways. FIG.19shows a sample electrical block diagram of an electronic device1900, which electronic device may in some cases be implemented as any of the devices described with reference toFIG.1,2A-2B,4A-4B,13A-13B, or14A-14C.
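The passage does not specify how the tilt indication is computed; one plausible reading, sketched here under the assumption that each group of proximity sensors reports a proximity value, is to compare the proximities seen by the two groups:

```python
def tilt_indication(group1_proximity_mm, group2_proximity_mm,
                    flat_tolerance_mm=0.5):
    """Hypothetical tilt indication: if the two groups of proximity sensors
    report similar proximities, the back cover is taken to sit squarely
    above the object; otherwise it is tilted. The tolerance is an assumed
    value, not one from the embodiments."""
    difference = group1_proximity_mm - group2_proximity_mm
    if abs(difference) <= flat_tolerance_mm:
        return "square"
    return "tilted toward group 1" if difference < 0 else "tilted toward group 2"
```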
The electronic device1900may include an electronic display1902(e.g., a light-emitting display), a processor1904, a power source1906, a memory1908or storage device, a sensor system1910, or an input/output (I/O) mechanism1912(e.g., an input/output device, input/output port, or haptic input/output interface). The processor1904may control some or all of the operations of the electronic device1900. The processor1904may communicate, either directly or indirectly, with some or all of the other components of the electronic device1900. For example, a system bus or other communication mechanism1914can provide communication between the electronic display1902, the processor1904, the power source1906, the memory1908, the sensor system1910, and the I/O mechanism1912. The processor1904may be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions, whether such data or instructions is in the form of software or firmware or otherwise encoded. For example, the processor1904may include a microprocessor, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a controller, or a combination of such devices. As described herein, the term “processor” is meant to encompass a single processor or processing unit, multiple processors, multiple processing units, or other suitably configured computing element or elements. In some cases, the processor1904may provide part or all of the circuitry described with reference to any ofFIG.1-4B,7A-14C, or17A-18. It should be noted that the components of the electronic device1900can be controlled by multiple processors. For example, select components of the electronic device1900(e.g., the sensor system1910) may be controlled by a first processor and other components of the electronic device1900(e.g., the electronic display1902) may be controlled by a second processor, where the first and second processors may or may not be in communication with each other. The power source1906can be implemented with any device capable of providing energy to the electronic device1900. For example, the power source1906may include one or more batteries or rechargeable batteries. Additionally or alternatively, the power source1906may include a power connector or power cord that connects the electronic device1900to another power source, such as a wall outlet. The memory1908may store electronic data that can be used by the electronic device1900. For example, the memory1908may store electrical data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing signals, control signals, and data structures or databases. The memory1908may include any type of memory. By way of example only, the memory1908may include random access memory, read-only memory, Flash memory, removable memory, other types of storage elements, or combinations of such memory types. The electronic device1900may also include one or more sensor systems1910positioned almost anywhere on the electronic device1900. In some cases, the sensor systems1910may include one or more electromagnetic radiation emitters and detectors, and/or one or more proximity sensors, positioned as described with reference to any ofFIG.1-4B,7A-14C, or17A-18. 
The sensor system(s)1910may be configured to sense one or more types of parameters, such as, but not limited to, vibration; light; touch; force; heat; movement; relative motion; biometric data (e.g., biological parameters) of a user; air quality; proximity; position; connectedness; matter type; and so on. By way of example, the sensor system(s)1910may include one or more of (or multiple of) a heat sensor, a position sensor, a proximity sensor, a light or optical sensor (e.g., an electromagnetic radiation emitter and/or detector), an accelerometer, a pressure transducer, a gyroscope, a magnetometer, a health monitoring sensor, an air quality sensor, and so on. Additionally, the one or more sensor systems1910may utilize any suitable sensing technology, including, but not limited to, interferometric, magnetic, pressure, capacitive, ultrasonic, resistive, optical, acoustic, piezoelectric, or thermal technologies. The I/O mechanism1912may transmit data to or receive data from a user or another electronic device. The I/O mechanism1912may include the electronic display1902, a touch sensing input surface, a crown, one or more buttons (e.g., a graphical user interface “home” button), one or more cameras (including an under-display camera), one or more microphones or speakers, one or more ports such as a microphone port, and/or a keyboard. Additionally or alternatively, the I/O mechanism1912may transmit electronic signals via a communications interface, such as a wireless, wired, and/or optical communications interface. Examples of wireless and wired communications interfaces include, but are not limited to, cellular and Wi-Fi communications interfaces. The foregoing description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art, after reading this description, that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art, after reading this description, that many modifications and variations are possible in view of the above teachings. As described above, one aspect of the present technology is the gathering and use of data available from various sources, including biometric data (e.g., the presence and/or proximity of a user to a device). The present disclosure contemplates that, in some instances, this gathered data may include personal information data that uniquely identifies or can be used to identify, locate, or contact a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information. The present disclosure recognizes that such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to activate or deactivate various functions of the user's device, or gather performance metrics for the user's device or the user.
Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals. The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country. Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to provide mood-associated data for targeted content delivery services. In yet another example, users can select to limit the length of time mood-associated data is maintained or entirely prohibit the development of a baseline mood profile. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app. 
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods. Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information.
11857299
DETAILED DESCRIPTION The following embodiments are exemplifying. Although the specification may refer to “an”, “one”, or “some” embodiment(s) in several locations of the text, this does not necessarily mean that each reference is made to the same embodiment(s), or that a particular feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments. FIG.1illustrates a system to which embodiments of the invention may be applied. Said system may be used to monitor physical training, activity, and/or inactivity of a user100. The embodiments are thus not necessarily limited to monitoring and/or measuring physical training of the user100; said system may also be used to monitor physical activity and/or inactivity during the day and/or night (e.g. 24 hours a day). Such monitoring may be possible by using one or more devices described with respect toFIG.1and in the embodiments below. Referring toFIG.1, the user100may wear a wearable device, such as a wrist device102, a head sensor unit104C, a torso sensor104B, and/or a leg sensor104A. In another example, the wearable device may be, and/or be comprised in, glasses. In another example, the wearable device is comprised in, or configured to be coupled with, a garment or garments (or apparel). Examples of such garments may include bra(s), swimming apparel, such as a swimming suit or cap, and glove(s). The garment or apparel may be worn by the user. In some embodiments, the wearable device is integrated as a part of the garment or apparel. For reasons of simplicity, let us now describe the wearable device as being the wrist device102. However, embodiments described in relation to the wrist device102may be utilized by other types of wearable devices, i.e. the embodiments are not necessarily limited to the wrist device or devices102. The wrist device102may be, for example, a smart watch, a smart device, a sports watch, and/or an activity tracking apparatus (e.g. bracelet, arm band, wrist band, mobile phone). The wrist device102may be used to monitor physical activity of the user100by using data from internal sensor(s) comprised in the wrist device102, data from external sensor device(s)104A-C, and/or data from external services (e.g. training database112). It may be possible to receive physical-activity-related information from a network110, as the network may comprise, for example, physical-activity-related information of the user100and/or some other user(s). Thus, the wrist device102may be used to monitor physical-activity-related information of the user100and/or the other user(s). Naturally, one or more of the external sensor device(s)104A-C may be worn by the other user(s), and thus information received from said one or more sensor device(s)104A-C may be monitored from the wrist device102by the user100. The network110may comprise the training database112and/or the server114. The server114may be configured to enable data transfer between the training database112and some external device, such as the wearable device. Hence, the database112may be used to store cardiac activity measurement data, for example. It needs to be understood that the wrist device102may be used to monitor physical activity of the user100and/or be used as a smart watch configured to enable communication with, for example, a portable electronic device106, the network110, and/or some other network, such as a cellular network. Thus, for example, the wrist device102may be connected (i.e.
wirelessly connected) to the portable electronic device106, such as a mobile phone, smart phone, tablet and/or computer to name a few. This may enable data transfer between the wrist device102and the portable electronic device106. The data transfer may be based on Bluetooth protocol, for example. Other wireless communication methods, such as Wireless Local Area Network (WLAN) and/or Near Field Communication (NFC), may also be used. The wrist device102may comprise a heart activity sensor configured to determine cardiac activity of the user100, such as heart rate, heart beat interval (HBI) and/or heart rate variability (HRV), for example. The heart activity sensor may comprise an optical cardiac activity sensor unit configured to measure the cardiac activity of the user100by using optical measurements. An example of such a sensor is a PPG (photoplethysmography) sensor.FIG.2illustrates a sensor head of a PPG sensor, comprising multiple light emitting diodes (LEDs)210,212and a photo sensor such as a photodiode214. The optical measurements may comprise the LEDs210,212emitting light200,202towards body tissue208of the user100and measuring the bounced, reflected, diffracted, scattered and/or emitted light204from the body tissue of the user100by using the photodiode214. The emitted light is modulated when travelling through veins of the user100and the modulation may be detected by the optical cardiac activity sensor unit. By using detected optical measurement data, the wrist device102may determine cardiac activity of the user100, such as the heart rate. The optical cardiac activity sensor unit may obtain via the measurement a measurement signal characterizing or carrying the cardiac activity information on the user. As understood, similar cardiac activity circuitry may be comprised in the other wearable devices described herein. It also needs to be noted that the cardiac activity circuitry may produce raw measurement data of the cardiac activity and/or it may process the measurement data into cardiac activity information, such as heart rate for example. The sensor(s) in the cardiac activity circuitry may comprise data processing capabilities. Also, the wrist device102and/or some other wearable device may comprise a processing circuitry configured to obtain the cardiac activity measurement data from the cardiac activity circuitry and to process said data into cardiac activity information, such as a cardiac activity metric characterizing the cardiac activity of the user100. For example, the measurement data of the optical cardiac activity sensor unit may be used, by the processing circuitry, to determine heart rate, HRV and/or HBI of the user100. Further, the raw measurement data and/or processed information may be processed by the wrist device102or some other wearable device, and/or transmitted to an external device, such as the portable electronic device106. The wrist device102(or more broadly, the wearable device) may comprise other types of sensor(s). Such sensor(s) may include a Laser Doppler-based blood flow sensor, a magnetic blood flow sensor, an Electromechanical Film (EMFi) pulse sensor, a temperature sensor, a pressure sensor, an electrocardiogram (ECG) sensor, and/or a polarization blood flow sensor. Measuring cardiac activity of the user with the optical cardiac activity sensor unit (referred to simply as OHR) may be affected by motion artefacts. That is, motion artefacts may cause an effect on the measured cardiac activity signal.
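As a rough, heavily simplified illustration of how a heart rate could be derived from the modulated light signal, the following sketch detects local maxima in a series of PPG samples and averages the intervals between them; real devices use far more robust processing, and the function below is an assumption of this sketch rather than the circuitry described herein:

```python
def heart_rate_bpm(ppg_samples, sample_rate_hz):
    """Detect local maxima in the PPG series and average the intervals
    between them; returns None if fewer than two peaks are found."""
    peaks = [i for i in range(1, len(ppg_samples) - 1)
             if ppg_samples[i - 1] < ppg_samples[i] >= ppg_samples[i + 1]]
    if len(peaks) < 2:
        return None
    intervals = [(b - a) / sample_rate_hz for a, b in zip(peaks, peaks[1:])]
    mean_hbi = sum(intervals) / len(intervals)   # mean heart beat interval, s
    return 60.0 / mean_hbi
```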
The effect may cause the information carried by the signal to be erroneous and/or incomplete. Some embodiments described below provide a solution to reduce the effect of motion artefacts on a cardiac activity signal measured using the OHR. The solution may enable the users to receive even more accurate cardiac activity information to help them, for example, during physical training or to plan their future training sessions. FIG.3illustrates an embodiment of a wearable heart activity sensor device comprising: a substrate300of optically transparent material arranged to face a skin208of a user when the sensor device is worn by the user; at least one LED302arranged on the substrate300and arranged to emit light through the substrate300; at least one photo sensor304arranged on the substrate300as spatially separated from the at least one LED302and arranged to absorb light through the substrate, wherein the at least one LED and the at least one photo sensor are comprised in a PPG sensor of the heart activity sensor device; and an overmold308of thermoplastic material covering the at least one LED302, the at least one photo sensor304and a space between the at least one light emitting diode and the at least one photo sensor. The overmold308fills all the spaces between the components assembled on the substrate and, upon solidifying, provides a rigid structure and support for the electronic components. The thermoplastic nature of the overmold308also enables a very thin structure for protecting the components of the PPG sensor. In an embodiment where the overmold is of optically non-transparent thermoplastic material, the overmold also provides for an optical barrier between the LED(s) and the photo sensor(s) by covering the space(s) therebetween. The optical barrier reduces or eliminates a direct light path from the LED(s) to the photo sensor(s), thus improving the quality of measurements. Accordingly, the overmold may have three functions in a simple construction: the optical barrier, the rigid support for the electronics on the substrate300, and a cover for the electronics on the substrate300. Furthermore, assembling the optoelectronic components of the PPG sensor on the substrate ensures that they are inherently at the same distance from the skin. Moreover, different dimensions of the LED(s) and the photo sensors will not cause negative effects. These factors improve the quality of PPG measurements. In addition to the electronic components such as the LED(s) and the photo sensor(s) of the PPG sensor head, signal lines306coupling to the electronic components may be provided on the substrate300before overmolding. Let us next describe a method for manufacturing the wearable heart activity sensor device ofFIG.3. Referring toFIG.4, the method comprises obtaining400a substrate of optically transparent material; assembling406at least one LED of a PPG sensor on the substrate such that the at least one LED is arranged to emit light through the substrate; assembling406at least one photo sensor of the PPG sensor on the substrate as spatially separated from the at least one LED and such that the at least one photo sensor is arranged to absorb light through the substrate; and overmolding410the at least one light emitting diode, the at least one photo sensor and a space between the at least one LED and the at least one photo sensor with thermoplastic material. The manufacturing process may comprise additional steps also illustrated inFIG.4.
In an embodiment, patterns such as graphics, colours and/or decorations may be provided (402) on the substrate. The patterns may serve a decorative purpose or have a technical character such as serving as an indicator related to a function of the heart activity sensor. The patterns may be printed on the substrate by using inkjetting, pad printing, digital printing, vacuum metallization, coating, or screen printing. The signal lines may also be provided (404) on the substrate before the overmolding. The signal lines may be provided by using various techniques, e.g. copper lines, printing and/or application of conductive adhesive and/or conductive ink, flexible printed circuit board(s), or laser direct structuring. Combinations of such techniques may also be employed. For example, some of the signal lines may be printed on the substrate and further signal lines or connections between the signal lines may be added thereafter by applying drops of conductive ink and/or adhesive. As another example, components may be applied to the substrate comprising the signal lines, and the components may be coupled to appropriate signal lines by applying the drops of conductive ink and/or adhesive. In some embodiments, the substrate comprising the components may be laminated before the overmolding. In particular, when at least some of the components are relatively tall or otherwise susceptible to being displaced during the overmolding, the lamination prevents or reduces such displacement. In an embodiment, the substrate is a film or a foil. In an embodiment, the substrate is made of plastics or a polymer such as polycarbonate, polymethyl methacrylate, polyimide, or polyethylene terephthalate (PET). In another embodiment, the substrate is glass. In an embodiment, thickness of the substrate is 0.76 millimetres or less. In an embodiment, thickness of the substrate is 0.50 millimetres or less. In an embodiment, thickness of the substrate is 0.250 millimetres or less. In an embodiment, thickness of the substrate is 0.175 millimetres or less. In an embodiment, thickness of the substrate is 0.125 millimetres or less. Thinner substrates enable reduction in thickness of the assembly while thicker substrates provide for better support during the overmolding. In an embodiment, thickness of the substrate is between 0.76 and 0.25 millimetres. In an embodiment, thickness of the substrate is at least 0.25 millimetres to facilitate the overmolding. In an embodiment, the components are glued to the substrate in step406. One or several types of glues may be employed. For example, one glue may provide the structural attachment to the substrate while another, conductive glue is used to couple each component to one or more signal lines. In another embodiment, a single glue providing both the structural attachment and the electric coupling is used. Yet another glue may be employed to cover the components during the lamination step408. In an embodiment, the manufacturing process further comprises providing a hard coating on the side of the substrate that faces the skin208. The hard coating provides for mechanical protection. In an embodiment, the manufacturing method is performed by using in-mold labelling (IML) technology. In an embodiment, the manufacturing method is performed by using injection molding decoration (IMD) technology also called in-mold-decoration. In an embodiment, the manufacturing method is performed by using film insert molding (FIM) technology.
In an embodiment, the manufacturing method is performed by using injection molded structural electronics (IMSE®) technology. In an embodiment, the overmold of thermoplastic material may be thermoplastic polyurethane or another thermoplastic elastomer. In an embodiment, the manufacturing process ofFIG.4further comprises a step of thermoforming the substrate. Thermoforming may be used to shape the substrate to have a desired form. For example, the substrate may be curved by using the thermoforming. The thermoforming may be carried out before assembling the components in step406. In this manner, the thermoforming will not cause displacement of the components and disconnection of the components from the signal lines. The thermoforming may be carried out between steps400and402or between steps404and406, for example. In another embodiment, the thermoforming is performed after assembling the components, i.e. after step406. In an embodiment, the heart activity sensor device further comprises at least one skin measurement electrode arranged on the substrate on the side opposite the at least one LED and the at least one photo sensor, i.e. on the side facing the skin208.FIG.5illustrates such an embodiment. The skin measurement electrode500may be an electrode for an ECG sensor, bioimpedance sensor, or a skin temperature sensor. The skin measurement electrode may be assembled on the substrate in step406.FIG.5illustrates the assembly comprising the electronic components on the substrate from two viewpoints to illustrate the layers of the assembly and placement of the electronic components on the substrate. In the lower part, it can be seen that the skin measurement electrode(s)500may be arranged in the layout such that PPG measurements are not degraded. The optical barrier formed by the overmold308may still be provided between the LED(s) and the photo sensor(s). In an embodiment, the substrate300comprises at least one through hole502for a signal line504to the at least one skin measurement electrode500. The through hole may be formed before step404, and the signal line through the through hole may be formed in step404. The signal line504may substantially fill the through hole, or the signal line may be formed on the edges of the through hole from one side of the substrate to the other side via the through hole. Thereafter, the through hole502may be filled to make it waterproof. The waterproofness of the filling may be realized by mechanical pressure, by using a gasket or a similar element. In another embodiment, the hole may be filled with elastomer, adhesive, or similar material that fills the hole in a waterproof manner. In an embodiment, the substrate300comprises multiple through holes for signal lines to multiple skin electrodes, wherein one skin electrode may be a ground electrode and another skin electrode may be a measurement electrode. Multiple measurement electrodes and corresponding through holes may be provided, e.g. for measuring bioimpedance. In the embodiments where the substrate comprises the PPG measurement head and the electrodes, the wearable device may be configured to carry out various measurements. The various measurements may be carried out in different operational modes. One measurement mode may employ only the electrode(s), and the PPG measurement head may be disabled. In such a measurement mode, the electrodes may be used for measuring bioimpedance and/or electrocardiogram.
In another measurement mode, the electrode(s) may be disabled, and the PPG measurement head may be enabled to measure heart activity and/or oxygen saturation. In yet another measurement mode, both the electrode(s) and the PPG measurement head may be enabled to perform measurements, e.g. to measure a pulse transit time or blood pressure. In this measurement mode, both the PPG measurements and the electrodes may be used to measure the pulse transit time or blood pressure. For example, the electrodes may be used to compute an electrocardiogram that indicates a timing of a blood pulse at the heart, and the PPG measurement head may be used to detect the blood pulse at another location in the user's body, e.g. the wrist. A time difference between the electrocardiogram detection of the blood pulse and the PPG detection of the blood pulse represents the pulse transit time that may be used for computing the blood pressure, for example. In an embodiment, the LED(s) and the photo sensor(s) of the PPG sensor head are provided in a strap of a wrist-worn heart activity sensor device. The LED(s) and the photo sensor(s) of the PPG sensor may be provided at a location of a casing housing other electronics of the wrist device, e.g. a display screen. In another embodiment, the LED(s) and the photo sensor(s) of the PPG sensor may be provided at a location offset from the location of the casing, e.g. the PPG sensor head may be provided such that it will be disposed at the opposite side of the wrist from the display screen, when the user wears the wrist device. In an embodiment, the wearable heart activity sensor device further comprises at least one processor external to the overmold of thermoplastic material; and signal lines arranged on the substrate inside and outside the overmold to couple the at least one processor to the at least one light emitting diode and the at least one photo sensor.FIG.6illustrates such an embodiment. As already illustrated inFIGS.3and5, the overmold308may cover the signal lines306only partially and leave a part of the signal lines exposed such that the signal lines306may be coupled to electronics external to the overmold. Referring toFIG.6, the processor(s)602may be provided in a casing600assembled on the overmold. The casing may further comprise the display screen610. Signal lines disposed on the substrate may be coupled to the processor(s)602by further signal lines604,606that may comprise cables and, optionally, suitable connector(s). In an embodiment, the signal lines604,606are provided through one or more through holes in the overmold and through the bottom of the casing600such that the casing protects the signal lines604,606. In the embodiment ofFIG.6, the processor and the display screen are external to the overmold. In the embodiment ofFIG.7, the processor and the display are comprised in the overmold. The processor and, optionally, other integrated circuits may be assembled on the substrate in step406, and the overmold308may cover the processor and the integrated circuits. Even the display screen may be assembled on the substrate and covered by the overmold, e.g. in the embodiments where the overmold is optically transparent. In an embodiment where the overmold is optically non-transparent, the display screen610may be assembled on top of the non-transparent overmold700, and a transparent overmold layer702of thermoplastic material may be provided to cover the display screen. Let us then describe an embodiment addressing the problem of motion artefacts mentioned above.
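The pulse transit time computation described above reduces to a time difference; the blood pressure model in the sketch below is an illustrative assumption (an inverse-PTT form with made-up coefficients), since the text states only that the PTT may be used for computing the blood pressure:

```python
def pulse_transit_time_s(ecg_r_peak_time_s, ppg_pulse_arrival_time_s):
    """PTT: delay between the ECG detection of a blood pulse at the heart
    and its PPG detection at another location, e.g. the wrist."""
    return ppg_pulse_arrival_time_s - ecg_r_peak_time_s

def estimate_blood_pressure_mmhg(ptt_s, a=15.0, b=60.0):
    """Hypothetical inverse-PTT model, BP ~ a / PTT + b; the model form and
    the coefficients are assumptions for illustration only."""
    return a / ptt_s + b

# Example: an R-peak at t = 10.00 s and a wrist pulse arrival at t = 10.25 s
# give a PTT of 0.25 s and, under the assumed model, about 120 mmHg.
ptt = pulse_transit_time_s(10.00, 10.25)
print(ptt, estimate_blood_pressure_mmhg(ptt))
```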
Referring toFIG.8, a wearable sensor device comprises: a PPG sensor head31comprising a first set of LEDs30arranged to emit light at a first wavelength and a second set of LEDs30arranged to emit light at a second wavelength different from the first wavelength, wherein the first set of LEDs and second set of LEDs are arranged spatially in pairs such that each pair of LEDs comprises a LED of the first set and a LED of the second set disposed directly next to one another, and wherein different pairs of LEDs are spatially separated from one another, the PPG sensor head further comprising at least one photo sensor32; a controller12configured to activate the LEDs30to emit light in a sequential manner such that LEDs of each pair are activated at different timings; a measurement circuitry14configured to acquire a first measurement signal from the at least one photo sensor when a first LED of a pair of LEDs is emitting light and further configured to acquire a second measurement signal from the at least one photo sensor when a second LED of the pair of LEDs is emitting light, and to remove motion interference from at least one of the first measurement signal and the second measurement signal by using common mode interference cancellation on the first measurement signal and the second measurement signal. The interference cancellation may be performed by an interference cancellation circuitry16. The arrangement of the LEDs spatially in pairs provides for a technical effect that a source location (LED) and a sink location (photo sensor) of a light path for measurements remain the same for the first measurement signal and the second measurement signal. The arrangement of the LEDs in pairs such that each pair comprises a LED of each wavelength provides the technical effect that light of the two wavelengths travels different paths from the source to the sink in the tissue/skin. For example, red light penetrates the tissue deeper than green light. Other wavelengths may naturally be employed in the LEDs. The different paths cause the effect that the first measurement signal will differ from the second measurement signal and, further, that the motion artefacts are induced into the first measurement signal and the second measurement signal with the same characteristics. Since the interference signal is substantially similar in the first measurement signal and the second measurement signal, i.e. common mode interference, the common mode interference cancellation is able to cancel the interference from the measurement signals. The interference cancellation may be performed on either the first measurement signal or the second measurement signal. Advantageously, the interference cancellation is performed on the signal more suitable for the main purpose, e.g. if the purpose is heart rate measurement, the measurement signal measured from green light would be preferred over a measurement signal measured from red light, for example. When applied to any one of the embodiments ofFIGS.3to7, the overmold may cover a space or spaces between the different pairs of LEDs. Since LEDs of a pair are directly next to one another, the overmold may not extend between the LEDs of the pair. In a very simple embodiment, the common mode interference cancellation cancels the common mode interference by subtracting samples of the first measurement signal from samples of the second measurement signal, thus negating the common mode interference. More sophisticated common mode interference cancellation may, however, be used.
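The very simple embodiment of the common mode interference cancellation, i.e. a sample-wise subtraction, can be sketched as follows; the optional scaling factor alpha is an addition of this sketch (alpha = 1.0 reproduces the plain subtraction described above):

```python
def cancel_common_mode(first_signal, second_signal, alpha=1.0):
    """Subtract samples of the first measurement signal from samples of the
    second, as in the very simple embodiment described above. The motion
    artefact, being common mode, is suppressed; the pulsatile components,
    which travelled different tissue paths at the two wavelengths, do not
    fully cancel. The scaling factor alpha is an assumption of this sketch
    (alpha = 1.0 reproduces the plain subtraction)."""
    return [s2 - alpha * s1 for s1, s2 in zip(first_signal, second_signal)]
```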
The controller may also control the measurement circuitry to measure a measurement signal from one or more selected photo sensors according to the sequence in which the LEDs are activated, as described in greater detail below with reference to Tables 1 to 5. The controller and the measurement circuitry may be comprised in the at least one above-described processor602or processing circuitry. The wearable sensor device may be any one of the above-described devices, e.g. the wrist device. The sensor device may further comprise a communication interface providing the sensor device with wireless communication capability according to a radio communication protocol. The communication interface may support Bluetooth® protocol, for example Bluetooth Low Energy or Bluetooth Smart. The sensor device may further comprise a user interface34comprising the display screen and input means such as buttons or a touch-sensitive display. The processor(s)10may output the instructions regarding the exercise to the user interface34, e.g. on the basis of PPG measurements performed by the measurement circuitry14. The sensor device may further comprise or have access to at least one memory20. The memory20may store a computer program code24comprising instructions readable and executable by the processor(s)10and configuring the above-described operation of the processor(s). The memory20may further store a configuration database28defining parameters for the processing circuitry, e.g. the sequence for the LEDs used by the controller 12. As used in this application, the term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry; (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of 'circuitry' applies to all uses of this term in this application. As a further example, as used in this application, the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The techniques and methods described herein may be implemented by various means. For example, these techniques may be implemented in hardware (one or more devices), firmware (one or more devices), software (one or more modules), or combinations thereof. For a hardware implementation, the apparatus(es) of embodiments may be implemented within one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), graphics processing units (GPUs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof. For firmware or software, the implementation can be carried out through modules of at least one chipset (e.g. procedures, functions, and so on) that perform the functions described herein.
The software codes may be stored in a memory unit and executed by processors. The memory unit may be implemented within the processor or externally to the processor. In the latter case, it can be communicatively coupled to the processor via various means, as is known in the art. Additionally, the components of the systems described herein may be rearranged and/or complemented by additional components in order to facilitate the achievements of the various aspects, etc., described with regard thereto, and they are not limited to the precise configurations set forth in the given figures, as will be appreciated by one skilled in the art. FIGS.9to14illustrate different layouts of the PPG sensor head31. The layouts comprise multiple pairs of LEDs, and most of the embodiments comprise multiple photo sensors to provide multiple, mutually orthogonal spatial measurement channels to improve the accuracy of the measurements. A purpose in some of the described embodiments that use multiple measurement channels is that the distance the light travels from a LED to the photo sensor remains substantially constant across the multiple measurement channels. As a consequence, a measurement configuration employing the multiple spatial measurement channels comprises a first pair of LEDs and a second pair of LEDs that are substantially at the same distance from a photo sensor sensing light from the first and second pairs of LEDs. FIG.9illustrates a first embodiment of the layout. The pairs of LEDs are disposed in different directions from the photo sensor904, e.g. along an imaginary annulus around the photo sensor where the photo sensor is at the origin of the annulus. In the embodiment ofFIG.9, the four pairs of LEDs form a cross with the photo sensor. Referring toFIG.9, the filling pattern distinguishes the different sets of LEDs. LEDs having diagonal line filling belong to the first set of LEDs900while dotted filling refers to the second set of LEDs902. The numbers inFIG.9inside the LEDs and the photo sensor refer to the sequence in which the LEDs are lighted by the controller. Table 1 below illustrates an embodiment of a sequence for the layout ofFIG.9.

TABLE 1
Measurement index    Activated Photo Sensor    Activated LED
1                    11                        1
2                    11                        2
3                    11                        3
4                    11                        4
5                    11                        5
6                    11                        6
7                    11                        7
8                    11                        8

As can be seen from Table 1, the controller may be configured to activate the LEDs such that LEDs of the same pair of LEDs are activated one directly after the other. This ensures that the first measurement signal and the second measurement signal are measured substantially under the same conditions regarding the motion artefacts. A measurement channel may be understood as the path from one LED to one photo sensor. Accordingly, LEDs of each pair provide for substantially identical measurement channels towards a photo sensor, e.g. the photo sensor 11. Each LED may emit light for a determined time interval as controlled by the controller 12. In an embodiment, the time interval is between 1 and 100 microseconds (μs). During that time, the measurement circuitry may sample an electric output of the photo sensor and acquire digital measurement signals. Moreover, the LEDs are disposed such that each LED of the first set of LEDs900is disposed at an equal distance from the photo sensor 11 and, similarly, each LED of the second set of LEDs902is disposed at an equal distance from the photo sensor 11. In the embodiment ofFIG.9, the first set of LEDs900forms an outer ring around the photo sensor 11, while the second set of LEDs902forms an inner ring around the photo sensor 11.
Since the dimensions of the LEDs are non-zero, such an arrangement may be used to improve the accuracy when combining the multiple measurement channels in the measurement circuitry14. Combining the measurements carried out by using multiple spatial channels improves the accuracy of the heart activity measurements. FIG.10illustrates another embodiment which is an extension of the layout ofFIG.9. In the embodiment ofFIG.10, two photo sensors 11 and 12 are provided, and the pairs of LEDs are disposed in two groups: one group around the photo sensor 11, as described above with reference toFIG.9, and another group around the photo sensor 12. The other group comprises multiple pairs of LEDs disposed around the second photo sensor 12 such that LEDs of the first set of LEDs900in the second group are disposed at an equal distance from the second photo sensor 12 and LEDs of the second set of LEDs902in the second group are disposed at an equal distance from the second photo sensor 12. LED 8 belongs to both groups. The three LEDs denoted by numbers 7 and 8 between the first photo sensor 11 and the second photo sensor 12 form two pairs of LEDs: an upper LED denoted by 7 forms a first pair with LED 8 and forms a measurement channel towards the photo diode 11, while a lower LED denoted by 7 forms a second pair with LED 8 and forms a measurement channel towards the photo diode 12. Table 2 below illustrates an embodiment of a sequence for lighting the LEDs in the layout ofFIG.10.

TABLE 2
Measurement index    Activated Photo Sensor    Activated LED
1                    11                        1
2                    11                        2
3                    12                        1
4                    12                        2
5                    11                        3
6                    11                        4
7                    12                        3
8                    12                        4
9                    11                        5
10                   11                        6
11                   12                        5
12                   12                        6
13                   11                        7
14                   11                        8
15                   12                        7
16                   12                        8

The sequence of Table 2 enables simultaneous measurements by both (all) photo sensors 11, 12. For example, LEDs denoted by number 1 are so distant from one another that light emitted by them reaches only the closest photo sensor. Some light may reach the more distant photo sensor, but it would have such a low intensity with respect to the light from the closer LED that it would cause little interference to the measurements. In another embodiment, when only one of the photo sensors is configured to measure at a time, only the LED closest to the measuring photo sensor is activated amongst the LEDs having the same number inFIG.10. For example, when the measurement index 1 is selected, only the LED having the reference number 1 and being closest to the photo sensor 11 is enabled to emit light, and the other LED 1 distant to the photo sensor 11 is disabled. The same applies to the other measurement indices. This embodiment applies to the other embodiments described below in a straightforward manner. In an embodiment that is a modification ofFIG.10, LEDs denoted by numbers 3, 4, 5 and 6 are omitted. Table 2 may be modified accordingly by removing the respective lines from the table. FIG.11illustrates a further modification of the embodiment ofFIG.10. In this embodiment, the LEDs 7, 8 between the photo sensors are rotated by 90 degrees with respect to the embodiment ofFIG.10, and one of the LEDs of the second set of LEDs902denoted by number 7 is omitted. This reduces the number of structural elements by one LED. However, the distance from the LEDs 7, 8 to the photo sensor 11 or 12 may differ slightly from the distance between the photo sensors 11, 12 and the other LEDs 1, 2, 3, 4, 5, 6. The sequence of Table 2 may still be used.
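The sequences of Tables 1 and 2 lend themselves to a simple table-driven control loop. The following sketch is illustrative only and assumes hypothetical driver functions (activate_led, sample_photo_sensor) standing in for the controller12and the measurement circuitry14; the emission interval follows the 1-100 μs range mentioned above.

import time

# Sequence of Table 2: (measurement index, photo sensor, LED).
TABLE_2 = [
    (1, 11, 1), (2, 11, 2), (3, 12, 1), (4, 12, 2),
    (5, 11, 3), (6, 11, 4), (7, 12, 3), (8, 12, 4),
    (9, 11, 5), (10, 11, 6), (11, 12, 5), (12, 12, 6),
    (13, 11, 7), (14, 11, 8), (15, 12, 7), (16, 12, 8),
]

def run_sequence(table, activate_led, sample_photo_sensor, emit_us=50):
    """Activate each LED in turn and sample the associated photo sensor
    while the LED is emitting, returning samples keyed by measurement index."""
    samples = {}
    for index, sensor, led in table:
        activate_led(led, on=True)
        samples[index] = sample_photo_sensor(sensor)  # sample during emission
        time.sleep(emit_us / 1e6)                     # 1-100 us emit interval
        activate_led(led, on=False)
    return samples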
FIGS.12and13illustrate embodiments where the pairs of LEDs and the photo sensors are arranged along an imaginary annulus (illustrated by a dotted annulus) in an alternating manner such that each pair of the pairs of LEDs is between two photo sensors and each photo sensor is between two pairs of LEDs along the annulus. Furthermore, at least one pair of LEDs is provided at a centre of the imaginary annulus. In the embodiment ofFIG.12, just like in the embodiments ofFIGS.10and11, the measurement channels are formed between a LED pair and the closest photo sensor(s). This may be arranged by designing the spatial distribution of the LEDs and the photo sensors appropriately. Light from a LED distant to a photo sensor will not substantially reach the photo sensor. The required distance is determined by the illumination intensity of the LED with respect to the distance between the photo sensor and the LED. The embodiment ofFIG.12comprises the pair of LEDs 5, 6 at the centre of the annulus. With respect to the other pairs of LEDs disposed along the annulus, the first set of LEDs forms an inner ring and the second set of LEDs forms an outer ring, thus providing substantially equal distances to the photo sensors. The LEDs 5, 6 at the centre deviate from this symmetricity to some degree, but the advantage is a reduced number of components (LEDs). Table 3 below illustrates an embodiment of a sequence for lighting the LEDs in the layout ofFIG.12.

TABLE 3
Measurement index    Activated Photo Sensor    Activated LED
1                    11                        1
2                    11                        2
3                    13                        1
4                    13                        2
5                    11                        3
6                    11                        4
7                    13                        3
8                    13                        4
9                    12                        1
10                   12                        2
11                   14                        1
12                   14                        2
13                   12                        3
14                   12                        4
15                   14                        3
16                   14                        4
17                   11                        5
18                   11                        6
19                   12                        5
20                   12                        6
21                   13                        5
22                   13                        6
23                   14                        5
24                   14                        6

In the embodiment ofFIG.12and Table 3, the photo sensor 11 may measure measurement signals from LEDs 1 and 2 next to the photo sensor 11 on the annulus (measurement indices 1 and 2) and perform the common mode interference cancellation on these measurement signals, thus acquiring an interference-cancelled measurement signal. Simultaneously, the measurement indices 3 and 4 may be performed, i.e. the photo sensor 13 may measure measurement signals from LEDs 1 and 2 next to the photo sensor 13 on the annulus and perform the common mode interference cancellation on these measurement signals, thus acquiring an interference-cancelled measurement signal. In the next phase, the photo sensor 11 may measure measurement signals from LEDs 3 and 4 next to the photo sensor 11 on the annulus (measurement indices 5 and 6) and perform the common mode interference cancellation on these measurement signals, thus acquiring another interference-cancelled measurement signal. Simultaneously, the photo sensor 13 may measure the light from LEDs 3, 4 next to the photo sensor 13 in the same manner. Next, the procedure is repeated for the photo sensors 12 and 14 that measure the LEDs 1, 2, 3, and 4 closest to them in the same manner. Then, measurement indices 17, 19, 21, and 23 may be performed simultaneously, i.e. each photo sensor may measure the light from the LED 5 at the centre simultaneously. Thereafter, measurement indices 18, 20, 22, and 24 may be performed simultaneously, i.e. each photo sensor may measure the light from the LED 6 at the centre simultaneously and perform the interference cancellation on the measurement signals received from the LEDs 5 and 6. As a result, three interference-cancelled measurement signals are available per photo sensor 11 to 14. These measurement signals may then be combined in a desired manner, e.g.
only measurement signals measured by the same photo sensor may be combined, or even all these measurement signals may be combined for the final computation of the heart activity. In principle, all measurement signals associated with the same set of LEDs900,902may be combined. However, in some cases it may be advantageous to combine only a subset of the measurement signals, e.g. measurement signals associated with LEDs having the same index in the Figures. The LEDs having the same index are disposed on opposite edges of the layout and, thus, at least one of the LEDs can be assumed to have a proper contact with the skin even under rapid motion. In the embodiment ofFIG.13, further LEDs are provided at the centre to provide the symmetricity with the pairs of LEDs along the annulus. In this embodiment, a LED 6 of the first set of LEDs900is provided at the very centre of the annulus, and a number of LEDs of the second set of LEDs902is provided around the LED 6. The number of LEDs of the second set of LEDs902at the centre and around the LED 6 may equal the number of photo sensors 11 to 14 in the configuration. The distance to the closest photo sensor from each LED of the second set of LEDs902at the centre and around the LED 6 may equal the distance from the LED(s) of the second set on the annulus to the closest photo sensor, thus providing similar distances between the LEDs having their emitted light measured by the same photo sensor. At the centre of the annulus, each LED 7 of the second set of LEDs902forms a pair of LEDs with the LED 6 of the first set of LEDs900. Table 4 below illustrates an embodiment of a sequence for lighting the LEDs in the layout ofFIG.13.

TABLE 4
Measurement index    Activated Photo Sensor    Activated LED
1                    11                        1
2                    11                        2
3                    13                        1
4                    13                        2
5                    11                        3
6                    11                        4
7                    13                        3
8                    13                        4
9                    12                        1
10                   12                        2
11                   14                        1
12                   14                        2
13                   12                        3
14                   12                        4
15                   14                        3
16                   14                        4
17                   11                        5
18                   11                        6
19                   13                        5
20                   13                        6
21                   12                        7
22                   12                        6
23                   14                        7
24                   14                        6

In the embodiment ofFIG.13using the sequence of Table 4, photo sensors 11 and 13 may measure simultaneously, as may photo sensors 12 and 14. Accordingly, the following measurement indices may be performed simultaneously: {1, 3}; {2, 4}; {5, 7}; {6, 8}; {9, 11}; {10, 12}; {13, 15}; {14, 16}; {17, 19}; {18, 20}; {21, 23}; {22, 24}. This embodiment follows mainly the same principles for acquiring the measurement signals as described above with respect to the embodiment ofFIG.12. Regarding the LEDs at the centre, the photo diodes 11 to 14 acquire measurement signals from the LED 6 and the closest one of LEDs 5 or 7 for the interference cancellation. The photo sensors 11 and 13 do not measure light from LEDs 7, for instance. Similarly, the photo sensors 12 and 14 do not measure light from LEDs 5. FIG.14illustrates yet another embodiment where only the photo sensors are provided along the same annulus, and the LEDs are provided along another annulus, one that has a greater radius (illustrated by a dash-dotted line inFIG.14). A further pair of LEDs is provided at the centres of the annuli. The LEDs on the other annulus are provided at the same locations on the annulus as the photo sensors 11 to 14 on their annulus, i.e. at directions 12 o'clock, 3 o'clock, 6 o'clock, and 9 o'clock. Accordingly, both LEDs 1 to 4 and the photo sensors 11 to 14 are disposed symmetrically on the annuli, e.g. at equal distances. Table 5 below illustrates an embodiment of a sequence for lighting the LEDs in the layout ofFIG.14.
TABLE 5
Measurement index    Activated Photo Sensor    Activated LED
1                    11                        1
2                    11                        2
3                    13                        1
4                    13                        2
5                    12                        3
6                    12                        4
7                    14                        3
8                    14                        4
9                    11                        5
10                   11                        6
11                   12                        5
12                   12                        6
13                   13                        5
14                   13                        6
15                   14                        5
16                   14                        6

As in the embodiments above, photo sensors 11 and 13 may measure simultaneously, as may photo sensors 12 and 14. Accordingly, corresponding measurement indices of Table 5 may be performed simultaneously. In the embodiments described above, the LEDs of the same pair are activated one directly after the other, thus providing for substantially identical measurement conditions for the interference cancellation. Such near-simultaneous activation is, however, not necessary. There may be an arbitrary delay between the activation of the LEDs of the same pair. In such an embodiment, the measurement circuitry is configured to perform, before the common mode interference cancellation, a time-shift on samples of one of the measurement signals to compensate for the delay in emission times of the LEDs of the same pair of LEDs. It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.
11857300 | DETAILED DESCRIPTION 1. Overview As illustrated inFIG.1, some embodiments of the present technology may implement sensing or detection apparatus100and102, useful for detecting physiological characteristics of multiple users or patients. The sensors may be standalone sensors or may be coupled with other apparatus, such as a respiratory treatment apparatus, so as to provide an automated treatment response based on an analysis of the physiological characteristics detected by the sensors of the apparatus. For example, a respiratory treatment apparatus with a controller and a flow generator may be configured with such a sensor or to communicate with such a sensor and may be configured to adjust a pressure treatment generated at a patient interface (e.g., mask) in response to physiological characteristics detected by the sensor. Or such a sensor might be used to detect physiological characteristics of a patient when the flow generator is not in use by the patient to inform them of the advantage of using the flow generator. An example respiratory treatment apparatus is described in International Patent Application No. PCT/US2015/043204, filed on Jul. 31, 2015, the entire disclosure of which is incorporated herein by reference. A typical sensor of such an apparatus may employ a transmitter to emit radio frequency (RF) waves, such as radio frequency pulses for range gated sensing. A receiver, which may optionally be included in a combined device with the transmitter, may be configured to receive and process waves reflected from the patient's body. Signal processing may be employed, such as with a processor of the apparatus that activates the sensor, to derive physiological characteristics based on the received reflected signals. An example of the operation of such a sensor can be found in U.S. Patent Application Publ. No. 2009/0203972, the entire disclosure of which is incorporated herein by reference. A principal diagram of a sensor, or of a component of the sensor, is shown inFIG.3. As illustrated inFIG.3, the transmitter transmits a radio-frequency signal towards a subject, e.g., a human. Generally, the source of the RF signal is a local oscillator (LO). The reflected signal is then received by the RF receiver, amplified and mixed with a portion of the original signal, and the output of this mixer may then be filtered. The resulting signal may contain information about the movement, respiration and cardiac activity of the person, for example, and is referred to as the raw motion sensor signal. The phase difference between the transmitted signal and the reflected signal may be measured in order to estimate any one of the movement, respiration and cardiac activity of the person. The raw motion sensor signal can be processed to obtain signal components reflecting bodily movement, respiration and cardiac activity. Bodily movement can be identified by using zero-crossing or energy envelope detection algorithms (or more complex algorithms), and used to form a “motion on” or “motion off” indicator. For example, such movement detection algorithms may be implemented in accordance with the methodologies disclosed in any of U.S. Patent Application Publ. No. 2009/0203972, mentioned previously, International Patent Application No., PCT/US14/045814; U.S. Provisional Patent Application No. 62/149,839, filed Apr. 20, 2015, and U.S. Provisional Patent Application No. 62/207,687, filed Aug. 20, 2015, the entire disclosures of which are each incorporated herein by reference. 
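As a simple illustration of the zero-crossing and energy-envelope approaches to forming a "motion on"/"motion off" indicator mentioned above, the following sketch thresholds a short-term energy envelope. The window length and threshold are illustrative assumptions and are not taken from the cited disclosures.

import numpy as np

def motion_indicator(signal, fs, window_s=1.0, threshold=0.05):
    """Return a boolean 'motion on' flag per window based on signal energy."""
    n = int(window_s * fs)
    windows = np.reshape(signal[: len(signal) // n * n], (-1, n))
    energy = np.mean(windows ** 2, axis=1)  # short-term energy envelope
    return energy > threshold               # True => 'motion on'

def zero_crossing_rate(signal, fs, window_s=1.0):
    """Zero-crossings per second in each window, an alternative motion cue."""
    n = int(window_s * fs)
    windows = np.reshape(signal[: len(signal) // n * n], (-1, n))
    signs = np.signbit(windows).astype(np.int8)
    crossings = np.sum(np.abs(np.diff(signs, axis=1)), axis=1)
    return crossings / window_s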
The respiratory activity is typically in the range 0.1 to 0.8 Hz, and can be derived by filtering the original signal with a bandpass filter with a passband in that region. The cardiac activity is reflected in signals at higher frequencies, and this activity can be accessed by filtering with a bandpass filter with a passband in the range from 0.8 to 10 Hz (e.g., 70 heart beats per minute is within this range at around 1.17 Hz). Such a respiration and movement sensor may be a range gated RF motion detector. The sensor may be configured to accept a DC power supply or battery input and provide, for example, four analog motion channel outputs with both in-phase and quadrature components of the respiration and movement signals of a person within the detection range. In the case of a pulsed RF motion sensor, range gating can help to limit movement detection to only a preferred zone or range. Thus, detections made with the sensor may be within a defined distance from the sensor. As illustrated inFIG.4, a typical sensor402of the present technology may employ one or more oscillators, such as an oscillator404, e.g. a dielectric resonant oscillator (DRO). The DRO may be a high Q DRO that is a narrowband oscillator (e.g., a DRO operating at 10.525 GHz), such as an oscillator incorporating a puck of dielectric material. The DRO typically generates a stable RF frequency characteristic and is relatively immune to variation in temperature, humidity and component parasitics. In some cases, the sensor may be a sensor described in U.S. Patent Application Publication No. 2014/0024917, the entire disclosure of which is incorporated herein by reference. As illustrated inFIG.5A, the pulsed radio frequency signal has two main modulation parameters. These are the pulsed repetition interval (PRI), with time duration represented by T, and the pulse width (PW), with time duration represented by τ. The term pulsed repetition frequency (PRF) is the inverse of the PRI. For example, a sensor may transmit a 10.525 GHz RF signal which was pulse modulated at a frequency of approximately 250 kHz to create an RF pulse signal with a pulse repetition interval designated T of 4 μs and a pulse width timing designated τ of 0.5 μs. Accordingly, the RF signals in the example would be 0.5 μs long and produced every 4 μs (i.e., a 12.5% duty cycle). The sensor may be a homodyning transceiver capable of both transmitting and receiving RF signals. As such, the transceiver may measure the magnitude and phase of the received signal with respect to the transmitted signal. The phase and/or magnitude of the received signal changes with respect to the transmitted signal when the target moves or with the distance the received signal has travelled. As a result, the demodulated magnitude detector receiver output signal is a measure of the movement of a target and/or the distance the signal travelled. While such a magnitude detector may be optionally implemented, in some cases, other circuit elements or detectors may be implemented in place of or to serve the function of the magnitude detector(s). For example, any detector circuit configured to detect signal modulation, such as a peak detector, envelope detector, or harmonic mixer circuit may be employed. Both transmitted RF signals and received RF signals may be presented to the input of a homodyning receiver switched magnitude detector (e.g., an RF magnitude detector).
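The band separation described above (roughly 0.1-0.8 Hz for respiration and 0.8-10 Hz for cardiac activity) can be prototyped with standard filtering tools. This is a minimal sketch using SciPy's Butterworth filters; the filter order and sampling rate are illustrative assumptions, not values from the description.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0  # assumed sampling rate of the raw motion sensor signal, Hz

def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    """Zero-phase Butterworth bandpass filter."""
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

# Illustrative raw signal: respiration (0.25 Hz) + cardiac (1.17 Hz ~ 70 bpm).
t = np.arange(0, 60, 1 / FS)
raw = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.17 * t)

respiration = bandpass(raw, 0.1, 0.8)  # respiratory band
cardiac = bandpass(raw, 0.8, 10.0)     # cardiac band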
As shown inFIG.5B, for example, a received signal may be detected during a receive time interval period when the sensor is transmitting an RF pulse. In this regard, the magnitude detector may detect RF pulses every time an RF pulse is transmitted. In some embodiments, the magnitude detector may detect signals during a 5 ns period during the first 12 ns of a RF pulse transmission (i.e., the 5 ns could be anywhere within the first 12 ns, e.g. starting at the 7thns, 1stns etc.). When only a single source of RF pulses is present, the transmitted and received RF signals may be represented with the following mathematical formulas:

Transmitted RF signal: A Sin(ω1t + θ1); and
Received RF signal: B Sin(ω1t + θ2)

Where A and B are amplitudes, ω1 is the angular frequency, t is the time, and θ1 and θ2 are respective phases. (Time travelled is implicit in the phase difference between θ1 [e.g., reference phase at oscillator] and θ2 [after bouncing off the subject]). As the two signals are from the same source, they also have the same frequency. Accordingly, when they are superimposed the resulting RF signal has an amplitude that varies with phase and amplitude of the reflected signal. The transmitted RF signal and received RF signal may be combined by using the following formula, where a and b are amplitudes, x is a time multiplied angular frequency (2πft), and β and α are phases:

a Sin(x + α) + b Sin(x + β) = c Sin(x + φ), where c = √(a^2 + b^2 + 2ab Cos(α - β)) and φ = arctan((a Sin α + b Sin β) / (a Cos α + b Cos β))

As shown inFIG.6, a resulting signal602may be the result of a transmitted RF signal being modulated by a received RF signal. The resulting signal602may have a periodic sinusoidal amplitude envelope600and phase that varies only with the distance and movement of a target. Accordingly, the superimposed signal varies with the distance to a target or target movement. Time Dithering When operating two or more sensors, oscillator timing differences and/or dithering may promote noise interference reduction. For example, in some sensors, the timing of the pulse generation may be dithered with respect to the timing associated with the underlying pulse repetition frequency by inclusion of a dithering circuit (not shown) such as one coupled with or included with a pulse generator408.FIG.10Bshows a signal representation of "dithering" of the pulse generation (in which the onset time of the pulse varies with respect to the overall pulse generation timing). With such dithering, the overall pulse repetition frequency can be varied, so that the pulse onset time is linearly delayed or advanced with respect to the nominal overall pulse center (i.e., a second pulse train is at a slower pulse repetition frequency than a first pulse train). This has the net effect of changing the position of the pulse onset time, compared to its nominal onset time if the PRF remained fixed. This may be achieved with a synchronous ramp dithering circuit. An example synchronous ramp dithering circuit may be implemented with a voltage controlled delay element based on an underlying RC (resistor-capacitor) time constant. The ramp control voltage results in a varying varactor capacitance which in turn results in a varying resonator frequency. In this way the frequency of the pulse generation circuit oscillator and associated PRF is varied on the order of about 1% in a synchronous and linear manner every 1 ms approximately. In some examples, the linear ramp function may be at 1 kHz, which produces an associated dither on the PRI and PW timing. Dithering may be utilised to remove synchronous RF demodulation noise artefacts.
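The synchronous ramp dithering described above can be approximated in simulation. The following sketch is a simplified model only: it linearly sweeps the pulse repetition interval by about 1% over each 1 ms ramp period, as the passage describes, but the ramp parameters and nominal PRI are illustrative assumptions.

import numpy as np

NOMINAL_PRI_S = 4e-6    # nominal pulse repetition interval (4 us example)
RAMP_PERIOD_S = 1e-3    # synchronous ramp repeats about every 1 ms
DITHER_FRACTION = 0.01  # PRI varied on the order of 1%

def dithered_pulse_onsets(n_pulses):
    """Generate pulse onset times whose PRI is swept linearly (about +/-1%)
    by a synchronous ramp that repeats each ramp period."""
    onsets = [0.0]
    for _ in range(n_pulses - 1):
        phase = (onsets[-1] % RAMP_PERIOD_S) / RAMP_PERIOD_S  # ramp position 0..1
        pri = NOMINAL_PRI_S * (1.0 + DITHER_FRACTION * (2.0 * phase - 1.0))
        onsets.append(onsets[-1] + pri)
    return np.asarray(onsets)

onsets = dithered_pulse_onsets(1000)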
Ramp dithering may be utilised because it is less complex to implement; however, it can produce tone artefacts if not synchronous with the RF modulation and demodulation timing. Synchronous ramp dithering prevents these unwanted tones from being generated. However, the use of a timing dithering circuit complicates the unit to unit PRI timing difference and hence complicates pulse timing synchronization. In some sensors, RF modulation and demodulation timing is established by a 4 MHz ceramic resonator oscillator and associated binary ripple counter (see example ofFIG.17). To achieve low demodulation noise, the oscillator timing may be subsequently "synchronously dithered" with a linear ramp function (e.g., at 1 kHz) which produces an associated dither on the PRI and PW timing. This use of a timing dithering circuit complicates the unit to unit PRI timing difference. This timing difference is further compounded by the use of a ceramic resonator which has a lower frequency tolerance and higher drift compared to that of a quartz crystal. In summary, although synchronous dithering can mitigate RF interference signal noise, it creates issues for a second method of RF interference signal noise reduction, namely pulse timing synchronization, due to dithering and/or the timing difference. For example, as shown inFIG.10A, the read signals of the 1stsensor may overlap with the pulse of the 2ndsensor and vice-versa. For two sensors to coexist without producing RF interference, the first sensor should transmit its RF pulse in the quiet period of the second sensor and vice versa. For example, as shown inFIG.11A, the read signals (white lines/indicated as "RL" on the Figure) of the first sensor occur only during the time periods during which the second sensor is not transmitting a RF signal. Similarly, the second sensor only reads signals when the first sensor is not transmitting. However, in practice the asynchronous nature of the sensor operations (dithering, frequency difference and frequency drift) results in periodic overlap of the RF pulse of the first sensor with the receive timing of the second sensor, as shown inFIG.11B. In this regard, due to the nature of the sensors, the read signals of the first sensor may occur during the transmission of an RF pulse by the second sensor and vice-versa. 2. Sources of Noise Sensors, such as the sensors of the illustrated detection apparatus100and102, which are positioned within close proximity of each other, may suffer from radio frequency (RF) coexistence issues. As illustrated inFIG.2, two sensors300and302may be placed so that their respective RF pulses are projected in the direction of the opposing sensor. In this regard, sensor300may transmit RF pulse312in the direction of sensor302, and sensor302may transmit RF pulse310in the direction of sensor300. As a result, the sensor300may receive the reflection of its RF pulse312, the direct RF pulse310of the opposing sensor302, as well as the opposing sensor's double-reflection RF pulses (not shown). Accordingly, the RF pulses received by a sensor may include more than just the desirable RF pulse reflections created by the RF signal that the sensor originally transmitted, resulting in baseband interference when the received RF pulses are demodulated. (Generally, in some cases, a baseband signal may be a signal that has a very narrow frequency range, i.e. a spectral magnitude that is nonzero only for frequencies in the vicinity of the origin (termed f=0) and negligible elsewhere.)
While only RF pulses of the sensors are shown inFIG.2, RF waves from other apparatus may also be received by the sensors. Such RF waves may come from apparatus located both near and far from the sensors. Due to the nature of RF waves, they may possess the ability to travel through barriers such as walls and other obstacles and geographical topography. Accordingly, embodiments of the present technology may be directed to significantly diminishing the sensors' susceptibility to RF coexistence issues. When received RF signals come from an apparatus or source other than the transmitting sensor, noise may be introduced into the received RF signals. The transmitted, received and resultant RF signals may be represented with the following mathematical formulas:

RF signal transmitted by the first source: A Sin(ω1t + θ1); and
RF signal received at the second source: A Sin(ω2t + θ2)

Resultant signal: A Cos(2πf1t) + A Cos(2πf2t) = 2A Cos(2π((f1 - f2)/2)t) Cos(2π((f1 + f2)/2)t)

Where A is the amplitude, ω1 and ω2 (with ω = 2πf) are the respective angular frequencies, t is the time, and θ1 and θ2 are respective phases. Combining the transmitted signal from the first source and received RF signal from the second source may result in an RF signal with a periodic sinusoidal amplitude envelope and phase that constantly varies in time irrespective of any movement of a target. In the example ofFIG.7, transmitted signal700from a first source may be combined with a signal702received from a second source, producing resulting signal704. As can be seen, the resulting signal704has a periodic sinusoidal amplitude envelope and phase that constantly varies in time irrespective of any movement of a target. (As the phase is constantly changed with time, this is represented as a circle inFIG.7[left hand side]). Such constant variations to the resulting signal may be introduced when the received signal702is combined with the transmitted signal700at a multiple of the intermediate frequency of the transmitted frequency. Such variations may lead to baseband interference. The amount of interference generated by an RF signal transmitted from an apparatus or source other than the receiving sensor may be dependent on the received signal strength of the unwanted interfering RF signal. In this regard, the unwanted interfering RF signal's strength may be dependent upon the path the interfering RF signal travels before arriving at the receiving sensor. For example, as shown inFIG.8A, an interfering signal806may be reflected off of an object804and received by the receiving sensor800. Accordingly, the power of the interfering RF signal806is reduced when it arrives at the receiving sensor800, compared to the case when the signal arrives directly at the receiving sensor800. The formula for the power of the interfering signal after being reflected is given by:

Pr = (Pt Gt Ar δ F^4) / ((4π)^2 Rr^4)

Wherein:
Pr = power of the interfering signal which has been reflected;
Pt = transmitter power;
Gt = gain of the transmitting antenna;
Ar = effective aperture (area) of the receiving antenna (most of the time noted as Gr);
δ = radar cross section, or scattering coefficient, of the target;
Rr = distance from the transmitter to the target (for a reflected signal); and
F = pattern propagation factor (normally close to 1)

In contrast, as shown inFIG.8B, an interfering signal808may be generated by a second active source802and received directly by sensor800, without incurring any reflections. As no reflection of the interfering signal808occurs, the power of the interfering signal808is only reduced based on distance travelled.
Accordingly, the power of the interfering RF signal is reduced based on distance. The formula for the power of the interfering signal808which has not been reflected but directly arrives at the receiver is given by:

Pd = (Pt Gt Ar δ F^4) / ((4π)^2 Rd^2)

Wherein:
Pd = power of the interfering signal which has not been reflected;
Pt = transmitter power;
Gt = gain of the transmitting antenna;
Ar = effective aperture (area) of the receiving antenna (most of the time noted as Gr);
δ = radar cross section, or scattering coefficient, of the target;
Rd = distance from the transmitter to the receiver (for a direct transmit to receive); and
F = pattern propagation factor (normally close to 1)

As a result, when two sensors are placed in a room, the interfering signal level from the second unit can be higher than the signal transmitted by the first sensor and reflected back to the first sensor because of the shorter effective path length and the absence of attenuation due to scattering. When an interfering RF signal is received at a sensor, the interfering RF signal will cause baseband noise under certain conditions: (1) the interfering RF frequency is in-band (i.e. close to 10.525 GHz, 10.587 GHz, 9.3 GHz, 61 GHz, 24 GHz, 10.45 GHz), (2) the interfering RF signal is received during the receive time interval of the sensor, (3) the frequency of the RF signal transmitted by the sensor and the frequency of the interfering RF signal have a difference frequency that is a multiple of the pulse repetition frequency of the transmitted frequency signal, and (4) the RF interfering signal has a sufficient amplitude to produce an interfering noise signal. These four conditions may be restated as follows, where RF1represents the main sensor's RF centre frequency, RF2represents the interfering source's RF centre frequency, IF1represents the intermediate frequency (Generally, in some cases, such as in communications and electronic engineering, an intermediate frequency (IF) may be considered a frequency to which a carrier frequency is shifted as an intermediate step in transmission or reception) of RF1, and IF2represents the intermediate frequency of RF2:
(1) Interference may occur when RF2is within the demodulation frequency range of RF1, typically by +/−25 MHz;
(2) Interference may occur when the difference between RF1and RF2is a multiple of the pulse repetition frequency of RF2(PRF2). More specifically, if RF1=RF2+/−n(PRF2), where n is any integer;
(3) Because the receiver is a synchronous phase detector, interference may occur when the information on demodulated RF2includes the frequency of IF1or any of its odd harmonics. Stated another way, RF2(i.e., a signal which has been modulated by AM/FM modulation, or any other modulation scheme) contains information on the IF1or its odd harmonics;
(4) Interference may occur when RF1and RF2are combined and then subsequently demodulated, where RF2is of sufficient signal level to produce a baseband noise component.
To curtail such baseband noise interference, the present technology contemplates implementation of several solutions. First, the sensors may be synchronized in time to avoid any overlap in RF pulses in time. Second, the sensors may be synchronized to avoid overlap in RF pulses in frequency (i.e., RF1=RF2+/−(n+0.5)(PRF2)). Third, the sensors may be configured to pulse in such a way that the probability of interference is negligible. Fourth, two or more sensors may be placed in one housing facing in different directions (e.g., placed midway along the bed at the headboard or at feet).
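To make the relative interference levels concrete, the two power expressions above can be evaluated numerically. The following sketch simply codes the reflected-path and direct-path formulas as printed; all parameter values are illustrative assumptions.

import math

def reflected_power(pt, gt, ar, rcs, r_reflect, f=1.0):
    """Pr = Pt*Gt*Ar*delta*F^4 / ((4*pi)^2 * Rr^4) -- reflected interferer."""
    return pt * gt * ar * rcs * f**4 / ((4 * math.pi) ** 2 * r_reflect**4)

def direct_power(pt, gt, ar, rcs, r_direct, f=1.0):
    """Pd = Pt*Gt*Ar*delta*F^4 / ((4*pi)^2 * Rd^2) -- direct interferer."""
    return pt * gt * ar * rcs * f**4 / ((4 * math.pi) ** 2 * r_direct**2)

# Illustrative values: equal transmit parameters, 2 m paths.
pr = reflected_power(pt=1e-3, gt=2.0, ar=1e-3, rcs=1.0, r_reflect=2.0)
pd = direct_power(pt=1e-3, gt=2.0, ar=1e-3, rcs=1.0, r_direct=2.0)
# pd/pr = Rr^4 / Rd^2 = 4 here, i.e. the direct interferer is stronger,
# consistent with the shorter effective path and absence of scattering loss.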
Examples of each of the implementations are detailed herein. For the case of a range gated RF sensor using a DRO as a reference oscillator, a second or subsequent sensor can have such a stable emitted frequency and behaviour that it becomes a nearly optimal interferer to similar sensors that are nearby (i.e., when considering a multiple non-contact range gated sensor system). In order to mitigate this interference for effective low noise operation where more than one sensor is in proximity, the following implementations may be made: (i) timing synchronization can be implemented between the sensors via a wire or wirelessly (where precise timing signals are used between cooperating sensors), or (ii) each sensor can be configured to independently behave in a manner that does not cause this nearly perfect interference (without requiring cooperation). For the latter case (i.e., with no inter-sensor cooperation), dithering in both time and/or frequency may be used. Dithering of timing may be used to spread the noise, while frequency dithering may be used to prevent interference. For example, as discussed in more detail herein, in some embodiments ramping of the voltage on a diode coupled to a timing oscillator can change diode capacitance. This leads to a change in the frequency of oscillation of the timing oscillator, resulting in timing dithering, such as by dithering the timing pulses involved in the generated pulsed RF signals. Additionally, by ramping the supply voltage of the DRO (Dielectric Resonant Oscillator), the frequency is changed, resulting in frequency dithering of the RF signals (e.g., by ramping the voltage 2.5-3.0-2.5 V). 3. Timing Synchronization Time synchronizing of the RF pulses may be implemented by generating a synchronizing pulse from the first sensor (master) to the second sensor (slave). As such, the second sensor would transmit its RF pulse in the quiet period of the first sensor. This solution may have the same noise level as shown in the "baseline noise" setup ofFIG.9. Instead of by a master sensor, the synchronization of all sensors could be driven by an independent controller in a very similar way to that used by a master sensor to drive one or more slave sensors. Or indeed the sensors could act as peers, e.g., control and communication is distributed among devices in the field, whereby each device communicates directly with the devices around it without having to go through/via a master device. In order to implement timing synchronization between sensors, the following factors may be considered. First, the timing of the sensors may include synchronous dithering such as at a 1 ms interval, or more or less, so synchronization may not be easily achieved. Second, the timing of the sensors may be controlled by a ceramic resonator, with only about 1% frequency accuracy. Third, the slave unit should be enabled to detect loss of synchronization and maintain the sensor timing. Fourth, both a clock and dither synchronous signal can be transmitted from the master to the slave sensor, such as to help address issues arising from the use of dithering and the asynchronous nature of the operations. Fifth, synchronization should be achieved with sub-microsecond timing accuracy to maintain the required RF pulse interleave locking. Sixth, the master and slave sensors should know they are required to transmit or receive the synchronization signal.
(i.e., the sensors should automatically synchronize when required or be set at installation to synchronize) 3.1 RF Pulse Signal Arising from these considerations, particularly in view of the dithering and asynchronous nature of the timing, some versions may include generation of a timing synchronization that includes transmission of both a clock and a dither synchronous signal, and which may be achieved with sub-microsecond timing accuracy. There are a number of ways of achieving this, including detecting the RF pulse signal from the master unit. In this regard, when the pulse signal from the master unit is received by the slave units, the timing of the slave units is adjusted to ensure that the slave units do not transmit an RF signal at the timing associated with the pulse signal of the master unit. However, a change of timing architecture may be required as the RF pulse signal may not be at the clock frequency of the oscillator. Additionally, the implementation of a phase-locked loop (PLL) may be required but may be complicated by the dithering. If timing dithering is not employed then the synchronization requirements above are reduced to that of PRF pulse timing synchronization, which is a lesser requirement. Pulse timing dithering might not be employed and instead enhanced interference noise reduction may be achieved by RF frequency dithering in some cases. 3.2 RF Pulse Signal Detection at Intermediate Stage Another method for transmitting both the clock and dither synchronous signal with sub-microsecond timing accuracy is by detecting the RF pulse signal of a master sensor at the IF (intermediate frequency) stage by the slave sensor. Because of the circuit complexity required to receive, amplify and condition the received signal and, in addition, to phase lock it to the local 4 MHz oscillator, this solution is not easily implemented; however, it is feasible, especially if digital sampling is employed. 3.3 Separate RF Signal In another method, a separate RF synchronization signal may be sent. For example, a separate industrial, scientific, and medical band (ISM) RF signal may be generated to provide the synchronization signals from the master unit to slave units. In this regard, through a wireless means the ISM RF signal could potentially be piggy-backed on an existing RF communications channel. 3.4 Other Wireless Signal In an alternate method, timing synchronization can be implemented through other wireless communications methods, including RF signals such as Bluetooth, Wi-Fi, ZigBee or other proprietary wireless means. 3.5 Infra-Red Signal In an alternate method, timing synchronization can be implemented by photonic means such as through light pulses and specifically through an infra-red signal. As shown inFIG.12, a master sensor1201could send an infra-red signal1204to slave sensor1202. In this regard, the master sensor1201may include an infra-red transmitter/receiver and the slave sensor1202may include an infra-red transmitter/receiver. Accordingly, the master sensor1201may transmit timing signals from the infra-red transmitter to the infra-red receiver of the slave sensor1202. However, achieving the required coverage and the speed required for timing accuracy may present complications. For example, depending on the distance between the sensors, the infra-red signal may be delayed, thereby not providing proper synchronization. Further, interference issues such as a high speed IR signal "jamming" output from other devices (e.g., a television remote control) may be encountered.
Other methods, such as fibre optic connection or transmission via visible light communication (e.g., pulsing LEDs or fluorescent lamps) in the range between 400 and 800 THz (780-375 nm) could also be used. 3.6 Wire Cable Coupling Another method of timing synchronization can be implemented through a wired connection. For example, a multi-wire cable (e.g., a three wire cable or a two wire cable, etc.) may be used to connect a master sensor to a slave sensor. Such a three wire cable synchronous master-slave oscillator circuit is shown inFIG.13. The three wire cable may connect the master sensor to a slave sensor, thereby enabling the master sensor to transmit timing and dithering synchronization information from the master sensor to the slave sensor. On the left side ofFIG.13is the master sensor circuit1301, and on the right side of the figure is the slave sensor circuit1302. The master sensor circuit may include a Reset U1 pin11connected to ground via a 1 kΩ resistor to enable reset control (not shown). Additionally, the master sensor circuit1301may include a 4 MHz oscillator input U1 CLK pin10buffered by a gate driver/buffer and supplied as a clock output to the slave. Further, the master sensor circuit may include a 1 kHz dither output U1 Q12 pin1buffered by a gate driver/buffer which is supplied as a reset output to the slave. Finally, the master circuit ground (0V) may be supplied as an output to the slave sensor. The slave sensor circuit may include Reset U1 pin11connected to ground via a 1 kΩ resistor to enable reset control, as well as a 1 kHz dither output, received from the master circuit, and presented to the slave circuit Reset U1 pin11via a 2.2 nF series capacitor. The slave sensor circuit may further include a 4 MHz oscillator output gate driver/buffer received from the master circuit to drive the transistor Q1collector via a 1 kΩ resistor. Finally, the slave circuit may include a circuit ground (0V) supplied as input from the master circuit. In more general terms, the master clock output is transmitted through a first buffer (on the master circuit) to a wire which is received by a second buffer on the slave circuit. The output from the second buffer is presented to the slave clock input. Similarly, the reset output is transmitted through a buffer to a wire which is received by a buffer on the slave circuit. The output from the second buffer is presented to the reset pin through a differentiator circuit/high pass filter. Only the leading edge of the reset pulse is passed through to the reset pin. The slave circuit may be connected to ground. The master and slave sensors may be synchronized by a three wire cable by sending, from the master sensor, a pulse with a width of around 0.5 μs, for example. Such a pulse width enables out of phase synchronization of the RF pulses. As already stated, if timing dithering is not employed then the synchronization requirements above are reduced to that of PRF pulse timing synchronization, which is a lesser requirement. In this case the three wire timing circuit described reduces to that of a two wire timing circuit. A two wire circuit can be implemented by transmitting the master reset output and letting the clocks run. This removes the master clock requirement. A wired connection can achieve synchronization requirements and can also optionally provide other functions. For example, the wire cable could be implemented to power a second (or subsequent) unit(s).
As such, the wire may allow for more remote sensor placements while not necessarily introducing more wires and cables. In addition, the two wires could provide timing synchronization and power to a second unit by modulating the signals. The wire could also reduce the need for other wireless chipsets; e.g., a set of sensors may form a pair, with only one of them having a Wi-Fi or Bluetooth interface and power adaptor or space for batteries, and the second simply connected via cable and not having a need of a separate Wi-Fi etc. radio capability as relevant control/sensor data are also modulated onto the wire. A more complex wired connection based on Ethernet could also be used. Both the three wire and two wire synchronization circuits described above could be implemented on the circuitry of the sensor or could be located in the connecting wires. The advantage of the latter would be that synchronization circuitry and associated cost would not be included in every unit. 3.7 Quartz Crystal Either in addition to the above timing synchronization solutions, or as a stand-alone solution, the oscillator may be implemented with a quartz crystal. Accordingly, less frequent synchronization signals will be necessary as the quartz crystal has a high frequency tolerance and low frequency drift rate. Additionally, only a single synchronization signal is necessary (e.g., clock), as a quartz crystal may be implemented without dithering. 4. Frequency Synchronization Another implementation to reduce RF interference between multiple sensors is to synchronize the RF frequencies of the sensors. In this regard, the sensors can coexist without producing RF interference. For example, if two sensors transmit at RF frequencies, f1and f2, respectively at time t, the receive signal due to f1and f2is:

A Cos(2πf1t) + A Cos(2πf2t) = 2A Cos(2π((f1 - f2)/2)t) Cos(2π((f1 + f2)/2)t)

Maximum interference between f1and f2occurs when f1−f2=n*PRF±IF, wherein IF is an intermediate frequency, and n is an integer. Minimum interference occurs when f1−f2=(n+0.5)*PRF±IF. 4.1 Differing Frequencies To minimize interference, different sensors may be configured for different frequencies. In this regard, sensors may be dynamically set to different frequencies. For example, the sensors may implement a DRO whose frequency is a function of voltage. For example, a variation of 1V DC may result in an RF frequency change of 1.5 MHz. The voltage controlled RF oscillator of the first sensor may synchronize to the RF frequency of the second unit. In this regard, a control circuit within the first sensor can adjust the DRO voltage to a minimum noise voltage by detecting the DRO voltages which result in a high level of interfering noise and by moving to a central control voltage position (i.e. an area of low noise) between these two "high noise level" voltages (e.g., where a pattern of constructive, destructive, constructive, destructive, etc. interference gives rise to multiple interference maxima). RF frequency synchronization between two sensors has been demonstrated to produce the same noise level as if only a single sensor was in use. 4.2 Automatic Detection of Interference and Communication Between Sensors Via a Wired or Wireless Network, or Via Coded Interfering Pulses Another way of minimizing interference is by having each of a plurality of sensors detect its own respective centre frequency. Each sensor may then transmit its respective frequency value to the other sensors over a wired or wireless connection.
The sensors may then adjust their respective centre frequencies to achieve an optimal spacing in order to maximally reduce interference between each other. In this manner, two or more sensors can cooperate to reduce or avoid interference. Such a configuration may avoid the need to transmit a clock signal, clock edge, and/or a reset. Also, this approach may be tolerant of delays and other potential issues that may arise in a communications channel, thereby allowing the sensors to operate over a link with poor quality of service (QoS). However, such an approach is not limited to poor QoS networks, and could be implemented on QoS links with good or high quality. Further, transmitting respective centre frequencies could potentially avoid the use of buffer circuits (unless required), dedicated cables, and/or synchronization radio or infra-red links. In contrast, transmitting a clock signal, clock edge, and/or reset constantly between sensors may require a defined QoS including latency, bandwidth, and other parameters. A network such as the Internet or an ad-hoc peer to peer Wi-Fi link such as Wi-Fi Direct using Wi-Fi Protected Setup (WPS) are examples of such links (e.g., these are examples of links suitable for centre frequencies transmission or for a clock signal transmission). Detection of a centre frequency may require a small amount of additional circuitry (e.g., tapping a signal from the mixer), and may be enabled by a digital sensor. In this regard, a first digital sensor may send a notification of its most current or recent centre frequency reading to a second digital sensor, and the second digital sensor may send its most current or recent centre frequency reading to the first sensor. For example, the first digital sensor may send a centre frequency of 10.525791 GHz and the second digital sensor may send a centre frequency of 10.525836 GHz. The first and second digital sensors may then adjust their respective centre frequencies to achieve an optimal spacing of, for example, 125 kHz, in order to maximally reduce interference between each other. The optimal adjustment amount may be based on the IF and PRF configuration of the respective sensors. Although a digital sensor is described, transmittal of a frequency value may also be enabled by an analogue baseband sensor configured to share information with a processor in an attached device. Transmittal of the centre frequencies may occur over a Wi-Fi, Ethernet, Bluetooth, or any other type of connection. Transmission may involve an authentication handshake and then periodic transmission of centre frequency values. Optionally, transmission of updated values may occur when the values deviate by a defined threshold from a past value. In some embodiments, the transmitted data may be encoded in packets over the Wi-Fi link. 4.3 Frequency Lookup Table Another technique to establish frequency synchronization is to use lookup tables of frequencies in each sensor. In this regard, sensors may each store copies of lookup tables (or functions or formulae to dynamically calculate such frequencies). For example, a table or tables may include a set of odd frequencies and a set of even frequencies. The odd and even frequencies may be chosen to sit at nulls in mutual interference. The sensors may then be programmed to select a frequency to operate at from the odd and even tables.
As such, the tables may span a region within an allowable spectral mask of a filter associated with a sensor, wherein the region is within the controllable centre frequency range of the sensor. For example, the frequencies may be selected from: (0.5+n)*PRF, where n is an integer and PRF is the pulse repetition frequency. In the case where sensors may be programmed to calculate frequencies using a mathematical formula as needed, such a configuration may permit a reduction in sensor memory. In certain embodiments, a first sensor might check if it is operating at or near a frequency in either the even table or the odd table, and make minor adjustments to match one of these close (or closest) frequencies on either table. For example, the first sensor might adjust its frequency to the nearest frequency in the even table. A second sensor may then adjust its frequency to the nearest frequency in the odd table, thereby achieving minimal interference. Communication between a defined pair (or plurality) of sensors may only have to happen once at setup. As such, a defined pair configuration may help remove the need for an ongoing wired or wireless transmission between the sensors. Accordingly, complexities introduced by continually updating the frequencies during ongoing operation may be minimized or removed. The one-off pairing process for a defined pair (or plurality of sensors) could be performed via wired or wireless means. For example, near field communications (NFC) and/or an accelerometer could be used to enable "tap to pair," whereby close proximity and/or a control signal is used to enable the communication of the table information between sensors. Alternatively, the pairing process could be repeated periodically, such as on an occasional or best effort basis (e.g., via a store and forward network) in order to verify that no control parameters have changed. In this regard, a change of location or situation of one or more sensors could cause a system to prompt a user to perform a manual re-pair, or the re-pairing process may be automatically performed in the background. Where two or more sensors are in close proximity, parts of each table may be ring-fenced for each respective sensor. For example, each sensor may be assigned to a range of possible frequencies on the even table, or to a range of possible frequencies on the odd table. Additionally, there may be a switch, or some other type of input, on a sensor to define a preferred behaviour. For example, the switch may be set to adjust which frequency the sensor will operate at, or to select whether the sensor will operate at frequencies of the even table or the frequencies of the odd table or some portion thereof. It also may be desirable that other metrics of the RF environment be collected by one or more sensor devices of the system. For example, other metrics of the RF environment which may be collected could be the spacing or a measure of distance between the sensors and the relative orientation of the sensors. Using these metrics, a configuration may be programmed such that the sensors cooperate in order to minimise mutual interference. Should the sensors be placed in a position where an elevated level of residual interference is likely, before or after a pairing routine is performed, the sensors may provide a notification to reposition or re-orient one or more of the sensors.
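A sketch of the odd/even lookup-table idea follows. It assumes a nominal 10.525 GHz centre frequency and a 250 kHz PRF purely for illustration, and places table entries at half-PRF offsets so that a sensor choosing from the even table and a sensor choosing from the odd table sit at mutual-interference nulls; the helper names are hypothetical.

def build_tables(f_nominal_hz, prf_hz, n_entries=8):
    """Build 'even' and 'odd' candidate centre-frequency tables.
    Entries are spaced by one PRF; the odd table is offset by PRF/2 so that
    an even/odd pairing lands at (n + 0.5) * PRF, a mutual-interference null."""
    even = [f_nominal_hz + n * prf_hz for n in range(n_entries)]
    odd = [f + 0.5 * prf_hz for f in even]
    return even, odd

def nearest(freq_hz, table):
    """Pick the table entry closest to the sensor's current centre frequency."""
    return min(table, key=lambda f: abs(f - freq_hz))

even_table, odd_table = build_tables(10.525e9, 250e3)
f_sensor1 = nearest(10.5250008e9, even_table)  # first sensor snaps to even
f_sensor2 = nearest(10.5251300e9, odd_table)   # second sensor snaps to odd
# |f_sensor1 - f_sensor2| is now an odd multiple of PRF/2 (an interference null).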
4.4 Detection of Friend or Foe As noted, there are cases where multiple sensors, such as two or more, are in proximity and can interfere strongly with each other if countermeasures are not taken. However, even though the sensors can interfere, they are “friends” in that they can be configured to have specific behaviours when detecting and adjusting to interference from each other. In contrast, third party sources of RF signals may interfere with the operation of a sensor by accidentally or actively jamming the pulse sequences. Such third party sources operating at a similar centre frequency to a sensor may be considered “foes”; examples could be a sensing technology from another manufacturer or supplier operating at a similar frequency/pulsing strategy, or perhaps a malicious user trying to deliberately disrupt the operation of a medical cardiorespiratory sensor. Other exemplar sources that could interfere include an in-bedroom/in-hospital (or outside-bedroom) combined passive infrared (PIR) and microwave security detector (e.g., where the microwave detector component operates at a similar RF centre frequency), a high power aviation RADAR, and/or a military, police, traffic management or vehicular RADAR; any of these may produce similar centre frequencies which could interfere with operation of a sensor. For the case of interference caused by friendly sensors, the sensors may be programmed to deliberately scan a frequency range to determine the presence of interference. In that regard, one or more of the friendly sensors may each modify its centre frequency in a search mode to attempt to maximise interference. Upon maximising interference, the sensors may then reconfigure their centre frequency to minimise interference. A frequency range between the maximised and minimum interference frequencies may then be determined by the sensor(s). The sensors may then determine if the frequency range is related to, for example, a known frequency range, and if so, the sensors may assume that another sensor of a known type is the source of the interference. Such scanning would preferentially occur during the absence of any motion in the vicinity of the sensors. Based on the determination that a friendly sensor is the source of interference, the sensors may initiate communication (e.g., using Manchester coding). For example, one or more of the sensors may adjust their respective centre frequencies around a determined interference maximum until two or more sensors agree upon the maximum. In this regard, a first sensor may move its centre frequency at a predefined rate, with the second sensor (or plurality of other sensors) detecting when an interference maximum is achieved. Each sensor could communicate an agreed-upon maximum point, and the sensors could then agree which sensor should move to seek a minimum frequency point. Upon achieving the minimum, the sensor which adjusted would hold at that centre frequency until a correction is needed, for example, to account for temperature drift or another changing factor. Such cooperative actions between friendly sensors may be made through base stations and/or through a mesh network of devices. In that regard, sensors may be reconfigurable, e.g., having the ability to dynamically adjust centre frequency (for example, by including a processor that varies the control voltage of a voltage controlled oscillator) and/or other RF characteristics, including pulse timing and radiated power.
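The search-mode behaviour just described can be summarised in a few lines of Python. This is a minimal sketch: the measurement callback and the synthetic interference profile are hypothetical stand-ins for the receiver hardware, not an implementation from this disclosure.

```python
# Sketch of the cooperative search mode: step the centre frequency
# across a band, note where interference peaks, then park at the
# quietest point. `measure_interference` is a hypothetical hook.

def scan_and_retune(freqs_hz, measure_interference):
    readings = [(f, measure_interference(f)) for f in freqs_hz]
    f_max = max(readings, key=lambda r: r[1])[0]  # interference maximum
    f_min = min(readings, key=lambda r: r[1])[0]  # interference minimum
    return f_max, f_min  # the sensor would hold at f_min

# Example with a synthetic interference profile peaking mid-band.
freqs = [10.5248e9 + i * 10e3 for i in range(41)]
profile = lambda f: 1.0 / (1.0 + ((f - 10.5250e9) / 50e3) ** 2)
f_max, f_min = scan_and_retune(freqs, profile)
```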
In another embodiment, coding may be applied to some RF pulse trains to enable faster communication between “friends” without using other communications channels. Therefore, communication of a polling event to a local or remote processor using a control signal is enabled in order to account for a brief interfering signal and mask it out. As such, this realises a system with no inter-communication needed (i.e., exact knowledge of the current centre frequency is not needed). Thus, a temperature variation reference, e.g., detection of a certain temperature change, may be used to trigger or initiate polling between the sensors for updating the frequencies of each sensor to avoid interference as a result of recent temperature changes. Based on the maximum interference detected and a sensor's own centre frequency, it is possible to measure the frequency of the interference signal. This works by (a) knowing the centre frequency of the sensor, (b) optionally sweeping the centre frequency, (c) locating maximal interference, and (d) deducing the frequency of this interfering source. Where maximal (or high/elevated) interference is detected, it can be deduced that the interfering source has a component at that frequency. Once the interfering frequency is known, it is possible to reconfigure the sensor, alert the user, or indeed reconfigure the third party interfering source (e.g., for the case where configuration of the third party sensor(s) is possible, such as by turning it off, or adjusting its angle, distance, frequency, power level, etc.). Maximal interference is defined by maximal noise; this can be measured by looking at the higher frequency components in the baseband or intermediate frequency. For the example of a sensor with a baseband range from DC to 200 Hz and an intermediate frequency of 8 kHz, a filter range to check for interference could be, say, 500-1500 Hz (the three ranges are roughly factors of 10 apart: one centred around 100 Hz, one around 1,000 Hz, and one approximately around 10,000 Hz) (seeFIG.15).FIG.16Ashows exemplar in-phase and quadrature (IQ) baseband signals in the time domain with no interference.FIG.16Bshows an intermittent interfering signal, whichFIG.16Cshows to have peak noise greater than 200 mVrms and an unpredictable peak noise level. 4.5 Adjustment of Centre Frequency to Avoid Foes (or Other Interfering Sources) As previously described, strong interference sources which are not from other sensors (i.e., friends) may be considered foes. A foe may transmit an RF signal with a centre frequency close or identical to the frequencies being transmitted by the one or more sensors. Accordingly, it is desirable for sensors to be able to operate in the presence of foes which may transmit jamming signals or other malicious RF emissions. In some embodiments, the continued detection of RF interference caused by a foe, by a sensor cooperating with another sensor or sensors, may prompt a system to adjust the frequencies within which it is operating. For example, the system may carry out a search across an allowable lookup table of values or other allowable blocks of radio spectrum in order to find a situation such that the unusual external interference is minimised. If the third party source is a sensor using a similar pulsing scheme, moving to a null interference frequency (e.g., moving the centre frequency by 125 kHz) and/or adjusting the PRF might be sufficient to minimise interference.
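The noise-based interference metric described above (energy appearing in a check band well above the baseband) lends itself to a simple digital illustration. The band edges follow the 500-1500 Hz example in the text; the FFT-based filtering approach and the synthetic signals are assumptions for the sketch.

```python
# Interference metric: for a DC-200 Hz baseband, energy in a
# 500-1500 Hz check band is attributed to interference, not motion.
import numpy as np

def interference_rms(signal, fs_hz, band=(500.0, 1500.0)):
    """RMS of the signal's spectral content inside the check band."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs_hz)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    band_signal = np.fft.irfft(np.where(keep, spectrum, 0.0), len(signal))
    return np.sqrt(np.mean(band_signal ** 2))

fs = 8000.0
t = np.arange(0, 2.0, 1.0 / fs)
clean = 0.5 * np.sin(2 * np.pi * 0.3 * t)            # in-band motion
noisy = clean + 0.3 * np.sin(2 * np.pi * 900.0 * t)  # 900 Hz interferer
print(interference_rms(clean, fs), interference_rms(noisy, fs))
```

A sustained reading above a chosen threshold (the text gives 200 mVrms as an example peak level) could then trigger the avoidance behaviours described in this section.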
If such a move is unsuccessful, a sensor could adjust its centre frequency in steps in order to build up a picture of the local RF environment, and carry out an optimization process (e.g., using gradient descent interference avoidance) in order to locate an interference minimum over time. In some examples, this may require a large change in centre frequency, such as, for example, a move from 10.587 GHz to 9.3 GHz (or vice versa). Should the system be unsuccessful in minimizing the external interference caused by foes and/or other sources, the system may inform a user. For example, the system may attempt to adjust operation via clocking, transmission of adjusted centre frequencies, or centre frequency traversal via a special lookup table. If such adjustments are unsuccessful, the system may inform the user that a readjustment is “Unsuccessful,” as the detected residual interference exceeds a predefined acceptable threshold. Optionally, such information can be presented to the user only if the interference is sustained in nature. In certain extreme cases of interference, the sensor's RF radio may be turned off automatically, and an error signal set (e.g., displayed on a screen). As such, the sensor may be unable to process and/or extract physiological signals, and thus unable to detect a user's biometric parameters. Further, if detected, the biometric parameters may be inaccurate. 5. Noise Reduction without a Synchronization Requirement In some embodiments, noise reduction may be achieved without synchronization between two or more sensors. In that regard, the sensors may be configured to minimize RF coexistence issues without synchronizing the frequency or timing of the sensors. 5.1 Reduce RF Pulse Width One such technique may include reducing the RF pulse width to lessen the probability of interference. Turning back toFIG.5A, pulse width τ may be used to determine the length of the RF pulse signals. Reduction of the pulse width while maintaining the pulse repetition interval, PRI, may have no adverse effect on the sensor operation. Additionally, a shorter pulse width has less chance of being modulated with other pulses than a longer pulse does. The lowest pulse width value may be chosen to meet regulatory approval requirements for the RF signal bandwidth and spurious signal level. 5.2 Dither Each Sensor Another technique for noise reduction may include dithering the pulse timing of each sensor differently, or maintaining the pulse timing of each sensor at a constant frequency offset to each other. In this regard, the different timing of a master and slave sensor could reduce the chance of the sensors' RF pulses locking in-phase with each other. 5.3 Increase Dither Timing The pulse timing dither between two sensors may also be increased and made pseudorandom to reduce noise. Similar to dithering the timing of each sensor, the dithering cycle could be extended and made pseudorandom for either or both of the master and slave sensors. In some examples, a second binary ripple counter and exclusive OR gate, or a microcontroller or processor, may be used to create the extended and pseudorandom dithering cycle. As stated already, it is significant that the dithering (or pseudorandom timing dithering) remain synchronous with the PRF (and IF), so that tone artefacts are not produced by the phase sensitive demodulator receiver of the sensor.
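The ripple-counter-plus-XOR arrangement just mentioned is, in effect, a linear feedback shift register. The following Python sketch shows one way a microcontroller might derive a pseudorandom timing dither from such a register; the tap positions, register width, and dither step are illustrative assumptions, not values from this disclosure.

```python
# Pseudorandom dither bits from a maximal-length 7-bit Fibonacci LFSR
# (the software analogue of a ripple counter plus exclusive-OR gate).

def lfsr_bits(seed=0b1010110, taps=(7, 6), nbits=7):
    state = seed  # must be non-zero
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & ((1 << nbits) - 1)
        yield bit

# Map bits to small +/- offsets around a nominal pulse interval.
PRI_NS, DITHER_NS = 500_000, 40   # assumed values, nanoseconds
gen = lfsr_bits()
intervals = [PRI_NS + (DITHER_NS if next(gen) else -DITHER_NS)
             for _ in range(8)]
```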
In one example, a diode may be coupled with a timing oscillator (e.g., pulse generator408ofFIG.4) configured to control emission of timed pulses from the DRO. The voltage level on the diode may be ramped up, thereby changing the diode capacitance. This leads to a change in the frequency of oscillation of the timing oscillator. As such, timing dithering may occur. 5.4 Dither the RF Frequency Another technique for asynchronously reducing the noise and interference is to dither the dielectric resonant oscillator (DRO) RF frequency to mitigate PRF frequency locking. Further, frequency dithering has the advantage of mitigating external RF interference. In some examples, an additional circuit to either modulate the drain voltage of the DRO or dither the supply voltage of the DRO from a voltage controlled regulator may be required. In this regard, by ramping the supply voltage of the DRO, the frequency output by the DRO may be adjusted. Frequency dithering may allow multiple sensors to coexist in a common vicinity. For example, by employing frequency dithering, the centre frequency of each sensor is moving such that there is a statistically insignificant chance that interference occurs between the sensors in normal operation. Such a technique may also be utilized by itself or in addition to timing dithering. 5.5 Single Housing Another technique for reducing noise between sensors is positioning the sensors in a single housing unit. As shown inFIG.14, two sensors1406and1408are placed within housing unit1400. Sensor1406is placed 180 degrees away from sensor1408, and accordingly signals1402and1404create minimal, if any, interference. Turning back toFIG.9, the lowest level of noise between sensors occurs when the sensors are placed at least 90 degrees apart. Accordingly, when placing the sensors within the housing, they should be at an angle of at least 90 degrees from each other. A benefit of this technique is that the sensors can be easily synchronized through a direct connection. 5.6 Orientation Because the signal level of the interference source is important, the location of the sensors has a role to play in reducing noise. Placing sensors in close proximity and in a line of sight of each other produces maximum interference noise. This noise can be mitigated by orienting the sensors to increase the effective path length. Locating sensors further apart and at an angle to each other is an effective means of reducing coexistence noise. 5.7 Polarisation When the sensors' transmitted and received RF signals are circularly polarised (i.e., their RF signal electric and magnetic fields have a preferred transmit and receive direction), further noise reduction can be achieved by arranging the sensors such that the polarisation of one sensor is orthogonal to that of the other. In this way the reflected movement signal is preferred (it suffers no attenuation due to polarisation) while the received interference RF signal is rejected (it suffers attenuation due to its orthogonal polarisation). 6. Combination Configurations Noise reduction may also be obtained by using more than one of the previously described noise reduction architectures and/or techniques. In Table 1 below, “S” represents Synchronization and “D” represents Dithering. A term in brackets, e.g., “(D)”, indicates an optional case of “S” and/or “D” (i.e., to simplify the presentation of Table 1). For Table 1, “t” represents Timing (which includes IF timing and PRF timing). “f” represents the RF centre frequency.
“nothing” represents where no intervention is made (i.e., “nothing” is the trivial as-is case). The table shows the case of 1 co-existing sensor, but could be extended to (1, 2, . . . n) linked sensors. It should be noted that synchronization of timing implies the synchronization of a wired or wireless control signal (i.e., a first control signal); if dithering is also employed (simultaneous synchronization of timing and the dithering of timing), then a second control signal is used to facilitate synchronization. For timing synchronization, this implies that the PRF timings are locked. For dithering, this implies that IF and PRF are synchronised, but dithered (hence the requirement for the second control signal). A potential limitation of timing synchronization alone is that it requires good RF pulse isolation. (RF bleed through of the RF signal during the OFF period of the RF modulation results in poor RF pulse isolation.) Therefore, it is desirable to turn off the RF radio transmitter between pulses, or to use other approaches to remove this bleed through. In considering Table 1, the combination of timing dither and frequency synchronization [t(D) f(S)] is likely to perform well. The combination of timing synchronization and frequency synchronization [t(S) f(S)] is also likely to perform well, especially if there is good isolation, as is the combination of timing synchronization, timing dither, and frequency synchronization [t(S,D) f(S)].

TABLE 1
t (of IF and PRF)    f
Nothing              Nothing
S (D)                Nothing
S (D)                D
Nothing              S
D                    S
S (D)                S
D                    D
D                    Nothing
Nothing              D

7. Other Considerations 7.1 Correcting for Temperature Variation A centre frequency of a sensor may shift, even under the control of a DRO, as certain operating parameters change. In this regard, a variation in both ambient and internal temperature may cause a sensor's frequency output to shift. In some examples, sensors may experience repeated and significant changes in temperature if a high power light or heat source is in proximity to, or in the same housing as, an RF sensor, and such a source switches on or off over time. There may also be a shift in centre frequency when a product with a processor and sensor is first turned on, and the enclosure reaches the expected operating temperature of the system (which may be above ambient temperature). For the case of a system that contains separate temperature monitoring, a detection of a change in temperature (with reference to rate of change in temperature over time) can be used to adjust the sensor transmission frequency. Therefore, embodiments may include design parameters to assist the sensor in outputting a certain frequency, regardless of any temperature variations. For a system with two or more sensors sending a continuous clock or associated reset synchronization signal over a wired or wireless link with a defined QoS, any temperature or related change in centre frequency may automatically be corrected. In this regard, the sensors may be adjusted by the techniques previously described regarding QoS. For a system with two or more sensors sending periodic centre frequency values read from sensors, optimal spacing may be maintained. For example, the sensors may transmit the values read from the sensors over a network, which allows the adjustment of one or more sensors in such a way that defined frequency spacing is achieved and retained. As such, interference between sensors may be minimized. Such corrections may be made based on a change or delta in a centre frequency of a sensor above a defined threshold.
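A threshold-gated correction of the kind just described might be organised as in the following sketch. The 10 kHz threshold, class name, and callback are invented for illustration; only the report-on-delta behaviour comes from the text.

```python
# Re-broadcast the centre frequency only when it has drifted (e.g.,
# with temperature) by more than a defined threshold.
REPORT_THRESHOLD_HZ = 10e3  # assumed threshold

class DriftReporter:
    def __init__(self, transmit):
        self.transmit = transmit  # callback onto the network link
        self.last_hz = None

    def update(self, measured_hz):
        delta_ok = (self.last_hz is None or
                    abs(measured_hz - self.last_hz) > REPORT_THRESHOLD_HZ)
        if delta_ok:
            self.transmit(measured_hz)  # peers re-establish spacing
            self.last_hz = measured_hz

reporter = DriftReporter(transmit=lambda f: print(f"report {f:.0f} Hz"))
for f in (10.5250e9, 10.5250e9 + 4e3, 10.5250e9 + 15e3):
    reporter.update(f)  # only the first and last readings are sent
```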
For a system with two or more sensors using lookup tables, subsequent to an initial pairing process, each sensor may be able to dynamically detect its current centre frequency (e.g., as it drifts due to a change in temperature or another parameter), and continually or on a periodic basis adjust its frequency in order that it matches an agreed lookup table centre frequency. Such adjustments may thereby minimise interference between the sensors, while assuring that the sensors remain within a defined spectral mask. RF sensor variation and processor control offsets may also be used to estimate the temperature, such that an RF sensor alone could be used to estimate temperature to a certain resolution. In this regard, the temperature estimates may allow for sensor start-up effects, and further, the resolution may enable temperature sensing where no separate temperature sensor is provisioned. Accordingly, no prior knowledge of a temperature coefficient of the oscillator may be necessary. 7.2 Reduce Sensor Synchronization Events The number of times a sensor provides its actual centre frequency to nearby sensors may be reduced by reading and accurately setting the centre frequency at the time of manufacture. In this regard, it is possible to know the maximum and minimum extremes, in terms of temperature, with which a DRO or a quartz crystal operates. Based on the maximum and minimum extremes, it is also possible to determine a temperature coefficient, as operation of the DRO may be linear. Based on the frequency of operation being output by the sensor, an adjustment to correct for inaccuracies caused by the operating temperature may be made given the known temperature coefficient. 7.3 Third Party Timing Correction In embodiments where a sensor measures its own centre frequency, the accuracy of the clock producing the RF signals may be known and adjusted for in the centre frequency determination. For example, the network time protocol (NTP) may be used to determine the actual frequency of the clock at a certain time. A timing calibration may then be performed on the clock, so the other sensors may be adjusted to assure that they operate at previously defined differences in frequency. NTP is a networking protocol for clock synchronization between systems over packet-switched, variable-latency data networks (e.g., the internet, using the User Datagram Protocol (UDP) on port number 123). Commercial crystals may have a known clock rate and accuracy. For example, a crystal may have an accuracy of 20 parts per million, and an associated variation with temperature. In cases of a slow temperature change, for example, over a period of 10 or 30 mins, a timing calibration can be performed on the 4 MHz clock. In this regard, once the current time is available, it is possible to send it to the other devices. To provide the current time to other sensors, the clock signal, in this example from a 4 MHz clock, may be mixed with an RF frequency, for example 10.525 GHz. As a result of the mixing, a harmonic of the 4 MHz clock signal appears on the received signal. Therefore, based on the output, there is a frequency component at n * (actual frequency), where n is an integer and where the actual frequency is the outputted frequency of the sensor (in the current example, 10.525 GHz). The clock rate, due to accuracy issues, may be slightly different than the advertised rate. Continuing the above example, the clock rate may be 4.01387 MHz.
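One way to read the harmonic arithmetic above is that the received signal carries harmonics of the reference clock, so measuring one such harmonic and dividing by its harmonic number recovers the true clock rate. The sketch below follows that reading under invented values; note that the inference is only unambiguous while the drift is small (the 20 ppm case, rather than the much larger 4.01387 MHz excursion quoted as an example above).

```python
# Recover the true clock rate from an observed clock harmonic.
F_CLOCK_NOMINAL_HZ = 4e6  # advertised crystal frequency
F_CARRIER_HZ = 10.525e9   # example sensor centre frequency

def actual_clock_rate(observed_harmonic_hz):
    n = round(observed_harmonic_hz / F_CLOCK_NOMINAL_HZ)  # harmonic no.
    return observed_harmonic_hz / n

# Example: the clock runs 20 ppm fast, and we measure the clock
# harmonic that lands nearest the carrier.
true_clock_hz = F_CLOCK_NOMINAL_HZ * (1 + 20e-6)
n_near = round(F_CARRIER_HZ / F_CLOCK_NOMINAL_HZ)
observed_hz = n_near * true_clock_hz
print(actual_clock_rate(observed_hz))  # 4000080.0
```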
To assure accurate timing between the sensors, the clock rate may be adjusted until the output frequency includes no frequency deviation and/or beat frequency. Based on the adjusted rate, the crystal can be determined to be operating at n times the clock rate. Based on the passage of time using the network time protocol, GPS timing signals, or another time reference, a clock synchronization signal may be calculated. For example, a reference frequency may be found from an internet source. This synchronization signal may then be sent to the other sensors. 7.4 Location Aware Sensor Parameters In some embodiments it may be possible for a sensor to obtain location information. In this regard, the sensor may include or have access to the data of a positioning system, such as a global positioning system (GPS), whereby the sensor may obtain geographical location information as to where the sensor is currently positioned. In some embodiments, the sensor may be included in, or connected to, a smart-device (e.g., smartphone, tablet, smartwatch, etc.). Accordingly, the sensor may receive its geographical location information from that smart-device. Timing information may also be recovered via a GPS receiver, and could allow wireless synchronization (assuming that an adequate GPS signal is available). Based on input of geographical location information, the sensor may then assure that its operation is within an allowed spectral mask for the current geographical region in which the sensor is located. Alternatively, the sensor may automatically deactivate if the possible set of control parameters of the sensor is incapable of operating the sensor within the allowed spectral mask of the geographical region in which the sensor is located. Therefore, one or more sensors can both comply with local radio frequency regulations and coexist with one another. 7.5 Low Power State Sensors may be switched to a low power search mode (or even a sleep or off mode) if no motion is detected for a predetermined amount of time. In this regard, a sensor might be integrated into body worn devices, such as pendants, chest bands, bracelets, watches, hats, and other such devices. Additionally, sensors may be integrated directly into existing electronic devices such as smart watches, smartphones, internet of things (IoT) devices, etc. As such, sensors may be programmed to switch to a low power search mode if the device in which the sensor is integrated is not being used, such as if no motion is detected. For example, a sensor integrated into a pendant may be placed onto a dresser. Since the user is not wearing the pendant, the sensor may detect no motion. Accordingly, the range of the sensor may be adjusted by reducing the output power, frequency, and/or duration of pulses to reduce the overall power consumption. Further, the adjustments may be programmed to be within allowable ranges. While sensors integrated into devices are described, standalone sensors may also be programmed to switch to a lower power search mode if the sensor fails to detect motion. Therefore, a low power condition can act as a further aid to coexistence, by reducing the RF emitted power of one or more sensors. 7.6 Security Sensors might also be used in security sensing applications, to detect unauthorised physiological patterns (e.g., intrusion of a person or persons) into a detection area, and raise an alarm (or send a control signal to a processor).
As can be seen, in a security application, many such sensors could be co-located in a building, and thus RF sensor coexistence is highly important. It is also possible that one or more sleep sensors could be reconfigured by a control system when the user is away during the day to act as nodes or sensors in an intruder (burglar) alarm system (e.g., to detect an intruder in a bedroom). 7.7 Processing Processing of signals, such as those received by a sensor, may be performed by a processor on a sensor printed circuit board assembly (PCBA). Such a PCBA may also allow communication over an analog and/or digital link with a remote processor, such as a microprocessor on a main board. In embodiments with digital sensors, signals may be digitized and transmitted over a wireless or wired connection. Digitisation may be performed at a high resolution and/or sampling rate, and the sensor signals themselves (e.g., in-phase (I) and quadrature (Q) streams, or a stream prior to or not requiring such separation of I and Q), may be transmitted to a single or multiple processors. Further, each channel of transmitted information may also contain information about current or recent centre frequency, relative changes in centre frequency, lookup table location in use, etc. The number of components to implement a multi-sensor system may be reduced by minimising component count on one or more sensors. For example, in an operating environment, such as a home, apartment block, hotel, office, hospital or nursing home where multiple sensors are in use, the system may utilise existing data links and/or data processing power available in a wider system implementation in order to achieve the desired motion and physiological sensing. In one example, sensors may transmit their respective signals to a remotely located, separately housed processor, capable of processing multiple sensor signals at once. Optionally, digitised sensor signals could be transcoded to audio frequencies such that existing audio processing accelerators and routines might be utilised in order to detect specific motion patterns. In addition, whilst the main focus of the described technology is associated with applications for detecting respiration, sleep and heart rate, it is similarly suitable for detecting other movements of the human body (or of an animal if so configured). Unless the context clearly dictates otherwise and where a range of values is provided, it is understood that each intervening value, to the tenth of the unit of the lower limit, between the upper and lower limit of that range, and any other stated or intervening value in that stated range is encompassed within the technology. The upper and lower limits of these intervening ranges, which may be independently included in the intervening ranges, are also encompassed within the technology, subject to any specifically excluded limit in the stated range. Where the stated range includes one or both of the limits, ranges excluding either or both of those included limits are also included in the technology. Furthermore, where a value or values are stated herein as being implemented as part of the technology, it is understood that such values may be approximated, unless otherwise stated, and such values may be utilized to any suitable significant digit to the extent that a practical technical implementation may permit or require it. 
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this technology belongs. Although any methods and materials similar or equivalent to those described herein can also be used in the practice or testing of the present technology, a limited number of the exemplary methods and materials are described herein. When a particular material is identified as being preferably used to construct a component, obvious alternative materials with similar properties may be used as a substitute. Furthermore, unless specified to the contrary, any and all components herein described are understood to be capable of being manufactured and, as such, may be manufactured together or separately. It must be noted that as used herein and in the appended claims, the singular forms “a”, “an”, and “the” include their plural equivalents, unless the context clearly dictates otherwise. All publications mentioned herein are incorporated by reference to disclose and describe the methods and/or materials which are the subject of those publications. The publications discussed herein are provided solely for their disclosure prior to the filing date of the present application. Nothing herein is to be construed as an admission that the present technology is not entitled to antedate such publication by virtue of prior invention. Further, the dates of publication provided may be different from the actual publication dates, which may need to be independently confirmed. Moreover, in interpreting the disclosure, all terms should be interpreted in the broadest reasonable manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. The subject headings used in the detailed description are included only for the ease of reference of the reader and should not be used to limit the subject matter found throughout the disclosure or the claims. The subject headings should not be used in construing the scope of the claims or the claim limitations. Although the technology herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the technology. In some instances, the terminology and symbols may imply specific details that are not required to practice the technology. For example, although the terms “first” and “second” may be used, unless otherwise specified, they are not intended to indicate any order but may be utilised to distinguish between distinct elements. Furthermore, although process steps in the methodologies may be described or illustrated in an order, such an ordering is not required. Those skilled in the art will recognize that such ordering may be modified and/or aspects thereof may be conducted concurrently or even synchronously. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the technology. 
It will further be understood that any reference herein to subject matter known in the field does not, unless the contrary indication appears, constitute an admission that such subject matter is commonly known by those skilled in the art to which the present technology relates.

PARTS LIST
detection apparatus 100
detection apparatus 102
sensor 300
sensor 302
RF pulse 310
RF pulse 312
sensor 402
oscillator 404
pulse generator 408
periodic sinusoidal amplitude envelope 600
signal 602
signal 700
signal 702
resultant receiver RF signal 704
sensor 800
second active source 802
object 804
RF signal 806
signal 808
master sensor 1201
slave sensor 1202
red signal 1204
master sensor circuit 1301
slave sensor circuit 1302
housing unit 1400
signal 1402
signal 1404
sensor 1406
sensor 1408
DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying figures, which form a part hereof. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, figures, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein. Further, while embodiments disclosed herein make reference to use on or in conjunction with a living human body, it is contemplated that the disclosed methods, systems and devices may be used in any environment where non-invasive detection of a flow property is desired. The environment may be any living or non-living body or a portion thereof, a gel, an emulsion, a fluid conduit, a fluid reservoir, etc. For example, one of skill in the art will recognize that the embodiments disclosed herein may be used to sense properties of fluid flow in a water treatment system. Moreover, while the present disclosure describes embodiments for use in vivo, one of skill in the art will also recognize that in vitro applications are possible as well. Accordingly, the environment may also include a test tube or other vessel for holding a fluid. I. Overview A property of flow in an environment (e.g., a mean velocity of a fluid flow in an environment, a peak velocity of scatterers in an environment, a distribution of velocities of scatterers in an environment) can be detected by illuminating the environment using a beam of light emitted by a laser (e.g., a substantially coherent, substantially monochromatic beam of light) and detecting a change over time of a pattern of constructive and destructive interference in light emitted by the environment in response to the illumination. That is, scattering of the illumination by scattering elements in the environment (e.g., cells in blood, smoke particles in air) could cause constructive and destructive interference between illuminating light that takes different paths through the scattering environment, thus forming a pattern of light and dark speckles when projected on a surface. A light sensor could be disposed on the surface and configured to detect the intensity (or some other property) of light emitted by the environment in response to the illumination. Changes in the detected level of intensity of the received light over time (i.e., a time-dependence of the pattern of constructive and destructive interference in the light emitted by the environment) could be used to determine the property of flow in the environment. For example, the duration, rise time, fall time, and/or some other property of speckle events (e.g., changes in the environment (e.g., translations or rotations of scattering particles) that cause a corresponding change in the speckle pattern) could be used to determine the property of flow in the environment.
The environment could be any environment that, when illuminated by a laser or other source of substantially coherent light, emits light having a pattern of constructive and destructive interference (e.g., a speckle pattern) related to the configuration of elements in the environment such that a change in the pattern of light can be related to a flow property of a region of the environment (e.g., a velocity of flow of a liquid in the environment). The environment could include gases, liquids, gels, or other fluids. The environment can include a population of scattering agents, i.e., small particles or other objects or features configured to move with a fluid flow and to reflect, refract, diffract, or otherwise scatter light. Examples of such scattering agents and environments include blood cells and other particles in blood, particles of water or ice in a fog, and solid and liquid particulates in smoke. In some examples, the laser, light sensor, and/or other elements of systems for measuring flows as described herein could be disposed proximate to the environment of interest; in other examples, the components of the flow-measuring systems could be disposed at a distance from an environment of interest. In some examples, a scattering agent (e.g., dielectric nanoparticles, fog, smoke, cavitation bubbles) could be introduced into and/or induced in an environment of interest to enable detection of a flow property of a region of the environment of interest. In some examples, the scattering agent could include cavitation bubbles induced in a region of the environment of interest, e.g., by emitting ultrasonic energy into the environment of interest. Changes in the arrangement of scattering agents or other scattering features within the environment can cause a change in the pattern of light (i.e., the speckle pattern) emitted by the environment in response to illumination by a laser or other light source that emits a beam of coherent illumination. For example, displacement of scattering features disposed in a fluid due to flow of the fluid can cause a change in the pattern of light that is related to the direction, velocity, or other properties of the fluid, the fluid flow, and/or the location and orientation of the scattering features. When the intensity of the speckle pattern is measured in a specified region (e.g., at a ‘point’ corresponding to the location of a light sensor), time-dependent features of the measured intensity (i.e., a waveform of the measured intensity) can be related to a flow property and/or other properties of the environment. For example, movements of the scattering features can cause the specified region to experience a ‘speckle event’, wherein the intensity of the light received from the environment increases/decreases suddenly, followed by a sudden decrease/increase. Such a pulse in the intensity could take the form of a quasi-trapezoidal pulse, a raised-cosine pulse, or some other pulse shape. Further, one or more properties of the speckle event pulse (e.g., a rise time, a fall time, a pulse width, a pulse amplitude) could be related to a flow property (e.g., a velocity of an individual scattering feature in a fluid flow or a distribution of velocities of individual scattering features in a fluid flow) of the environment. Other properties (e.g., a rate of change) of the measured intensity level over time could be related to a flow property of the environment.
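As an illustration of extracting such pulse properties digitally, the sketch below measures the rise time of speckle-event pulses in a sampled intensity waveform. The 20%/80% thresholds, the sampling rate, and the synthetic pulse are assumptions; a real implementation would tune these to the sensor.

```python
# Rise times of speckle-event pulses in a sampled intensity waveform.
import numpy as np

def rise_times(intensity, fs_hz, lo_frac=0.2, hi_frac=0.8):
    x = intensity - np.median(intensity)  # crude baseline removal
    lo, hi = lo_frac * x.max(), hi_frac * x.max()
    times, t_lo = [], None
    for i in range(1, len(x)):
        if x[i - 1] < lo <= x[i]:             # entered a rising edge
            t_lo = i
        elif t_lo is not None and x[i - 1] < hi <= x[i]:
            times.append((i - t_lo) / fs_hz)  # low-to-high transit
            t_lo = None
    return times

# One synthetic quasi-trapezoidal speckle event sampled at 100 kHz.
fs = 100e3
pulse = np.concatenate([np.zeros(200), np.linspace(0, 1, 50),
                        np.ones(100), np.linspace(1, 0, 50), np.zeros(200)])
print(rise_times(pulse, fs))  # roughly 0.3 ms for this synthetic edge
```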
In some examples, the intensity of the received light (as detected using a light sensor) could be sampled at a high rate by an analog-to-digital converter, and subsequent processing could be performed by a processor or other computational substrate, based on the sampled intensity information, to determine a flow property of the environment. In some examples, analog circuitry (e.g., operational amplifiers, filters, comparators, sample-and-holds, peak detectors, differentiators) could be included to perform some analog computation on the output of the light sensor. For example, a rate of change (i.e., slope) of the output of the light sensor could be computed using a differentiator, and the output of the differentiator could be passed to a peak detector, such that the output of the peak detector could be related to the velocity of the highest-velocity scattering feature to have caused a speckle event as measured by a light sensor during a specified period of time. Additional or alternative methods of using the output of a light sensor to determine a flow property of an environment are anticipated. In some embodiments, multiple light sources (e.g., lasers) could be configured to emit beams of illumination having different wavelengths and/or from different angles or locations relative to an environment of interest and/or relative to a light sensor(s) to determine one or more flow properties of the environment or sub-regions thereof or according to some other application. In some embodiments, a plurality of light sensors could be disposed at a plurality of locations relative to an environment of interest and/or relative to an illuminating laser(s) to determine one or more flow properties of the environment of interest (or a subsection thereof) or according to some other application. In some examples, the plurality of sensors could be part of a single integrated circuit. In some examples, each individual light sensor of the plurality of light sensors could have a respective set of analog or other electronic components as described herein to enable detection of a flow property of a region of an environment. The above described system may be implemented as a wearable device. The term “wearable device,” as used in this disclosure, refers to any device that is capable of being worn at, on, or in proximity to a body surface, such as a wrist, ankle, waist, chest, ear, eye or other body part. In order to take in vivo measurements in a non-invasive manner from outside of the body, the wearable device may be positioned on a portion of the body where subsurface vasculature or some other flow environment is easily observable. The device may be placed in close proximity to the skin or tissue, but need not be touching or in intimate contact therewith. A mount, such as a belt, wristband, ankle band, headband, etc. can be provided to mount the device at, on or in proximity to the body surface. The mount may prevent the wearable device from moving relative to the body to reduce measurement error and noise. Further, the mount may be an adhesive substrate for adhering the wearable device to the body of a wearer. The light sensor, laser, and, in some examples, a processor and/or other components, may be provided on the wearable device. In other embodiments, the above described system may be implemented as a stationary measurement device that a user may be brought into contact or proximity with or as a device that may be temporarily placed or held against a body surface during one or more measurement periods. 
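A digital stand-in for the differentiator-plus-peak-detector chain described above could be as simple as the following sketch; the peak of the first derivative over a window serves as a proxy for the fastest scatterer seen during that window. The sampling rate and toy waveform are assumptions.

```python
# Peak slope of the sensor output: the digital analogue of a
# differentiator feeding a peak detector.
import numpy as np

def max_slope(intensity, fs_hz):
    return np.max(np.diff(intensity)) * fs_hz  # units of intensity/s

fs = 100e3
t = np.arange(0, 0.01, 1.0 / fs)
waveform = np.clip(np.sin(2 * np.pi * 300.0 * t), 0.0, None)  # toy pulses
print(max_slope(waveform, fs))
```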
In other embodiments, the above described system may be implemented to interrogate an environment that is not a part of a human body, e.g., an in vitro or other sample container, an outdoor environment, an animal body, or some other environment of interest that can scatter a laser-emitted beam of illumination in a manner related to a flow property of the environment (or a subsection thereof). It should be understood that the above embodiments, and other embodiments described herein, are provided for explanatory purposes, and are not intended to be limiting. II. Illustrations of Flow Property Detection Flow properties (e.g., a flow rate) of fluid in an environment (e.g., blood in a portion of vasculature, a liquid, gel, emulsion, gas, or other flowing material in an industrial or other environment) can be detected by a variety of methods related to properties of the fluid and of the environment. In examples wherein the environment contains scatterers (i.e., particles that can scatter incident illumination and that can be affected by a fluid or other flow in the environment), a flow property of the environment could be detected and/or determined by illuminating the environment and detecting a time-dependence of a pattern, intensity, or other property of illumination scattered by the scatterers. FIG.1is a partial cross-sectional side view of a human arm105illustrating the operation of an example system100. In the example shown inFIG.1A, the system100includes a laser110configured to emit a beam of coherent illumination115into tissue of the arm105including a portion of subsurface vasculature107containing blood cells109(i.e., scatterers). The system100additionally includes a light sensor120configured to detect a pattern of constructive and destructive interference in a portion of the beam of coherent illumination115that is scattered by tissue of the arm105and that is emitted as an emitted light117toward the light sensor120. The system100additionally includes a controller (not shown) configured to operate the laser110and the light sensor120to determine a flow property (e.g., a flow rate) of blood in the portion of subsurface vasculature107. The system100could include further elements, e.g., a housing within which the laser110, light sensor120, and/or controller could be disposed, a mount configured to mount the laser110and light sensor120to the arm105, or some other elements. The pattern of constructive and destructive interference in the emitted light117can be the result of individual portions of the beam of coherent illumination115being scattered by different scattering (e.g., reflecting, refracting, diffracting) elements in the arm105(e.g., cell walls, blood cells, cell elements, tissue boundaries, chromophores, fat globules, or other reflective elements/boundaries and/or discontinuities in refractive index) and thus experiencing different path lengths between emission at the laser110and reception at the light sensor120. The different portions of the beam of coherent illumination115(having been scattered toward the light sensor in the form of the emitted light117) are thus out of phase and will constructively and/or destructively interfere with each other in a manner related to respective amplitudes and relative phases of the portions of the emitted light117to form a pattern of constructive and destructive interference at the light sensor120and/or at other locations in the vicinity of the system100and arm105.
Thus, the pattern of constructive and destructive interference in the emitted light117can be related to a configuration of elements of the arm105(e.g., to the location of blood cells109in the portion of subsurface vasculature107). The light sensor120detecting the pattern of constructive and destructive interference can include the light detector120being configured and/or operated to detect any property or properties of the pattern of constructive interference having a time dependence that can be used to determine a flow property of blood in the portion of subsurface vasculature107. In some examples, this could include the light sensor120being configured to detect the intensity and/or some other property of the emitted light117at a single point (e.g., using a single photodetector or other light-sensitive element disposed at a specified location relative to the laser110and/or elements of the arm105). In some examples, this could include the light sensor120being configured to detect the intensity and/or some other property of the emitted light117at multiple points (e.g., using two or more photodetectors or other light-sensitive elements). In some examples, the light sensor120could be a camera (i.e., an aperture, an array of photodetectors, and/or optics) and detecting the pattern of constructive interference could include detecting the amplitude of the emitted light117that is received by the camera from various locations (e.g., from various respective angles relative to the camera) of the arm105. Other configurations and operations of one or more light sensors (e.g.,120) to detect the pattern of constructive and destructive interference in the emitted light117are anticipated. Further, detecting the pattern of constructive and destructive interference in the emitted light117could include detecting additional or alternative properties of the pattern of constructive and destructive interference. Detected properties of the pattern of constructive and destructive interference could include the intensity, wavelength, spectrum, degree of polarization, direction of polarization, or some other property of light at a specified location(s) in the pattern and/or from a particular direction (e.g., from a particular location of the arm105and/or from a particular direction relative to a photodetector or other light sensitive element). Detected properties of the pattern of constructive and destructive interference could include properties of an image formed by the pattern of constructive and destructive interference (e.g., an image detected using a camera, an array of photodetectors on a surface, or some other image-detecting apparatus); for example, a contrast ratio, a speckle location, a speckle size, a number of speckles, a speckle shape, an overall pattern width, or some other property or properties. FIG.2illustrates an example speckle image200that could be generated on an imaging surface (e.g., a surface of the light sensor120) in response to illumination of a scattering environment (e.g., the arm105, portion of subsurface vasculature107, and blood cells109) by light emitted from the environment (e.g.,117) in response to a beam of coherent illumination (e.g., a beam115emitted by the laser110). The speckle image200includes a plurality of speckles210corresponding to where the constructive and destructive sum of the light impinging on a corresponding region of the imaging surface results in an overall higher level of light intensity than in other regions of the speckle image200. 
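One standard statistic that could serve as the “contrast ratio” mentioned above is the speckle contrast, K = sigma(I) / mean(I), computed over an image patch; fully developed speckle gives K near 1, and blurring by scatterer motion during the exposure lowers it. The sketch below is generic speckle analysis offered as illustration, not a procedure from this disclosure.

```python
# Speckle contrast K of an image patch.
import numpy as np

def speckle_contrast(patch):
    patch = patch.astype(float)
    return patch.std() / patch.mean()

rng = np.random.default_rng(0)
static = rng.exponential(1.0, (64, 64))   # ideal speckle: K near 1
blurred = 0.5 * (static + rng.exponential(1.0, (64, 64)))  # K near 0.71
print(speckle_contrast(static), speckle_contrast(blurred))
```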
Properties of the pattern of constructive and destructive interference that results in the speckle image200are related to properties of the scattering environment (e.g., location of scattering elements in the environment, refractive index of elements of the environment), properties of the illuminating beam of coherent illumination (e.g., a wavelength, a spectral line width, an intensity, a coherence length, a beam width, a beam polarization), and of the imaging surface on which the speckle image200is formed (e.g., the location of the imaging surface relative to the beam of coherent illumination and the environment). Thus, time-dependent changes in the configuration of the environment (e.g., movement of scatterers in a fluid flow in the environment) could result in a time-dependent change in the pattern of constructive and destructive interference in the light emitted by the environment that could further result in a time-dependent change in the imaged speckle pattern200. That is, the location, number, size, shape, intensity, or other properties of speckles210or other features of the speckle pattern200could change in a time-dependent manner related to a change in the environment and/or a change in the location of the imaging surface and/or source of the beam of coherent illumination in relation to the environment. The pattern of constructive and destructive interference represented by the speckle image200could be related to reflection, refraction, diffraction, scattering, absorption, or other interactions between a beam of coherent light illuminating an environment and elements of the environment. For example, interfaces between regions of the environment having different indices of refraction (e.g., at a cell wall, at the wall of a portion of vasculature, at the surface of a bone, at the surface of a muscle, at a skin surface, at some other interface in a biological or other environment) can cause scattering, refraction, reflection, and/or other interactions with light. Other elements of an environment (e.g., metallic and/or semiconductive particles, surfaces, or other elements) could cause reflection, scattering, and/or other interactions with illuminating light in a manner related to the pattern of constructive and destructive interference represented by the speckle image200. FIGS.3A-3Eillustrate the operation of an example system300that could be operated to determine a flow property (e.g., a flow rate) of blood in a portion of subsurface vasculature307in an arm305. The system300includes a laser310configured to emit a beam of coherent illumination315into tissue of the arm305that includes the portion of subsurface vasculature307and blood cells (e.g., illustrative blood cell309) contained in the portion of subsurface vasculature307that move along with blood in the portion of subsurface vasculature307. The system300additionally includes a light sensor320configured to detect a pattern of constructive and destructive interference in a portion of the beam of coherent illumination315that is scattered by tissue of the arm305and that is emitted as an emitted light317a,317b,317ctoward the light sensor320. The system300additionally includes a controller (not shown) configured to operate the laser310and the light sensor320to determine a flow property (e.g., a flow rate) of blood in the portion of subsurface vasculature307.
The system300could include further elements, e.g., a housing within which the laser310, light sensor320, and/or controller could be disposed, a mount configured to mount the laser310and light sensor320to the arm305, or some other elements. To illustrate the operation of the system300, the movement of an illustrative blood cell309due to blood flow in the portion of subsurface vasculature307is illustrated inFIGS.3A-3C, along with the corresponding time-dependent changes of the pattern of constructive and destructive interference detected by the light sensor320. Specifically,FIG.3Dillustrates an example detected light intensity waveform351corresponding to the intensity of the pattern of constructive and destructive interference at a specified point as detected by the light sensor320. FIG.3Aillustrates the system300and arm305at a first period of time. The illustrative blood cell309is in an upstream region of the portion of subsurface vasculature307that is substantially outside of a region illuminated by the beam of coherent illumination315. As a result, the light sensor320detects a first light intensity350arelated to a pattern of constructive and destructive interference in first emitted light317a. FIG.3Billustrates the system300and arm305at a second period of time. The illustrative blood cell309is moved downstream due to blood flow into the region of the portion of subsurface vasculature307that is illuminated by the beam of coherent illumination315and thus acts to scatter the beam of coherent illumination315. As a result, the light sensor320detects a second light intensity350brelated to a pattern of constructive and destructive interference in second emitted light317bthat is substantially different from the pattern of constructive and destructive interference in first emitted light317a. FIG.3Cillustrates the system300and arm305at a third period of time. The illustrative blood cell309is moved downstream due to blood flow into a downstream region of the portion of subsurface vasculature307that is substantially outside of the region illuminated by the beam of coherent illumination315. As a result, the light sensor320detects a third light intensity350crelated to a pattern of constructive and destructive interference in third emitted light317cthat is substantially similar to the pattern of constructive and destructive interference in first emitted light317a. The movement of the illustrative blood cell309through the portion of subsurface vasculature307in the first, second, and third periods of time (as illustrated inFIGS.3A-C, respectively) results in the light sensor320detecting an illustrative speckle event350in the detected light intensity waveform351. The illustrative speckle event350is a trapezoidal pulse that includes a rising edge353, a plateau355, and a falling edge357. One or more of these elements could be related to the speed of the illustrative blood cell309and thus to a flow property of the blood in the portion of subsurface vasculature. In some examples, a time property (e.g., a rise time of the rising edge353, a duration of the plateau355, a fall time of the falling edge357) of the speckle event350could be related to a speed of the illustrative blood cell309. For example, the rate of increase in intensity during the rising edge353could correspond to the velocity of the illustrative blood cell309such that higher rates correspond to higher velocities.
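To make the rise-time-to-velocity relationship concrete, the sketch below assumes the simplest possible model: velocity inversely proportional to rise time through an effective speckle dimension. The calibration constant is a placeholder, and a real system would determine the mapping empirically, since it depends on the geometry and the speckle size; this is an illustrative assumption, not the method of this disclosure.

```python
# Map speckle-event rise times to per-cell velocity estimates and a
# crude bulk-flow figure. The calibration constant is an assumption.
import numpy as np

K_CAL_M = 20e-6  # assumed effective speckle dimension, metres

def velocity_from_rise_time(rise_time_s):
    return K_CAL_M / rise_time_s  # faster edges -> higher velocities

rise_times_s = np.array([3e-4, 1.5e-4, 6e-4])       # from detected events
velocities = velocity_from_rise_time(rise_times_s)  # m/s per event
print(velocities, velocities.mean())  # distribution and mean estimate
```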
Note that the movement of the illustrative blood cell309and the corresponding detected light intensity waveform351are meant as illustrative examples. A portion of subsurface vasculature could include many blood cells having respective different velocities related to the movement of blood in the portion of subsurface vasculature. Further, the movement of an individual blood cell through a region of subsurface vasculature illuminated by a coherent light source could result in no speckle event, multiple speckle events, or some other feature(s) to be present in a detected light intensity waveform or other detected signal related to the pattern of constructive and destructive interference in a portion of a beam of coherent illumination that is scattered by the environment (including the portion of subsurface vasculature and blood cell(s)) and that is emitted as an emitted light toward a light sensor. FIG.3Eshows an example detected light intensity waveform361that could be detected using the system300when a plurality of blood cells and other scatterers are being moved in a flow of blood in the portion of subsurface vasculature307. The detected light intensity waveform361includes a plurality of speckle events having respective shapes, durations, amplitudes, rise/fall times, and/or other properties. The system300could include electronics (e.g., amplifiers, filters, comparators, envelope detectors, slope detectors, differentiators, peak detectors, ADCs, microprocessors, microcontrollers) configured to determine one or more flow properties of the blood in the portion of subsurface vasculature based on the detected light intensity waveform361. For example, the electronics could be configured to detect a rise time of individual speckle events in the detected light intensity waveform361and to determine a corresponding blood cell velocity. The electronics could be further configured to determine a distribution of velocities of individual blood cells in the blood, a mean flow rate of the blood, and/or some other flow property of the blood in the portion of subsurface vasculature. Determined flow properties of an environment (e.g., of blood in a portion of subsurface vasculature) could be any properties or physical parameters relating to a flow of a fluid within the environment. In some examples, determined flow properties could include the velocity, direction, acceleration, or other information about the movement of individual particles (e.g., blood cells or other scatterers) or groups of particles within the environment. For example, a system could determine the velocity of individual particles in the environment based on a detected temporal property of speckle events or other features of a detected waveform that is related to a pattern of constructive and destructive interference in light emitted by the environment in response to illumination by a beam of coherent light. In some examples, determined flow properties could include properties describing a bulk flow of fluid, e.g., a flow rate, a mean flow velocity, a mean flow speed, a mass flow rate, or some other property of a fluid flow in an environment. In some examples, the detected flow property could correspond to a subsection or other specified region of an environment, e.g., blood in a portion of subsurface vasculature in an arm or other portion of anatomy.
The location of the specified region could be related to the configuration of the system (e.g., the location and direction of a laser, the location and direction of sensitivity of a light sensor). For example, a laser of the system could be configured to emit a beam of coherent illumination in a specified direction relative to the laser, and a light sensor could be configured to detect a property of light received from a specified direction relative to the light sensor, such that the determined flow property is a flow property of fluid proximate to the intersection of the beam of coherent illumination and a vector extending from the light sensor in the specified direction relative to the light sensor. The laser310could be configured in a variety of ways and include a variety of elements such that the emitted beam of coherent illumination315has one or more specified properties according to an application. The beam of coherent illumination315could have a specified wavelength. In some examples, the wavelength of the beam of coherent illumination315could be specified such that it could penetrate an environment of interest, such that it could be scattered by scatterers in a fluid flow in the environment of interest, or according to some other considerations. For example, the environment could be a portion of subsurface vasculature within a portion of human anatomy, the determined flow property could be a flow property of blood within the portion of subsurface vasculature, and the wavelength of the beam of coherent illumination315could be between approximately 400 nanometers and approximately 1000 nanometers. In some examples, the wavelength of the beam of coherent illumination315could be specified relative to a characteristic size or other property of scatterers (e.g., blood cells, cavitation bubbles, natural and/or artificial particles, bubbles of gas or other material having dissimilar optical properties to a surrounding fluid medium) such that the scatterers could scatter the beam of coherent illumination315and cause the environment to emit light having a pattern of constructive and destructive interference related to the configuration of the environment and/or scatterers. The wavelength of the beam of coherent illumination315could be within a near-infrared (NIR) transparency window of biological tissue (e.g., between approximately 780 and approximately 810 nanometers). In some examples, the beam of coherent illumination315could have a coherence length that is greater than some minimum coherence length (e.g., greater than 1 millimeter) that is related to scattering properties of elements of the environment (e.g., skin cells, connective tissue, portions of subsurface vasculature, blood cells, and other elements of a human arm or other portion of human anatomy). The specified minimum coherence length could be related to a spacing of scatterers or other optical features (e.g., reflecting, refracting, and/or diffracting interfaces between regions having different indices of refraction, metallic and/or semiconductive elements) in the environment such that one or more properties of a pattern of constructive and destructive interference can be detected and used to determine a flow property of the environment. Further, the laser310could include a volume holographic grating, a monochromator, a Lyot filter, a Bragg reflector, a dielectric mirror, or some other element(s) configured to increase a coherence length of and/or decrease a spectral line width of the beam of coherent illumination315.
Such elements could be disposed on a discrete laser (e.g., a volume holographic grating could be disposed in the path of the beam of a laser) and/or could be incorporated into one or more elements of the laser310(e.g., mirrors, lenses, gain media, frequency doublers, or other elements of the laser310could be configured such that they have properties of one or more of the listed additional elements). The laser310could be selected from a wide variety of lasers according to an application. The laser310could include a gas laser, a chemical laser, a dye laser, a metal-vapor laser, a solid-state laser, a semiconductor laser, or any other type of laser configured to produce a beam of coherent illumination having one or more specified properties (e.g., wavelength, spectral line width, coherence length) such that the laser could illuminate an environment of interest (e.g., a portion of subsurface vasculature307) that contains light-scattering elements (e.g., blood cells, human tissue) such that the environment of interest responsively emits light having a pattern of constructive and destructive interference that has one or more time-dependent properties that can be detected and used to determine a flow property (e.g., a flow rate of blood) of the environment. In some applications, the system300could be a wearable device and the laser310could be configured to satisfy limited power and space requirements of the wearable device such that the system300could be battery-powered and could be comfortably worn by a wearer (e.g., worn around a wrist of the wearer). For example, the laser310could be a small laser diode, e.g., a VCSEL, a double heterostructure laser, a quantum well laser, or some other structure of semiconductor laser incorporating gallium nitride, indium gallium nitride, aluminum gallium indium phosphide, aluminum gallium arsenide, indium gallium arsenide phosphide, lead salt, or some other material or combination of materials as a gain medium. In some examples, the laser310could include frequency doublers, optics, collimators, or some other elements according to an application. In some examples, the laser310could be incorporated into other elements of the system300. For example, the laser310could be wire-bonded, soldered, or otherwise electronically and/or mechanically coupled to a circuit board or other element(s) of the system300. Additionally or alternatively, the laser310or elements thereof could be incorporated into a single semiconductor device (e.g., wafer or chip) with other components (e.g., a laser power supply, a microcontroller). Further, the laser310could be configured to control the direction of the beam of coherent illumination315(e.g., by including servos, motors, piezo elements, or other actuators configured to translate and/or rotate the laser and/or optics or other elements thereof) to enable detection of flow properties in specified sub-regions of the arm305(e.g., different regions of the portion of subsurface vasculature307, different portions of subsurface vasculature (not shown)) by directing the beam of coherent illumination315toward the different specified sub-regions of the arm305. In some examples, the system300could include more than one laser. Individual lasers of the more than one laser could have respective specified properties (e.g., locations, angles and/or locations of emitted beams of coherent illumination, wavelengths, coherence lengths, polarizations) according to an application.
More than one laser could be provided to allow for detection of a flow property in more than one region of the arm305(e.g., multiple locations of the portion of subsurface vasculature307, other portions of tissue in the arm305). In some embodiments, the system300could include a spatially distributed array of lasers configured such that individual lasers of the array emit beams of coherent illumination into respective individual subregions (e.g., overlapping or non-overlapping portions of tissue) of the arm305. Such an array of lasers could be operated to determine a corresponding plurality of flow properties of the respective individual subregions of the arm305(e.g., to determine a flow map within the arm305, to determine a location, shape or other property of vasculature in the arm305, or according to some other application). More than one laser could be provided to enable higher-accuracy or otherwise improved detection of a flow property of blood or other fluid (e.g., by providing a redundant source of coherent illumination, by allowing illumination of a single region of the portion of vasculature from multiple angles, by providing multiple wavelengths of illumination for detection). More than one laser could be provided to enable measurement of more than one flow property. For example, a first laser could emit a beam having a first wavelength that is preferentially scattered by a first population of scatterers in the environment (e.g., portion of subsurface vasculature) and a second laser could emit a beam having a second wavelength that is preferentially scattered by a second population of scatterers in the environment such that the first and second lasers could be operated, in combination with one or more light sensors, to determine a first flow property of the environment related to movement of the first scatterers and a second flow property of the environment related to movement of the second scatterers. The light sensor320could include any variety of light-detecting apparatus configured to detect a pattern of constructive and destructive interference in light that is emitted by an environment (e.g.,305,307) and that is related to the configuration of the environment and/or scatterers therein. The light sensor320could include one or more photodetectors, photodiodes, phototransistors, CCDs, active pixel sensors, photoresistors, or other light-sensitive elements. The light sensor320could be configured to detect an intensity, a wavelength, a spectrum, a degree of polarization, a direction of polarization, or some other property of light emitted by the environment and received at one or more locations on or within the light sensor320. For example, the light sensor320could be configured to detect the intensity of light received in a specified region (i.e., a sensitive region of the light sensor320) relative to the arm305, where the light is received from a direction toward the portion of subsurface vasculature307relative to the light sensor320. In some examples, the light sensor320could include a camera (e.g., an aperture, a plurality of individual light-sensitive elements (e.g., a CCD, an array of active pixel sensors), and/or optics). In some examples, the system300could include more than one light sensor (e.g., a plurality of light sensors) disposed at more than one location relative to the laser310, arm305, or other elements of an environment of interest.
The more than one light sensor could be provided to allow for detection of a flow property in more than one region of the arm305(e.g., multiple locations of the portion of subsurface vasculature307, other portions of tissue in the arm305). The more than one light sensor could be provided to enable higher-accuracy or otherwise improved detection of a flow property of blood or other fluid (e.g., by providing a redundant source of information about a pattern of constructive and destructive interference in light emitted by an environment, by allowing detection of multiple patterns of constructive and destructive interference emitted by the portion of vasculature toward multiple angles, by providing detection of multiple wavelengths of emitted light). The light sensor320could include a variety of components according to an application. The light sensor320could include lenses, polarization filters, color filters, apertures, mirrors, diffraction gratings, liquid crystal elements, baffles, or other optical elements to affect the light received by the light sensor320. In some examples, the light sensor320could include a color filter configured to substantially block light having wavelengths different from a wavelength of light emitted by the laser310. In some examples, the light sensor320could include an aperture, lenses, or other element(s) configured to make the light sensor320selectively sensitive to light coming from a particular direction(s) relative to the light sensor320, laser310, or other elements of the system300and/or the arm305. For example, the light sensor320could be configured to be selectively sensitive to light emitted from a specified region of the portion of subsurface vasculature307. In some examples, the size of the specified region could be specified such that a bandwidth or other time-dependent property of a signal produced and/or detected by the light sensor320(e.g., a rate of speckle events detected by the light sensor320) is within some specified limit(s). For example, the specified region could be a region having a diameter or other characteristic size between approximately 100 microns and approximately 1 millimeter. In some examples, the light sensor320could include an annular filter (i.e., a ring-shaped aperture disposed in front of a light-sensitive element of the light sensor320). The annular filter could be configured to substantially block light from being received by the light sensor320unless the light approaches the light sensor320from an angle relative to a specified axis of the light sensor320(e.g., an optical axis of the sensor) that is within a specified range of angles. The specified range of angles could be specified in relation to a scattering property of the environment of interest (e.g., tissue of an arm305that includes blood flowing in a portion of subsurface vasculature307). For example, the range of angles could be specified such that light received by the light sensor320is light that has been scattered a specified number of times and/or that has a statistical distribution of number of scattering events/collisions that has one or more specified properties (e.g., mean number of scattering events/collisions, variance of number of scattering events/collisions).
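To make the beam-and-view-direction geometry discussed above concrete, the sketch below estimates the interrogated region as the point of closest approach between the beam axis and the sensor's view axis, as suggested by the intersection description earlier in this section. The function name and the choice of the midpoint of closest approach as the nominal measurement location are assumptions for illustration; a physical system samples a scattering volume rather than an idealized ray intersection.

```python
import numpy as np

def measurement_region(laser_pos, beam_dir, sensor_pos, view_dir):
    """Closest approach between the beam ray and the sensor view ray.

    All arguments are 3-vectors; directions need not be normalized.
    Returns (midpoint, gap): the nominal measurement location and the
    miss distance between the two rays.
    """
    p1, d1 = np.asarray(laser_pos, float), np.asarray(beam_dir, float)
    p2, d2 = np.asarray(sensor_pos, float), np.asarray(view_dir, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:  # rays (nearly) parallel: no unique answer
        raise ValueError("beam and view directions are parallel")
    t = (b * e - c * d) / denom  # parameter along the beam
    s = (a * e - b * d) / denom  # parameter along the view ray
    p_beam, p_view = p1 + t * d1, p2 + s * d2
    return (p_beam + p_view) / 2.0, float(np.linalg.norm(p_beam - p_view))
```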
Note that the example speckle event350and other features of the example detected light intensity waveform361illustrated inFIGS.3D and3E, respectively, are meant as illustrative examples of signals related to patterns of constructive and destructive interference in light emitted from an environment of interest that could be used to determine a flow property of the environment. Rise times, rise rates, pulse widths, fall times, fall rates, and other temporal features of such detected signals are non-limiting examples of time-dependent waveform features that could be used to determine a flow property of an environment. Additionally or alternatively, an envelope, a spectrum, a derivative, a power in one or more frequency bands, a speckle or other event rate, an autocorrelation, or some other time-dependent variable or variables related to such detected signals could be used to determine a flow property of an environment. In examples wherein multiple physical features of patterns of constructive and destructive interference in emitted light are detected, additional or alternate time-dependent methods and/or derived variables could be used to determine a flow property of an environment based on the multiple detected physical features. For example, where the light sensor320includes a camera or some other array of light sensing elements (e.g., a rectangular array of photodetectors arranged on a surface), one or more properties of an image generated by the light sensor320could be used to determine a flow property of the environment. For example, a contrast level, a spatial correlation, a number of speckles in the image, a shape of speckle in the image, a change over time (e.g., a displacement, a change in size and/or shape) of the speckles in the image, or some other property of an image generated by the light sensor320could be determined and used to determine a flow property of the environment. Determining a flow property of the environment could include sampling an output of the light sensor320(e.g., a detected light intensity at a specified location) at a sufficiently high frequency to determine and/or detect information in the output (i.e., to detect the output at a plurality of respective points in time) that is related to the flow property. For example, a controller or other elements of the system300could operate a high-speed analog-to-digital converter (ADC) of the system300to sample an output (e.g., a voltage, a current) of the light sensor320at a specified high rate (e.g., one megahertz) to detect features of individual speckle events in the output that have one or more properties (e.g., a pulse width, a rise time, a rise rate) related to a flow property of blood in the portion of subsurface vasculature307. The specified high rate of sampling could be related to the duration, frequency, or some other temporal property of the output (e.g., an expected minimum duration of speckle events). For example, a speckle event could be expected to last approximately 1 microsecond, so the specified sample rate could be sufficiently in excess of 1 megahertz to resolve features of interest (e.g., a rising edge, a plateau, a falling edge) of individual speckle events. 
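Where the light sensor includes a camera, as contemplated above, one widely used image-domain statistic is the local speckle contrast (standard deviation divided by mean), which drops in regions where motion blurs the speckle during the exposure. The following is a generic sketch of that statistic using SciPy's uniform_filter; the window size is an arbitrary assumption, and nothing here is specific to the devices described in this disclosure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, win=7):
    """Local speckle contrast K = sigma / mean over win x win windows.

    Lower K conventionally indicates more motion (speckle blurring)
    during the exposure, so a contrast map can serve as a relative
    flow map of the imaged region.
    """
    img = np.asarray(image, float)
    mean = uniform_filter(img, win)           # sliding-window mean
    mean_sq = uniform_filter(img * img, win)  # sliding-window mean of squares
    var = np.clip(mean_sq - mean * mean, 0.0, None)
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```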
Additionally or alternatively, the system300could include an analog frontend that includes analog circuitry configured to filter, decimate, quantize, or otherwise alter and/or perform other analog operations or computations on the output of the light sensor320to produce an output electronic signal that is related to the flow property of the environment (e.g., a flow property in the portion of subsurface vasculature). This output electronic signal could then be used (e.g., sampled by an ADC of a microcontroller) to determine the flow property. In examples wherein the light sensor320has a plurality of electronic outputs (e.g., a plurality of voltage outputs relating to the amplitude of light detected by a plurality of light-sensitive elements) and/or wherein the system300includes a plurality of light sensors, the system300could include a plurality of such analog frontends configured to receive respective outputs from respective light sensors/elements of light sensors and to output respective electronic signals related to the respective received sensor output signals. Additionally or alternatively, the system300could include fewer instances of such an analog frontend, and the outputs of respective light sensors could be electronically multiplexed such that the fewer instances of the analog frontend could be operated in combination with the outputs of the respective light sensors. An analog frontend as described above could include a variety of components configured in a variety of ways to generate output electronic signals having a variety of properties related to the flow property of the environment. In one example, a rate of change of the output signal of the light sensor320(e.g., a rise rate of rising edges of speckle events) could be related to the velocity of a corresponding scatterer in the environment. The analog frontend could include a differentiator configured to output a signal related to a rate of change of the output signal of the light sensor320. The differentiator could be passive (e.g., an RC and/or RL filter circuit), active (e.g., an op-amp configured with capacitors, resistors, and/or other elements as a differentiator), or some combination thereof. Further, the differentiator could be configured to output a signal that is related to the rate of change of the output signal of the light sensor320; for example, the differentiator could output a low-passed, rectified, or otherwise altered version of the rate of change of the output signal of the light sensor320. The analog frontend could additionally include a peak detector configured to output a signal related to the maximum value of the signal output by the differentiator during a specified previous time period. The peak detector could include passive and active components configured in a variety of ways. In some examples, the peak detector could include an op-amp, a rectifier, and a capacitor configured to output a signal equal to the maximum past value of the input to the peak detector. This variety of peak detector could additionally include a reset electronic switch that could be operated to reset the peak detector, allowing the peak detector to output a signal equal to the maximum value of the input to the peak detector during a previous time period specified by the operation of the electronic switch. Additionally or alternatively, the peak detector could include a lossy integrator.
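The following is a minimal digital emulation of the differentiator-plus-peak-detector chain described above, of the kind that might be used to prototype the signal path before committing to analog hardware. The first-difference differentiator, the exponential leak standing in for a lossy integrator, and the periodic reset are simplifying assumptions that approximate, rather than reproduce, the analog circuits described here.

```python
import numpy as np

def differentiate(x, fs):
    """First-difference approximation of the analog differentiator."""
    x = np.asarray(x, float)
    return np.diff(x, prepend=x[0]) * fs

def peak_detect(x, fs, leak_tau=None, reset_every=None):
    """Emulate a peak detector tracking the running maximum of x.

    leak_tau: if given, decay the held peak like a lossy integrator
              with time constant leak_tau (seconds).
    reset_every: if given, zero the held peak every reset_every seconds,
                 emulating the reset electronic switch.
    """
    x = np.asarray(x, float)
    decay = np.exp(-1.0 / (leak_tau * fs)) if leak_tau else 1.0
    reset_n = int(reset_every * fs) if reset_every else 0
    out = np.empty_like(x)
    held = 0.0
    for i, xi in enumerate(x):
        if reset_n and i % reset_n == 0:
            held = 0.0  # reset switch closes: discard the held peak
        held = max(held * decay, xi)
        out[i] = held
    return out
```

In this emulation, a rising speckle edge appears as a positive pulse in differentiate(...), and the held output of peak_detect(...) records its maximum rise rate, standing in for the velocity-related amplitude discussed above.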
The output of the peak detector could form the output of the analog frontend, and could be used to determine a flow property of the environment (e.g., by sampling the output using an ADC at one or more points in time and operating a microcontroller based on the digital output(s) of the ADC). Additional or alternative analog and/or digital components and/or combinations of such with circuitry described herein could be configured and/or operated to enable determination of a flow property of an environment based on signals output from the light sensor320. For example, the system300could include a plurality of light sensors, and the outputs of a first subset of the light sensors could be sampled at a high rate by high-frequency ADCs and the outputs of a second subset of the light sensors could be input into respective analog frontends as described herein. In another example, analog circuitry could be configured to detect the presence of a speckle event in the output of a light sensor, and a high-frequency ADC could be operated responsively to sample the output of the light sensor for a specified period of time after the detection of the speckle event by the analog circuitry (i.e., the operation of the ADC could be triggered by the detection of the speckle event by the analog circuitry). Other embodiments of analog and/or digital circuitry to determine one or more flow properties of an environment based on the outputs of one or more light sensors are anticipated. Note that the detection of flow properties of blood in a portion of subsurface vasculature307of an arm305, based on scattering of coherent illumination by scatterers (e.g., illustrative blood cell309) in the portion of subsurface vasculature307and on time-dependent changes in the detected pattern of constructive and destructive interference in the scattered light emitted by the tissue of the arm305, is intended as a non-limiting illustrative example of the detection of flow properties of environments that scatter light and that include scatterers that have time-dependent properties (e.g., location, orientation) related to flow in the environment. For example, the environment could be any tissue of a human (e.g., an ankle, an ear, a neck, a portion of central vasculature) or animal, and the flow property could be a property of flow in any fluid of the human or animal body (e.g., arterial blood, capillary blood, venous blood, lymph, interstitial fluid, stomach or other digestive contents, air in the airways and/or lungs, cerebrospinal fluid). The environment could be an in vivo biological environment (e.g., a tissue of a living human, animal, plant, etc.) or an in vitro environment. The environment could be a biological sample in a sample container, cuvette, pipette, microscope slide, or other vessel. The environment could be part of a biological or chemical process. For example, the environment could be a fluid in a water treatment process, a fluid in a food or drug preparation process, a lake, stream, or river in a natural environment, or some other environment. The environment could be a liquid, a gel, a solid, or some other phase of matter or combination of phases (e.g., an emulsion). The environment could include biological samples that had been freeze-dried, desiccated, frozen, vaporized, alkalated, or otherwise prepared, including adding natural and/or artificial scatterers to the environment.
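For illustration, the event-triggered sampling scheme described above could be prototyped in software as below. The threshold trigger, the fixed capture window, and the hold-off behavior are assumptions chosen for simplicity; an actual embodiment would implement the trigger in analog circuitry and the capture in a hardware ADC.

```python
import numpy as np

def triggered_windows(signal, fs, threshold, window_s=5e-6):
    """Emulate trigger-then-sample acquisition of speckle events.

    When the signal crosses threshold, capture the next window_s
    seconds of samples as one event, then hold off until the window
    is complete. Returns a list of captured sample windows.
    """
    sig = np.asarray(signal, float)
    n_win = int(window_s * fs)
    events, i = [], 0
    while i < len(sig) - n_win:
        if sig[i] >= threshold:
            events.append(sig[i:i + n_win].copy())
            i += n_win  # hold off for the duration of the capture
        else:
            i += 1
    return events
```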
Scatterers in the environment could be discrete particles (e.g., blood cells, other cells, micelles, vacuoles, immiscible globules (e.g., oil globules in water), engineered particles (e.g., quantum dots, PEG particles, microparticles of a conductive, semiconductive, magnetic, or other material)) in the environment, or could be discontinuities within the fluid whose flow is being determined (e.g., cavitation bubbles, localized turbulence, high thermal and/or pressure gradients, shock waves). The scatterers could be present in the environment (e.g., cells in blood or other biological fluids, microorganisms, particles of silt, or other scatterers in an environmental fluid (e.g., a stream, a pond)) or could be introduced (e.g., production of cavitation bubbles by application of directed energy and/or mechanical intervention, injection of scattering particles (e.g., functionalized particles) into the bloodstream of a human or animal). Scatterers in an environment could have one or more properties that can be detected and that are related to one or more properties of the environment. For example, a scatterer could selectively interact with an analyte of interest (e.g., the scatterer could be functionalized with a bioreceptor specific to the analyte) and a drag coefficient or other property of the scatterer could be related to the scatterer binding to the analyte. Thus, detection of the velocity of such an individual scatterer or population of such scatterers, relative to one or more determined and/or detected flow properties of the environment containing the scatterer(s), could enable determination of one or more properties of the analyte (e.g., a concentration of the analyte). Those of skill in the art will understand the term "scatterer" in its broadest sense and that it may take the form of any natural or fabricated material, a cell, a protein or aggregate of proteins, a molecule, cryptophan, a virus, a micelle, a phage, a nanodiamond, a nanorod, a quantum dot, a single-magnetic-domain crystal of a metal, etc. that can interact with light incident on the scatterer to reflect, refract, diffract, or otherwise scatter the incident light. Scatterers could be naturally present in an environment of interest (e.g., blood cells in a portion of subsurface vasculature) or could be added to the environment of interest. Further, a scatterer may be of any shape, for example, spheres, rods, non-symmetrical shapes, etc., and may be made of a solid, liquid or gaseous material or combinations thereof.

III. Example Devices

A wearable device400(illustrated inFIG.4) can automatically measure a flow property of blood in a portion of subsurface vasculature of a person wearing the device. The term "wearable device," as used in this disclosure, refers to any device that is capable of being worn at, on or in proximity to a body surface, such as a wrist, ankle, waist, chest, or other body part. In order to take in vivo measurements in a non-invasive manner from outside of the body, the wearable device may be positioned on a portion of the body where subsurface vasculature is easily observable, the qualification of which will depend on the type of detection system used. The device may be placed in close proximity to the skin or tissue, but need not be touching or in intimate contact therewith. A mount410, such as a belt, wristband, ankle band, etc. can be provided to mount the device at, on or in proximity to the body surface.
The mount410may prevent the wearable device from moving relative to the body to reduce measurement error and noise. In one example, shown inFIG.4, the mount410may take the form of a strap or band420that can be worn around a part of the body. Further, the mount410may be an adhesive substrate for adhering the wearable device400to the body of a wearer. A measurement platform430is disposed on the mount410such that it can be positioned on the body where subsurface vasculature is easily observable. An inner face440of the measurement platform is intended to be mounted facing the body surface. The measurement platform430may house a data collection system450, which may include at least one laser480configured to emit a beam of coherent illumination into a portion of subsurface vasculature. The measurement platform430additionally includes at least one light sensor460configured to detect a pattern of constructive and destructive interference in light emitted from the portion of subsurface vasculature in response to illumination from the laser480. In a non-exhaustive list, the light sensor460may include one or more of a photodiode, a phototransistor, a photoresistor, an active pixel sensor, a CCD, a camera, or some other light sensitive element configured to detect one or more properties of a pattern of constructive and destructive interference in the emitted light. The components of the data collection system450may be miniaturized so that the wearable device may be worn on the body without significantly interfering with the wearer's usual activities. The data collection system450may further include additional detectors for detecting other physiological parameters, which could include any parameters that may relate to the health of the person wearing the wearable device. For example, the data collection system450could include detectors configured to measure blood pressure, pulse rate, respiration rate, skin temperature, etc. In a non-exhaustive list, additional detectors may include any one of an optical (e.g., CMOS, CCD, photodiode), acoustic (e.g., piezoelectric, piezoceramic), electrochemical (voltage, impedance), thermal, mechanical (e.g., pressure, strain), magnetic, or electromagnetic (e.g., magnetic resonance) sensor. The laser480is configured to transmit a beam of coherent illumination that can penetrate the wearer's skin into the portion of subsurface vasculature, for example, into a lumen of the subsurface vasculature. The transmitted illumination can be any kind of illumination that is benign to the wearer and that results at least in scattering of the beam of illumination to produce a pattern of constructive and destructive interference in light emitted from the portion of subsurface vasculature that is related to the disposition of scatterers (e.g., blood cells) in a flow of blood in the portion of subsurface vasculature. The wavelength of the transmitted illumination could be specified to penetrate biological tissues of a wearer; for example, the transmitted illumination could have a wavelength within a near-infrared (NIR) transparency window of biological tissue (e.g., between approximately 780 nanometers and approximately 810 nanometers). The wavelength of the transmitted illumination could be specified to be a wavelength that is scattered by blood cells. The wavelength of the transmitted illumination could be between approximately 400 nanometers and approximately 1000 nanometers.
The wearable device400may also include a user interface490via which the wearer of the device may receive one or more recommendations or alerts generated either from a remote server or other remote computing device, or from a processor within the device. The alerts could be any indication that can be noticed by the person wearing the wearable device. For example, the alert could include a visual component (e.g., textual or graphical information on a display), an auditory component (e.g., an alarm sound), and/or tactile component (e.g., a vibration). Further, the user interface490may include a display492where a visual indication of the alert or recommendation may be displayed. The display492may further be configured to provide an indication of the measured physiological parameters, for instance, a determined rate of flow of blood in a portion of subsurface vasculature. In some examples, the wearable device is provided as a wrist-mounted device, as shown inFIGS.5A,5B, and6A-6C. The wrist-mounted device may be mounted to the wrist of a living subject with a wristband or cuff, similar to a watch or bracelet. As shown inFIGS.5A and5B, the wrist mounted device500may include a mount510in the form of a wristband520, a measurement platform530positioned on the anterior side540of the wearer's wrist, and a user interface550positioned on the posterior side560of the wearer's wrist. The wearer of the device may receive, via the user interface550, one or more recommendations or alerts generated either from a remote server or other remote computing device, or alerts from the measurement platform. Such a configuration may be perceived as natural for the wearer of the device in that it is common for the posterior side560of the wrist to be observed, such as the act of checking a wrist-watch. Accordingly, the wearer may easily view a display570on the user interface. Further, the measurement platform530may be located on the anterior side540of the wearer's wrist where the subsurface vasculature may be readily observable. However, other configurations are contemplated. The display570may be configured to display a visual indication of the alert or recommendation and/or an indication of the measured physiological parameters, for instance, the flow rate or other flow property of blood in a portion of subsurface vasculature of the wearer. Further, the user interface550may include one or more buttons580for accepting inputs from the wearer. For example, the buttons580may be configured to change the text or other information visible on the display570. As shown inFIG.5B, measurement platform530may also include one or more buttons590for accepting inputs from the wearer. The buttons590may be configured to accept inputs for controlling aspects of the data collection system, such as initiating a measurement period, or inputs indicating the wearer's current health state (i.e., normal, migraine, shortness of breath, heart attack, fever, “flu-like” symptoms, food poisoning, etc.). In another example wrist-mounted device600, shown inFIGS.6A-6C, the measurement platform610and user interface620are both provided on the same side of the wearer's wrist, in particular, the anterior side630of the wrist. On the posterior side640, a watch face650may be disposed on the strap660. While an analog watch is depicted inFIG.6B, one of ordinary skill in the art will recognize that any type of clock may be provided, such as a digital clock. 
As can be seen inFIG.6C, the inner face670of the measurement platform610is intended to be worn proximate to the wearer's body. A data collection system680housed on the measurement platform610may include a laser686and light sensor684. FIG.7is a simplified schematic of a system including one or more wearable devices700. The one or more wearable devices700may be configured to transmit data via a communication interface710over one or more communication networks720to a remote server730. In one embodiment, the communication interface710includes a wireless transceiver for sending and receiving communications to and from the server730. In further embodiments, the communication interface710may include any means for the transfer of data, including both wired and wireless communications. For example, the communication interface may include a universal serial bus (USB) interface or a secure digital (SD) card interface. Communication networks720may be any one of: a plain old telephone service (POTS) network, a cellular network, a fiber network, or a data network. The server730may include any type of remote computing device or remote cloud computing network. Further, communication network720may include one or more intermediaries, including, for example, a configuration wherein the wearable device700transmits data to a mobile phone or other personal computing device, which in turn transmits the data to the server730. In addition to receiving communications from the wearable device700, such as collected physiological parameter data and data regarding health state as input by the user, the server may also be configured to gather and/or receive, either from the wearable device700or from some other source, information regarding a wearer's overall medical history, environmental factors, and geographical data. For example, a user account containing the wearer's medical history may be established on the server for every wearer. Moreover, in some examples, the server730may be configured to regularly receive information from sources of environmental data, such as viral illness or food poisoning outbreak data from the Centers for Disease Control (CDC) and weather, pollution and allergen data from the National Weather Service. Further, the server may be configured to receive data regarding a wearer's health state from a hospital or physician. Such information may be used in the server's decision-making process, such as recognizing correlations and in generating clinical protocols. Additionally, the server may be configured to gather and/or receive the date, time of day and geographical location of each wearer of the device during each measurement period. Such information may be used to detect and monitor spatial and temporal spreading of diseases. As such, the wearable device may be configured to determine and/or provide an indication of its own location. For example, a wearable device may include a GPS system so that it can include GPS location information (e.g., GPS coordinates) in a communication to the server. As another example, a wearable device may use a technique that involves triangulation (e.g., between base stations in a cellular network) to determine its location. Other location-determination techniques are also possible.
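As a purely illustrative sketch of the kind of record such a device might transmit, the snippet below posts one measurement, with a timestamp and GPS coordinates, to a server as JSON. The endpoint, field names, and schema are hypothetical; a real device would follow whatever interface its server defines, subject to the privacy controls discussed below.

```python
import json
import time
import urllib.request

def upload_measurement(flow_rate_ml_min, gps_lat_lon, server_url):
    """POST one measurement record (hypothetical schema) to a server."""
    record = {
        "timestamp": time.time(),
        "flow_rate_ml_per_min": flow_rate_ml_min,
        "location": {"lat": gps_lat_lon[0], "lon": gps_lat_lon[1]},
    }
    req = urllib.request.Request(
        server_url,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # e.g., 200 on success
```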
The server may also be configured to make determinations regarding the efficacy of a drug or other treatment based on information regarding the drugs or other treatments received by a wearer of the device and, at least in part, the physiological parameter data and the indicated health state of the user. From this information, the server may be configured to derive an indication of the effectiveness of the drug or treatment. For example, if a drug is intended to treat nausea and the wearer of the device does not indicate that he or she is experiencing nausea after beginning a course of treatment with the drug, the server may be configured to derive an indication that the drug is effective for that wearer. In another example, a wearable device may be configured to measure blood glucose. If a wearer is prescribed a drug intended to treat diabetes, but the server receives data from the wearable device indicating that the wearer's blood glucose has been increasing over a certain number of measurement periods, the server may be configured to derive an indication that the drug is not effective for its intended purpose for this wearer. Further, some embodiments of the system may include privacy controls which may be automatically implemented or controlled by the wearer of the device. For example, where a wearer's collected physiological parameter data and health state data are uploaded to a cloud computing network for trend analysis by a clinician, the data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Additionally or alternatively, wearers of a device may be provided with an opportunity to control whether or how the device collects information about the wearer (e.g., information about a user's medical history, social actions or activities, profession, a user's preferences, or a user's current location), or to control how such information may be used. Thus, the wearer may have control over how information is collected about him or her and used by a clinician or physician or other user of the data. For example, a wearer may elect that data, such as health state and physiological parameters, collected from his or her device may only be used for generating an individual baseline and recommendations in response to collection and comparison of his or her own data and may not be used in generating a population baseline or for use in population correlation studies. A device800as illustrated inFIG.8can determine a flow property (e.g., a flow rate, a velocity of one or more particles in a fluid flow) of an environment830by emitting a beam of coherent illumination862into the environment830using a laser860and detecting a pattern of constructive and destructive interference in emitted light852that is emitted by the environment830in response to illumination using a light sensor850. 
The environment830can be any environment containing scatterers880such that the scatterers880and other elements of the environment830scatter the beam of coherent light862in a manner that causes the pattern of constructive and destructive interference in the emitted light852to have one or more time-dependent properties related to a flow property of the environment830. The environment830could be an in vivo biological environment (e.g., a tissue of a living human, animal, plant, etc.) or an in vitro environment. The environment830could be a biological sample in a sample container, cuvette, pipette, microscope slide, or other vessel. The environment830could be part of a biological or chemical process. For example, the environment830could be a fluid in a water treatment process, a fluid in a food or drug preparation process, a lake, stream, or river in a natural environment, or some other environment. The environment830could be a liquid, a gel, or some other phase of matter or combination of phases (e.g., an emulsion). The environment830could include biological samples that had been freeze-dried, desiccated, frozen, vaporized, alkalated, or otherwise prepared, including adding natural and/or artificial scatterers880to the environment830. The light sensor850and laser860could be configured as illustrated inFIG.8(i.e., separate, parallel, non-coaxial) or could be configured in another way, according to an application. In some examples, the light sensor850and laser860could be coupled to a set of optical elements to enable some function. In an example, the laser860could include two laser light sources configured to produce beams of illumination, where the directions of the beams are controllable using some apparatus, for example a set of galvanometer-driven mirrors. The galvanometers could be operated to direct the beams toward specified regions such that a flow property of fluid flows in the specified regions could be determined. Other configurations and applications are anticipated.

IV. Example Electronics Platform for a Device

FIG.9is a simplified block diagram illustrating the components of a device900, according to an example embodiment. Device900may take the form of or be similar to one of the wrist-mounted devices100,300,400,500,600shown inFIGS.1,3A-C,4,5A-B, and6A-6C. However, device900may also take other forms, such as an ankle, waist, or chest-mounted device. Device900could also take the form of a device that is not configured to be mounted to a body. For example, device900could take the form of a handheld device configured to be maintained in proximity to an environment of interest (e.g., a body part, a biological sample container, a volume of a water treatment system) by a user or operator of the device900or by a frame or other supporting structure. Device900could also take the form of a device configured to illuminate and to detect emitted light from an in vitro biological environment or some other environment, for example, a fluid volume within a water treatment process. Device900also could take other forms (e.g., device800illustrated inFIG.8). In particular,FIG.9shows an example of a wearable device900having a detection system910, a user interface920, communication interface930for transmitting data to a remote system, processor940, and a computer readable medium960.
The components of the wearable device900may be disposed on a mount or on some other structure for mounting the device to enable stable detection of a flow property of an environment of interest, for example, to an external body surface where a portion of subsurface vasculature is readily observable. Processor940may be a general-purpose processor or a special purpose processor (e.g., digital signal processors, application specific integrated circuits, etc.). The one or more processors940can be configured to execute computer-readable program instructions970that are stored in the computer readable medium960and that are executable to provide the functionality of a device900described herein. The computer readable medium960may include or take the form of one or more non-transitory, computer-readable storage media that can be read or accessed by at least one processor940. The one or more computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with at least one of the one or more processors940. In some embodiments, the computer readable medium960can be implemented using a single physical device (e.g., one optical, magnetic, organic or other memory or disc storage unit), while in other embodiments, the computer readable medium960can be implemented using two or more physical devices. Detection system910includes a light sensor914and a laser912. The laser912is configured to emit a beam of coherent illumination into an environment of interest. The detection system910additionally includes at least one light sensor914configured to detect a pattern of constructive and destructive interference in light emitted from the environment of interest in response to illumination from the laser912. In a non-exhaustive list, the light sensor914may include one or more of a photodiode, a phototransistor, a photoresistor, an active pixel sensor, a CCD, a camera, or some other light sensitive element configured to detect one or more properties of a pattern of constructive and destructive interference in the emitted light. The detection system910may additionally include additional detectors for detecting physiological parameters of a human whose body includes the environment of interest (e.g., the environment of interest is a portion of subsurface vasculature of the human), which could include any parameters that may relate to the health of the person being measured by the device900. For example, the detection system910could include detectors configured to measure blood pressure, pulse rate, respiration rate, skin temperature, etc. In a non-exhaustive list, additional detectors may include any one of an optical (e.g., CMOS, CCD, photodiode), acoustic (e.g., piezoelectric, piezoceramic), electrochemical (voltage, impedance), thermal, mechanical (e.g., pressure, strain), magnetic, or electromagnetic (e.g., magnetic resonance) sensor. The detection system910could additionally include electronics configured to operate the laser912and the light sensor914. The electronics could include a high-speed analog-to-digital converter (ADC) configured to sample an output (e.g., a voltage, a current) of the light sensor914at a specified high rate (e.g., one megahertz) to detect features of individual speckle events in the output of the light sensor914that have one or more properties (e.g., a pulse width, a rise time, a rise rate) related to a flow property of an environment of interest. 
Additionally or alternatively, the electronics could include an analog frontend that includes analog circuitry configured to filter, decimate, quantize, or otherwise alter and/or perform other analog operations or computations on the output of the light sensor914to produce an output electronic signal that is related to the flow property of the environment (e.g., a flow property in the portion of subsurface vasculature). This output electronic signal could then be used (e.g., sampled by an ADC of a microcontroller) to determine the flow property. In examples wherein the light sensor914has a plurality of electronic outputs (e.g., a plurality of voltage outputs relating to the amplitude of light detected by a plurality of light-sensitive elements) and/or wherein the device900includes a plurality of light sensors, the electronics could include a plurality of such analog frontends configured to receive respective outputs from respective light sensors/elements of light sensors and to output respective electronic signals related to the respective received sensor output signals. Additionally or alternatively, the electronics could include fewer instances of such an analog frontend, and the outputs of respective light sensors could be electronically multiplexed such that the fewer instances of the analog frontend could be operated in combination with the outputs of the respective light sensors. The program instructions970stored on the computer readable medium960may include instructions to perform any of the methods described herein. For instance, in the illustrated embodiment, program instructions970include a controller module972, a calculation and decision module974, and an alert module976. The controller module972can include instructions for operating the detection system910, for example, the laser912and the light sensor914. For example, the controller module972may operate the laser912and the light sensor914during each of a set of pre-set measurement periods. In particular, the controller module972can include instructions for operating the laser912to emit a beam of coherent illumination into a target environment (e.g., tissue of a wearer of the device900) and controlling the light sensor914to detect a pattern of constructive and destructive interference in light responsively emitted by the environment being interrogated by the device900. The controller module972can also include instructions for operating a user interface920. For example, controller module972may include instructions for displaying data collected by the detection system910and analyzed by the calculation and decision module974, or for displaying one or more alerts generated by the alert module976. Further, controller module972may include instructions to execute certain functions based on inputs accepted by the user interface920, such as inputs accepted by one or more buttons disposed on the user interface. Communication interface930may also be operated by instructions within the controller module972, such as instructions for sending and/or receiving information via a wireless antenna, which may be disposed on or in the device900. The communication interface930can optionally include one or more oscillators, mixers, frequency injectors, etc. to modulate and/or demodulate information on a carrier frequency to be transmitted and/or received by the antenna.
In some examples, the device900is configured to indicate an output from the processor by modulating an impedance of the antenna in a manner that is perceivable by a remote server or other remote computing device. Calculation and decision module974may include instructions for receiving data from and/or operating the detection system910, analyzing the data to determine a flow property of the environment (e.g., a flow rate of blood in a portion of subsurface vasculature), analyzing the determined flow property at one or more points in time to determine if a medical condition is indicated, or performing other analytical processes relating to the environment proximate to the device900. In particular, the calculation and decision module974may include instructions for determining, for each preset measurement time, a flow property (e.g., a flow rate, a mean flow rate, a velocity of one or more particles in a fluid flow, a distribution of particle velocities in a fluid flow) of the environment based on one or more properties of the pattern of constructive and destructive interference detected using the light sensor914of the device900. These instructions could depend on the configuration of electronic circuits of the device900(e.g., of the detection system910). For example, the device900could include one or more ADCs configured to sample one or more outputs of the light sensor914at a specified high frequency, and the instructions could be executed by the processor(s)940to operate the one or more ADCs and to determine a flow property of the environment based on the output of the one or more ADCs. Additionally or alternatively, the device900could include circuitry (e.g., an analog frontend) configured to filter, modify, rectify, or otherwise perform analog operations on the one or more outputs of the light sensor914to produce an output electronic signal that is related to the flow property of the environment. The output electronic signal could be sampled by an ADC, comparator, or other electronic device, and the instructions could be executed by the processor(s)940to operate the ADC, comparator, or other electronic device to determine a flow property of the environment based on the output of the ADC, comparator, or other electronic device. These instructions could be executed at each of a set of preset measurement times. The program instructions of the calculation and decision module974may, in some examples, be stored in a computer-readable medium and executed by a processor located external to the device900. For example, the device900could be configured to collect certain data regarding physiological parameters from the user and then transmit the data to a remote server, which may include a mobile device, a personal computer, the cloud, or any other remote system, for further processing. The computer readable medium960may further contain other data or information, such as medical and health history of a user of the device900, that may be useful in determining whether a medical condition is indicated. Further, the computer readable medium960may contain data corresponding to certain blood flow profile baselines, above or below which a medical condition is indicated. The baselines may be pre-stored on the computer readable medium960, may be transmitted from a remote source, such as a remote server, or may be generated by the calculation and decision module974itself.
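As a hedged sketch of how stored baselines might be applied, the following compares a newly determined flow value against baseline statistics using a simple k-sigma rule. The rule, the threshold k, and the function names are illustrative assumptions; an actual determination that a medical condition is indicated would follow the clinical protocols discussed in this disclosure.

```python
import numpy as np

def update_baseline(history):
    """Recompute baseline statistics from stored measurement periods."""
    h = np.asarray(history, float)
    return h.mean(), h.std()

def deviates_from_baseline(measured, baseline_mean, baseline_std, k=3.0):
    """Flag a measurement more than k standard deviations from baseline."""
    return abs(measured - baseline_mean) > k * baseline_std
```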
The calculation and decision module974may include instructions for generating individual baselines for the user of the device900based on data collected over a certain number of measurement periods. For example, the calculation and decision module974may generate a baseline blood flow profile for each of a plurality of measurement periods by averaging a blood flow profile from one or more heart beat cycles during each of the measurement periods measured over the course of a few days, and store those baseline blood flow profiles in the computer readable medium960for later comparison. Baselines may also be generated by a remote server and transmitted to the device900via communication interface930. The calculation and decision module974may also, upon determining that a medical condition is indicated, generate one or more recommendations for the user of the device900based, at least in part, on consultation of a clinical protocol. Such recommendations may alternatively be generated by the remote server and transmitted to the device900. In some examples, the collected physiological parameter data, baseline blood flow profiles, health state information input by device users and generated recommendations and clinical protocols may additionally be input to a cloud network and be made available for download by a user's physician. Trend and other analyses may also be performed on the collected data, such as physiological parameter data and health state information, in the cloud computing network and be made available for download by physicians or clinicians. Further, physiological parameter and health state data from individuals or populations of device users may be used by physicians or clinicians in monitoring efficacy of a drug or other treatment. For example, high-density, real-time data may be collected from a population of device users who are participating in a clinical study to assess the safety and efficacy of a developmental drug or therapy. Such data may also be used on an individual level to assess a particular wearer's response to a drug or therapy. Based on this data, a physician or clinician may be able to tailor a drug treatment to suit an individual's needs. In response to a determination by the calculation and decision module974that a medical condition is indicated, the alert module976may generate an alert via the user interface920. The alert may include a visual component, such as textual or graphical information displayed on a display, an auditory component (e.g., an alarm sound), and/or tactile component (e.g., a vibration). The textual information may include one or more recommendations, such as a recommendation that the user of the device contact a medical professional, seek immediate medical attention, or administer a medication. FIG.10Ais a functional block diagram of components that could be included in an analog frontend as described herein (e.g., an analog frontend that could be a part of the detection system910or of other devices described herein, e.g.,100,300,400,500,600,800). The example analog frontend1000illustrated inFIG.10Aincludes a light sensor1010configured to detect a pattern of constructive and destructive interference in emitted light1005that is emitted from an environment of interest (e.g., a portion of subsurface vasculature) in response to illumination by a beam of coherent light. 
The light sensor output 1015 is a signal related to the amplitude of the emitted light 1005 at a specified location (e.g., a location of a light-sensitive element of the light sensor 1010). FIG. 10B illustrates an example waveform of the light sensor output 1015 that includes a trapezoidal pulse corresponding to a speckle event. One or more properties of the trapezoidal pulse (e.g., a pulse width, a rise time, a rise rate, a fall time, a fall rate) could be related to a velocity of one or more scatterers in the environment of interest and/or to some other flow property of the environment of interest. The example analog frontend 1000 additionally includes a differentiator 1020 configured to output a differentiator output 1025 related to a rate of change of the light sensor output 1015. The differentiator could be passive (e.g., an RC and/or RL filter circuit), active (e.g., an op-amp configured with capacitors, resistors, and/or other elements as a differentiator), or some combination thereof. Further, the differentiator output 1025 need not be an exact derivative of the light sensor output 1015; for example, the differentiator 1020 could output a low-passed, rectified, or otherwise altered version of the rate of change of the light sensor output 1015. FIG. 10C illustrates an example waveform of the differentiator output 1025 corresponding to the trapezoidal pulse in the example light sensor output 1015 waveform illustrated in FIG. 10B. The example waveform in FIG. 10C includes a first pulse having an amplitude related to a rise rate of the trapezoidal pulse illustrated in FIG. 10B and a timing corresponding to the rising edge of the trapezoidal pulse. The example waveform in FIG. 10C additionally includes a second pulse having an amplitude related to a fall rate of the trapezoidal pulse illustrated in FIG. 10B and a timing corresponding to the falling edge of the trapezoidal pulse. Note that a differentiator output 1025 waveform corresponding to the example trapezoidal pulse could have a different shape according to the configuration of the differentiator 1020. For example, the differentiator 1020 could be configured to output a signal corresponding to a rectified or otherwise filtered version of the light sensor output 1015, and the example differentiator output 1025 would be changed correspondingly (in this example, the first pulse in the example differentiator output 1025 would be filtered (e.g., would have some larger, finite rise time/fall time, etc.) and would substantially lack the second pulse). The example analog frontend 1000 additionally includes a peak detector 1030 configured to output a peak detector output 1035 related to a maximum value of the differentiator output 1025 during a specified previous time period. The peak detector 1030 could include passive and active components configured in a variety of ways. In some examples, the peak detector 1030 could include an op-amp, a rectifier, and a capacitor configured to output a peak detector output 1035 corresponding to a maximum value of the differentiator output 1025 in the past. The peak detector 1030 could additionally include a reset electronic switch that could be operated to reset the peak detector 1030, allowing the peak detector output 1035 to correspond to a maximum value of the differentiator output 1025 during a previous time period specified by the operation of the electronic switch.
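The chain described so far (a trapezoidal speckle pulse as in FIG. 10B, differentiated as in FIG. 10C, then peak-held as in FIG. 10D) can be prototyped digitally. A minimal sketch; the 1 MHz sample rate, the pulse timing, the ideal first-difference differentiator, and the leaky peak hold are all illustrative assumptions rather than the disclosed circuit:

```python
import numpy as np

fs = 1_000_000                          # assumed 1 MHz sample rate
t = np.arange(0, 50e-6, 1 / fs)

# Trapezoidal speckle pulse: 5 us rise, 20 us plateau, 5 us fall (cf. FIG. 10B)
pulse = np.interp(t, [0, 5e-6, 25e-6, 30e-6, 50e-6], [0.0, 1.0, 1.0, 0.0, 0.0])

# Ideal differentiator: scaled first difference (cf. FIG. 10C)
d = np.diff(pulse, prepend=pulse[0]) * fs

def lossy_peak_detector(signal, fs, tau=1e-3):
    """Lossy peak hold: tracks new maxima instantly, otherwise decays
    exponentially with time constant tau (cf. the FIG. 10D step response)."""
    out = np.empty_like(signal)
    decay = np.exp(-1.0 / (fs * tau))
    held = 0.0
    for i, x in enumerate(signal):
        held = max(x, held * decay)
        out[i] = held
    return out

peak = lossy_peak_detector(np.abs(d), fs)
adc_sample = peak[-1]                   # one "ADC" reading at the window's end
print(f"rise rate ~ {d.max():.0f} 1/s, held peak ~ {adc_sample:.0f} 1/s")
```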
Additionally or alternatively, the peak detector 1030 could include a lossy integrator. FIG. 10D illustrates an example waveform of the peak detector output 1035 corresponding to the positive and negative pulses in the example differentiator output 1025 waveform illustrated in FIG. 10C. The example waveform in FIG. 10D includes a positive step pulse having an amplitude corresponding to the amplitude of the first pulse illustrated in FIG. 10C and a timing corresponding to the rising edge of the first pulse. Note that a peak detector output 1035 waveform corresponding to the example first and second pulses could have a different shape according to the configuration of the peak detector 1030. For example, the peak detector 1030 could include a lossy integrator, and the example peak detector output 1035 would be changed correspondingly (in this example, the step response would decay to lower signal levels over time). In another example, the peak detector 1030 could include an electronic switch operated to periodically reset the peak detector 1030, and the example peak detector output 1035 would be changed correspondingly (in this example, the step response would be replaced with a pulse having a duration corresponding to a difference in time between the timing of the first pulse of the example differentiator output 1025 and the timing of a subsequent operation of the electronic switch). The peak detector output 1035 could form the output of the example analog frontend 1000, and could be used to determine a flow property of the environment. As illustrated in FIG. 10A, an analog-to-digital converter (ADC) 1040 could be configured and operated to sample the peak detector output 1035 at one or more points in time. For example, the ADC 1040 could be operated by a microcontroller, and the microcontroller could use the output of the ADC 1040 to determine a flow property of the environment of interest (e.g., the microcontroller could determine a flow rate corresponding to an amplitude of the peak detector output 1035 measured using the ADC 1040). V. Illustrative Methods FIG. 11 is a flowchart of a method 1100 for measuring a flow property in a portion of subsurface vasculature using a wearable device. The wearable device includes a light source (e.g., a laser) configured to illuminate the portion of subsurface vasculature with a beam of coherent illumination and a light sensor configured to detect a pattern of constructive and destructive interference in light emitted from the portion of subsurface vasculature in response to illumination by the light source. The wearable device could include additional elements according to an application. For example, the wearable device could include a controller, a battery, a user interface, analog electronics, or other components configured to facilitate operation of the laser and the light sensor. The wearable device could include a mount or other component configured to position the wearable device on the body of a wearer, for example a band configured to attach the wearable device around a wrist, ankle, or other part of a wearer's body where a flow property in a portion of subsurface vasculature could be detected. The method 1100 includes illuminating the portion of subsurface vasculature with a beam of coherent illumination using the light source 1110.
The coherent illumination is scattered by scatterers and other elements in the environment such that light is responsively emitted from the environment with a pattern of constructive and destructive interference that is related at least to the configuration of the scatterers in the fluid flow. As such, the pattern of constructive and destructive interference could have a time-dependence related to a flow property of the environment (e.g., a flow rate of the fluid flow). This can include emitting coherent illumination having a specific wavelength or coherence length, such that the coherent illumination can be scattered by scatterers disposed in a fluid flow in the environment, can be efficiently transmitted through the environment, or can satisfy other considerations. Exposing the environment to coherent illumination 1110 can include emitting coherent illumination having a specified amplitude, wavelength, coherence length, spectral line width, polarization, or other property. Further, exposing the environment to coherent illumination 1110 can include emitting coherent illumination having different properties at different points in time. For example, it could include emitting coherent illumination having a first amplitude, wavelength, coherence length, spectral line width, polarization, or other property at a first point in time and emitting coherent illumination having a second amplitude, wavelength, coherence length, spectral line width, polarization, or other property at a second point in time. The method 1100 additionally includes detecting a pattern of constructive and destructive interference in light emitted from the portion of subsurface vasculature in response to the coherent illumination using the light sensor 1120. This can include detecting the amplitude, wavelength, degree of polarization, orientation of polarization, or other properties of the emitted light at a point on or within the wearable device (e.g., a sensitive area of a photodetector, an aperture of a camera). It can also include detecting one or more properties of light emitted from the portion of subsurface vasculature at more than one point. For example, the light sensor could include a plurality of light-sensitive elements (e.g., photodiodes, phototransistors, active pixel sensors, pixels of a CCD) disposed in an array or other arrangement on a surface of the wearable device. The method 1100 additionally includes determining a flow property in the portion of subsurface vasculature based at least on a time dependence of the pattern of constructive and destructive interference detected using the light sensor 1130. This could include determining a flow rate of blood in the portion of subsurface vasculature, a mean flow rate of blood in the portion of subsurface vasculature, a flow profile of blood at different locations in the portion of subsurface vasculature, a velocity of one or more scatterers or other elements in blood in the portion of subsurface vasculature, a distribution of velocities of scatterers or other elements in blood in the portion of subsurface vasculature, or some other flow property or properties in the portion of subsurface vasculature.
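When the detecting step 1120 uses an array of light-sensitive elements, one common way to quantify the detected pattern is local speckle contrast (the ratio of standard deviation to mean intensity over small neighborhoods). This technique is borrowed from laser speckle contrast analysis and is offered only as an illustration, not as the method's mandated computation; SciPy is assumed for the sliding-window averaging:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, win=7):
    """Local speckle contrast K = sigma / mu over win x win neighborhoods.
    Regions with faster-moving scatterers blur more and show lower K."""
    frame = frame.astype(float)
    mu = uniform_filter(frame, win)
    mu2 = uniform_filter(frame ** 2, win)
    sigma = np.sqrt(np.clip(mu2 - mu ** 2, 0.0, None))
    return sigma / np.where(mu == 0, 1.0, mu)

# Example: a synthetic 64x64 frame of fully developed static speckle
rng = np.random.default_rng(1)
frame = rng.exponential(scale=100.0, size=(64, 64))
K = speckle_contrast(frame)
print(f"mean local contrast: {K.mean():.2f}")   # near 1 for static speckle
```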
Determining a flow property in the portion of subsurface vasculature 1130 could include sampling an output of the light sensor (e.g., an output related to an intensity of light, a local or overall contrast of the pattern of constructive and destructive interference, or some other property or properties of the detected pattern) at a high frequency and then performing some calculation on the sampled output (e.g., determining a rate of amplitude change, a number of speckle events, a temporal property (e.g., duration, rise time) of speckle events) to determine the flow property. Additionally or alternatively, electronics (e.g., an analog frontend) of the wearable device could be configured to filter, modify, rectify, or otherwise perform analog operations on an output of the light sensor to produce an output electronic signal that is related to the flow property of the environment. Determining a flow property in the portion of subsurface vasculature 1130 could include sampling the output electronic signal and then performing some calculation on the sampled output electronic signal to determine the flow property. Additional or alternative embodiments and/or steps of determining a flow property in the portion of subsurface vasculature 1130 are anticipated. The method 1100 could include additional steps or elements in addition to exposing the environment to coherent illumination 1110, detecting a pattern of constructive and destructive interference 1120, and determining a flow property in the portion of subsurface vasculature 1130. For example, the method 1100 could include mounting the wearable device to an external body surface of the wearer proximate to the portion of subsurface vasculature. The method could include indicating a determined flow property to a user using a user interface of the device, or to some other person or system(s) by some other means (e.g., a wireless communications component of the wearable device). The method 1100 could include introducing scatterers into the environment (e.g., injecting, ingesting, transdermally transferring, or otherwise introducing the scatterers into a lumen of vasculature of a human). The method 1100 could include determining some other information about a wearer based on one or more determined flow properties (e.g., flow rates of blood) in the portion of subsurface vasculature during one or more periods of time. For example, a determined flow rate or other flow property of blood in the portion of subsurface vasculature determined at one or more points in time could be used to determine a blood pressure in the portion of subsurface vasculature. One or more determined flow properties in the portion of subsurface vasculature could be used to determine a health state of the wearer. For example, a plurality of flow rates of blood determined at a respective plurality of points in time could be used to determine a heart rate of the user that could further be used to determine a health state of the wearer (e.g., tachycardia, bradycardia, sleep apnea, irregular heartbeat). Additionally or alternatively, a plurality of flow rates of blood could be used to determine a flow and/or pressure profile of the blood that could further be used to determine a health state of the wearer (e.g., hypertension, aortic regurgitation). Other additional steps of the method 1100 are anticipated. CONCLUSION Where example embodiments involve information related to a person or a device of a person, the embodiments should be understood to include privacy controls.
Such privacy controls include, at least, anonymization of device identifiers, transparency, and user controls, including functionality that would enable users to modify or delete information relating to the user's use of a product. Further, in situations where embodiments discussed herein collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's medical history, social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by a content server. The particular arrangements shown in the Figures should not be viewed as limiting. It should be understood that other embodiments may include more or less of each element shown in a given Figure. Further, some of the illustrated elements may be combined or omitted. Yet further, an exemplary embodiment may include elements that are not illustrated in the Figures. Moreover, it is particularly noted that while devices, systems, methods, and other embodiments are described herein by way of example as being employed to detect one or more flow properties of blood in portions of subsurface vasculature of a human body, it is noted that the disclosed devices, systems, and methods can be applied in other contexts as well. For example, detection systems configured to detect one or more flow properties of fluid using lasers and light sensors as disclosed herein may be included in wearable (e.g., body-mountable) and/or implantable devices. In some contexts, such a detection system is situated to be substantially encapsulated by bio-compatible polymeric material suitable for being in contact with bodily fluids and/or for being implanted. In one example, an implantable medical device that includes such a detection system may be encapsulated in biocompatible material and implanted within a host organism. Such body-mounted and/or implanted detection systems can include circuitry configured to operate lasers, light emitters, light sensors, or other elements to enable detection of flow properties of a target fluid by detecting changes over time in a speckle pattern of light scattered and/or otherwise emitted by the target fluid. The detection system can also include a communication system for wirelessly indicating detected and/or determined flow properties of a target fluid. In other examples, devices, systems, and methods disclosed herein may be applied to measure flow properties of one or more fluids that are not in or on a human body. For example, detection systems disclosed herein may be included in body-mountable and/or implantable devices used to measure flow properties in a fluid of an animal.
In another example, devices, systems, and methods disclosed herein may be applied to measure flow properties of an environmental fluid, such as a fluid in a river, lake, marsh, reservoir, water supply, sanitary sewer system, storm sewer system, or the atmosphere. In another example, devices, systems, and methods disclosed herein may be applied to measure flow properties of a fluid that is part of a process, such as a waste treatment process, industrial process, pharmaceutical synthesis process, food preparation process, fermentation process, or medical treatment process. Additionally, while various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are contemplated herein. | 103,483 |
11857302 | DETAILED DESCRIPTION In a preferred method according to the disclosure, in the first step, pulse signals of a patient are measured. Preferably, the pulse signals are measured using the oscillometric non-invasive blood pressure measurement method, which method can easily be carried out without adversely affecting the patient. The oscillometric non-invasive blood pressure measurement method, as described above in view of the schematic representations shown in FIGS. 3, 4a, 4b, and 4c, is well known in the art. Notably, other pulse measurement methods may equally be applied to measure the patient's pulse. Similar to the curve described above in view of FIG. 4c, the measured pulse signals are represented in FIG. 1 as a curve (dotted line) oscillating around an average value thereof. The average value may preferably be determined as a moving average over a period of one pulse cycle of the patient. In this example, the pressure applied to the pressure cuff is continuously increased from about 40 mmHg to about 120 mmHg at a substantially constant rate. Thus, it is possible to represent the oscillating measured pulse signals as a function of the clamping pressure being constantly increased over time. As a matter of fact, a representation as a function of time may equally be displayed. In the example shown in FIG. 1, the pressure is increased from about 40 mmHg to about 120 mmHg within a detection time period of about one minute. Thus, in this example, about 60 heart beats and about ten respiratory cycles of a patient are captured within the detection time period. Next, as the second step of the method, the envelope(signal)-curve is determined. In this example, the envelope(signal)-curve is determined by continuously determining a distance dimension of the measured pulse signals from the average value thereof and by then applying a low pass filter to the distance dimension. The low pass filter has a cutoff frequency below the pulse rate of the patient. In other words, the portion below the average value of the oscillating curve of measured pulse signals is folded up to the upper portion, so as to obtain exclusively positive pressure values. Then, the resulting curve is flattened by using the low pass filter with a cutoff frequency below the pulse rate of the patient. Thereby, the curve is flattened in such a way that the area below the curve remains substantially unchanged. In the present example, the flattened curve is then multiplied by √2, so that the finally obtained envelope(signal)-curve is essentially positioned at the level of the upper extremes in amplitude of the measured pulse signals. As can be seen from FIG. 1, the envelope(signal)-curve (light dashed line) is modulated by another influential factor (i.e., the patient's respiration or ventilation) which has an impact on the measured blood pressure signals. In contrast, the high frequency variation of the pulses, caused by the patient's heart beats, has been filtered out by the manipulation according to the second step of the disclosure. Next, in the third step of the method, a fit(envelope(signal))-function is determined (dark dashed line in FIG. 1) using a functional prototype. The functional prototype chosen in this example is a non-negative smooth bell-shaped curve, namely a Cauchy-Lorentz function, exhibiting the following generic formula: f(p) = f_amp / (1 + ((p - f_max) / f_bw)^2) Thus, there are three parameters that can be freely selected to fit the functional prototype to the envelope(signal)-curve, namely f_amp, f_max, and f_bw.
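The first two steps (measuring the pulse signals and deriving the envelope(signal)-curve) can be prototyped with digital filters before turning to the fit parameters below. A minimal sketch, assuming a second-order Butterworth low-pass stands in for the unspecified filter, a known pulse rate, and a synthetic cuff sweep in place of measured data:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def envelope_curve(signal, fs, pulse_rate_hz):
    """Steps 1-2 sketch: distance of the pulse signals from their moving
    average, low-passed below the pulse rate, then scaled by sqrt(2)."""
    win = max(1, int(fs / pulse_rate_hz))            # one pulse cycle
    avg = np.convolve(signal, np.ones(win) / win, mode="same")
    distance = np.abs(signal - avg)                  # fold lower half upward
    b, a = butter(2, 0.5 * pulse_rate_hz / (fs / 2))  # cutoff below pulse rate
    return np.sqrt(2) * filtfilt(b, a, distance)

# Example: 60 s cuff sweep with 1 Hz pulse oscillations of varying amplitude
fs = 100.0
t = np.arange(0, 60, 1 / fs)
amplitude = np.exp(-((t - 30) / 12) ** 2)            # bell-shaped modulation
pulses = amplitude * np.sin(2 * np.pi * 1.0 * t)
env = envelope_curve(pulses, fs, pulse_rate_hz=1.0)
print(f"peak envelope ~ {env.max():.2f} at t = {t[np.argmax(env)]:.0f} s")  # ~0.9 near 30 s
```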
The parameter f_amp is decisive for the amplitude of the bell-shaped curve of the functional prototype; the parameter f_max is decisive for the location of the maximum on the pressure axis; and the parameter f_bw is decisive for the width at half maximum of the bell-shaped curve. In this example, the Levenberg-Marquardt algorithm, which is known to those skilled in the art, is applied as optimization algorithm to identify values for the parameters f_amp, f_max, and f_bw that lead to the best fitting of the fit(envelope(signal))-function to the envelope(signal)-curve. However, other known optimization algorithms may equally be applied for that fitting step. Then, in the fourth step of the method according to the disclosure, the respiratory pulse variation signals are determined, which substantially correspond to the difference between the envelope(signal)-curve and the fit(envelope(signal))-function. The respiratory pulse variation signals are determined in such a way that the respiratory pulse variation signals oscillate around an average value thereof. Preferably, a low pass filter is applied to the envelope(signal)-curve to determine the average value of the respiratory pulse variation signals, the low pass filter having a cutoff frequency below the respiratory frequency of the patient. To obtain the respiratory pulse variation signals, the above-mentioned average value has to be subtracted from the envelope(signal)-curve. Thus, an oscillating curve, similar to the curve of the measured pulse signals, is obtained, but having a lower oscillating frequency which is caused only by the patient's respiration. As the fifth and sixth steps of the method according to the disclosure, an envelope(respiration)-curve and a fit(envelope(respiration))-function are determined. This is done substantially the same way in which the envelope(signal)-curve and the fit(envelope(signal))-function have been previously determined in method steps 2 and 3, respectively. Notably, in FIG. 1, only the fit(envelope(respiration))-function is shown (dark line), but not the envelope(respiration)-curve. As the fifth step of the method, the envelope(respiration)-curve is determined. As before, the envelope(respiration)-curve is determined by continuously determining a distance dimension of the respiratory pulse variation signals from the average value thereof and by then applying a low pass filter to the distance dimension. The low pass filter has a cutoff frequency below the respiration frequency of the patient. In other words, the portion below the average value of the oscillating curve of respiratory pulse variation signals is folded up to the upper portion, so as to obtain exclusively positive pressure values. Then, the resulting curve is flattened by using the low pass filter with a cutoff frequency below the respiration frequency of the patient. Thereby, the curve is flattened in such a way that the area below the curve remains substantially unchanged. In the present example, the flattened curve is then multiplied by √2, so that the finally obtained envelope(respiration)-curve is essentially positioned at the level of the upper extremes in amplitude of the respiratory pulse variation signals. Next, in the sixth step of the method, a fit(envelope(respiration))-function is determined using a functional prototype.
The same functional prototype as before is chosen, namely the Cauchy-Lorentz function, exhibiting the following generic formula: g(p) = g_amp / (1 + ((p - g_max) / g_bw)^2) Thus, there are again three parameters that can be freely selected to fit the functional prototype to the envelope(respiration)-curve, namely g_amp, g_max, and g_bw. The parameter g_amp is decisive for the amplitude of the bell-shaped curve of the functional prototype; the parameter g_max is decisive for the location of the maximum on the pressure axis; and the parameter g_bw is decisive for the width at half maximum of the bell-shaped curve. In this example, again the Levenberg-Marquardt algorithm is applied as optimization algorithm to find those parameters g_amp, g_max, and g_bw that lead to the best fitting of the fit(envelope(respiration))-function to the envelope(respiration)-curve. Finally, in the seventh step of the method according to the present disclosure, the indicator VR for the patient's volume responsiveness is determined by calculating the ratio of the previously identified parameters g_amp and f_amp of the corresponding functional prototypes. As set forth above, it is known in the art that respiration or ventilation of a patient has an impact on the patient's pulse pressure. That is, variations of the arterial pulse can be detected, which variations have a frequency corresponding to the respiration frequency. The magnitude of these pulse variations substantially depends on the position of the patient's heart on the Frank-Starling curve (as schematically illustrated in FIG. 2). If the magnitude of the pulse variations caused by the patient's respiration is relatively large, the patient is supposed to be on the steep part of the Frank-Starling curve, which means that the patient exhibits relatively "good" volume responsiveness. To the contrary, if the magnitude of the pulse variations caused by the patient's respiration is relatively small, the patient is supposed to be on the flat part of the Frank-Starling curve, which means that the patient exhibits relatively "bad" or no volume responsiveness. The magnitude of the pulse variations caused by the patient's respiration is represented by the parameter g_amp in the above example of the method. Similar to the calculation of the pulse pressure variation (PPV) as an indicator for the patient's volume responsiveness, the parameter g_amp is "normalized". That is, g_amp is divided by the magnitude f_amp of the pulse variations caused by heart beats. Therefore, the indicator VR obtained according to the present disclosure is similar to the pulse pressure variation (PPV) as an indicator for the patient's volume responsiveness known in the art. However, the method according to the present disclosure—unlike the methods known in the art—does not rely on single maximum/minimum values of measured pulse pressure variations corresponding to one single heart beat. Instead, the method according to the present disclosure takes all pulse signals measured within the detection period into account. Therefore, bias by any artefacts or arrhythmias of the heart occurring within the detection period is avoided by the method according to the present disclosure. Accordingly, the physician is provided with highly reliable information by the present method which allows him (in combination with further information of the patient's status) to make a well-founded decision as to the patient's volume responsiveness.
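The two fits and the final ratio map directly onto a standard least-squares routine. A minimal sketch, assuming SciPy's curve_fit (whose "lm" method is the Levenberg-Marquardt algorithm) and synthetic envelope curves in place of the measured ones:

```python
import numpy as np
from scipy.optimize import curve_fit

def cauchy_lorentz(p, amp, p_max, bw):
    """Functional prototype: amp / (1 + ((p - p_max) / bw)^2)."""
    return amp / (1.0 + ((p - p_max) / bw) ** 2)

rng = np.random.default_rng(0)
pressure = np.linspace(40, 120, 600)                 # cuff pressure, mmHg

# Synthetic stand-ins for the curves produced by steps 2 and 5
env_signal = cauchy_lorentz(pressure, 2.0, 85.0, 15.0) + 0.05 * rng.standard_normal(600)
env_resp = cauchy_lorentz(pressure, 0.3, 85.0, 18.0) + 0.02 * rng.standard_normal(600)

(f_amp, f_max, f_bw), _ = curve_fit(cauchy_lorentz, pressure, env_signal,
                                    p0=[1.0, 80.0, 10.0], method="lm")
(g_amp, g_max, g_bw), _ = curve_fit(cauchy_lorentz, pressure, env_resp,
                                    p0=[0.1, 80.0, 10.0], method="lm")

VR = g_amp / f_amp        # seventh step: volume responsiveness indicator
print(f"f_amp={f_amp:.2f}, g_amp={g_amp:.2f}, VR={VR:.2f}")   # VR ~ 0.15 here
```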
Summarizing the above, the present disclosure provides a simple and robust method (and means) for reliably determining an indicator representative of the patient's volume responsiveness. Moreover, the method (or means) can be easily and reliably implemented on the basis of the oscillometric non-invasive blood pressure measurement method known in the art. | 10,664 |
11857303 | DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS In illustrative embodiments, a method and apparatus analyze a patient's foot to determine the blood flow of a patient's foot through photoplethysmography ("PPG") measurements. Among other ways, those measurements may be taken from the plantar surfaces of the feet of patients (e.g., the sole of the foot). A plurality of light sources, such as lasers and/or light emitting diodes (LEDs), are coupled with photodetectors to form PPG sets positioned against the sole of the patient's/user's foot. The light source in each PPG set transmits light of a given wavelength into the surface of the foot and the detector measures an intensity of the light signal that can be detected through the sole of the foot. The change of the intensity of the light absorbed represents blood volume changes in the microvascular bed of the plantar tissue. The measured PPG signal is analyzed to determine cardiac health parameters, which may include heart rate, heart rate variability (HRV), respiration rate, and oxygen saturation (SpO2), blood perfusion in the feet, and regional differences in the blood flow and tissue oxygenation between the feet and in different portions of the feet. The PPG sets are coupled with springs that are movably coupled to a platform (e.g., a rigid platform or flexible platform). A user contacts the rigid platform with the bottom of one or both of the user's feet and the springs are biased to hold the PPG sets against the sole (e.g., plantar region) of the user's foot. In some embodiments, the user may stand on the rigid platform, or may place their feet on the rigid platform while sitting or reclining. The biasing force of the spring coupled to the PPG set is substantially independent of the force of the user's foot against the rigid platform. Accordingly, the biasing force applied to the patient's foot is the same regardless of the patient's position, intended interaction with the PPG set, or weight. The specific wavelengths of the light sources may range from the visible light wavelengths to infra-red (IR) wavelengths. In particular, the PPG light source may use red, infrared, and/or green lasers and/or LEDs. The platform takes PPG measurements from some or all of the PPG sets and records those readings in memory. A feedback mechanism may be included for users to visually observe part of or the entire process, such as when: a) the PPG reading begins; b) it is in progress; and c) the process is successfully completed. The feedback may be in the form of visual, acoustic, or other indicia, such as lights, sounds, or a graphical user interface (GUI) with visual indicia (e.g., words or numbers). A data file containing the recorded data may be transmitted to a remote processor. In illustrative embodiments, a platform uses this data with signal processing and filtering functionality, as well as with a process to calculate heart rate and/or signal-to-noise ratio. This permits patients, their health care providers, and/or their caregivers to intervene earlier, reducing the risk of more serious complications. Details of illustrative embodiments are discussed below. As known by those in the art, photoplethysmography (PPG) is a simple and low-cost optical technique that can be used to detect blood volume changes in the microvascular bed of tissue. It is often used non-invasively to make measurements at the skin surface.
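The heart rate and signal-to-noise computations mentioned above lend themselves to a compact spectral prototype. A minimal sketch, assuming a uniformly sampled PPG trace and a textbook cardiac band of 0.7-3.5 Hz; this is an illustration, not the platform's actual processing chain:

```python
import numpy as np

def heart_rate_and_snr(ppg, fs):
    """Estimate heart rate from the dominant spectral peak in the cardiac band
    (0.7-3.5 Hz) and report a crude SNR: peak power over mean in-band power."""
    x = np.asarray(ppg, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.5)
    peak = np.argmax(power[band])
    bpm = 60.0 * freqs[band][peak]
    snr_db = 10.0 * np.log10(power[band][peak] / power[band].mean())
    return bpm, snr_db

# Example: 30 s of synthetic PPG at 100 Hz with a 72 bpm pulse plus noise
fs = 100.0
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(3).standard_normal(t.size)
bpm, snr_db = heart_rate_and_snr(ppg, fs)
print(f"heart rate ~ {bpm:.0f} bpm, SNR ~ {snr_db:.0f} dB")
```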
The PPG waveform comprises a pulsatile ("AC") physiological waveform, attributed to cardiac-synchronous changes in the blood volume with each heartbeat, superimposed on a slowly varying ("DC") baseline with various lower frequency components attributed to respiration, sympathetic nervous system activity, and thermoregulation. Although the origins of the components of the PPG signal are not fully understood, it is generally accepted that they can provide valuable information about the cardiovascular system. PPG products can be implemented using low cost, simple, and portable technology for primary care and community based clinical settings. This is facilitated by the wide availability of low cost and small semiconductor components, and the advancement of computer-based pulse wave analysis techniques. PPG technology has been used in a wide range of commercially available medical devices for measuring oxygen saturation, blood pressure, and cardiac output, assessing autonomic function, and also detecting peripheral vascular disease. Foot photoplethysmograms therefore include physiological information from the heart towards the lower extremities because PPG detects light variations originating from changes in the blood volume being transmitted from the left ventricle. Previous studies have reported various medical applications of foot PPG, including the monitoring of vascular status and the prevention of diabetic foot ulcers. FIG. 1A schematically shows the plantar region (e.g., sole) of a foot 10 with a box indicating a region of the forefoot that is particularly useful when making PPG measurements. On and around the forefoot, one or more PPG sets can target three or more key locations for each foot: the lateral plantar artery, medial plantar artery, and deep plantar artery. In one embodiment, PPG sets are positioned at the midfoot of each foot (about halfway along a universal foot), and the sensors are positioned with a pitch of 35-45 mm (e.g., 39 mm) from each other. Alternative embodiments use other locations on the foot to collect PPG signals (e.g., for redundancy and accuracy despite foot morphology differences or motion during scanning) and to measure relative blood flow differences across the foot. PPG signals may be collected from regions of the sole of the foot that have thicker skin, such as the heel and the ball, as well as regions that have thinner skin, such as the midfoot. FIG. 1B schematically illustrates a user's foot 10 positioned on a rigid platform 16. PPG sets 20 are shown coupled to the rigid platform 16 with springs, and the PPG sets 20 are being held in contact with the plantar surface of the user's foot by the pressure applied toward the foot by the springs. The light source illuminates the skin and the detector measures changes in light absorption. In this way, the PPG set monitors the perfusion of blood to the dermis and subcutaneous tissue of the skin. This is schematically represented by the curved arrows in FIG. 1B illustrating a light signal that emanates from the source and is measured at the detector. Accordingly, in illustrative embodiments, light from the light source penetrates into the foot and reflects back to the detector. A simplified schematic illustration of a sensor 20 (e.g., PPG set) mounted in a rigid platform is shown in FIG. 2. In this embodiment, the light source and the detector (e.g., PPG set) are shown as a sensor 20 being mounted in a PPG set casing/housing 28 (e.g., sensor casing) that is coupled to a spring 26 mounted on a spring base 30.
The PPG sensor 20 may move freely up and down on the spring 26 within a through hole 34 in a cover 32 of a device platform 16 (e.g., rigid platform). It is this device platform 16, or its top surface, that supports the weight or force of the foot (e.g., the full weight if the patient is standing, or the reduced force if the foot is simply placed against it). As noted above and below, the spring 26 produces a return/biasing force that is substantially independent of the force on the platform. The spring keeps the PPG set casing 28 firmly against the sole of the foot. Preferably, the PPG set 20 and spring 26 effectively move like a plunger in that they are constrained to move in one direction (e.g., generally normal to the surface of the rigid platform 16). In some embodiments, the detector and the light source may be mounted on separate springs so that they may each move independently. The detector and light source may be mounted on springs with the same force constant or different force constants. The encased sensors 20 can be directly or indirectly connected to a spring 26 affixed to the base of the device. The encased sensor 20 may be movably coupled within the through hole 34 in a manner that enables the sensor 20 to move up and down through the hole 34 as pressure is applied. This movement may be substantially normal to the top surface of the platform or at a non-normal angle to the top surface. Depending on the location of the foot, the spring heights and strengths can be altered to achieve the best signal possible. For example, one or more springs may be nominally set so that when at rest (i.e., when the foot is not on the platform), their corresponding PPG sets are about 7.4 mm above the top surface of the platform to target the deep plantar artery, about 15.4 mm above the top surface of the platform to target the medial plantar artery, and about 11.4 mm above the top surface of the platform to target the lateral plantar artery. The springs preferably are configured to apply a moderated force to the foot so as not to unduly constrict blood flow. At the same time, the spring force (aka "biasing force") should be significant enough to both minimize light scattering at the interface of the light source and maximize light capture at the detector. To those ends, springs on PPG sets targeting the deep plantar artery and lateral plantar artery can be configured with a variety of dimensions and stiffnesses, for example with a spring constant of 1.0 N/mm. Furthermore, the springs on PPG sets targeting the medial plantar artery may have a spring constant of about 0.8 N/mm. Indeed, those in the art may adjust those values to accommodate different requirements (e.g., a 10-20% range above, below, or both above and below the noted values). In some embodiments, the spring force may be between about 0.5 N or 1 N and about 10 N, or the spring force may be between about 1 N and about 5 N. FIG. 3A schematically shows a cross section of a foot while PPG measurements are being performed. A PPG set 20 is shown with a light source 22 and a detector 24 side-by-side next to each other. They are mounted on an upper surface of a sensor casing 28, and a spring 26 is shown coupled to a lower surface of the sensor casing 28. The spring 26 is coupled to the inner surface of the base (not shown). The source 22 emits one or more wavelengths of light 38 (e.g., electromagnetic radiation) into the bottom of the foot. The emitted light 38 may include green, red, and/or infra-red light, which travels through the tissue of the foot.
Some of the emitted light 38 will interact with arteries 36 in the foot. After interacting with the arteries of the foot, some of the light may be detected as detected light 40 by the detector 24 that is adjacent to the source 22, in an operative mode known as a reflective mode. Furthermore, after interacting with the arteries 36 of the foot, some of the detected light 40 may be detected by a detector 24 that is opposite the source, at the top surface of the foot, in an operative mode known as a transmissive mode. The detectors 24 measure the magnitude of the detected light 40 gathered by the detector 24 as a function of the time that the measurements (e.g., tests) are conducted, and they produce a pulse signal that comprises an AC (pulsatile) and a DC (slowly varying) component. The AC component is attributed to changes in the blood volume synchronous with each heartbeat, whereas the DC component is related to respiration, tissues, and average blood volume. AC amplitude represents the strength of the arterial pulsation, and a large AC amplitude indicates strong arterial pulsation. As shown, the PPG set 20 may protrude into the skin of the foot such that the PPG set 20 forms a temporary concave region 42 in the skin of the foot. The concavity 42 shown in FIG. 3A may exaggerate the size of the concave region 42, as it is not intended to be drawn to scale, but to illustrate a concave effect. As noted, the spring 26 provides sufficient bias (force) to make firm contact with the surface of the foot, but not so much force as to attenuate the blood flow in the region being measured in the artery 36. That is, the spring tension (e.g., bias) is tuned to be able to hold the PPG sets 20 with optimal bias against the skin to detect blood volume changes in the microvascular bed of the plantar tissue without substantially affecting the blood flow changes. As discussed above, measuring detected light 40 in the "reflective mode" of PPG detection involves having the light source 22 and the detector 24 side-by-side on the sensor casing 28. This embodiment is useful in an open platform (discussed below) when the light source 22 and the detector 24 are facing the skin from the same direction, as illustrated by the PPG set 20 in FIG. 3A. Moreover, this reflective mode may also be useful in a closed platform (discussed below), such as a shoe or foot fixture to receive the foot. Alternatively, rather than use the reflective mode with an adjacent or nearby detector, the emitted light 38 that enters the skin and passes through the artery 36 may be measured as detected light 40 by a detector 24′ opposite from the light source 22; in this example, a detector 24′ on the top of the foot, which may be considered to be operating in a "transmissive mode" of PPG detection. In addition to being useful in an open platform, this embodiment also may be particularly useful in a closed platform when the light source 22 and the detector 24 are facing the skin on opposite sides of the foot. In some embodiments, there may be more than one light source and more than one detector. In particular, some embodiments may include sources providing light (e.g., electromagnetic radiation) of one or more wavelengths. One example uses a green light source and an infra-red (e.g., IR) light source. Green, red, and IR wavelengths have been shown to be useful at extracting pulse rates from users. IR wavelengths have also been shown to be suitable for PPG-based measurements, including oxygen saturation determination, such as SpO2.
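Because the red and IR channels each decompose into the AC and DC components described above, the classic "ratio of ratios" SpO2 estimate follows in a few lines. A minimal sketch; the Butterworth baseline filter, the synthetic traces, and the calibration constants a = 110 and b = 25 are textbook placeholders, not the calibration of any actual device:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_ac_dc(ppg, fs, cutoff_hz=0.5):
    """Split a PPG trace into its slowly varying DC baseline (low-pass)
    and the cardiac-synchronous AC component (the remainder)."""
    b, a = butter(2, cutoff_hz / (fs / 2))
    dc = filtfilt(b, a, ppg)
    return ppg - dc, dc

def spo2_ratio_of_ratios(red, ir, fs, a=110.0, b=25.0):
    """Empirical estimate: R = (AC_red/DC_red) / (AC_ir/DC_ir), SpO2 ~ a - b*R."""
    red_ac, red_dc = split_ac_dc(red, fs)
    ir_ac, ir_dc = split_ac_dc(ir, fs)
    ratio = (red_ac.std() / red_dc.mean()) / (ir_ac.std() / ir_dc.mean())
    return float(np.clip(a - b * ratio, 0.0, 100.0))

# Synthetic simultaneous red/IR traces with a 72 bpm (1.2 Hz) pulse
fs = 100.0
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)
red = 1.0 + 0.013 * pulse          # weaker pulsatile fraction in red
ir = 1.0 + 0.025 * pulse           # stronger pulsatile fraction in IR
print(f"SpO2 ~ {spo2_ratio_of_ratios(red, ir, fs):.0f}%")   # ~97%
```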
In some embodiments, PPG measurements using a combination of light sources and detectors operating simultaneously with multiple wavelengths, such as the red and IR wavelengths, measure SpO2. FIG. 3B schematically illustrates the potential effects of spring bias on the measurement of the arterial flow. As shown, three PPG sets are mounted on springs and are shown extending against the bottom of a foot. A curved arrow loop representing a light signal is shown being emitted from the source and detected by the detector. An artery is also shown. The left PPG set is contacting the skin, but is not protruding into the bottom of the foot and thus is not causing a concave region. In this example, the light is not interacting very strongly with the subcutaneous blood flow, and the AC signal likely would be weak. The middle PPG set is protruding into the bottom of the foot, but is not protruding into the artery. In this example, the light is interacting very strongly with the artery, and the AC signal would be strong. AC amplitude represents the strength of the arterial pulsation, and a large AC amplitude indicates strong arterial pulsation. With each heart beat there are force vectors which project outward against the blood vessel walls in a radial direction (normal force vectors). The resultant force in the vertical direction is the sum of all vertical force components of the normal force vectors. The right PPG set is protruding into the bottom of the foot and is protruding into the artery. In this example, the blood flow may be attenuated and the AC signal would be weaker. As such, the contacting force exerted on the photoplethysmographic (PPG) sensor is too high, causing the arterial wall to begin to flatten. As the result of compression, the external pressure caused by the contacting force from the PPG set approaches the intra-arterial pressure. The difference between the intra-arterial pressure and the external pressure is defined as the transmural pressure, and when the transmural pressure decreases, the blood flow, and therefore the AC component of the PPG signal, is attenuated. From the viewpoint of the arterial wall properties, the AC amplitude increases as the contacting force increases up to a certain point, as exemplified by the middle PPG set, where the transmural pressure goes to zero. After that peak point, the AC amplitude decreases since the artery begins to be occluded, as exemplified by the right PPG set. Eventually, the artery is pushed against bone and other tissue, and the distal arterial wall will eventually flatten completely and the pulsation will disappear. FIG. 3C schematically illustrates one embodiment of an open platform configured to perform PPG measurements. The open platform includes a cover 32 having a platform contact surface 14. PPG sets protrude above the platform contact surface (aka the "top surface") through holes and contact/abut the bottom surface of a foot 10. The top contact surface of each of the PPG sets is pressed against the bottom surface of the foot at different heights relative to the cover contact surface based on the shape of the foot. For example, one or more PPG sets may extend farther to contact the arch of the foot than the PPG sets that contact the heel of the foot.
In locations where the skin of the foot may actually protrude into the through hole in the platform contact surface 14 of the rigid platform, such as at the heel of the foot, the top contact surface of a PPG set will come into contact with the skin of the foot a distance below the contact surface of the rigid platform. The spring 26 between the bottom of the PPG set 20 and the inner surface of the base 48 is configured to produce a biasing force that brings the top contact surface of the PPG set 20 into contact with the skin of the foot 10 regardless of whether the skin is above the contact surface of the rigid platform (such as with the arch) or below the contact surface 14 of the rigid platform (such as with the heel). The bias provided by the spring 26 does not depend on the weight (e.g., mass) of the person on the rigid platform 16. Whether the user is large or small, standing, sitting, or lying down, as long as a portion of the foot 10 of the user is in contact with the contact surface 14 of the rigid platform, the springs 26 will apply a bias force sufficient to take PPG measurements with each PPG set 20. Accordingly, this bias force is independent of the force that the foot 10 applies to the top surface of the platform. In some embodiments, the rigid platform 16 includes one or more pressure sensors 66, as shown in FIG. 3C. These pressure sensors 66 provide a signal to a placement system configured to start analyzing a foot 10 in response to measurements of the pressure sensor 66. Furthermore, a measurement of the pressure sensor 66 may be combined with a measurement of at least one PPG set 20 to initiate analysis of one or both feet of the user. FIG. 3D schematically illustrates one embodiment of a closed platform to perform PPG measurements. In this embodiment, an enclosure 68 is provided as a part of the platform 16 to bring PPG sets 20 and/or detectors 24 into contact with the top and/or sides of a foot 10. The cover 32 and the base 48 of the closed platform are substantially similar to those of the open platform shown in FIG. 3C. That is, the rigid platform includes at least PPG sets 20, springs 26, and pressure sensors 66. In addition, the closed platform also encloses one or both feet of the user with the enclosure 68 that is attached to, or part of, the rigid platform. The closed platform may be manufactured as a single unit, or it may be assembled from several parts. In some embodiments, the closed platform may operate in a transmissive mode, in a reflective mode, or both simultaneously. By having detectors 24 located on the inside surface of the enclosure 68, it is possible for a light source 22 in the rigid platform to transmit light of a wavelength through the foot and have its intensity measured by the detector located on the inside of the enclosure 68. In some embodiments, the PPG sets 20 and/or detectors 24 are mounted in the enclosure 68 in a manner analogous with how they are mounted in the rigid platform 16. That is, the PPG sets 20 and/or detectors 24 are mounted on springs 26 with a similar bias as described for the springs 26 in the rigid open platform. Furthermore, the PPG sets and/or detectors move normal or non-normal to the platform surface (e.g., like plungers) through openings in a contact surface of the enclosure. Furthermore, in some embodiments, the PPG sets 20 mounted in the enclosure 68 take PPG measurements in substantially the same manner that the PPG measurements are taken by PPG sets 20 in the rigid platform 16.
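Combining a pressure sensor 66 reading with a PPG set 20 reading before starting a scan helps reject false triggers (an object pressing on the mat without skin contact). A minimal sketch of such gating logic; the threshold values and units are illustrative placeholders:

```python
def should_start_scan(pressure_readings, ppg_dc_levels,
                      pressure_threshold=5.0, contact_threshold=0.2):
    """Start analyzing a foot only when at least one pressure sensor registers
    force above a threshold AND at least one PPG set reports a DC level
    consistent with optical contact with skin."""
    foot_present = any(p > pressure_threshold for p in pressure_readings)
    skin_contact = any(level > contact_threshold for level in ppg_dc_levels)
    return foot_present and skin_contact

print(should_start_scan([0.0, 7.2], [0.05, 0.31]))  # True: foot down, optical contact
print(should_start_scan([0.0, 7.2], [0.05, 0.07]))  # False: pressure but no contact
```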
Of note is the fact that this embodiment of FIG. 3D takes readings from another part of the foot 10—the top of the foot—not necessarily the plantar portion. In fact, some embodiments may take measurements from the side of the foot. Accordingly, discussion of just taking measurements from the bottom of the foot is for exemplary purposes only and not intended to limit all embodiments. The measurements in the foot may be used to detect irregularities in the blood flow indicative of early stages of the formation of a foot ulcer. Generally speaking, an ulcer is an open sore on a surface of the body generally caused by a breakdown in the skin or mucous membrane. Diabetics often develop foot ulcers on the soles of their feet as part of their disease. In this setting, foot ulcers often begin as a localized inflammation or infection (e.g., as a pre-ulcer, which has not broken through the skin but exhibits an elevated temperature) that may progress to skin breakdown and infection. It should be noted that discussion of diabetes and diabetics is but one example and used here simply for illustrative purposes only. Accordingly, various embodiments apply to other types of diseases (e.g., stroke, deconditioning, sepsis, friction, coma, etc.) and other types of ulcers—such embodiments may apply generally where there is compression or friction on the living being's body over an extended period of time. For example, various embodiments also apply to ulcers formed on different parts of the body, such as on the back (e.g., bedsores), inside of prosthetic sockets, or on the buttocks (e.g., a patient in a wheelchair). Moreover, illustrative embodiments apply to other types of living beings beyond human beings, such as other mammals (e.g., horses or dogs). Accordingly, discussion of diabetic human patients having foot ulcers is for simplicity only and not intended to limit all embodiments of the invention. The approach described here is useful in improving patient compliance in measuring blood flow in affected areas of the body. If a diseased or susceptible patient does not regularly check his/her feet, then that person may not learn of an ulcer or a pre-ulcer until it has emerged through the skin and/or requires significant medical treatment. Accordingly, illustrative embodiments implement an ulcer monitoring system in any of a variety of forms—preferably in an easy to use form factor that facilitates and encourages regular use. FIGS. 4A and 4B schematically show one form factor, in which a patient/user steps on an open platform 16 that gathers data about that user's feet 10. In this particular example, the open platform 16 is in the form of a floor mat placed in a location where the patient regularly stands, such as in front of a bathroom sink, next to a bed, in front of a shower, on a footrest, or integrated into a mattress. As an open platform 16, the patient simply may step on the top sensing surface of the platform 16 to initiate the process. Accordingly, this and other form factors favorably do not require that the patient affirmatively decide to interact with the platform 16. Instead, many expected form factors are configured to be used in areas where the patient frequently stands during the course of their day without a foot covering. Alternatively, the open platform 16 may be moved to directly contact the feet 10 of a patient that cannot stand. For example, if the patient is bedridden, then the platform 16 may be brought into contact with the patient's feet 10 while in bed.
A bathroom mat and a rug are but two of a wide variety of different potential form factors. Others may include a platform 16 resembling a scale, a stand, a footrest, a console, a tile built into the floor, or a more portable mechanism that receives at least one of the feet 10. The implementation shown in FIGS. 4A and 4B has a top surface area that is larger than the surface area of one or both of the feet 10 of the patient. This enables a caregiver to obtain a complete view of the patient's entire sole, providing a more complete view of the foot 10. The open platform 16 also has some indicia or display 18 on its top surface that can have any of a number of functions. For example, the indicia can turn a different color or sound an alarm after the readings are complete, show the progression of the process, or display results of the process. Of course, the indicia or display 18 can be at any location other than on the top surface of the open platform 16, such as on the side, or a separate component that communicates with the open platform 16. In fact, in addition to, or instead of, using visual or audible indicia, the platform 16 may have other types of indicia, such as tactile indicia/feedback, or thermal indicia. Rather than using an open platform 16, as noted above, alternative embodiments may be implemented as a closed platform, such as a shoe or sock that can be regularly worn by a patient, or worn on an as-needed basis. For example, the insole of the patient's shoe or boot may have the functionality for detecting the emergence of a pre-ulcer or ulcer, and/or monitoring a pre-ulcer or ulcer. To monitor the health of the patient's foot (discussed in greater detail below), the platform 16 of FIGS. 4A and 4B gathers PPG data about a plurality of different locations on the sole of the foot 10. This PPG data provides the core information ultimately used to determine the health of the foot 10. Various Embodiments of the PPG Measurement Implementations FIG. 5A schematically shows an isometric view of one embodiment of the open platform 16. Of course, this embodiment is but one of a number of potential implementations and, like other features, is discussed by example only. As shown, the platform 16 is formed as a stack of functional layers sandwiched between a cover 32 having a top surface to support the foot, and a rigid base 48. Some of those functional layers may provide physical support, intelligence/logic (e.g., circuitry), among other things. PPG sets 20 are shown extending above the top cover 32 via the through holes. For safety purposes, the base 48 preferably is rubberized or has other non-skid features on its bottom side. The platform 16 preferably has a relatively thin profile to avoid tripping the patient and making it easy to use. To measure blood flow on the bottom of the feet, the platform 16 has an array or matrix of PPG sets 20 fixed in place directly underneath the cover 32. More specifically, the PPG sets 20 are positioned in recesses/apertures in the surface of the base. The PPG sets 20 preferably are laid out in a two-dimensional array/matrix with a relatively small pitch or distance between the different PPG sets 20. As shown, this array may be in two discrete areas—one for each foot, or across a larger area taking up substantially the entire top surface. In some embodiments, the array of PPG sets 20 may include temperature sensors, which may include temperature sensitive resistors (e.g., printed or discrete components mounted onto a circuit board), thermocouples, fiberoptic temperature sensors, or a thermochromic film.
Accordingly, when used with PPG sets 20 that require direct contact, illustrative embodiments form the cover 32 with a thin material having a relatively high thermal conductivity. The platform 16 also may use temperature sensors that can still detect temperature through a patient's socks. As discussed in greater detail below and noted above, regardless of their specific type, the plurality of PPG sets 20 generate a plurality of corresponding blood flow data values for a plurality of portions/spots on the patient's foot 10 to monitor the health of the foot 10. Furthermore, temperature data gathering sensors may be included in the platform 16, and the subsequent temperature data may be included with the PPG data in the analysis of the health of the user's foot. Some embodiments also may use pressure sensors for various functions, such as to determine the orientation of the feet 10 and/or to automatically begin the measurement process. Among other things, the pressure sensors may include piezoelectric, resistive, capacitive, or fiber-optic pressure sensors. The platform 16 also may have additional sensor modalities beyond PPG sets, temperature sensors, and pressure sensors, such as positioning sensors, GPS sensors, accelerometers, gyroscopes, and others known by those skilled in the art. FIG. 5B shows an exploded view of the open platform 16 shown in FIG. 5A. This figure shows the rigid base 48 separated from the cover 32, and the PPG sets 20/springs 26 aligned with their prescribed apertures 34 in the cover 32 and recesses 72 in the base. Accordingly, this figure more clearly shows how each PPG set 20 has an associated spring 26 coupled at one end to the bottom of the recess 72 in the rigid base 48 and at the other end to the bottom of the PPG set 20. Each PPG set 20 extends through the cover 32 and is configured to move upwardly and downwardly, in a direction normal to the platform surface 14 (i.e., in the Z-direction), within and outside of its aperture 34. While this one-directional motion may be substantially perpendicular to the x-y plane of the top surface 14 of the cover 32, some embodiments also may orient this movement at an angle to the top surface 14. For example, to contact portions of the foot that are angled, one or more of the PPG sets 20 may be oriented at an angle. Each PPG set 20 moves on its own spring 26 independently of the movement of the other PPG sets 20 so that the array of PPG sets can conform to the shape of the bottom of a user's foot. The PPG sets may be movably coupled in a manner that enables the sensor to move up and down through the hole as pressure/force (normally from the user's foot) is applied. Depending on the location of the user's foot, the spring heights and strengths can be preconfigured to achieve the best signal possible. For example, one or more springs may be nominally set so that the tops of their PPG sets normally (i.e., before foot force is applied) sit between about 5 mm and 10 mm above flush with the top surface of the open platform to target the deep plantar artery, about 10 mm to about 20 mm above flush to target the medial plantar artery, and/or about 7 mm to 15 mm above flush to target the lateral plantar artery. Springs on the deep plantar artery and lateral plantar artery can be configured with a variety of dimensions, including 0.8×9.5×20 mm with a spring constant of 1.067 N/mm. Furthermore, the springs on the medial plantar artery can include dimensions of 0.6×9.5×10 mm with a spring constant of 0.789 N/mm.
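Because the biasing force follows Hooke's law (F = k·x), the spring constants just quoted translate directly into contact forces. A minimal sketch; the 3 mm compression in the example is an assumed value, chosen to show the forces landing within the moderate range (roughly 1 N to 5 N) discussed earlier:

```python
# Hooke's-law sketch of the biasing force: F = k * x, with x the spring
# compression produced when the foot presses a PPG set down toward flush.
SPRING_CONSTANTS_N_PER_MM = {
    "deep plantar": 1.067,      # from the quoted 0.8 x 9.5 x 20 mm spring
    "lateral plantar": 1.067,
    "medial plantar": 0.789,    # from the quoted 0.6 x 9.5 x 10 mm spring
}

def contact_force_newtons(site, compression_mm):
    """Force pressing the PPG set against the sole for a given compression."""
    return SPRING_CONSTANTS_N_PER_MM[site] * compression_mm

# Example: each PPG set pushed down 3 mm by the foot (an assumed compression)
for site in SPRING_CONSTANTS_N_PER_MM:
    print(f"{site}: {contact_force_newtons(site, 3.0):.1f} N")
```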
Indeed, those skilled in the art may adjust the values noted above to accommodate different requirements (e.g., a 10-20% range above, below, or both above and below the noted values).

FIG.5Cshows a cross sectional view of the open platform16shown inFIGS.5A and5B. As shown inFIG.5B, the spring26for each PPG set20is coupled, at one end, to the bottom surface of the recess72and, at the other end, to the bottom surface of the PPG set20. The PPG set20is positioned in the aperture34with the top contact surfaces of the light source and the detector extending through the through hole34in the cover so that they normally are positioned above the top surface14of the cover32. This figure shows one embodiment of the PPG casing28, which at least partially encases the light source and detector. As shown, the PPG casing28has an integrated stop to set a maximum outward distance the PPG set20can traverse away from the top surface14. The geometry of this stop portion is configured to ensure the top surface of the PPG set is appropriately positioned. Moreover, the spring26is biased and selected so that it applies sufficient force to ensure that when at rest (i.e., when not receiving a downward force), the stop abuts the corresponding surface of the aperture shown in this figure.

FIG.6Aschematically shows an isometric view of another embodiment of open platform16. Of course, this embodiment is but one of a number of potential implementations and, like other features, is discussed by example only. Unlike the embodiment ofFIGS.5A-5C, the cover of this embodiment has through trenches70that, together with the apertures of a plurality of other PPG sets, form one aperture. As shown inFIG.6A, the through trenches70have a predetermined pattern that is for illustrative purposes and is not intended to be a limitation on the types of patterns that may be used.

FIG.6Bshows an exploded view of the open platform16shown inFIG.6A. This figure shows how the cover32forms the trenches70, while the rigid base48is substantially the same as in the embodiment ofFIGS.5A-5C. Further, as shown inFIGS.6B and6C, rather than being coil springs, the springs26of this embodiment are formed as leaf springs. These and other embodiments may use other types of springs, such as elastomeric springs, cantilever springs, foam springs, and/or viscoelastic springs. Indeed, the springs of the various embodiments discussed herein may apply to other embodiments. For example, the springs ofFIGS.6A-6Cmay be used in the embodiment ofFIGS.5A-5C.

FIGS.7A-7Cschematically show different views of another embodiment of open platform16. Unlike the above embodiment, the base48and cover32are configured so that the top surface14of the cover32is at an angle relative to the bottom of the base48. This angle may be a generally neutral angle, such as 45 degrees, although it may be another angle between zero degrees and 90 degrees, such as between 10, 20, or 30 degrees and 40, 50, or 60 degrees. This embodiment may be particularly useful when the user is sitting. This embodiment also has guide indicia74on the top surface of the cover to help the user in positioning their feet. As shown, in this embodiment, the indicia74are in the form of a curved, relatively flat member intended for the user to use to position their feet. Specifically, during use, the back of the user's heel should contact/abut the large, concave surface of the member to properly align their feet with the PPG sets20(to the extent possible). Other embodiments, however, may not curve the members, or may form them with other geometries and sizes.
As such, in some embodiments, the indicia74may be straight, or may be bumps protruding from the surface. In other embodiments, rather than being raised above the surface, the indicia74may be flush with the top surface14, and/or recessed into the top surface. For example, the indicia74may be a depression in the top surface14of the cover32, or they may be a marking that is substantially in the plane of the top cover32.

FIG.7Bshows an exploded view of the open platform16shown inFIG.7A. As shown and like the embodiments ofFIGS.6A-6C, the PPG sets20and springs of this embodiment are generally mounted and operate in a manner similar to those ofFIGS.5A-5C. Note that althoughFIGS.5A-7Cdo not explicitly show circuitry layer(s) underlying the PPG sets, various embodiments do have printed circuit boards with circuitry for controlling operation of the overall apparatus. These figures therefore are simplified and not intended to suggest the underlying circuitry is not on-board the platform. As discussed above and below, this circuitry can perform various roles, such as PPG energizing, detection management, system control, data storage (e.g., read-only memory), and networking/communication (e.g., modems).

Networking and Data Measurement

Although the platform16gathers PPG and other data about the patient's foot, illustrative embodiments may locate some or substantially all of the logic for monitoring foot health at another location. For example, such additional logic may be on a remote computing device. To that and other ends,FIG.8schematically shows one way in which the platform16can communicate with a larger data network44in accordance with various embodiments of the invention. As shown, the platform16may connect with the Internet through a local router, through its local area network, or directly without an intervening device. This larger data network44(e.g., the Internet) can include any of a number of different endpoints that also are interconnected. For example, the platform16may communicate with an analysis engine46that analyzes the PPG data and/or thermal data from the platform16and determines the health of the patient's foot10. The platform16also may communicate directly with a health care provider84, such as a doctor, nurse, relative, and/or organization charged with managing the patient's care. In fact, the platform16also can communicate with the patient, such as through text message, telephone call, e-mail communication, or other modalities as the system permits.

FIG.9schematically shows a block diagram of a foot monitoring system, showing the platform16and the network44with its interconnected components in more detail. As shown, the patient communicates with the platform16by standing on or being received in some manner by the array of PPG sets, which is represented in this figure as a "sensor matrix52." A data acquisition block54, implemented by, for example, a motherboard and circuitry, controls acquisition of the PPG data and other data for storage in a data storage device56. Among other things, the data storage device56can be a volatile or nonvolatile storage medium, such as a hard drive, high-speed random-access memory ("RAM"), or solid-state memory. The input/output interface port58, also controlled by the motherboard and other electronics on the platform16, selectively transmits or forwards the acquired data from the storage device to the analysis engine46on a remote computing device, such as a server60.
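As one illustration of that transmit/forward step, the Python sketch below packages a batch of stored readings as JSON and posts it to a remote analysis engine. The endpoint URL, payload schema, and field names are hypothetical placeholders, not details taken from the disclosure.

# Illustrative sketch only: forwarding acquired data from local storage
# to a remote analysis engine. The URL and payload schema are hypothetical.
import json
import urllib.request

def forward_to_analysis_engine(samples, url):
    """POST a JSON batch of readings and return the HTTP status code."""
    payload = json.dumps({"device_id": "platform-16", "samples": samples}).encode("utf-8")
    request = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example batch: per-sensor PPG readings with timestamps (arbitrary units).
batch = [
    {"sensor": 3, "t": 0.00, "ppg": 512},
    {"sensor": 3, "t": 0.01, "ppg": 518},
]
# forward_to_analysis_engine(batch, "https://analysis.example.com/upload")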
The data acquisition block54also may control the user indicators/displays18, which provide feedback to the user through the above-mentioned indicia (e.g., audible, visual, or tactile).

FIG.10schematically shows a block diagram of one embodiment of the various components in the platform16. It should be noted that like other figures,FIG.10is a mere schematic representation to facilitate understanding, and each of the components is operatively connected by any conventional interconnect mechanism. In some embodiments, the device ofFIG.10is integrated into the platform16as shown inFIG.9. In some embodiments, the device may use a wireless connection (e.g., Bluetooth, 4G, 5G, or Wi-Fi) and a power source instead of a direct connection to the remote server60or a computing device, such as a personal computer. Those skilled in the art should understand that each of these components can be implemented in a variety of conventional manners, such as by using hardware, software, or a combination of hardware and software, across one or more other functional components. For example, some functional components may be implemented using a plurality of microprocessors executing firmware. As another example, one or more components may be implemented using one or more application specific integrated circuits (e.g., "ASICs") and related software, or a combination of ASICs, discrete electronic components (e.g., transistors), and microprocessors. Accordingly, the representation of a component as a single box ofFIG.10is for simplicity purposes only. In fact, in some embodiments, some components may be spread across a plurality of different machines, not necessarily within the same housing or chassis.

FIG.11shows a method of measuring blood flow in a foot in accordance with illustrative embodiments. It should be noted that this method is substantially simplified from a longer process that normally would be used to measure the blood flow. Accordingly, the method may have other steps that those skilled in the art likely would use. In addition, some of the steps may be performed in a different order than that shown, or at the same time. Those skilled in the art therefore can modify the process as appropriate.

The method begins at1110, in which a foot10is positioned on the platform16configured like one or more of the embodiments discussed above. To that end, the user physically brings down the foot, first contacting the top surfaces of the PPG sets20and then moving them downwardly (e.g., normal relative to the platform surface) into their apertures34. By doing so, the foot produces a foot force directed toward the contact surface when positioned on the platform contact surface14. This foot force overcomes the biasing force the springs26apply to their PPG sets20. At1120, when the PPG sets20unseat from their biased position against the stops, the foot10begins to receive the biasing force. This biasing force preferably is applied to the foot10during some or all of the path of each PPG set20. Specifically, for a given PPG set20, the foot10continues receiving the biasing force until the foot10is supported on the top surface14of the platform16. Preferably, when the foot10is supported (whether it is fully or partially supported on the surface14), the spring26of each PPG set20(or at least a plurality of the PPG sets) is configured to have excess distance to travel downwardly (from the perspective of the drawings).
As such, the spring26is not considered to have "bottomed out" against some surface or stop within its aperture34. Accordingly, when the foot10is supported on the top surface14and the PPG set20is relatively static (i.e., no longer moving), the foot receives a consistent biasing force. In preferred embodiments, this biasing force received by the foot10has a magnitude that is substantially independent of the foot force (i.e., the force applied by the foot10to the top surface14of the platform16). In other words, the springs26and PPG sets20are configured so that the weight of the user, the position of the user, the position of the feet10on the platform16, and/or the way the user positions their foot/feet on the top surface14(e.g., from a sitting or standing position) has no non-negligible effect on the magnitude of the biasing force received by the foot10. For example, when supported on the platform16, the foot10of a 120 pound user should feel substantially the same biasing force as would a 200 pound user who subsequently uses the same platform16(i.e., assuming the same configurations for the springs26). In some embodiments, the dynamic biasing force received by the user (i.e., while the net foot force is applied and moving the PPG set20downwardly) may vary to some extent depending on the motion and foot force applied, but when static, the foot10should receive the biasing force so that the biasing force is independent of the force received by the platform16/top surface14. Accordingly, in illustrative embodiments, each PPG set20and its corresponding one or more springs26are configured so that the biasing force has a magnitude that is substantially independent of the foot force magnitude. Indeed, as noted, when the user's foot10is static on the platform, some PPG sets20will be flush with the top surface14, some will extend outwardly from the top surface14, and/or some may even be below the top surface14(e.g., if the skin of the foot enters the top of the aperture34).

After producing the biasing force, the process continues to step1130, in which the light source22directs emitted light38into the foot10. Although the light ideally has zero reflection from the outside surface of the foot10, it is expected that some portion of the emitted light38may reflect back and reduce the signal-to-noise ratio. As discussed below, the light source22preferably is configured to abut the surface of the foot to form an effective light seal, minimizing reflections or light leakage at that interface. The detector24also preferably is configured in a similar manner (with a light seal). Next, at step1140, some fraction of the transmitted light38within the foot naturally reflects back toward the detector24. This reflection carries information that step1150uses to determine blood flow characteristics. Some embodiments may use conventional techniques to determine blood flow characteristics, while other embodiments may use proprietary techniques.

Calculation of Cardiovascular Parameters

Among others, cardiac health parameters the platform determines may include heart rate, heart rate variability ("HRV"), respiration rate, and oxygen saturation. Post-processing devices configured with signal processing capabilities can be used to determine heart rate variability, which follows from the calculation of consecutive RR intervals given a photoplethysmography (PPG) waveform.
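The RR-interval bookkeeping behind that calculation can be sketched briefly in Python. The sketch below assumes peak times have already been detected from the PPG waveform; it is a generic formulation of RR intervals, mean heart rate, and the RMSSD statistic commonly used for heart rate variability, not the specific flowcharts discussed below.

# Illustrative sketch only: given peak times (seconds) already detected
# from a PPG waveform, derive consecutive RR intervals, mean heart rate,
# and RMSSD, a common basis for heart rate variability.
import math

def rr_intervals(peak_times_s):
    return [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]

def mean_heart_rate_bpm(intervals_s):
    return 60.0 / (sum(intervals_s) / len(intervals_s))

def rmssd_ms(intervals_s):
    diffs_ms = [(b - a) * 1000.0 for a, b in zip(intervals_s, intervals_s[1:])]
    return math.sqrt(sum(d * d for d in diffs_ms) / len(diffs_ms))

peaks = [0.00, 0.82, 1.66, 2.47, 3.31]  # hypothetical systolic peak times
rr = rr_intervals(peaks)
print(mean_heart_rate_bpm(rr), rmssd_ms(rr))  # about 72.5 bpm, 27.1 ms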
Heart rate variability is a metric of interest given the growing elderly population because it quantifies the variation in the time interval between consecutive heartbeats. Heart rate variability is a marker for how well one can adapt to environmental and psychological challenges, with elevated values being linked with atrial fibrillation, and lower values linked to coronary artery disease. Establishing a pattern of variability given a patient's history allows for fast diagnosis of cardiac complications. In addition, calculation of heart rate can also be completed using an algorithm in post-processing devices (e.g., processors using software) that involves peak detection. Irregular heart rate is a strong predictor of arrhythmia and cardiomyopathy. Lastly, respiration rate can be extracted from a PPG signal using band-pass filters or other digital filters with varying cutoff frequencies. Increasing respiration rates can be indicative of patient deterioration and respiratory failure.

Information Output

The PPG signal may be output to the user in the form of a summary statistic, such as heart rate, heart rate variability, respiration rate, or blood oxygenation. Alternatively, the waveform can be stored and analyzed offline by a medical professional to diagnose other systemic cardiovascular conditions such as arrhythmias or more local conditions such as peripheral vascular disease.

FIGS.12-14show processes that may be used to implement various embodiments. These processes are not described in detail as those skilled in the art can ascertain how to implement them from the charts themselves. It should be noted that those processes are simplified from longer processes. Accordingly, the process may have additional steps that those skilled in the art likely would use. In addition, some of the steps may be performed in a different order than that shown, or at the same time. Those skilled in the art therefore can modify the process and specific parameters as appropriate.

Specifically,FIG.12shows an embodiment of an algorithm76to analyze PPG data.FIG.12shows a "find peaks" technique, which determines whether a certain peak that it finds is valid. In summary, the find peaks technique checks both the amplitude and distance from the previous peak and determines whether or not to include it in the detection process. The algorithm76can be adjusted for each amplitude and distance depending on the heart rate and waveform of the signal collected. Illustrative embodiments also determine a "similarity index," which is indicative of the accuracy of the device in relation to a finger pulse oximeter. Preferably, the similarity index is an output from the device, such as on a display, although heart rate calculations also may be displayed. In summary, the similarity index is calculated by 1) averaging the heart rates that the find peaks technique determines to be in a valid range, 2) subtracting that average from a heart rate measured at the finger, and 3) taking an average. Exemplary logic for the inclusion of certain heart rates is included in a flowchart inFIG.12.

FIG.13shows an embodiment of an algorithm78to analyze PPG data to determine SpO2, also known as "oxygen saturation," which is an important reading for the patient population as it can be indicative of early cardiac decline. The SpO2can be gathered from the device by collecting data with red and infrared wavelengths, as shown inFIG.13.
This data is filtered and sorted in a similar manner, but comparing the amplitudes of the two wavelengths provides an oxygen saturation measurement. Oxygen saturation can be calculated as shown below, where RED and IR refer to the wavelength of light being used:

SpO2 ≈ 110 - 25 × (RMS(RED)/MEAN(RED)) / (RMS(IR)/MEAN(IR))

HRV is another important metric that is often used to determine cardiac health over an extended period of time.FIG.14illustrates a flow diagram80for the determination of HRV from a PPG signal. Specifically, after calculating the average heart rate, the intervals between each "R" section of the peaks, shown inFIG.14, may be calculated. The lengths of these intervals are then used to calculate a root mean square of the successive differences ("RMSSD"). After the RMSSD is calculated, heart rate variability can be extracted.

EXAMPLES

The following examples are intended to further illustrate illustrative embodiments.

Example 1: PPG Apparatus Prototype

A prototype apparatus for measuring blood flow in feet with PPG sets was constructed. To that end,FIG.15Ashows a picture of an embodiment of an open platform modality16that allows one or more photoplethysmography ("PPG") measurements to be taken from the plantar surfaces of the feet of patients. This prototype uses two 1×3 arrays of LED-photodetector pair sensors (e.g., PPG sets20) mounted in the rigid platform16. Guide indicia74are indicated on the top surface14of the cover to direct where the patient communicates his/her foot/feet10with the platform16(e.g., stepping on the noted open platform).

FIG.15Bshows an expanded view of the left 1×3 array ofFIG.15A. The PPG sets20are housed in PPG casings28, as shown inFIG.16A, and connected to springs. The top view of PPG casing28shown inFIG.16Aillustrates the placement of the PPG set20as shown inFIG.15B. Each of the PPG sets20is placed into a designed cover for protection from the environment and from the foot10, with a hole for an LED22and a hole for a photodetector24. The LED-photodetector casing28(e.g., PPG set casing) is made from a thermoplastic polyurethane. These casings28provide structural and waterproofing protection to the electrical LEDs22and photodetectors24, while providing adequate space for soldered-on electrical wiring. Furthermore, the casings28allow the LED-photodetector combination20(e.g., PPG sets) to protrude above flush of the cover32to facilitate contact with the feet and the production and detection of signals.

The encased PPG sets20and springs26are mounted to a base (e.g., a waterproof base) that houses the electronics. To minimize accidents, the base has anti-slip materials on the bottom surface. The PPG sets20are preprogrammed to have specific brightness settings and sampling frequencies. The springs26control the force of contact between the PPG sets20and plantar surfaces of the foot10. A film82transparent at specific wavelengths (e.g., visible light wavelengths or IR wavelengths) at least in part covers the PPG sets20and acts as the interface between the PPG sets20and the feet10, as shown inFIGS.15A and15B. The rigid platform16is further illustrated inFIG.16. A top view of the cover32of the platform16is shown with the two 1×3 arrays of PPG sets20. Two side views of the platform16showing the cover32and the base48are also shown inFIG.17.
The platform16provides an inner housing that fully encloses all electronic components and includes cutouts for each of the following: battery, power switch, microcontroller, sensor, wires, pressure sensor, and LED strip for an interface screen.

A prototype apparatus was used to take PPG measurements from the photodetector-LED sensors in the array(s), and record those readings in memory. A data file containing the recorded data was transmitted to a remote processor. The data was analyzed using software with signal processing and filtering functionality, as well as with a process to calculate heart rate and signal-to-noise ratio. These and other details are discussed in more detail below.

After receipt by the processing device, the raw data was filtered using a combination of high-pass and low-pass digital filters operating at prescribed, different cutoff frequencies and within the frequency range for detectable photoplethysmography waveforms, as described above forFIGS.12-14. There are multiple ways to filter the data so that heart rate and other metrics can be determined from the waveforms; the combination filtering noted here is one exemplary method. Heart rate is an informative metric that the inventors recognized may be acquired from the PPG waveform for the purposes of internal validation of the device. Calculation of heart rate involves an automated peak detection algorithm that can be implemented using a variety of formulations in which peaks of certain amplitude are marked as being of interest. The peaks from this detection algorithm are then utilized to extract a heart rate over a time course determined by the time of interest and total time spent. Another way to verify the value of the signal is determining the signal-to-noise ratio, which can be calculated by utilizing a pre-programmed signal-to-noise ratio function in post-processing software. There are multiple other ways to calculate a signal-to-noise ratio, including calculating the power in both a signal and a noise reading, or taking the ratio of the amplitudes of these signals, both of which can be done using post-processing or analytic software with some level of signal processing capabilities.

The prototype apparatus shown inFIGS.15A and15Bwas utilized to take PPG measurements. As shown inFIGS.15A and15B, the apparatus has two 1×3 arrays with top, middle, and bottom PPG sets for each foot. The data sets measured therefore have three sets of PPG data for each foot.

PPG Measurements and Data Processing

The process ofFIG.12is effective at finding peaks in raw PPG set data that can be used to generate information about blood flow in a person's foot. It is important to note that using the process ofFIG.12, during testing, even poor-quality data could be filtered, and peaks could be detected so that heart rate could be extracted.

FIG.17shows final peaks found for three sets of PPG data (top, middle, and bottom) for each foot after processing the filtered data. By using the "find peaks" algorithm76ofFIG.12with the filtered data, it was possible to find the peaks necessary to calculate a similarity index. This is notable because the initial data was of poor quality.

Various embodiments of the invention may be implemented at least in part in any conventional computer programming language. For example, some embodiments may be implemented in a procedural programming language (e.g., "C"), or in an object-oriented programming language (e.g., "C++").
Other embodiments of the invention may be implemented as a pre-configured, stand-alone hardware element and/or as preprogrammed hardware elements (e.g., application specific integrated circuits, FPGAs, and digital signal processors), or other related components. In an alternative embodiment, the disclosed apparatus and methods (e.g., see the various flow charts described above) may be implemented as a computer program product for use with a computer system. Such implementation may include a series of computer instructions fixed on a tangible, non-transitory medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk). The series of computer instructions can embody all or part of the functionality previously described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. Among other ways, such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink-wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over a network (e.g., the Internet or World Wide Web). In fact, some embodiments may be implemented in a software-as-a-service model ("SAAS") or cloud computing model. Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software. The embodiments of the invention described above are intended to be merely exemplary; numerous variations and modifications will be apparent to those skilled in the art. Such variations and modifications are intended to be within the scope of the present invention as defined by any of the appended claims. | 57,683 |
11857304 | While implementations are described herein by way of example, those skilled in the art will recognize that the implementations are not limited to the examples or figures described. It should be understood that the figures and detailed description thereto are not intended to limit implementations to the particular form disclosed but, on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include," "including," and "includes" mean including, but not limited to.

DETAILED DESCRIPTION

The human body utilizes many different kinds of molecules to function. For example, glucose provides energy for cellular activity while water provides a medium to carry molecules such as glucose and also acts as a reactant. Other molecules may be introduced into the body. For example, alcohol may be consumed, carbon monoxide may be inhaled, a pesticide may be absorbed through the skin, and so forth. Information about the concentration of one or more types of molecules within the tissues of the body is useful in many situations. For example, a person who is diabetic needs to know the concentration of glucose in their blood in order to keep that concentration in a healthy range. In another example, an athlete needs to make sure they are sufficiently hydrated to maximize their physical performance and avoid injury due to dehydration. Continuing the example, the athlete may also want to monitor their sodium and potassium levels to maintain an optimal level of electrolytes.

Traditionally, information about the concentration of one or more types of molecules has been obtained through invasive measurement of a sample obtained from the person. For example, to measure glucose levels a sample of blood is taken and applied to a chemical test strip. In another example, a rough estimate of dehydration can be obtained by assessing skin turgor, such as by pinching the skin on the back of the hand. However, traditional methods have significant drawbacks. Obtaining samples of blood or other tissues within the body requires piercing the skin, injuring the person and introducing a possibility of infection. Additionally, such testing can be costly due to special handling considerations, use of consumables such as reagents, and so forth. Mechanical measurements, such as assessment of skin turgor, lack precision.

Described in this disclosure is an antenna that facilitates non-invasively measuring molecular concentration of one or more types of molecules within a user. The antenna is used to emit and acquire radio frequency (RF) signals. The presence or concentration of types of molecules changes the characteristics of the emitted signal. These changes may then be used to determine the presence or concentration of the types of molecules. Measuring the molecular concentration of the one or more types of molecules is facilitated by acquiring information about signals that penetrate beyond the surface of the skin to some sample depth. For example, the sample depth may describe a useful distance into the user to which the signal is able to penetrate and produce usable data.
The sample depth may be determined based on the radiated power of the RF signal, efficiency of the antenna, power of the signal being acquired, sensitivity of the receiver, and so forth. A particular sample depth may be used to assess types of molecules of interest. For example, for assessing the concentrations of molecules expected to be present in interstitial fluids between cells, such as glucose, a sample depth of between 5 mm and 10 mm may provide useful information.

Impedance in an electronic circuit measures the opposition of that circuit to the flow of current when a voltage is applied. Circuits that involve RF signals have a given impedance at a given frequency. A match in the impedance of a source to a load results in an efficient transfer of signal power. For example, a transmitter may have an output impedance of 50 ohms at 1 GHz. If the transmitter output is connected to a load, such as an antenna, which has a matching impedance of 50 ohms at 1 GHz, the power produced by the transmitter is transferred to the antenna with greatest efficiency. Similarly, a receiver with an input impedance of 50 ohms connected to the same antenna will transfer an acquired signal to the receiver with greatest efficiency.

A mismatch in the impedance of the source to the load can produce a variety of undesirable effects. A mismatch results in a loss of power transferred between the source and the load. In another example, the transmitter output with an output impedance of 50 ohms at 1 GHz is connected to an antenna with a mismatched impedance of 100 ohms at 1 GHz. Due to this mismatch, only a portion of the power produced by the transmitter is transferred to the antenna, decreasing the overall efficiency of the system. The power which is not transferred to the antenna may be reflected back to the transmitter output. As a result, less energy ultimately is emitted by the antenna and the transmitter may be damaged. Similarly, a mismatch between the antenna and the receiver input decreases the power of the acquired signal that is transferred to the receiver. As a result, the strength of the received signal is decreased.

Traditionally, impedance mismatches are either avoided by engineering the circuitry to exhibit substantially the same impedance, or mitigated by using an impedance matching network. However, such engineering results in antennas which do not provide the necessary signal penetration to provide a useful sample depth. Furthermore, such engineering may result in antenna designs which are not suitable for use in a wearable device where the available volume for an antenna is constrained. For example, such antenna designs may require too much volume to be practicable for a wearable device. Impedance matching networks comprise circuitry that operates to transform one impedance to another. However, impedance matching networks introduce additional signal losses, require volume within a wearable device, increase overall system complexity, and increase cost.

The antennas described in this disclosure present, at a first portion, a first impedance which facilitates power transfer to devices connected to the first portion, such as a transmitter or receiver. Within the antennas, a second portion presents a second impedance due to a change in the geometry of the antenna elements. The transition from the first impedance to the second impedance is gradual, reducing or eliminating reflected power. This maximizes the transfer of power of a signal to be emitted as well as power of a signal to be acquired.
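The cost of such a mismatch can be quantified with the standard reflection coefficient from transmission-line theory. The Python sketch below is a generic textbook calculation, not a parameter of the disclosed antennas, applied to the 50 ohm source and 100 ohm load example above.

# Illustrative sketch only: standard figures of merit for the 50 ohm
# source / 100 ohm load mismatch described above (real impedances).

def reflection_coefficient(z_source, z_load):
    """Gamma = (ZL - ZS) / (ZL + ZS)."""
    return (z_load - z_source) / (z_load + z_source)

def reflected_power_fraction(gamma):
    return gamma ** 2

def vswr(gamma):
    g = abs(gamma)
    return (1.0 + g) / (1.0 - g)

gamma = reflection_coefficient(50.0, 100.0)                       # 0.333
print(f"reflected power: {reflected_power_fraction(gamma):.1%}")  # ~11.1%
print(f"VSWR: {vswr(gamma):.2f}")                                 # 2.00

A matched 50 ohm load gives a reflection coefficient of zero, which is why the gradual impedance transition described above can avoid reflected power without a separate matching network.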
The second portion of the antenna produces an electric field which extends farther away from the antenna, to the desired sample depth. A particular sample depth may be selected by selecting a particular geometry. The antennas may comprise several elements arranged with variable spacing between the elements. In one implementation, the antenna may comprise a first element and a second element that are separated from one another by a distance of five millimeters (mm) at a first end of the first section, gradually widen out to be separated by a distance of fifteen mm at a midpoint of the second section, and then narrow back to a five mm distance at a second end. The changes in distance also increase fringing effects, resulting in the electric field associated with the antenna extending farther from the antenna and toward the user. This increases the sample depth to which a transmitted signal penetrates and from which a receive signal may be detected. The change in distance between the elements is gradual in that the elements may curve or taper towards or away from one another, avoiding discontinuities or transitions in the structure of the elements that would introduce an abrupt change in impedance at the frequency or frequencies at which the antenna will be used.

The antennas described in this disclosure are very low profile, having a minimum overall height, allowing them to be readily incorporated into a wearable device. By providing a desired impedance in the first section, no matching network is necessary, improving overall efficiency while reducing size, complexity, and cost. The antennas are relatively broadband, allowing operation on various different frequencies. The antennas are also durable and relatively inexpensive to manufacture. Arrays of antennas may also be used. For example, multiple antennas may be used to increase the volume of the user sampled.

During operation of the system, a radio frequency transmitter generates a first signal that is emitted from one or more antenna elements of the antenna. The same antenna or another antenna may be used to acquire the first signal, which is detected using a radio frequency receiver. One or more signal characteristic values are determined. For example, the signal characteristic values may be indicative of a frequency of the received signal, a phase difference between the signal as transmitted and the signal as received, amplitude of the received signal, and so forth. Signals may be transmitted and received in different frequency bands, providing signal characteristic values for the different bands. For example, a first signal may be transmitted at 5 GHz, a second signal at 10 GHz, a third signal at 50 GHz, a fourth signal at 100 GHz, and so forth.

The signal characteristic values may be compared to molecular reference data to determine one or more of presence of or concentration of one or more types of molecules present within the user. In one implementation, the phase differences at different frequencies may be used to determine a concentration of a type of molecule, such as glucose. For example, the molecular reference data may describe a linear relationship between phase differences at particular frequencies and glucose concentration. In other implementations, the concentration of other types of molecules may be determined. For example, the concentration of water may be determined, providing information about a hydration level of the user.
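A minimal sketch of such a linear mapping follows, in Python. The frequency bands, slopes, and intercept are hypothetical placeholders standing in for calibrated molecular reference data; the disclosure does not specify these coefficients.

# Illustrative sketch only: a linear model of the kind described above,
# mapping per-band phase differences (degrees) to an estimated glucose
# concentration (mg/dL). All coefficients are hypothetical placeholders.

REFERENCE_MODEL = {
    "intercept_mg_dl": 0.0,
    "slopes_mg_dl_per_degree": {
        "5GHz": 18000.0,   # placeholder slope for the 5 GHz band
        "10GHz": 12500.0,  # placeholder slope for the 10 GHz band
    },
}

def estimate_concentration_mg_dl(phase_diffs_deg):
    estimate = REFERENCE_MODEL["intercept_mg_dl"]
    for band, slope in REFERENCE_MODEL["slopes_mg_dl_per_degree"].items():
        estimate += slope * phase_diffs_deg.get(band, 0.0)
    return estimate

# With these placeholder slopes, a 0.004 degree shift in the 5 GHz band
# alone would map to 72 mg/dL.
print(estimate_concentration_mg_dl({"5GHz": 0.004}))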
Overall exposure to radio frequency (RF) signals is limited, as the output power is extremely low and duration of the radio frequency (RF) signals may be very short. For example, the modulation of the signals may be a continuous wave with a total duration of less than 1 millisecond (ms) and with a transmitter output power of 0 decibel-milliwatts (dBm). The sampling frequency, that is, how often the RF signals are transmitted to gather data, may also be low, further reducing RF exposure. For example, the system may transmit signals once every six minutes, producing sets of ten samples per hour with each set comprising signal characteristic data for the various frequency bands.

By using the system with the antenna and techniques described in this disclosure, information about the concentration of various types of molecules at the sample depth within the user may be determined non-invasively. The information provided by the system may be used to help diagnose, treat, or inform the user as to their physiological status. By acting on this information, the overall health of the user may be improved.

Illustrative System

FIG.1is an illustrative system100that may include a user102and a wearable device104that uses radio frequency (RF) signals108to determine molecular concentrations of molecules of interest in at least a portion of the user's102body, according to one implementation. The user102may have one or more devices on or about their person, such as the wearable device104. The wearable device104may be implemented in various physical form factors including, but not limited to, the following: wrist bands, torcs, arm bands, and so forth.

The user's102body contains one or more different types of molecules106. For example, the blood of the user102may include glucose, water, creatinine, and so forth. Sometimes the body may include molecules106that are exogenous. For example, if the user102consumes alcohol, inhales carbon monoxide, absorbs a pesticide through the skin, and so forth, presence or concentration of those types of molecules106may be determined in the dermis, within the blood, or in other tissues within the body. As described below, a radio frequency (RF) signal108may be used to determine information about one or more molecules106.

The wearable device104may include at least one support structure110that supports one or more of the following components. For example, the wearable device104may comprise a housing or capsule that is attached to a wrist band, allowing the wearable device104to be retained on the wrist of the user102. The wearable device104includes one or more antennas112. The one or more antennas112may be mounted to the support structure110. The antennas112may comprise two or more antenna elements in particular arrangements. For example, the antenna112may comprise a first antenna element and one or more other antenna elements with variable distances between one another. The arrangement of antenna elements is discussed in more detail below with regard toFIGS.4-7. In some implementations the wearable device104may include antennas112with different configurations or geometries, allowing for operation at different sample depths.

The antenna elements within a particular antenna112are connected to one or more radio frequency (RF) transceivers114. In one implementation, each antenna112may be connected to a particular RF transceiver114. In other implementations, a single RF transceiver114may be connected via a switching network to two or more antennas112.
The RF transceiver114may comprise an oscillator116, a transmitter118, and a receiver120. The oscillator116may be used to provide a reference frequency for operation of the transmitter118, the receiver120, a clock (not shown), and so forth. The transmitter118is configured to generate the RF signal108. The transmitter118may be able to generate RF signals108at one or more frequencies, in one or more frequency bands or ranges, and so forth. For example, the transmitter118may be able to generate RF signals108at one or more of the 5 GHz, 10 GHz, 50 GHz, 75 GHz, 100 GHz, or other bands. The RF signal108that is generated may be modulated with a continuous wave.

During transmission, the transmitter118provides the RF signal108to one or more of the antenna elements in one or more antennas112. For example, output from the transmitter118may be connected to a first antenna element in the antenna112. The antennas112emit the signal108which then impinges on the body of the user102while the device104is being worn or held close to the user102. In some implementations, during reception, one or more of the antenna elements in the antenna112that are not connected to the transmitter118are connected to an input of the receiver120. Continuing the example above, the receiver120may be connected to the second antenna element in the antenna112. The receiver120detects the RF signal108. The receiver120may comprise analog hardware, digital hardware, or a combination thereof. For example, the receiver120may comprise a direct sampling software defined radio (SDR). In another example, the RF signal108as acquired by the one or more antennas112may be mixed with output from the oscillator116. In other implementations, the same antenna element of the antenna112may be connected to the transmitter118and the receiver120simultaneously using one or more directional couplers, duplexers, or other devices.

The RF transceiver114may be configurable to operate in simplex, duplex, or combinations thereof. For example, the RF transceiver114may be configurable to transmit on one band while receiving on another band. In one implementation, the RF transceiver114may comprise the BGT24LTR11 device from Infineon Technologies AG that is capable of transmitting and receiving in the 24 GHz band. While an RF transceiver114is shown, it is understood that in other implementations other components such as a discrete transmitter118and receiver120could be used.

A control module122may be used to direct operation of the RF transceivers114or other components. For example, the control module122may comprise a hardware processor (processor) executing instructions that operate one or more of the transmitters118to transmit particular signals at particular frequencies at particular times, to operate one or more of the receivers120to receive the signals108generated by the one or more transmitters118, to operate a switching device or other circuitry to connect one or more particular antennas112to the transmitter118output, to operate a switching device or other circuitry to connect one or more particular antennas112to the input of the receiver120, and so forth. The control module122may use one or more data acquisition parameters124to control operation. For example, the data acquisition parameters124may specify a sample frequency that indicates how often to transmit and receive signals108, sample depth within the user102to be used, and so forth.
In some implementations, the data acquisition parameters124may be specific to a particular type of molecule106that is being detected. For example, the data acquisition parameters124for glucose may have a first sample depth that is different from a second sample depth used for organophosphates. The data acquisition parameters124may reference specific operating parameters126. The operating parameters126may specify one or more of frequency, output power, modulation, signal duration, particular antenna112used to emit the signal108, particular antenna112to acquire the signal108, and so forth. For example, the operating parameters126may specify that a signal108is to be transmitted with a center frequency of 5.201 GHz at 0 dBm, continuous wave (CW) modulation, for 1 ms using a chirp with ascending frequency from 5.200 to 5.202 GHz, emitted from the antenna112.

The operating parameters126may relate a sample depth specified by the data acquisition parameters124to a particular antenna configuration. For example, the data acquisition parameters124may indicate a depth in terms of linear measurement such as millimeters or with a relative indicator such as "shallow", "medium", or "deep". Responsive to the data acquisition parameters124, the control module122may determine operating parameters126that are indicative of a particular antenna configuration. For example, a "shallow" sample depth may correspond to a first antenna112(1) with a first antenna configuration. In comparison, a "deep" sample depth may correspond to a second antenna112(2) with a second antenna configuration.

Once the operating parameters126have been determined, the control module122or another component may operate the circuitry in the wearable device104. For example, first circuitry may be operated to selectively couple the output from the transmitter118to the first antenna element and second circuitry may be operated to selectively couple the input to the receiver120to the second antenna element.

The receivers120produce signal characteristic values128that are representative of the received signals. The signal characteristic values128may include, but are not limited to, frequency data130, phase data132, amplitude data134, and so forth. Frequency data130is indicative of frequency of the received signal. The phase data132provides information about the phase of the received signal, and in some implementations may be used to determine a phase difference between the transmitted signal and the received signal. The amplitude data134provides information indicative of amplitude of the received signal. For example, the amplitude data134may indicate a received signal strength at different frequencies. Other signal characteristic values128may include received signal polarization.

As the RF signals108pass through the body of the user102, they are affected by the molecules106therein. Various interactions take place between the signals108and the molecules106. For example, the presence of glucose in the body along the line extending from the antenna112(1) that is emitting the signal108and the antenna112(2) that is acquiring the signal108may result in a change in the phase of the received signal, relative to the transmitted signal. In some implementations, a phase difference that is indicative of this change in phase of the received signal relative to the transmitted signal may be indicative of the concentration of glucose.
For example, with no glucose present a 0 degree phase difference may be detected, while a 0.004 degree phase difference may be associated with the presence of glucose. As described below, a presence or concentration of a type of molecule106may be determined based on the phase difference or other signal characteristics.

A data processing module136may use one or more of the operating parameters126of the transmitted signal(s) or the signal characteristic values128of the received signal(s) as input. The data processing module136may also access molecular reference data138. The molecular reference data138comprises information that, for a particular type of molecule106, associates one or more signal characteristics with information such as concentration of the particular type of molecule106. The molecular reference data138may be general or specific to a particular user102. For example, the molecular reference data138may be generated and associated with a particular user102(1) "Pat".

The data processing module136uses the signal characteristic value(s)128and the molecular reference data138to determine molecular concentration data140. The molecular concentration data140may specify a mass per unit volume. For example, the signal characteristic value128indicates the phase difference at a particular frequency is 0.004 degrees. This value may be used as input to the molecular reference data138which corresponds to molecular concentration data140indicative of a mass per volume, such as a glucose concentration of 159 milligrams per deciliter (mg/dL).

As described below in more detail, the signal characteristic values128may be obtained for a plurality of different frequencies and may be obtained using a variety of different combinations of antennas112to emit and acquire the signals108. The signal characteristic values128may be used to determine the molecular concentration data140for one or more different types of molecules106. For example, the molecular concentration data140may indicate the concentration of glucose and water in the body of the user102.

The wearable device104may include, or receive data from, one or more other sensors142. For example, a temperature sensor may be used to provide an indication of the body temperature of the user102. The body temperature may then be used as an input to the data processing module136to improve the accuracy of the molecular concentration data140. These sensors142are discussed in more detail below with regard toFIG.2. In other implementations data from the sensors142may be obtained to provide other information about physiological status, activity level, and so forth.

Output from the sensors142may also be used to determine operation of the data processing module136. For example, the sensors142may include one or more accelerometers. If the accelerometers detect motion that exceeds a threshold value, the data processing module136may be operated to determine molecular concentration data140. For example, if the user102has been running, the system may operate to determine glucose concentration. In another example, if the motion of the user102is less than a threshold value, the data processing module136may be operated to determine molecular concentration data140. For example, if no movement has been detected for 2 minutes, such as if the user is asleep or unconscious, the data processing module136may be operated to determine molecular concentration data140.

A user interface module144may be configured to use the molecular concentration data140and produce output data146.
For example, based on the molecular concentration data140indicating that the blood glucose level is below a threshold value, output data146may be generated. One or more output devices148may be used to present a user interface based on at least a portion of the output data146. Continuing the example, the user interface module144may produce output data146that comprises instructions to operate a speaker to present an audible prompt indicating a low blood glucose level.

In another example, the output data146may be provided to an other device150. For example, the wearable device104may be connected via Bluetooth or another wireless protocol to a smartphone, wireless access point, in-vehicle computer system, or other device. Based on the output data146the other device150may present an output to the user102, alert someone else, modify operation of another device, and so forth. For example, if the wearable device104provides data to a vehicle that indicates the user102in the driver's seat has a concentration of alcohol that exceeds a threshold value, the vehicle may be prevented from moving, or may only be able to operate in a fully autonomous mode.

FIG.2illustrates a block diagram200of sensors142and output devices148that may be used by the devices of the system100during operation. One or more sensors142may be integrated with or internal to the wearable device104or the other device150. For example, the sensors142may be built into the wearable device104during manufacture. In other implementations, the sensors142may be part of another device which is in communication with the wearable device104. For example, the sensors142may comprise a device external to, but in communication with, the wearable device104using Bluetooth, Wi-Fi, 3G, 4G, 5G, LTE, ZigBee, Z-Wave, or another wireless or wired communication technology. The sensors142may include the RF transceivers114.

The one or more sensors142may include one or more buttons142(1) that are configured to accept input from the user102. The buttons142(1) may comprise mechanical, capacitive, optical, or other mechanisms. For example, the buttons142(1) may comprise mechanical switches configured to accept an applied force from a touch of the user102to generate an input signal.

A proximity sensor142(2) may be configured to provide sensor data324indicative of one or more of a presence or absence of an object, a distance to the object, or characteristics of the object. The proximity sensor142(2) may use optical, electrical, ultrasonic, electromagnetic, or other techniques to determine a presence of an object. For example, the proximity sensor142(2) may comprise a capacitive proximity sensor configured to provide an electrical field and determine a change in electrical capacitance due to presence or absence of an object within the electrical field.

A heart rate monitor142(3) or pulse oximeter may be configured to provide sensor data324that is indicative of a cardiac pulse rate, data indicative of oxygen saturation of the user's102blood, and so forth. For example, the heart rate monitor142(3) may use an optical emitter such as one or more light emitting diodes (LEDs) and a corresponding optical detector such as a photodetector to perform photoplethysmography, determine cardiac pulse, determine changes in apparent color of the blood of the user102resulting from oxygen binding with hemoglobin in the blood, and so forth.

The sensors142may include one or more touch sensors142(4).
The touch sensors142(4) may use resistive, capacitive, surface capacitance, projected capacitance, mutual capacitance, optical, Interpolating Force-Sensitive Resistance (IFSR), or other mechanisms to determine the position of a touch or near-touch of the user102. For example, the IFSR may comprise a material configured to change electrical resistance responsive to an applied force. The location within the material of that change in electrical resistance may indicate the position of the touch.

One or more microphones142(5) may be configured to acquire information about sound present in the environment. In some implementations, arrays of microphones142(5) may be used. These arrays may implement beamforming techniques to provide for directionality of gain. The one or more microphones142(5) may be used to acquire audio data, such as speech from the user102.

A temperature sensor (or thermometer)142(6) may provide information indicative of a temperature of an object. The temperature sensor142(6) may be configured to measure ambient air temperature proximate to the user102, the body temperature of the user102, and so forth. The temperature sensor142(6) may comprise a silicon bandgap temperature sensor, thermistor, thermocouple, or other device. In some implementations, the temperature sensor142(6) may comprise an infrared detector configured to determine temperature using thermal radiation.

The sensors142may include one or more cameras142(7). The cameras142(7) may comprise a charge-coupled device, complementary metal-oxide-semiconductor device, or other image sensor that is able to acquire images.

One or more radio frequency identification (RFID) readers142(8), near field communication (NFC) systems, and so forth, may also be included as sensors142. The user102, objects around the computing device, locations within a building, and so forth, may be equipped with one or more radio frequency (RF) tags. The RF tags are configured to emit an RF signal. In one implementation, the RF tag may be an RFID tag configured to emit the RF signal upon activation by an external signal. For example, the external signal may comprise an RF signal or a magnetic field configured to energize or activate the RFID tag. In another implementation, the RF tag may comprise a transmitter and a power source configured to power the transmitter. For example, the RF tag may comprise a Bluetooth Low Energy (BLE) transmitter and battery. In other implementations, the tag may use other techniques to indicate its presence. For example, an acoustic tag may be configured to generate an ultrasonic signal, which is detected by corresponding acoustic receivers. In yet another implementation, the tag may be configured to emit an optical signal.

The sensors142may include an electrocardiograph142(9) that is configured to detect electrical signals produced by the heart of the user102.

The sensors142may include one or more accelerometers142(10). The accelerometers142(10) may provide information such as the direction and magnitude of an imposed acceleration. Data such as rate of acceleration, determination of changes in direction, speed, and so forth, may be determined using the accelerometers142(10).

A gyroscope142(11) provides information indicative of rotation of an object affixed thereto. For example, the gyroscope142(11) may indicate whether the device has been rotated.

A magnetometer142(12) may be used to determine an orientation by measuring ambient magnetic fields, such as the terrestrial magnetic field.
For example, output from the magnetometer142(12) may be used to determine whether the device containing the sensor142, such as a computing device, has changed orientation or otherwise moved. In other implementations, the magnetometer142(12) may be configured to detect magnetic fields generated by another device. A location sensor142(13) is configured to provide information indicative of a location. The location may be relative or absolute. For example, a relative location may indicate “kitchen”, “bedroom”, “conference room”, and so forth. In comparison, an absolute location is expressed relative to a reference point or datum, such as a street address, geolocation comprising coordinates indicative of latitude and longitude, grid square, and so forth. The location sensor142(13) may include, but is not limited to, radio navigation-based systems such as terrestrial or satellite-based navigational systems. The satellite-based navigation system may include one or more of a Global Positioning System (GPS) receiver, a Global Navigation Satellite System (GLONASS) receiver, a Galileo receiver, a BeiDou Navigation Satellite System (BDS) receiver, an Indian Regional Navigational Satellite System, and so forth. In some implementations, the location sensor142(13) may be omitted or operate in conjunction with an external resource such as a cellular network operator providing location information, or Bluetooth beacons. A pressure sensor142(14) may provide information about the pressure between a portion of the wearable device104and a portion of the user102. For example, the pressure sensor142(14) may comprise a capacitive element, strain gauge, spring-biased contact switch, or other device that is used to determine the amount of pressure between the user's102arm and an inner surface of the wearable device104that is in contact with the arm. In some implementations the pressure sensor142(14) may provide information indicative of a force measurement, such as 0.5 Newtons, a relative force measurement, or whether the pressure is greater than a threshold value. In some implementations, operation of one or more components in the wearable device104may be based at least in part on information from the pressure sensor142(14). For example, based on data provided by the pressure sensor142(14) a determination may be made as to whether at least a portion of the wearable device104is in contact with the user102or another object. Continuing the example, if the pressure indicated by the pressure sensor142(14) exceeds a threshold value, the wearable device104may be determined to be in contact with the user102. Based on this determination that the wearable device104is in contact with the user102, one or more of the transmitter118, receiver120, sensors142, and so forth may be operated. Likewise, data from the pressure sensor142(14) may be used to determine the wearable device104is not in sufficient physical contact with the user102. As a result, one or more of the transmitter118, receiver120, sensors142, and so forth may be turned off. The sensors142may include other sensors142(S) as well. For example, the other sensors142(S) may include strain gauges, anti-tamper indicators, and so forth. For example, strain gauges or strain sensors may be embedded within the wearable device104and may be configured to provide information indicating that at least a portion of the wearable device104has been stretched or displaced such that the wearable device104may have been donned or doffed. 
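The contact-gating behavior described above for the pressure sensor142(14) can be sketched as follows. The threshold value and the class interface are illustrative assumptions; the source mentions 0.5 newtons only as an example force reading, not as the gating threshold:

```python
# Sketch of contact-detection gating: the RF chain runs only while the
# band presses against the arm. Threshold and interface are assumptions.

CONTACT_THRESHOLD_N = 0.5  # assumed force threshold in newtons

class WearableRadio:
    def __init__(self):
        self.transmitter_on = False
        self.receiver_on = False

    def update_from_pressure(self, pressure_newtons: float) -> None:
        """Enable the transmitter and receiver only during skin contact."""
        in_contact = pressure_newtons > CONTACT_THRESHOLD_N
        self.transmitter_on = in_contact
        self.receiver_on = in_contact

radio = WearableRadio()
radio.update_from_pressure(0.7)   # donned: transmitter/receiver operate
radio.update_from_pressure(0.1)   # doffed: RF chain turned off
print(radio.transmitter_on)
```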
In some implementations, the sensors142may include hardware processors, memory, and other elements configured to perform various functions. Furthermore, the sensors142may be configured to communicate by way of the network or may couple directly with the computing device. The computing device may include or may couple to one or more output devices148. The output devices148are configured to generate signals which may be perceived by the user102, detectable by the sensors142, or a combination thereof. Haptic output devices148(1) are configured to provide a signal, which results in a tactile sensation to the user102. The haptic output devices148(1) may use one or more mechanisms such as electrical stimulation or mechanical displacement to provide the signal. For example, the haptic output devices148(1) may be configured to generate a modulated electrical signal, which produces an apparent tactile sensation in one or more fingers of the user102. In another example, the haptic output devices148(1) may comprise piezoelectric or rotary motor devices configured to provide a vibration that may be felt by the user102. One or more audio output devices148(2) are configured to provide acoustic output. The acoustic output includes one or more of infrasonic sound, audible sound, or ultrasonic sound. The audio output devices148(2) may use one or more mechanisms to generate the acoustic output. These mechanisms may include, but are not limited to, the following: voice coils, piezoelectric elements, magnetostrictive elements, electrostatic elements, and so forth. For example, a piezoelectric buzzer or a speaker may be used to provide acoustic output by an audio output device148(2). The display devices148(3) may be configured to provide output that may be seen by the user102or detected by a light-sensitive detector such as an image sensor or light sensor. The output may be monochrome or color. The display devices148(3) may be emissive, reflective, or both. An emissive display device148(3), such as using light emitting diodes (LEDs), is configured to emit light during operation. In comparison, a reflective display device148(3), such as using an electrophoretic element, relies on ambient light to present an image. Backlights or front lights may be used to illuminate non-emissive display devices148(3) to provide visibility of the output in conditions where the ambient light levels are low. The display mechanisms of display devices148(3) may include, but are not limited to, micro-electromechanical systems (MEMS), spatial light modulators, electroluminescent displays, quantum dot displays, liquid crystal on silicon (LCOS) displays, cholesteric displays, interferometric displays, liquid crystal displays, electrophoretic displays, LED displays, and so forth. These display mechanisms are configured to emit light, modulate incident light emitted from another source, or both. The display devices148(3) may operate as panels, projectors, and so forth. The display devices148(3) may be configured to present images. For example, the display devices148(3) may comprise a pixel-addressable display. The image may comprise at least a two-dimensional array of pixels or a vector representation of a two-dimensional image. In some implementations, the display devices148(3) may be configured to provide non-image data, such as text or numeric characters, colors, and so forth. For example, a segmented electrophoretic display device, segmented LED, and so forth, may be used to present information such as letters or numbers. 
The display devices148(3) may also be configurable to vary the color of the segment, such as using multicolor LED segments. Other output devices148(T) may also be present. FIG.3illustrates a block diagram of a computing device300configured to support operation of the system100. As described above, the computing device300may be the wearable device104, the other device150, and so forth. One or more power supplies302are configured to provide electrical power suitable for operating the components in the computing device300. In some implementations, the power supply302may comprise a rechargeable battery, fuel cell, photovoltaic cell, power conditioning circuitry, and so forth. The computing device300may include one or more hardware processors304(processors) configured to execute one or more stored instructions. The processors304may comprise one or more cores. One or more clocks306may provide information indicative of date, time, ticks, and so forth. For example, the processor304may use data from the clock306to generate a timestamp, trigger a preprogrammed action, and so forth. The computing device300may include one or more communication interfaces308such as input/output (I/O) interfaces310, network interfaces312, and so forth. The communication interfaces308enable the computing device300, or components thereof, to communicate with other devices or components. The communication interfaces308may include one or more I/O interfaces310. The I/O interfaces310may comprise interfaces such as Inter-Integrated Circuit (I2C), Serial Peripheral Interface bus (SPI), Universal Serial Bus (USB) as promulgated by the USB Implementers Forum, RS-232, and so forth. The I/O interface(s)310may couple to one or more I/O devices314. The I/O devices314may include input devices such as one or more of a camera142(7), a sensor142, keyboard, mouse, scanner, and so forth. The I/O devices314may also include output devices148such as one or more of a display device148(3), printer, audio output device148(2), and so forth. In some embodiments, the I/O devices314may be physically incorporated with the computing device300or may be externally placed. The network interfaces312are configured to provide communications between the computing device300and other devices, such as the sensors142, routers, access points, and so forth. The network interfaces312may include devices configured to couple to wired or wireless personal area networks (PANs), local area networks (LANs), wide area networks (WANs), and so forth. For example, the network interfaces312may include devices compatible with Ethernet, Wi-Fi, Bluetooth, ZigBee, 4G, 5G, LTE, and so forth. The computing device300may also include one or more busses or other internal communications hardware or software that allow for the transfer of data between the various modules and components of the computing device300. As shown inFIG.3, the computing device300includes one or more memories316. The memory316comprises one or more computer-readable storage media (CRSM). The CRSM may be any one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, a mechanical computer storage medium, and so forth. The memory316provides storage of computer-readable instructions, data structures, program modules, and other data for the operation of the computing device300. 
A few example functional modules are shown stored in the memory316, although the same functionality may alternatively be implemented in hardware, firmware, or as a system on a chip (SOC). The memory316may include at least one operating system (OS) module318. The OS module318is configured to manage hardware resource devices such as the I/O interfaces310, the network interfaces312, the I/O devices314, and provide various services to applications or modules executing on the processors304. The OS module318may implement a variant of the FreeBSD operating system as promulgated by the FreeBSD Project; other UNIX or UNIX-like operating system; a variation of the Linux operating system as promulgated by Linus Torvalds; the Windows operating system from Microsoft Corporation of Redmond, Washington, USA; the Android operating system from Google Corporation of Mountain View, California, USA; the iOS operating system from Apple Corporation of Cupertino, California, USA; or other operating systems. Also stored in the memory316may be a data store320and one or more of the following modules. These modules may be executed as foreground applications, background tasks, daemons, and so forth. The data store320may use a flat file, database, linked list, tree, executable code, script, or other data structure to store information. In some implementations, the data store320or a portion of the data store320may be distributed across one or more other devices including the computing devices300, network attached storage devices, and so forth. A communication module322may be configured to establish communications with one or more of other computing devices300, the sensors142, or other devices150. The communications may be authenticated, encrypted, and so forth. The communication module322may also control the communication interfaces308. One or more of the data acquisition parameters124, operating parameters126, signal characteristic values128, molecular reference data138, or the molecular concentration data140may be stored in the memory316. The memory316may also store the control module122. As described above, the control module122may operate the RF transceivers114to produce signal characteristic values128. The memory316may store the data processing module136. The data processing module136uses the signal characteristic values128, the molecular reference data138, and so forth as input to generate the molecular concentration data140. In one implementation, the data processing module136may use molecular reference data138to generate molecular concentration data140that is indicative of a concentration of one or more types of molecules106in the user102. In some implementations, a calibration process may be performed in which an external sensor is used to obtain external sensor data326that is indicative of a concentration of a type of molecule106. For example, a blood glucose meter that uses a sample of a drop of blood may be used as the external sensor. At a contemporaneous time, the RF transceivers114may be used to obtain the signal characteristic values128. The external sensor data326comprising concentration data from the external sensor may be used in conjunction with the signal characteristic values128to determine a correspondence between one or more signal characteristic values128and molecular concentration. This correspondence may be stored as the molecular reference data138. The molecular reference data138may be specific to a particular user102. 
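One way to picture the calibration step is as a curve fit between contemporaneous external meter readings and measured signal characteristic values. The sketch below uses an ordinary least-squares line as the correspondence; the source does not specify a model, so the linear form and all numeric values are assumptions:

```python
# Sketch of calibration: pair external meter readings with signal
# characteristic values, fit concentration ~= a * signal + b, and store
# the fit as molecular reference data. The linear model is an assumption.

def fit_reference(signal_values, concentrations):
    """Ordinary least-squares fit of concentration against signal value."""
    n = len(signal_values)
    mean_x = sum(signal_values) / n
    mean_y = sum(concentrations) / n
    sxx = sum((x - mean_x) ** 2 for x in signal_values)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(signal_values, concentrations))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return {"slope": a, "intercept": b}  # stored as molecular reference data

def estimate_concentration(reference, signal_value):
    return reference["slope"] * signal_value + reference["intercept"]

ref = fit_reference([0.10, 0.15, 0.22], [80.0, 105.0, 140.0])
print(estimate_concentration(ref, 0.18))  # 120.0 for these sample points
```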
For example, the molecular reference data138may be specific to user102“Pat”. In some implementations, the molecular reference data138may be processed using one or more techniques to interpolate values between those which have been measured. In some implementations, previously acquired molecular reference data138may be used, and a calibration factor may be determined based on the molecular reference data138. Threshold data328may be stored in the memory316. The threshold data328may be used to designate a threshold to which molecular concentration data140may be compared. For example, the threshold data328may specify threshold values for particular types of molecules106. If the molecular concentration data140is less than a first threshold or greater than a second threshold, the user interface module144may generate an alarm and present that information using the output device148. The user interface module144provides a user interface using one or more of the I/O devices314. The user interface module144may be used to obtain input from the user102, present information to the user102, and so forth. For example, the user interface module144may present a graphical user interface on the display device148(3) and accept user input using the touch sensor142(4). Continuing the earlier example, if the molecular concentration data140indicates that the user's102blood glucose level is less than a threshold value, the user interface module144may present information indicative of this on the display device148(3). The user102may then take corrective actions, such as consuming glucose to raise their blood sugar level, reducing activity, and so forth. The computing device300may maintain historical data330. For example, the historical data330may comprise the signal characteristic values128, molecular concentration data140, or sensor data324from one or more of the sensors142obtained at different times. The historical data330may be used to provide information about trends or changes over time. For example, the historical data330may comprise an indication of average daily blood glucose levels of the user102over a span of several weeks. The user102may then use this data to assist in managing their diet and insulin dosage. Other modules332may also be present in the memory316, as well as other data334in the data store320. In different implementations, different computing devices300may have different capabilities or capacities. For example, the other device150may have significantly more processor304capability and memory316capacity compared to the wearable device104. In one implementation, the wearable device104may determine the signal characteristic values128and send those values to the other device150. Other combinations of distribution of data processing and functionality may be used in other implementations. FIG.4illustrates a first implementation400of the antenna112in a slotline configuration. Depicted are a top view402, a side view404of a cross section along a centerline of the antenna112, an end view406of a first cross section perpendicular to the centerline, and an end view408of a second cross section perpendicular to the centerline. The antenna112comprises a substrate410. The substrate410may comprise an electrical insulator such as plastic, glass, fiberglass, and so forth. For example, the substrate410may comprise a dielectric. A first antenna element412and a second antenna element414may be affixed to a first surface of the substrate410.
The antenna elements may comprise a wire, trace, or other electrically conductive material. A geometry or relative arrangement of the antenna elements may be described in terms of three sections. While the geometry depicted here is symmetrical with respect to two axes, asymmetrical designs are also possible, such as described below with regard toFIG.7. For ease of discussion, and not necessarily as a limitation, the antenna112is shown divided into a pair of first sections416(1) and416(2), a pair of second sections418(1) and418(2), and a third section420. The first section416(1) is proximate to a first end of the antenna112while the first section416(2) is proximate to a second end of the antenna112. The second section418(1) is adjacent to the first section416(1) and the second section418(2) is adjacent to the first section416(2). The third section420is between the second section418(1) and the second section418(2). Proximate to the first end of the antenna112, the first antenna element412and the second antenna element414are separated from one another by a distance422. For example, a first point on the first antenna element412and a second point on the second antenna element414that are both a first distance from the first end of the antenna112may be separated by the distance422. The first section416may include the terminals, contacts, or other structure to which other circuitry is attached to the antenna elements. The first section416may exhibit a first impedance at a specified frequency. For example, the first section416may exhibit an impedance that matches an impedance of the RF transceiver114at a frequency used by the RF transceiver114. As the distance from the first end increases, the spacing between the first antenna element412and the second antenna element414increases. For example, within the second section418(1) that is a second distance from the first end of the antenna112, the first antenna element412and the second antenna element414are a second distance424apart. The second distance424is greater than the first distance422. The antenna elements are separated with a geometry that avoids discontinuities such as two straight line sections meeting at a vertex. Instead, the antenna elements gradually separate and merge due to the curvature of the antenna elements. By avoiding discontinuities, the antenna112avoids an abrupt change in impedance which would introduce reflections of the signals on the antenna112. Within the third section420that is shown here centered on a midpoint between the first end and the second end, the first antenna element412and the second antenna element414are a third distance426apart. The third distance426is greater than the second distance424. The impedance in the third section420differs from the impedance in the first section416. Due to the third distance426being greater than the first distance422, the fringe effect of the electric field associated with the signal108in the antenna112is increased, as shown below, resulting in an increased sample depth for the signal108. In some implementations a cover428may be used that is adjacent to the antenna elements and is between the antenna elements and the user102. The cover428may comprise a non-conductive material. For example, the cover428may comprise plastic, glass, and so forth. The cover428may be transparent to the signal(s)108. 
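The gradual separation of the antenna elements described above can be made concrete with a sketch that generates a corner-free gap profile. The raised-cosine blend below is only one possible smooth profile, chosen for illustration; the source requires only that the transition avoid discontinuities that would cause an abrupt change in impedance:

```python
# Sketch of a smoothly tapered gap between the antenna elements: the
# separation widens from the first distance to the third distance and
# back with no corners. Profile shape and dimensions are assumptions.

import math

def gap_profile(x, length, d_first, d_third, taper_fraction=0.3):
    """Element separation at position x along an antenna of given length."""
    t = taper_fraction * length          # length of each tapered section
    if x < t:                            # widen: first -> third distance
        blend = 0.5 - 0.5 * math.cos(math.pi * x / t)
    elif x > length - t:                 # narrow: third -> first distance
        blend = 0.5 - 0.5 * math.cos(math.pi * (length - x) / t)
    else:                                # middle (third) section
        blend = 1.0
    return d_first + (d_third - d_first) * blend

for x in (0.0, 10.0, 30.0, 50.0):
    print(f"x={x:5.1f} mm  gap={gap_profile(x, 60.0, 1.0, 6.0):.2f} mm")
```

The cosine blend has zero slope at both ends of each taper, so the profile joins the straight sections without a vertex.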
In the implementation depicted, the cover428is arranged atop the first sections416(1) and416(2) while the portion of the first antenna element412and the second antenna element414in the second sections418(1) and418(2) and the third section420may come into direct contact with the skin of the user102. In other implementations the cover428may cover all sections of the antenna112or may be omitted. The antenna elements may comprise a biocompatible material such as gold, silver, rhodium, and so forth. In addition to being used to emit and acquire the signal108, in implementations where the antenna elements are in contact with the user102, they may be used to acquire other information. For example, galvanic skin conductivity may be measured using two or more antenna elements, cardiac electrical signals may be acquired using one or more of the antenna elements, and so forth. The antenna elements may be on, affixed to, incorporated within, or otherwise maintained by the substrate410. The substrate410may be rigid or flexible. For example, the substrate410may comprise a plastic layer upon which the antenna elements have been deposited, printed, adhered, and so forth. In one implementation the antenna112may comprise a flexible printed circuit with the antenna elements comprising traces thereon. One or more apertures430or sensors142(not shown) may be located between or near the antenna elements of the antenna112. The aperture430may provide a window or opening in the substrate410to facilitate operation of the wearable device104. For example, the aperture430may provide a window through which an optical sensor such as a light emitting diode (LED) or a camera142(7) is able to operate and acquire data about the user102. In another example, the aperture430may be used by another sensor142, such as a capacitive sensor, pressure sensor142(14), and so forth. Some devices may be mounted to the substrate410or may be located between the antenna112and the user102during operation. For example, an LED may be affixed to the substrate410and when operated may illuminate a portion of the user102that is proximate to the inner surface of the wearable device104. In some implementations sensors142may operate through the substrate410. For example, if the substrate410is flexible a pressure sensor142(14) may operate through the substrate410. In another example the substrate410may be transmissive to a signal being detected, such as a particular frequency of light. The side view404depicts a cross section of the antenna112along a centerline indicated by line A-A. Shown in this view is a representation of an electric field432associated with a signal108. In the first sections416(1) and416(2) the field432extends a distance434from the substrate410. In the third section420the field432extends a distance436from the substrate410, where distance436is greater than distance434. During use, the antenna112is placed proximate to the user102such that the field432impinges on at least a portion of the user102. For example, the antenna112may be arranged such that the portion of the first antenna element412and the second antenna element414in the third section420are in contact with the skin of the user102. During operation, the field432extends into the user102. The end view406along line B-B also shows the distance434while the end view408along line C-C shows the distance436. FIG.5illustrates a second implementation500of the antenna112in a planar waveguide configuration. 
Depicted are a top view502, a side view504of a cross section along a centerline of the antenna112, an end view506of a first cross section perpendicular to the centerline, and an end view508of a second cross section perpendicular to the centerline. The antenna112comprises a substrate410. A first antenna element510, a second antenna element512, and a third antenna element514may be affixed to a first surface of the substrate410. The antenna elements may comprise a wire, trace, or other electrically conductive material. For example, the transmitter118may have a transmitter output comprising a first output terminal and a second output terminal. The first output terminal may comprise a signal line while the second output terminal may comprise a ground line associated with an amplifier of the transmitter118. The first output terminal may be connected to the second antenna element512at a first end of the antenna112while the second output terminal may be connected to one or more of the first antenna element510or the third antenna element514at the first end of the antenna112. The second antenna element512extends along a centerline of the antenna112. The first antenna element510and the third antenna element514mirror one another, on opposite sides of the second antenna element512. For example, a first point on an inner edge of the first antenna element510and a second point on an inner edge of the third antenna element514are at substantially equal distances from the centerline of the antenna112, wherein a line extending through the first point and the second point is perpendicular to the centerline. A first distance and a second distance may be deemed substantially equal if they are within a threshold value of one another. The threshold value may be determined based on manufacturing processes, tolerances associated with design of the antenna, and so forth. A geometry or relative arrangement of the antenna elements may be described in terms of three sections. While the geometry depicted here is symmetrical with respect to two axes, asymmetrical designs are also possible, such as described below with regard toFIG.7. For ease of discussion, and not necessarily as a limitation, the antenna112is shown divided into a pair of first sections516(1) and516(2), a pair of second sections518(1) and518(2), and a third section520. The first section516(1) is proximate to a first end of the antenna112while the first section516(2) is proximate to a second end of the antenna112. The second section518(1) is adjacent to the first section516(1) and the second section518(2) is adjacent to the first section516(2). The third section520is between the second section518(1) and the second section518(2). Proximate to the first end of the antenna112, the first antenna element510and the second antenna element512are separated from one another by a distance522. For example, a first point on the first antenna element510and a second point on the second antenna element512that are both a first distance from the first end of the antenna112may be separated by the distance522. Outermost edges, farthest from the second antenna element512, of the first antenna element510and the third antenna element514may be parallel to the second antenna element512. In contrast, at least a portion of the innermost edges of the first antenna element510and the third antenna element514are not parallel to the second antenna element512. The first section516may include the terminals, contacts, or other structure to which other circuitry is attached to the antenna elements.
The first section516may exhibit a first impedance at a specified frequency. For example, the first section516may exhibit an impedance that matches an impedance of the RF transceiver114at a frequency used by the RF transceiver114. As the distance from the first end increases, the spacing between the innermost edges of first antenna element510and the second antenna element512increases. For example, within the second section518(1) that is a second distance from the first end of the antenna112, the first antenna element510and the second antenna element512are a second distance524apart. The second distance524is greater than the first distance522. The antenna elements are separated with a geometry that avoids discontinuities such as two straight line sections meeting at a vertex. Instead, the antenna elements gradually separate and merge due to the curvature of the antenna elements. By avoiding discontinuities, the antenna112avoids an abrupt change in impedance which would introduce reflections of the signals on the antenna112. Within the third section520that is shown here centered on a midpoint between the first end and the second end, the first antenna element510and the second antenna element512are a third distance526apart. The third distance526is greater than the second distance524. The impedance in the third section520differs from the impedance in the first section516. Due to the third distance526being greater than the first distance522, the fringe effect of the electric field associated with the signal108in the antenna112is increased, as shown below, resulting in an increased sample depth for the signal108. In some implementations a cover428may be used that is adjacent to the antenna elements and is between the antenna elements and the user102. The cover428may comprise a non-conductive material. For example, the cover428may comprise plastic, glass, and so forth. The cover428may be transparent to the signal(s)108. In the implementation depicted, the cover428is arranged atop the first sections516(1) and516(2) while the portion of the first antenna element510and the second antenna element512in the second sections518(1) and518(2) and the third section520may come into direct contact with the skin of the user102. In other implementations the cover428may cover all sections of the antenna112or may be omitted. As described above, the antenna elements may comprise a biocompatible material such as gold, silver, rhodium, and so forth. In addition to being used to emit and acquire the signal108, in implementations where the antenna elements are in contact with the user102, they may be used to acquire other information. The antenna elements may be on, affixed to, incorporated within, or otherwise maintained by the substrate410. The substrate410may be rigid or flexible. For example, the substrate410may comprise a plastic layer upon which the antenna elements have been deposited, printed, adhered, and so forth. In one implementation the antenna112may comprise a flexible printed circuit with the antenna elements comprising traces thereon. One or more apertures430or sensors142(not shown here) may be located between or near the antenna elements of the antenna112. The side view504depicts a cross section of the antenna112along a centerline indicated by line A-A. Shown in this view is a representation of an electric field432associated with a signal108. In the first sections516(1) and516(2) the field432extends a distance528from the substrate410. 
In the third section520the field432extends a distance530from the substrate410, where distance530is greater than distance528. During use, the antenna112is placed proximate to the user102such that the field432impinges on at least a portion of the user102. The end view506along line B-B also shows the distance528while the end view508along line C-C shows the distance530. Also depicted in this illustration is that the first antenna element510, the second antenna element512, and the third antenna element514are located in a common plane, that is they are coplanar with one another. FIG.6illustrates a third implementation600of the antenna112in a microstrip configuration. Depicted are a top view602, a side view604of a cross section along a centerline of the antenna112, an end view606of a first cross section along line B-B that is perpendicular to the centerline, and an end view608of a second cross section along line C-C that is perpendicular to the centerline. The antenna112comprises a dielectric610. For example, the dielectric610may comprise a plastic, glass, fiberglass, or other material. The antenna elements may be on, printed onto, deposited, adhered, affixed to, laminated to, incorporated within, or otherwise maintained by the dielectric610. The antenna elements of the antenna112may comprise a wire, trace, or other electrically conductive material. The dielectric610may be rigid or flexible. In some implementations the dielectric610may comprise a substrate. A first antenna element612is shown affixed to a first surface of the dielectric610. The first antenna element612extends along a centerline of the antenna112. As described above, the geometry or relative arrangement of the antenna elements may be described in terms of three sections. While the geometry depicted here is symmetrical with respect to two axes, asymmetrical designs are also possible, such as described below with regard toFIG.7. For ease of discussion, and not necessarily as a limitation, the antenna112is shown divided into a pair of first sections614(1) and614(2), a pair of second sections616(1) and616(2), and a third section618. The first section614(1) is proximate to a first end of the antenna112while the first section614(2) is proximate to a second end of the antenna112. The second section616(1) is adjacent to the first section614(1) and the second section616(2) is adjacent to the first section614(2). The third section618is between the second section616(1) and the second section616(2). The first antenna element612has a first width620and may include the terminals, contacts, or other structure to which other circuitry is attached to the antenna elements. The side view604depicts a cross section of the antenna112along a centerline indicated by line A-A. A second antenna element622is shown affixed to a second surface of the substrate410. The second surface is on a side opposite the first surface. The second antenna element622also extends along a centerline of the antenna112and is wider than the first width620. As the distance from the first end increases, the thickness of the dielectric610varies. The end view606at the first end of the antenna112shows the dielectric610with a first thickness624. Towards a midpoint of the antenna112as shown in the end view608, the dielectric610has a second thickness626that is greater than the first thickness624. Shown in this view is a representation of the electric field432associated with a signal108. In the first sections614(1) and614(2) the field432extends a distance628from the dielectric610. 
In the third section618the field432extends a distance630from the dielectric610, where distance630is greater than distance628. During use, the antenna112is placed proximate to the user102such that the field432impinges on at least a portion of the user102. Also shown in the end views606and608is that the second antenna element622is wider than the first antenna element612. The first section614may exhibit a first impedance at a specified frequency. For example, the first section614may exhibit an impedance that matches an impedance of the RF transceiver114at a frequency used by the RF transceiver114. The antenna elements are separated with a geometry that avoids discontinuities such as two straight line sections meeting at a vertex. Instead, the antenna elements gradually separate and then merge due to the change in thickness of the dielectric610. By avoiding discontinuities, the antenna112avoids an abrupt change in impedance which would introduce reflections of the signals on the antenna112. The impedance in the third section618differs from the impedance in the first section614. Due to the second thickness626being greater than the first thickness624, the fringe effect of the electric field associated with the signal108in the antenna112is increased, as shown below, resulting in an increased sample depth for the signal108. In some implementations a cover428(not shown) may be used that is adjacent to the antenna elements and is between the antenna elements and the user102. The cover428may comprise a non-conductive material. For example, the cover428may comprise plastic, glass, and so forth. The cover428may be transparent to the signal(s)108. The cover428may be arranged atop the first sections614(1) and614(2) while the portion of the first antenna element612and the second antenna element622in the second sections616(1) and616(2) and the third section618may come into direct contact with the skin of the user102. In other implementations the cover428may cover all sections of the antenna112or may be omitted. As described above, the antenna elements may comprise a biocompatible material such as gold, silver, rhodium, and so forth. In addition to being used to emit and acquire the signal108, in implementations where the antenna elements are in contact with the user102, they may be used to acquire other information. One or more apertures430or sensors142(not shown here) may be located between or near the antenna elements of the antenna112. For example, an aperture430may extend through the dielectric610and the second antenna element622. FIG.7illustrates various implementations700of the antenna112. While the antenna elements in the implementations shown inFIGS.4-6are symmetrical with respect to at least one axis, asymmetrical geometries may be used. A first implementation702depicts the substrate410, a first antenna element704, and a second antenna element706on a same surface or side of the substrate410. At a first end of the antenna112in this implementation, the innermost edges of the first antenna element704and the second antenna element706are a first distance708apart. The second antenna element706extends in a straight line along a long axis of the antenna112. In comparison, the first antenna element704is arcuate, curving away from the second antenna element706until a maximum distance710between the innermost edges of the first antenna element704and the second antenna element706is attained. 
The point of maximum distance710is located closer to the second end of the antenna112than to the first end of the antenna112. The first antenna element704then curves back towards the second antenna element706to be the first distance708apart at the second end of the antenna. The placement of the portion of the antenna112with the enhanced fringing due to the maximum distance710may take into consideration anatomical features of the user102. For example, the skin in humans on the ventral (inner) portion of the wrist is typically thinner than the skin on the dorsal (outer) portion of the wrist. In one implementation the antenna112may be incorporated into the support structure110, such as a wrist band. To improve the detection of particular types of molecules106, the maximum distance710between the antenna elements704and706may be positioned to be proximate to the ventral portion during wear. Also shown is a sensor142affixed to the substrate410. A second implementation712depicts the substrate410, a first antenna element714, and a second antenna element716. At a first end of the antenna112in this implementation, the innermost edges of the first antenna element714and the second antenna element716are a first distance718apart. The second antenna element716extends in a straight line along a long axis of the antenna112. In comparison, the first antenna element714is arcuate, curving away from the second antenna element716until a maximum distance720between the innermost edges of the first antenna element714and the second antenna element716is attained. The point of maximum distance720is located at or near a midpoint between the first end and the second end of the antenna112. The first antenna element714then curves back towards the second antenna element716to be the first distance718apart at the second end of the antenna. An aperture430is shown in the substrate410, located between the first antenna element714and the second antenna element716. A third implementation722depicts the substrate410, a first antenna element724, a second antenna element726, and a third antenna element728. The second antenna element726extends along a centerline of the antenna112. The first antenna element724and the third antenna element728mirror one another, on opposite sides of the second antenna element726. At a first end of the antenna112in this implementation, the innermost edges of the first antenna element724and the second antenna element726are a first distance730apart. The second antenna element726extends in a straight line along a long axis of the antenna112. Proximate to the first end of the antenna112, the first antenna element724and the second antenna element726are separated from one another by a distance730. For example, a first point on the first antenna element724and a second point on the second antenna element726that are both a first distance from the first end of the antenna112may be separated by the distance730. Outermost edges, farthest from the second antenna element726, of the first antenna element724and the third antenna element728may be parallel to the second antenna element726. In contrast, at least a portion of the innermost edges of the first antenna element724and the third antenna element728are not parallel to the second antenna element726. As the distance from the first end increases, the spacing between the innermost edges of the first antenna element724and the second antenna element726increases.
For example, at a second distance from the first end of the antenna112, the first antenna element724and the second antenna element726are a maximum distance732apart. The maximum distance732is greater than the first distance730. As shown in this illustration, the point of maximum distance732is located between a midpoint of the antenna112and the second end of the antenna112. A fourth implementation734depicts the dielectric610, a first antenna element736on a first side of the dielectric610and a second antenna element738on a second side of the dielectric610. The thickness of the dielectric610varies from a first thickness740at the first end to a second thickness742that is greater than the first thickness. Towards the second end, the thickness of the dielectric610then reduces back to the first thickness740. The bulge in the dielectric610to the second thickness742may be located between the first end and a midpoint of the antenna112. In some implementations the thickness of the antenna elements may change. These changes in thickness may also be gradual, to avoid discontinuities that would reflect power along the antenna elements. For example, a feature744comprising a first portion of the first antenna element736that is thicker than a second portion of the first antenna element736may be provided. In the implementation depicted here, the feature744is positioned within the area with the second thickness742. This feature744may further enhance the fringing effect, changing the sample depth. In other implementations, instead of, or in addition to, a change in thickness, one or more antenna elements may exhibit a change in width. The antennas112described in this disclosure may be used in various configurations. In one implementation, the antenna112may be used in a double-ended or through mode, with the transmitter118connected to the antenna elements on the first end of the antenna112and the receiver120connected to the antenna elements on the second end. In another configuration, the antenna112may be used in a single-ended mode in which a device such as the transmitter118or the receiver120is attached to the antenna elements on the first end and an electrical resistance is placed across the antenna elements on the second end. For example, a 50 ohm resistor may have a first terminal and a second terminal. The first terminal may be connected to an end of the first antenna element proximate to the second end of the antenna112while the second terminal is connected to the end of the second antenna element proximate to the second end of the antenna112. In other implementations the resistor may comprise a feature, such as a printed resistor pattern placed on the substrate410or the dielectric610. In some implementations the opposite ends of the antenna112may have different impedances. For example, the terminals of the antenna112at the first end may be separated by a first distance and have a first impedance at a given frequency while the terminals at the second end may be separated by a second distance greater than the first distance and have a second impedance at the given frequency that differs from the first. A resistance that corresponds to the impedance may be placed across the terminals at the second end. For example, the antenna112may experience 50 ohms of impedance at the given frequency across the terminals at the first end and 1000 ohms of impedance at the given frequency across the terminals at the second end. A resistance of 1000 ohms may connect the terminals at the second end. 
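The reason the terminating resistance is chosen to correspond to the impedance at the second end can be shown with the standard voltage reflection coefficient, which vanishes for a matched load. The following is a minimal sketch using that textbook relation:

```python
# Reflection coefficient at a termination: gamma = (ZL - Z0) / (ZL + Z0).
# A matched load (ZL == Z0) reflects nothing back along the antenna.

def reflection_coefficient(z_load: complex, z_line: complex) -> complex:
    """Voltage reflection coefficient at a terminated line end."""
    return (z_load - z_line) / (z_load + z_line)

print(abs(reflection_coefficient(50, 50)))      # matched 50 ohm end: 0.0
print(abs(reflection_coefficient(1000, 1000)))  # matched 1000 ohm end: 0.0
print(abs(reflection_coefficient(50, 1000)))    # mismatch: strong reflection
```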
The transmitter118may have a transmitter output comprising a first output terminal and a second output terminal. For example, the first output terminal may comprise a signal line while the second output terminal may comprise a ground line associated with an amplifier of the transmitter118. The first output terminal may be connected to the first antenna element at a first end of the antenna112while the second output terminal may be connected to the second antenna element at the first end of the antenna112. The receiver120may have a receiver input that comprises a first input terminal and a second input terminal. For example, the first input terminal may comprise a signal line while the second input terminal may comprise a ground line. In the double-ended mode, the first input terminal may be connected to the first antenna element at a second end of the antenna112while the second input terminal may be connected to the second antenna element at the second end of the antenna112. In other implementations other arrangements of electrical conductors and insulators may be used to produce the antenna112. For example, in the implementations depicted at500,702,712,722and so forth, the arrangement of electrical conductors and substrate or dielectric may be inverted. Continuing the example, in the implementation of722the shaded area(s) indicated as substrate410may instead be electrical conductors acting as antenna elements while the dark areas indicated as antenna elements may instead be the substrate or a dielectric. FIG.8illustrates a flow diagram800of a process of using radio frequency signals108emitted and acquired by one or more antennas112to determine molecular concentration data140, according to one implementation. The process may be implemented at least in part by the wearable device104. At802a first signal108is emitted using the first antenna112. For example, the transmitter118may generate the signal108which is provided to the first antenna112(1) which emits or radiates the signal108towards the user102. At804, the first signal is acquired using the first antenna112(1) or another antenna112(N). In some implementations, the first signal may be acquired using an antenna112not connected to the transmitter118. For example, the first signal108may be emitted by the first antenna112(1) and be acquired by a second antenna112(2). In other implementations one or more directional couplers, duplexers, or other devices may be used to transmit and acquire the first signal using the first antenna112(1). At806first signal characteristic values128of the first signal as acquired are determined. For example, the signal characteristic values128may include frequency data130, phase data132, amplitude data134, and so forth. At808, based on the first signal characteristic values128, molecular concentration data140is determined. For example, the first signal characteristic values128may be used as input to the molecular reference data138to determine a corresponding concentration of a particular type of molecule106. In another example the first signal characteristic value(s)128may be provided as input to a machine learning system which then provides as output the molecular concentration data140. At810output indicative of the molecular concentration data140is presented. In one implementation, the user interface module144may generate output data146that is used by the one or more output devices148to present output to the user102.
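A compact sketch of steps806-810is shown below: amplitude and phase data are derived from an acquired signal and then mapped to a concentration that could be presented as output. The I/Q sample representation and the linear reference model are assumptions made for the example; the source also allows a machine learning system in place of the reference lookup:

```python
# Sketch of deriving signal characteristic values from an acquired signal
# and applying molecular reference data. I/Q form and model are assumed.

import cmath

def signal_characteristics(i: float, q: float) -> dict:
    """Amplitude and phase of an acquired signal from I/Q samples."""
    sample = complex(i, q)
    return {"amplitude": abs(sample),
            "phase_deg": cmath.phase(sample) * 180.0 / cmath.pi}

def to_concentration(characteristics: dict, reference: dict) -> float:
    """Apply molecular reference data (here a linear model) to the values."""
    return (reference["slope"] * characteristics["amplitude"]
            + reference["intercept"])

values = signal_characteristics(0.12, -0.05)
print(to_concentration(values, {"slope": 550.0, "intercept": 8.0}))
```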
For example, a graphical indication may be provided using a display device148(3) of the other device150. In some implementations a sample depth may be determined. For example, the sample depth may be determined based on a type of molecule106that is being measured, data from one or more of the other sensors142, and so forth. For example, the sample depth may be determined based on sensor output from the temperature sensor142(6). A first sample depth may be determined if the user's102temperature is within a first range of temperatures while a second sample depth may be determined if the user's102temperature is within a second range of temperatures. In another example, the sample depth may be determined based on the amplitude and duration of motion as indicated by the accelerometer142(10). For example, if amplitude and duration of motion is less than a first threshold value, a first sample depth may be determined. If the amplitude and duration of motion is greater than the first threshold value, a second sample depth may be determined. In still another example, the sample depth may be provided based on information about the user102. For example, the sample depth may be determined based on the diameter of the user's102wrist. Based on the first sample depth, a determination is made as to which of the one or more antennas112to use. For example, the first antenna112(1) may provide the first sample depth, while the second antenna112(2) provides a second sample depth, a third antenna112(3) provides a third sample depth, and so forth. In another example, two or more antennas112may be used that are separated by some distance to produce a desired sample depth. For example, the first antenna112(1) may be connected to the transmitter118while the second antenna112(2) is connected to the receiver120. The increased distance between the two antennas may provide an increased sample depth. The switching circuitry or other circuitry in the wearable device104may be operated to provide a particular combination of antennas112. The antenna112may be implemented in various combinations of the implementations described. For example, one combination may comprise a first, second, and third antenna element. The first antenna element and the second antenna element may be arranged on a first side of a dielectric610with a variable spacing between the two elements, such as depicted in the top view402inFIG.4. The third antenna element may be arranged on a second side of the dielectric610that is opposite the first side. The thickness in the dielectric610may vary, as depicted in the side view604inFIG.6. Other combinations are also possible. While the system and techniques described herein are used with respect to measuring humans, it is understood that these techniques may be used to monitor other types of animals. In some implementations, the systems and techniques may be used to characterize other objects. For example, the system may be used to determine a sugar concentration in a fruit, water concentration in a mixture, and so forth. The processes discussed herein may be implemented in hardware, software, or a combination thereof. In the context of software, the described operations represent computer-executable instructions stored on one or more non-transitory computer-readable storage media that, when executed by one or more processors, perform the recited operations.
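Returning to the sample-depth selection described earlier in this passage, the antenna-selection logic might be sketched as follows. The per-antenna depths, temperature ranges, and motion threshold are all invented for illustration; the source specifies only that such inputs may drive the determination:

```python
# Sketch of choosing a sample depth from sensor readings and then picking
# the antenna whose depth is closest. All numeric values are assumptions.

ANTENNA_DEPTHS_MM = {"antenna_1": 1.0, "antenna_2": 2.5, "antenna_3": 4.0}

def choose_sample_depth(skin_temp_c: float, motion_amplitude: float) -> float:
    """Pick a target sample depth from temperature and motion readings."""
    depth = 2.5 if skin_temp_c < 33.0 else 1.0   # assumed temperature ranges
    if motion_amplitude > 0.2:                   # assumed motion threshold
        depth = max(depth, 4.0)
    return depth

def select_antenna(target_depth_mm: float) -> str:
    """Use the antenna whose sample depth is closest to the target."""
    return min(ANTENNA_DEPTHS_MM,
               key=lambda name: abs(ANTENNA_DEPTHS_MM[name] - target_depth_mm))

print(select_antenna(choose_sample_depth(31.5, 0.05)))  # -> antenna_2
```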
Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. Those having ordinary skill in the art will readily recognize that certain steps or operations illustrated in the figures above may be eliminated, combined, or performed in an alternate order. Any steps or operations may be performed serially or in parallel. Furthermore, the order in which the operations are described is not intended to be construed as a limitation. Embodiments may be provided as a software program or computer program product including a non-transitory computer-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform processes or methods described herein. The computer-readable storage medium may be one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, and so forth. For example, the computer-readable storage media may include, but is not limited to, hard drives, optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. Further, embodiments may also be provided as a computer program product including a transitory machine-readable signal (in compressed or uncompressed form). Examples of transitory machine-readable signals, whether modulated using a carrier or unmodulated, include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, including signals transferred by one or more networks. For example, the transitory machine-readable signal may comprise transmission of software by the Internet. Separate instances of these programs can be executed on or distributed across any number of separate computer systems. Thus, although certain steps have been described as being performed by certain devices, software programs, processes, or entities, this need not be the case, and a variety of alternative implementations will be understood by those having ordinary skill in the art. Additionally, those having ordinary skill in the art will readily recognize that the techniques described above can be utilized in a variety of devices, environments, and situations. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the claims. | 86,669 |
11857305 | The drawings are not necessarily to scale and the dimensions of certain features may have been exaggerated for the sake of clarity. Emphasis is instead placed upon illustrating the principle of the embodiments herein. DETAILED DESCRIPTION The embodiments herein relate to detection of an internal object in a body, and more specifically an internal object that is asymmetrically positioned with respect to a symmetry line of the body. This may also be referred to as detection of one or more dielectric targets with certain properties, such as size, shape, position, dielectric parameters, etc., that are immersed inside another dielectric medium. A further description of the embodiments herein is that they relate to interrogating the interior of a larger object/body and to detecting the presence of, or variations in the properties of, one or more immersed objects having a dielectric property different from that of the larger object, in the presence of microwave signals. The dielectric constant is the ratio of the permittivity of a substance to the permittivity of free space. The dielectric property of a substance usually also refers to both the permittivity and conductivity of the substance, and thereby the dielectric constant is represented in the form of a complex number. The definition and implication of dielectric properties, represented as a permittivity, a conductivity or a complex dielectric parameter, are well known by a person skilled in the art of microwave theory and practice. FIG.1aillustrates an example of a system for detecting an internal object100in a body103. The internal object100and the body103are not parts of the system. The body103may be a head, a brain, an abdomen, a thorax, a leg or any other body part of a human or an animal, or it may be any other form of biological tissue such as for example a tree or wood. The body103may also be non-living tissue of non-biological origin, such as but not limited to plastics, etc. The body103may also be referred to as a dielectric medium, an object under investigation, a larger object, etc. The internal object100may also be referred to as an immersed object, a dielectric target, etc. The internal object100may be in the form of a solid, semisolid, liquid or gas. The internal object100may be referred to as an immersed object in a larger object or body103. The internal object100may also be referred to as a dielectric target with certain properties, such as size, shape, position, dielectric parameters, etc., that is immersed inside another dielectric medium, i.e. the body103. The internal object100may be a bleeding, a clot, an oedema, a nail, a twig, etc. Note thatFIG.1aonly illustrates one internal object100, but any other number of internal objects100may be present in the body103. One internal object100is shown for the sake of simplicity. FIG.1afurther illustrates an example where the system comprises two antennas105which are adapted to be symmetrically positioned around the body103. The reference numbers for the antennas105inFIG.1aare shown with the letters a and b, and this difference will be explained later. These two antennas105may be described as an antenna pair. Note that any other 2*n number of antennas105and n antenna pairs are applicable, where n is a positive integer. In the case that at least one antenna105or an odd integer number of antennas105is positioned at the symmetry line108, the number of antennas could be 2n+1.
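The complex representation of a dielectric property mentioned above can be illustrated with a short sketch that folds the conductivity into an imaginary part at a given frequency, using the standard relation for the relative complex permittivity. The example tissue values are assumptions, not values from the source:

```python
# Complex relative permittivity: eps_r - j * sigma / (omega * eps0),
# combining permittivity and conductivity into one complex number.

import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def complex_permittivity(eps_r: float, sigma_s_per_m: float,
                         freq_hz: float) -> complex:
    """Relative complex permittivity of a substance at a given frequency."""
    omega = 2 * math.pi * freq_hz
    return complex(eps_r, -sigma_s_per_m / (omega * EPS0))

# Illustrative tissue-like values in the microwave range (assumed):
print(complex_permittivity(50.0, 1.5, 1.0e9))  # ~ (50 - 27j)
```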
One antenna in the pair may be a transmitter which is adapted to transmit microwave signals to be received by the other antenna, which acts as a receiver. One antenna 105 in the pair may be a combined transmitter and receiver antenna, and the other antenna 105 may also be a combined transmitter and receiver antenna, i.e. both antennas 105 in the pair may act as both transmitter and receiver. In one such combined example of the system, both antennas 105 may be adapted to transmit microwave signals to, and receive microwave signals from, each other. In another such combined example of the system, each antenna 105 may be adapted to transmit and receive the microwave signal from itself, i.e. the microwave signal is transmitted from one antenna 105, reflected by e.g. the internal object 100 or any other part of the body 103, and received at the same antenna 105. The antennas 105 may be adapted to generate, transmit and receive electromagnetic signals in the microwave range. Microwave signals may be described as electromagnetic waves which have wavelengths in the range from one metre to one millimetre, i.e. frequencies between 300 MHz (wavelength 100 cm) and 300 GHz (wavelength 0.1 cm). The term microwave is used herein when referring to electromagnetic signals in the microwave range. The terms transmitter and transmitter antenna are used interchangeably herein. Similarly, the terms receiver and receiver antenna are used interchangeably herein. The antennas 105 may be of various types, such as e.g. monopoles, patches, horns, etc., or any other suitable antenna type. Other types of emitters and/or receivers can be used. The two antennas 105 in one pair are of the same type, and antennas 105 in two different pairs can be of different types, as long as the two antennas within each pair are of the same type. In one embodiment, the system may comprise a configuration of at least two antennas 105, where each antenna 105 acts both as transmitter and receiver, as illustrated in FIG. 1a. In another embodiment, the system may comprise at least one separate transmitter antenna 105 and at least two separate receiver antennas 105 adapted to be positioned around the body 103 at positions that are symmetrically located with respect to the symmetry line 108, as illustrated in FIG. 1d. In yet another embodiment, the system may comprise at least four antennas 105 acting both as transmitters and receivers and positioned around the body 103 at positions that are symmetrically located with respect to the symmetry line 108, as illustrated in FIGS. 1b and 1c. A possible operation is that all antennas 105 in turn are used as transmitters while the rest of the antennas 105 are receiving; alternatively, a subset of antennas 105 in turn can be used simultaneously as transmitters while a subset of antennas 105 are receiving; or all antennas 105 can be used as transmitters simultaneously and all antennas 105 can be used as receivers simultaneously. A measurement in which all combinations, or a subset, of antennas 105 are used as transmitter-receiver pairs is denoted a "full measurement set" in the following text. A measurement refers to the reflected and/or scattered microwave signal which is received by the receiving antenna 105. To obtain time resolution data, several "full measurement sets" are collected one after another during a period of time. Alternatively, several partial measurement sets are measured, meaning that only a subset of all antennas 105 are used as transmitters and/or receivers.
A different way to describe a "full measurement set" is that it constitutes the measurement of a given set of antenna combinations among the possible combinations of pairs out of all antennas 105. The meaning of time resolution data is thus that the same antenna combinations are measured repeatedly at two or more occasions in time. A "full measurement set" must be completely measured before a new "full measurement set" can be measured at a later time. The body 103 may be substantially symmetric. The body 103 may have a symmetry line 108 which represents a division of the body 103 into two counterparts. The symmetry line 108 may also be referred to as a line of symmetry, a center line, etc. For example, when the body 103 represents a human brain, one side of the symmetry line 108 is the left side of the brain and the other side of the symmetry line 108 is the right side of the brain. The two body parts on either side of the symmetry line 108 are substantially symmetric in both shape and content. In the example in FIG. 1a with two antennas 105, one antenna 105 is adapted to be positioned at a first position on the left side of the symmetry line 108 and the other antenna 105 is adapted to be positioned at a second position on the right side of the symmetry line 108. Thus, the antennas 105 are adapted to be positioned at opposite positions around the body 103 with respect to the symmetry line 108. This may also be described as the antennas 105 being adapted to be symmetrically positioned around the body 103 in relation to a line of symmetry 108 in the body 103. The antennas 105 are divided into one or more subsets or pairs that are adapted to be positioned around the body 103, and the antennas 105 are to be placed symmetrically in relation to a line of symmetry 108 in the body 103. The system may further comprise a structure or arrangement (not illustrated in FIG. 1a) to which the antennas 105 are attached and which makes the system suitable to be positioned around the body 103. For example, the system may be a wearable system adapted to be worn by a person, in particular on the person's head. In such an example the structure may have a head shape, such as e.g. a helmet, to which the antennas 105 may be attached. The antennas 105 may be fixedly or releasably attached to such a structure. In a case where the internal object 100 is present in the body 103, it is assumed that the internal object 100 is located substantially on one side of the symmetry line 108. Thus, the internal object 100 may be in one of the body halves separated by the symmetry line 108. Alternatively, the internal object 100 can be located such that it is partly located on both sides of the symmetry line 108, but then the internal object 100 itself must be asymmetric with respect to the symmetry line 108, for example such that a larger part of the internal object 100 is located on one side of the symmetry line 108 and a smaller part on the other side. One of the antennas 105 may transmit a microwave signal in the direction towards the other antenna 105. Since the internal object 100 is located in the path of the microwave signal on its way from one antenna 105 to the other, the microwave signal propagates through the internal object 100 and it may also be scattered or reflected by the internal object 100. Consequently, the properties of the microwave signals received at two receiver antennas 105 that are symmetrically positioned with respect to the symmetry line 108 are not the same as the properties of the microwave signals received at the same antennas 105 when the internal object 100 is not present.
For example, the signal strength of the microwave signal may be different, the amount of signal may be different, the phase may be different, etc. This difference will be described in more detail later. One example of how the system with one antenna pair illustrated in FIG. 1a functions will now be briefly described. As mentioned earlier, the reference numbers for the antennas 105 in FIG. 1a have the letters a and b. These letters will be used when explaining the system. In this example, the first antenna 105a is a combined transmitter and receiver antenna and the second antenna 105b is also a combined transmitter and receiver antenna. The transmitter and receiver functions of each antenna 105 may always be enabled, or the antenna 105 may switch between enabling the transmitter and receiver functions. The internal object 100 in the example of FIG. 1a is shown to be between the first antenna 105a and the second antenna 105b. 1) The first antenna 105a acts as a transmitter and transmits a microwave signal towards the internal object 100. 2) The microwave signal propagates into the body 103 and is reflected by the internal object 100. 3) The first antenna 105a acts as a receiver and receives the reflected microwave signal. 4) The second antenna 105b acts as a transmitter and transmits a microwave signal towards the internal object 100. 5) The microwave signal propagates into the body 103 and is reflected by the internal object 100. 6) The second antenna 105b acts as a receiver and receives the reflected microwave signal. 7) The two reflected signals received at the first antenna 105a and at the second antenna 105b are compared in order to detect any differences between them. FIG. 1b illustrates an example of the system comprising four antennas 105, i.e. two antenna pairs. The system comprises a first antenna pair comprising a first antenna 105a and a second antenna 105b, and a second antenna pair comprising a third antenna 105c and a fourth antenna 105d. The first antenna 105a in the first pair is adapted to be a transmitter and the second antenna 105b in the first pair is adapted to be a receiver. The third antenna 105c in the second pair is adapted to be a transmitter and the fourth antenna 105d in the second pair is adapted to be a receiver. The transmitter and receiver within each antenna pair can be interchanged, so that the receiving antennas are operated as transmitters and the transmitting antennas are then operated as receivers. The internal object 100 is between the antennas in the first antenna pair. There is no internal object 100 between the antennas 105 in the second antenna pair. 1) The first antenna 105a in the first pair acts as a transmitter and transmits a microwave signal towards the second antenna 105b, i.e. towards the other antenna in the first pair. 2) The microwave signal propagates through, and is scattered and/or reflected by, the internal object 100. 3) The second antenna 105b in the first pair acts as a receiver and receives the scattered microwave signal. 4) The third antenna 105c in the second pair acts as a transmitter and transmits a microwave signal towards the fourth antenna 105d in the same pair. 5) The microwave signal propagates through the body 103 because there is no internal object 100 in the signal path between these antennas. 6) The fourth antenna 105d in the second pair acts as a receiver and receives the microwave signal from the third antenna 105c. 7) The microwave signals received at the fourth antenna 105d in the second pair and at the second antenna 105b in the first pair are compared. In one of the antenna pairs, i.e.
the pair where the path of the microwave signal is not interfered with by the internal object 100, microwave signals with certain properties are received. In the other antenna pair, i.e. the pair where the path of the microwave signal is interfered with by the internal object 100, microwave signals with different properties are received. For reference, if the internal object 100 is not present, the received signals in both antenna pairs are substantially identical, as the measurement scenarios for both antenna pairs are identical. When the internal object 100 is present, the difference in the received microwave signals between the two antenna pairs is used as a means for detecting the internal object 100. This situation is referred to as the measured signals being asymmetric. The microwave signals between at least two antenna pairs may be compared, resulting in symmetric signals when no internal object 100 is present, or in asymmetric signals when the internal object 100 is present. FIG. 1c illustrates a different configuration of the two antenna pairs that can be used to analyze the presence of the internal object 100 in a way analogous to FIG. 1b. In this case the first antenna 105a is adapted to be a transmitter and the second antenna 105b is adapted to be a receiver. The third antenna 105c is adapted to be a transmitter and the fourth antenna 105d is adapted to be a receiver. The transmitter and receiver within each antenna pair can be interchanged, so that the receiving antennas 105 are operated as transmitters and the transmitting antennas 105 are then operated as receivers. FIG. 1d illustrates a case where one can imagine that the two transmitting antennas in the two pairs are collocated at a position on the symmetry line. In practice, the two antennas 105 are replaced with one single antenna 105. In that way two antenna pairs can be formed, as illustrated in FIG. 1d. In this case the first antenna 105a is adapted to be a transmitter and the antennas 105b and 105c are adapted to be receivers. In this case antenna 105a is positioned on the symmetry line 108 and the antennas 105b and 105c are symmetrically positioned with respect to the symmetry line 108. If the internal object 100 is not present, antennas 105b and 105c receive identical microwave signals when antenna 105a transmits. If the internal object 100 is present, the difference in the received microwave signals between the two antenna pairs is used as a means for detecting the internal object 100. The transmitter and receiver within each antenna pair can be interchanged, so that the receiving antennas 105 are operated as transmitters and the transmitting antennas 105 are then operated as receivers. The term "received microwave signals" refers to reflected or scattered electromagnetic waves in the microwave region. An example of the antenna configuration in the system, where eight transmitting/receiving antennas 105 are adapted to be configured around the body 103, is illustrated in FIG. 2. The transmitting/receiving antennas 105 are numbered 1-8. An example of a symmetry line 108 has also been sketched in the figure. The example in FIG. 2 does not illustrate the internal object 100 to be detected. FIG. 3 illustrates an example with more antennas than what is shown in FIGS. 1a-1d. Measurement strategies from all of FIGS. 1a-1d can be adopted, as a configuration of eight antennas 105 can be arranged in multiple pairs located symmetrically around the symmetry line 108. FIG. 3 shows an example where the internal object 100 is located in the body 103. The system exemplified in FIG. 3 comprises eight antennas 105.
The figure illustrates the case when one antenna 105 is transmitting microwave signals 301; the internal object 100 scatters the irradiated waves, and the transmitting antenna 105 itself as well as the rest of the antennas 105 receive the scattered microwave signals 303. The internal object 100 is located asymmetrically in relation to the symmetry line 108. The embodiments herein relate to analyzing the data for asymmetries relating to the internal object 100 being detected in an otherwise symmetric or near symmetric background medium, i.e. the body 103, as seen in FIG. 3. Of the eight antennas 105, all or a subset of them can be used as transmitters and receivers, and they could be operated interchangeably as pairwise transmitters and receivers. With reference to FIG. 3, the following antenna pairs may constitute symmetric pairs when signals are transmitted from one antenna 105 to the other within the pairs:

antenna 1-antenna 2 and antenna 1-antenna 8
antenna 1-antenna 3 and antenna 1-antenna 7
antenna 1-antenna 4 and antenna 1-antenna 6
antenna 2-antenna 3 and antenna 8-antenna 7
antenna 2-antenna 4 and antenna 8-antenna 6
antenna 2-antenna 5 and antenna 8-antenna 5
antenna 2-antenna 7 and antenna 8-antenna 3
antenna 2-antenna 6 and antenna 8-antenna 4
antenna 3-antenna 4 and antenna 7-antenna 6
antenna 3-antenna 5 and antenna 7-antenna 5
antenna 3-antenna 8 and antenna 7-antenna 2
antenna 3-antenna 6 and antenna 7-antenna 4
antenna 4-antenna 5 and antenna 6-antenna 5
antenna 4-antenna 8 and antenna 6-antenna 2
antenna 4-antenna 7 and antenna 6-antenna 3

Without the internal object 100 present, antenna pairs associated with one side of the symmetry line 108 could be measured and compared with their symmetric counterparts on the other side of the symmetry line 108, and both antenna pairs should measure substantially identical received signals. In other words, both antenna pairs in the symmetric combinations listed above would measure identical signals. With the internal object 100 present, one, several or all of the symmetric antenna pairs would measure different received signals, which could be used for detecting the presence of the internal object 100. Furthermore, in this example symmetrically positioned antennas 105 could be used to both transmit and receive a reflected signal. The following antennas 105 may be symmetrically placed with respect to the symmetry line 108:

antenna 2 and antenna 8
antenna 3 and antenna 7
antenna 4 and antenna 6

Without the internal object 100 present in the body 103, the two antennas 105 in each pair may measure substantially identical signals. With the internal object 100 present in the body 103, one or more of the microwave signals in each pair may be different and may be used for detecting the presence of the internal object 100. FIG. 4 illustrates an example where the internal object 100 is located in the body 103. The internal object 100 is changing its properties, for example its size as illustrated here. That change causes a corresponding change in the scattered waves received by the antennas 105. The difference between the microwave signals scattered from the larger and the smaller instance of the internal object 100 can be used to identify the presence of an internal object 100. The time evolution of the difference can also be used to identify and diagnose the internal object 100. By analyzing changes in asymmetries between the symmetric antenna combinations listed above, further detection accuracy can be obtained in identifying and diagnosing the internal object 100.
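The mirror-pair bookkeeping above lends itself to a simple implementation. The following is a minimal sketch in Python, assuming the eight antennas of FIG. 3 are numbered so that the symmetry line 108 passes through antennas 1 and 5; the mirror map, the function names and the dictionary data layout are illustrative assumptions, not part of the disclosed system.

    # Minimal sketch: enumerate mirror-symmetric antenna-pair combinations for
    # an eight-antenna ring and compare the signal of each combination with
    # its mirrored counterpart. Assumes the symmetry line passes through
    # antennas 1 and 5.
    MIRROR = {1: 1, 2: 8, 3: 7, 4: 6, 5: 5, 6: 4, 7: 3, 8: 2}

    def symmetric_counterpart(tx, rx):
        """Return the antenna pair that mirrors (tx, rx) across the symmetry line."""
        return MIRROR[tx], MIRROR[rx]

    def asymmetry_residuals(measurements):
        """measurements maps (tx, rx) -> complex received signal (e.g. an S-parameter).

        Returns the magnitude of the difference between each pair and its
        mirror; with no internal object present the residuals should be close
        to zero.
        """
        residuals = {}
        for (tx, rx), s in measurements.items():
            mtx, mrx = symmetric_counterpart(tx, rx)
            if (mtx, mrx) in measurements and (mtx, mrx) != (tx, rx):
                # Sort so each symmetric combination is stored only once.
                key = tuple(sorted([(tx, rx), (mtx, mrx)]))
                residuals[key] = abs(s - measurements[(mtx, mrx)])
        return residuals

For example, a measurement dictionary containing the combinations (2, 3) and (8, 7) would yield one residual for the symmetric combination "antenna 2-antenna 3 and antenna 8-antenna 7" listed above.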
Measurements may be collected at a single instance or repeatedly during a period of time. If data are collected at different time instances, the result will be a time series of measurement data. This way, changes in the shape or properties of the internal object 100 that develop in time can be monitored. A time series of multiple "full measurement sets" may be collected. Alternatively, a subset of the antennas 105 may be used to collect a partial measurement set. An internal object 100 that expands or shrinks, changes shape, or changes properties during the measurements will manifest itself by causing a corresponding change in the received microwave signals and a change in the asymmetry between the symmetric antenna pairs. This change can be analyzed to determine the presence or the properties of the internal object 100. In one embodiment, a sequence of at least two "full measurement sets" may be analyzed by an analyzer or by the system to identify and diagnose an internal object 100. If the first measurement is denoted the reference measurement, and the second measurement differs from the reference measurement, this is an indication of a change in the properties of the internal object 100 between the measurements of the two "full measurement sets". Another possibility is that the second measurement is adopted as the reference measurement; if the first measurement then differs from the reference measurement, this is an indication of a change in the properties of the internal object 100 between the measurements of the two "full measurement sets". The difference data can be obtained by subtracting, for example, S-parameter data measured at different times from the reference measurement, or by subtracting pulsed data measured at different times from the reference measurement. As a result of the subtractions, a number of differential data points are obtained. Changes in time of this differential data can be related to expansion or shrinking of the internal object 100, or to the object otherwise changing its shape or properties. The reference measurement can for example be the first, the last, or some other point in time when the body 103 under test is in a known state, e.g. when the internal object 100 is not present. For a patient, this is the same as being healthy. The reference measurement could also be taken when the patient's clot is still present, with monitoring made during thrombolytic treatment in order to detect when the clot has been resolved. The reference measurement could be any stable, i.e. non-changing, state when the patient is healthy or non-healthy, with monitoring made to detect if that particular state changes over time. It could be a situation where the patient gets either sicker or healthier. The reference measurement could be measured on the same or another patient at a different time or location. It can also be generated based on a simulation model or a tissue-mimicking phantom. In one embodiment, the detection of the internal object 100 may be based on the calculated differences in the received microwave signals. In another embodiment, the differential received microwave signals may be fed into a classifier. It could be the classifier described in the patent application "Classification of microwave scattering data" with application number U.S. Ser. No. 13/386,521, or it could be any other classification algorithm. The differences in the received microwave signals could alternatively be fed to an image reconstruction algorithm set up to generate a microwave tomographic image of the changes of the internal object 100.
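As a rough illustration of the differential-data step just described, the sketch below subtracts a reference "full measurement set" from every set in a time series. The array layout (transmit antenna, receive antenna, frequency) and the function name are assumptions for the example only, not a prescribed data format.

    import numpy as np

    def differential_series(measurement_sets, reference_index=0):
        """Subtract a reference measurement from every set in a time series.

        measurement_sets: sequence of complex arrays, each with shape
        (n_antennas, n_antennas, n_frequencies), e.g. S-parameter data.
        Returns one differential data set per time instance; changes over time
        can then be related to the internal object expanding, shrinking, or
        changing shape or position.
        """
        sets = np.asarray(measurement_sets)
        reference = sets[reference_index]  # e.g. the known healthy state
        return sets - reference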
In another embodiment, this differential data can be processed by an analyzer to monitor the pressure inside a body 103 that is a closed cavity. If the measured data change as a function of time, this can be related to a change of pressure inside the body 103. An internal object 100 located inside a body 103, such as inside the skull, that expands its volume will do so by pushing away the background medium represented by the body 103. The expanding dielectric internal object 100 could be a blood volume that increases as a function of time. The result is that more matter will be located inside the body 103, leading to an increased pressure inside the body 103. Such a change in the amount of tissue inside the body 103 will lead to a corresponding change in the S-parameters when compared to the S-parameter data representing the reference measurement, or in the corresponding pulsed time domain data when compared with the time domain representation of the reference measurement. When the internal object 100 is located asymmetrically with respect to the symmetry line 108, the received microwave signals at the symmetrically located antennas 105 will exhibit asymmetric properties or asymmetric changes in the properties. In one embodiment this differential data is used to identify internal objects 100 that appear inside the body 103 under investigation. This could for example be a bleeding that is caused by a sudden rupture of a vessel in the brain, i.e. a stroke. In one embodiment, an alarm may be triggered merely when a change of any kind has occurred, without the system making any diagnosis of what actually occurred inside the body 103 under investigation. The sensing of an internal object 100 may be accomplished by illuminating the body 103 with electromagnetic radiation in the microwave frequency range, which propagates through and is scattered from the different internal objects 100. The scattered radiation carries the information utilized for detecting and analyzing possible abnormalities inside the body 103. A series of several "full measurement sets", or subsets thereof, i.e. a sequence of "full measurement sets" or subsets collected during a period of time, may be used to analyze asymmetries evolving as a function of time that relate to the internal object 100 and/or changes in the internal object 100 being detected. For the analysis, the data are divided into two subsets, relating to the two subsets of antennas 105 that are analyzed separately. The two sets are divided in relation to the symmetry line 108, see FIG. 2. An example, expressed in terms of S-parameters, of two symmetric subsets in FIG. 2 is Subset 1: S23, S24, S34 and Subset 2: S67, S68, S78. In general, one S-parameter may be written as Sij, where i is the number of one of the antennas 105 in the pair and j is the number of the other antenna 105 in the pair. The definition of S-parameters is commonly known by persons skilled in the art of microwave theory and practice. S-parameters are defined in terms of incident and reflected waves at ports (antenna ports). S-parameters are used primarily at UHF and microwave frequencies, where it becomes difficult to measure voltages and currents directly. On the other hand, incident and reflected power are easy to measure using directional couplers. The definition is

    [b1]   [S11 S12] [a1]
    [b2] = [S21 S22] [a2]

where ak is the incident (transmitted) wave from antenna number k and bk is the wave received at port k. It is conventional to define ak and bk in terms of the square root of power.
Consequently, there is a relationship with the wave voltages at the transmitting and receiving ports. The coefficients S11 and S22 are denoted reflection coefficients, and S12 and S21 transmission coefficients. For reciprocal systems, as in the present case, S12 = S21, and for symmetric systems S11 = S22. If data from symmetric antenna pairs are subtracted from each other, the residual will ideally be zero in a completely symmetric antenna configuration with a symmetric internal object 100 inside the setup. That means, for example:

    sym S23 - sym S67 = 0,
    sym S24 - sym S68 = 0,
    sym S34 - sym S78 = 0.

With real measurement data, perfect symmetry is not achievable, either because the antenna configuration is not completely symmetric, or because the body 103 is not perfectly symmetric, for example due to manufacturing tolerances. In all cases the residual will be close to zero, and can come arbitrarily close to zero if the system or the body 103 is made more symmetric, e.g. if the manufacturing tolerances are made smaller or by calibration methods. That means, for example:

    |approx sym S23 - approx sym S67| = e23-67,
    |approx sym S24 - approx sym S68| = e24-68,
    |approx sym S34 - approx sym S78| = e34-78,

with the residual ex-y becoming smaller the more perfectly symmetric the body 103 is, where x and y are positive integers and represent the numbers of the antennas in the pairs. This may be called asymmetry data. "Approx" is short for approximation. If an internal object 100 is located asymmetrically in the body 103, in the otherwise symmetric system, the residual is nonzero. If the magnitude of the residual with an asymmetrically placed dielectric internal object 100 is larger than the magnitude of the residual for an empty body 103, the residual can be used as an indicator of detection. That is, for example,

    |asym obj S23 - asym obj S67| = E23-67,
    |asym obj S24 - asym obj S68| = E24-68,
    |asym obj S34 - asym obj S78| = E34-78,

with the asymmetric internal object 100 present during the collection of the microwave signals that are represented in the form of S-parameters, and with Ex-y larger than the corresponding ex-y. This asymmetry in the data is manifested as a difference in the transmission and reflection data, for example, but not limited to, asymmetry in the S-parameters, asymmetry in the received pulsed data, or asymmetry in the received p-n sequence data. The transmission data and reflection data used can be the complex transmission and reflection data. It could also be the magnitude of the transmission and reflection data or the phase of the transmission and reflection data. If the above asymmetry in the data changes in time, it indicates that the dielectric internal object 100 is changing its size and/or position during a set of measurements. This can be used as a detection/monitoring criterion. An example could be that initially symmetric data become non-symmetric as a dielectric internal object 100 appears asymmetrically. This could for example be a bleeding 100 in the head 103 that is caused by a sudden rupture of a vessel and where the volume of blood gradually increases. It can also be a situation where the data are initially asymmetric, and where they become more or less asymmetric as time evolves. An asymmetry measure can be achieved by integrating the asymmetry data over a specified frequency range, which could be the whole or part of the frequency range over which the measurement is performed. The specified frequency range is fstop-fstart, where fstart and fstop are the beginning and end of the integration.
fstart is the frequency at the start of the measurement and fstop is the frequency at the stop of the measurement. The frequency is measured in Hertz (Hz). The measured microwave signals can be corrected for intrinsic asymmetries of the transmitting and receiving antennas 105, for example if the antennas have different orientations. This can be achieved for example by subtracting the asymmetry measures obtained from measurements on one or more symmetric bodies 103 from the corresponding asymmetry measures obtained when making a measurement to identify an internal object 100. Alternatively, it can be done by subtracting transmission signals and reflection signals obtained from one or more symmetric bodies 103 from the corresponding asymmetry measures obtained when making a measurement to identify an internal object 100. The asymmetry measures obtained for each set of discrete fstart and fstop values can be combined into matrix form, where for example the rows correspond to different fstart values and the columns correspond to different fstop values. The asymmetry measure matrix thus obtained can be used to evaluate the asymmetry in different ways. An example of such an evaluation is to add all the elements in the matrix to obtain a single total asymmetry measure. Another way is to feed the different elements into a classifier in order to determine which class the internal object 100 corresponds to. For a set of several bodies 103, e.g. a set of different patients that are measured, there will be a set of asymmetry measure matrices. In one embodiment, the S-parameter data, or any other form of the data, are normalised before the analysis. There are different ways to normalise this set of asymmetry measure matrices. One such normalisation is done so that the total received power in each antenna 105 is made equal for all measurements. That could for example be realized such that the transmission coefficients are normalized so as to be equal in amplitude. Another such normalisation is one where the total power entering all bodies 103 is equal. That could for example be realized such that the reflection coefficients are equal in amplitude. Yet another such normalisation is one where the maximum total individual asymmetry measure among the set of bodies 103 is set to unity. For example: Max(|asym obj S23 - asym obj S67|, |asym obj S24 - asym obj S68|, |asym obj S34 - asym obj S78|) = 1. The embodiments herein relate to analyzing the data in order to detect differences in transmission and reflection data relating to differences in the dielectric properties of the bodies 103 being studied. One way to analyze the data is to divide the antennas 105 into one or more pairs that are positioned around the body 103. Time resolved data, i.e. a sequence of "full measurement sets", may be used to analyze transmission that relates to the internal object 100 being detected. This may be called the transmission data. This difference between the different bodies 103 measured is manifested as a difference in the transmission and reflection data. The transmission data and reflection data used can be the total complex transmission and reflection data. It could also be the magnitude of the transmission and reflection data or the phase of the transmission and reflection data. If the above transmission data changes in time, it indicates that the dielectric internal object 100 is changing its size and/or position during a set of measurements. This can be used as a detection/monitoring criterion.
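One possible realisation of the asymmetry-measure matrix described above is sketched below; the transmission-measure matrix discussed next can be built the same way from transmission data. The trapezoidal integration rule and the unity normalisation are assumptions chosen for the example, not the only admissible choices.

    import numpy as np

    def asymmetry_measure_matrix(freqs, asymmetry):
        """Integrate asymmetry data over every window [fstart, fstop].

        freqs: 1-D array of measurement frequencies (Hz).
        asymmetry: 1-D array, e.g. |S23 - S67| evaluated at each frequency.
        Returns a matrix M where rows index fstart, columns index fstop, and
        M[i, j] integrates the asymmetry from freqs[i] to freqs[j]
        (zero where fstart >= fstop).
        """
        n = len(freqs)
        m = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                m[i, j] = np.trapz(asymmetry[i:j + 1], freqs[i:j + 1])
        return m

    def normalise_to_unity(matrices):
        """Scale a set of asymmetry-measure matrices so that the maximum
        total individual measure among the set equals one."""
        peak = max(np.abs(m).sum() for m in matrices)
        return [m / peak for m in matrices]

Adding all elements of the matrix gives the single total asymmetry measure mentioned above, while the individual elements can instead be fed to a classifier.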
A transmission measure can be achieved by integrating the transmission data over a specified frequency range, which could be the whole or part of the frequency range over which the measurement is performed. The specified frequency range is fstop-fstart, where fstart and fstop are the beginning and end of the integration. The transmission measures obtained for each set of discrete fstart and fstop values can be combined into matrix form, where for example the rows correspond to different fstart values and the columns correspond to different fstop values. The transmission measure matrix thus obtained can be used to evaluate the internal object 100 in different ways. An example of such an evaluation is to add all the elements in the matrix to obtain a single total transmission measure. Another way is to feed the different elements into a classifier in order to determine which class the internal object 100 corresponds to. It could be the classifier described in the patent application "Classification of microwave scattering data" with application number U.S. Ser. No. 13/386,521, or it could be any other classification algorithm. For a set, e.g. two or more, of internal objects 100 there will be a set of transmission measure matrices. There are different ways to normalise this set of transmission measure matrices. One such normalisation is done so that the total received power is equal for all subjects. Another such normalisation is one where the total power entering all individuals is equal. Yet another such normalisation is one where the maximum total individual transmission measure among the set of individuals is set to unity. As mentioned earlier, all antennas 105 in the antenna pairs of the system may be positioned with the same angle. Such a system may be referred to as a symmetric system. In another example, the antennas 105 in different pairs are positioned with different angles, i.e. the antennas 105 within a pair have the same angle while antennas in different pairs have different angles. Such a system with different angles may be referred to as an asymmetric system. The symmetric and asymmetric systems will now be briefly described: Symmetric System All antennas 105 in the system are positioned symmetrically around the body 103, and they are positioned with the same angle or orientation with respect to the body 103, e.g. the surface of the body 103. Initially it is assumed that the body 103 does not have any internal object 100 and is therefore a substantially symmetric body 103. The measured signals in the symmetric body 103 are consequently also symmetric. Symmetric signals may be an indication of a healthy body 103, i.e. a body 103 without the internal object 100 present. This could for example represent a healthy person. When measurements are performed at a later time, and the measured signals are asymmetric, this is an indication that an internal object 100 has been detected in the body 103. The body 103 is no longer symmetric and has become an asymmetric body 103 due to the internal object 100. The signal change over time indicates the presence of the internal object 100. This could for example represent a patient with a haemorrhage in the head, where the patient's head is the body 103 and the haemorrhage is the internal object 100. Asymmetric System All antennas 105 in the system are positioned symmetrically around the body 103. The antennas 105 in one pair are positioned with an angle or orientation with respect to the body 103 which is different from the angle at which the antennas 105 in another pair are positioned with respect to the body 103.
Initially it is assumed that the body 103 does not have any internal object 100 and is therefore a substantially symmetric body 103. However, the measured signals in the symmetric body 103 in the asymmetric system are asymmetric due to the different angles of the antenna pairs. Asymmetric signals may therefore in this case be an indication of a healthy person, even though no internal object 100 has been detected. When measurements are also performed at a later time, and the measured signals are still asymmetric but the asymmetry measures have changed, this is an indication that an internal object 100 has been detected in the body 103, due to time changes in the asymmetric signals. FIG. 5 illustrates an example of a system comprising a microwave signal generator 501, an antenna system 502 and a signal receiver 503, which are connected to form the system used to collect data. The received microwave signals may be seen as the basis for a data converter 505, which calculates or converts the received microwave signals into a data representation, e.g. S-parameters. The data representation of the received microwave signals, e.g. the S-parameters, may be fed to an analyzer 508 where the detection of the object 100 may be made. The result may be presented on, for example but not limited to, a screen 510, or used in monitoring applications. Also, an alarm may be triggered in order to alert staff. The alarm may be at least one of: an audio alarm, a visual alarm, a haptic alarm or any other suitable alarm type with the purpose of informing a user of the system about the detected internal object 100 or a change in the internal object 100. The system exemplified in FIG. 5 comprises a signal generator 501, an applicator, i.e. the system, an antenna system 502, a signal receiver 503, a converter 505, an analyzer 508 and a screen or a different device 510 for presenting the result. The antennas 105 in the system transmit and receive microwave signals. The measured microwave signals are collected for instance from continuous microwave signals measured at single or multiple discrete frequencies, or from pulsed or p-n sequence microwave signals. The microwave signals span a given and pre-defined frequency range. The frequency range is for example, but not limited to, 100 MHz to 10 GHz or more. The frequency interval can be narrower but also wider. Based on the measured microwave signals, transmission and/or reflection data, for example but not limited to S-parameter data, are calculated in the data converter 505. S-parameter data, or other representations of the measured data, are calculated from the transmitted and/or received microwave signals and then fed to the analyzer 508. Other representations of the microwave signal are in the form of z-, y-, h-, t-parameters or ABCD-parameters, reflection coefficients, insertion loss, a percentage parameter, a magnitude parameter, a phase parameter, a time-domain pulse or any other representation of the received microwave signal. These different representations of microwave signals are well known to persons skilled in the art of microwave theory and practice. The result from the analyzer is presented on a screen or some other relevant device 510. The system may be seen as constituting the interface between the microwave system and the body 103, and it consists of at least one transmitter of microwave signals, e.g. an antenna 105, that is adapted to be placed outside the body 103 and arranged to send microwave signals into the body 103.
The antennas 105 transmitting and receiving the microwave signals may be connected to a signal generator 501 and a signal receiver 503, respectively. One or more signal receivers 503 may be adapted to be located outside the body 103 and detect the scattered radiation, which is later processed by an algorithm for data analysis and diagnosis. In some embodiments, one or more of the signal generator 501, the signal receiver 503, the data converter 505, the analyzer 508 and the screen 510 may be incorporated into the system, e.g. into one of the antennas 105 in the system. In another embodiment, all modules illustrated in FIG. 5 are separate standalone modules. In a further embodiment, some of the modules illustrated in FIG. 5 may be co-located with each other; for example, the data converter 505 and the analyzer 508 may be co-located in one module, and the signal generator 501 and the system may be co-located in one module. The analyzer 508 may for example be a processor. In addition, the system may comprise at least one memory (not shown in FIG. 5) which is adapted to store the measured signals. Some embodiments described herein may be summarised in the following manner: A system for detecting an internal object 100 in a body 103. The internal object 100 and the body 103 have different dielectric properties. The system comprises at least one antenna pair comprising two antennas 105 which are adapted to be symmetrically positioned around the body 103 in relation to a line of symmetry 108 in the body. The system is adapted to: transmit one microwave signal or multiple microwave signals into the body 103 from at least one of the antennas 105 in the system, where the transmitted microwave signals are reflected and/or scattered from the internal object 100; receive the reflected and/or scattered microwave signals at the other antenna 105 and/or at the transmitting antenna 105, whereby the transmitting antenna is operated as a receiver after it has transmitted or is operated as a receiver at the same time as it is transmitting; compare the received microwave signals at the symmetrically positioned antennas 105; and detect the internal object 100, or a change in an already detected internal object 100, when there is a difference between the received microwave signals at symmetric antenna pairs. The difference is related to the different dielectric properties of the internal object 100 and the body 103. The system may be further adapted to determine a type of the internal object 100 or properties of the internal object 100 based on the difference between the received microwave signals at symmetric antenna pairs, related to the different dielectric properties and their spatial distribution. The system may be further adapted to provide, to a user of the system, information associated with the detected internal object or the change in the already detected internal object 100. The microwave signals may be transmitted from all antennas 105 in the system or from a symmetrically positioned subset of the antennas 105 in the system. The system may be further adapted to obtain a representation of the received microwave signal. The representation of the received microwave signal may be used when comparing the received microwave signals.
The representations of the received microwave signal at each receiver antenna 105 may be normalized with a common normalization factor before analyzing and comparing the received microwave signals. The representation of the received microwave signal may be in the form of S-, z-, y-, h-, t-parameters or ABCD-parameters, reflection coefficients, insertion loss, a percentage parameter, a magnitude parameter, a phase parameter, a time-domain pulse or any other representation of the received microwave signal. The received microwave signals may be received for all or a subset of symmetric antenna pairs at a single instance in time or at several instances during a period of time. The difference between the received microwave signals at symmetric antenna pairs during a period of time may be analyzed and may be used to detect the presence of, or a change in the properties of, the internal object 100. The system may be further adapted to determine that there is no internal object 100 in the body 103 when there is no difference between the received microwave signals at the symmetrically positioned antennas in the pair. The two antennas 105 in the antenna pair may be adapted to be symmetrically positioned around the body 103 in relation to the line of symmetry 108 in the body 103, so that one antenna 105 is at a first position on one side of the line of symmetry 108 and the other antenna 105 is at a corresponding and symmetric second position on the other side of the line of symmetry 108. The first and second positions may be symmetrical positions on each side of the line of symmetry 108. One antenna 105 may be common between two antenna pairs, where the common antenna 105 may be adapted to be positioned on the symmetry line 108 and where the remaining antennas 105 may be adapted to be symmetrically positioned around the body 103 in relation to the line of symmetry 108 in the body 103, so that one antenna 105 is at a first position on one side of the line of symmetry 108 and the other antenna 105 is at a corresponding and symmetric second position on the other side of the line of symmetry 108. The first and second positions may be symmetrical positions on each side of the line of symmetry 108. The two antennas 105 in the antenna pair may be adapted to be positioned with the same angle in relation to the line of symmetry. The body 103 may initially be a substantially symmetric body, and the internal object 100 may be asymmetrically located in the substantially symmetric body 103. The term substantially refers to there being some tolerance regarding how symmetric the body 103 may be. Alternatively, the body may initially be an asymmetric body and the internal object 100 may be asymmetrically located in the asymmetric body 103. The internal object 100, or the change in the already detected internal object 100, may be detected when the difference between the received microwave signals is equal to or above a threshold. No internal object 100, and no change in the already detected internal object 100, may be detected when the difference between the received microwave signals at symmetric antenna pairs is below the threshold. The antennas may be adapted to be symmetrically positioned and asymmetrically oriented, and differences as a function of time may be indicative of the presence of an internal object 100. The transmitted microwave signals may be in the frequency range of 100 MHz to 10 GHz, or in the range of 100 MHz to 5 GHz.
The body 103 may be a human body part or an animal body part, and it may be made of biological tissue, wood, plastic or any other non-organic or organic material. The internal object 100 may be a solid, semisolid, liquid or gas. The internal object 100 may represent a bleeding, a clot, an ongoing bleeding, a reoccurring bleeding, a tumour, a malignant lesion, a haemothorax, a pneumothorax, a defect in wood, a knot, a nail, tree rot, an impurity or any internal object with dielectric properties different from those of the body 103. Further summarised is a method performed by a system for detecting an internal object 100 in a body 103. The internal object 100 and the body 103 have different dielectric properties. The system comprises at least one antenna pair comprising two antennas 105 which are adapted to be symmetrically positioned around the body 103 in relation to a line of symmetry in the body 103. The method comprises at least one of the following steps, which may be performed in any suitable order: transmitting one or a plurality of microwave signals into the body 103 from at least one of the antennas 105, where the transmitted microwave signals are reflected and/or scattered from the internal object 100; receiving the reflected and/or scattered microwave signals at the other antenna 105 and/or at the transmitting antenna 105, whereby the transmitting antenna is operated as a receiver after it has transmitted or is operated as a receiver at the same time as it is transmitting; comparing the received microwave signals at the symmetrically positioned antennas 105; and detecting the internal object 100, or a change in an already detected internal object 100, when there is a difference between the received microwave signals at symmetric antenna pairs, wherein the difference is related to the different dielectric properties of the internal object 100 and the body 103. The method may further comprise determining a type of the internal object 100 or properties of the internal object 100 based on the difference between the received microwave signals at symmetric antenna pairs, related to the different dielectric properties and their spatial distribution. The method may further comprise providing, to a user of the system, information associated with the detected internal object or the change in the already detected internal object 100. The microwave signals may be transmitted from all antennas 105 in the system or from a symmetrically positioned subset of the antennas 105 in the system. The method may further comprise obtaining a representation of the received microwave signal. The representation of the received microwave signal may be used when comparing the received microwave signals. The representation of the received microwave signal at each receiver antenna 105 may be normalized with a common normalization factor before analyzing and comparing the received microwave signals. The representation of the received microwave signal may be in the form of S-, z-, y-, h-, t-parameters or ABCD-parameters, reflection coefficients, insertion loss, a percentage parameter, a magnitude parameter, a phase parameter, a time-domain pulse or any other representation of the data. The received microwave signals are received for all or a subset of symmetric antenna pairs at a single instance in time or at several instances during a period of time. The difference between the received microwave signals at symmetric antenna pairs during a period of time may be analyzed and may be used to detect the presence of, or a change in the properties of, the internal object 100.
The method may further comprise determining that there is no internal object 100 in the body 103 when there is no difference between the received microwave signals at the symmetrically positioned antennas 105 in the pair. The two antennas 105 in the antenna pair may be symmetrically positioned around the body 103 in relation to the line of symmetry 108 in the body 103, so that one antenna 105 is at a first position on one side of the line of symmetry 108 and the other antenna 105 is at a corresponding and symmetric second position on the other side of the line of symmetry 108. The first and second positions may be symmetrical positions on each side of the line of symmetry 108. One antenna 105 may be common between two antenna pairs. The common antenna 105 may be adapted to be positioned on the symmetry line 108, and the remaining antennas 105 may be adapted to be symmetrically positioned around the body 103 in relation to the line of symmetry 108 in the body 103, so that one antenna 105 is at a first position on one side of the line of symmetry 108 and the other antenna 105 is at a corresponding and symmetric second position on the other side of the line of symmetry 108. The first and second positions may be at symmetrical positions on each side of the line of symmetry 108. The two antennas 105 in the antenna pair may be positioned with the same angle in relation to the line of symmetry. The body 103 may initially be a substantially symmetric body and the internal object 100 may be asymmetrically located in the substantially symmetric body 103, or the body 103 may initially be an asymmetric body and the internal object 100 may be asymmetrically located in the asymmetric body 103. The internal object 100, or the change in the already detected internal object 100, may be detected when the difference between the received microwave signals is equal to or above a threshold. No internal object 100, and no change in the already detected internal object 100, may be detected when the difference between the received microwave signals at symmetric antenna pairs is below the threshold. The antennas 105 may be symmetrically positioned and asymmetrically oriented, and differences as a function of time may be indicative of the presence of an internal object. The transmitted microwave signals may be in the frequency range of 100 MHz to 10 GHz. The body 103 may be a human body part or an animal body part, and it may be made of biological tissue, wood, plastic or any other non-organic or organic material. The internal object 100 may be a solid, semisolid, liquid or gas. The internal object 100 may represent a bleeding, a clot, an ongoing bleeding, a reoccurring bleeding, a tumor, a malignant lesion, a haemothorax, a pneumothorax, a defect in wood, a knot, a nail, tree rot, an impurity or any internal object with dielectric properties different from those of the body 103. A computer program may comprise instructions which, when executed on at least one processor, cause the at least one processor to carry out the method described above. A carrier may comprise the computer program, and the carrier may be one of an electronic signal, an optical signal, a radio signal or a computer readable storage medium. An antenna system comprises at least one antenna pair comprising two antennas 105 which are adapted to be symmetrically positioned around the body 103 in relation to a line of symmetry in the body 103. In other words, the antennas described above in relation to the system may be the antennas in the antenna system. Thus, the system may comprise the antenna system.
The antenna system may be the antenna system 502 illustrated in FIG. 5. The antenna system is adapted to: transmit one or multiple microwave signals into a body 103 from at least one of the antennas 105 in the pair, where the transmitted microwave signals are reflected and/or scattered from an internal object 100 in the body 103; receive the reflected and/or scattered microwave signals at the other antenna 105 and/or at the transmitting antenna 105, whereby the transmitting antenna is operated as a receiver after it has transmitted or is operated as a receiver at the same time as it is transmitting; and provide information associated with the received microwave signals to an analyzing unit. Other examples of applications for the embodiments herein are various monitoring applications. One is the monitoring of patients with an increased risk of getting a stroke. Monitoring could be made during sleep or while the patient lies down in bed, or with compact wearable systems during daytime. If the system detects the occurrence of a stroke it will immediately trigger an alarm to alert that a stroke has occurred, and also immediately give a diagnosis of whether it was a bleeding or a clot. The system could also be used for monitoring of patients that undergo thrombolytic treatment. In this case, it is of interest to monitor whether the treatment is effective, the clot resolved and the circulation restored. In patients with a bleeding stroke the system could be used for monitoring whether the bleeding is ongoing or has stopped. In patients that have had a bleeding it could be used to detect if the bleeding starts over. There also exist a number of other monitoring situations where it is of interest to monitor the occurrence, presence, or changes of a present bleeding, or other types of liquids, e.g. oedemas, in the skull, where a wearable version of the system could give a detection and a diagnosis. Applications such as bedside intracranial monitoring will be possible, where the pressure is coupled to the amount of liquid inside the skull. The embodiments herein could also be used in other conditions, such as pre-hospital diagnostics of traumatic brain injury patients or monitoring of various conditions in the head in neuro intensive care, or in other similar applications. It would also be possible to apply the embodiments herein to diagnostics of other parts of the human body, e.g. the abdomen in case of suspected internal bleeding or the thorax for detection of pneumothorax or haemothorax. In that case, the system has to be suitably designed, but the analysis could be done with the same equipment as for the brain applications. The embodiments herein provide additional information, and at an earlier time in the chain of care, compared to current systems. This makes it possible to distinguish between healthy people and stroke patients, and further to diagnose between ischemic and hemorrhagic stroke. Deployed for example in ambulances, or in any pre-hospital setting, or at the arrival point of ambulances at hospitals, the embodiments herein facilitate earlier diagnosis and thus enable earlier treatment. One area of application for the embodiments herein is in diagnosing stroke patients by means of a system that can be used in an ambulance or a pre-hospital setting for assessment of patients with suspected stroke. The system could also be used at the hospital for assessment of patients with suspected stroke. The embodiments herein help to reduce the time from the occurrence of the stroke to the time of making a correct diagnosis of the stroke.
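A monitoring loop of the kind described above could, in strongly simplified form, look like the following sketch. The threshold value and the three callables are hypothetical placeholders for the measurement hardware, the asymmetry analysis (for example the asymmetry_residuals function sketched earlier), and the alarm output; none of them are part of the disclosed system.

    # Minimal monitoring sketch: take a reference measurement in a known
    # stable state, then repeatedly re-measure and raise an alarm when the
    # asymmetry changes by more than a threshold. Runs until interrupted.
    def monitor(threshold, collect_full_measurement_set, asymmetry_residuals,
                raise_alarm):
        reference = asymmetry_residuals(collect_full_measurement_set())
        while True:
            current = asymmetry_residuals(collect_full_measurement_set())
            # A change in asymmetry relative to the stable reference state may
            # indicate e.g. an onset of bleeding or a resolving clot.
            change = max(abs(current[k] - reference[k]) for k in reference)
            if change >= threshold:
                raise_alarm(change)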
In one embodiment, the system is set up to generate microwave signals and take measurements that are used to calculate, for example but not limited to, S-parameter data in the desired frequency range between the transmit-receive antenna combinations. Other examples of representations of the measurement data could be z-, y- or h-parameters, reflection coefficients, insertion loss, etc. Data can be collected at one or more frequencies in the range. Data collected in the time domain and data collected in the frequency domain are related by the Fourier transform. From the point of view of data analysis, the two different sets of data are of equal value. The measurement electronics for frequency-domain measurements and for pulsed measurements differ. The necessary information may be extracted from the measured data, represented for example as the calculated S-parameter data, and processed with an algorithm in order to perform the detection of the internal object 100. This may be done by one of the antennas 105, an analyzer in one of the antennas 105, an external analyzer or any other suitable unit. The detection is based on a dielectric contrast between the internal object 100 that is being detected and the body 103 in which the internal object 100 is located. The internal object 100 that is detected could be a gas, a liquid, a solid or a semisolid substance that is immersed inside another dielectric object, i.e. the body 103. The body 103 and the internal object 100 should have different dielectric properties in order to create a detectable contrast. The dielectric properties may be apparent in the presence of microwave signals. To improve data quality in terms of, for example but not limited to, signal to noise ratio, several full measurement sets could be measured one after another. The data quality could then be improved, for example by averaging. The embodiments described herein detect and monitor dielectric internal objects 100 by microwave tomography, utilizing the dielectric contrasts between the different parts of the objects, i.e. the internal object 100 and the body 103, and the effects these have on the measured data. The embodiments herein also detect and monitor dielectric internal objects 100 in a body 103 by classification, utilizing the differences in the dielectric contrasts between the different classes of objects 100 and their effects on the measured data. It could be the classifier described in the patent application "Classification of microwave scattering data" with application number U.S. Ser. No. 13/386,521, or it could be any other classification algorithm. Also covered is the detection and monitoring of dielectric internal objects 100 that are, for example but not limited to, expanding, decreasing in size, changing shape, changing properties or moving while the measurements are ongoing. The internal object 100 may be manifested in the measurements in terms of the transmission and reflection data, for example but not limited to, the S-parameters, the received pulsed data, or the received p-n sequence data. Microwave techniques can provide non-invasive, easy access to the human brain at a relatively low cost, providing a large amount of multi-frequency scattering data that can be used to analyze the continuous development of the dielectric and geometric properties of the human brain. An imaging modality for traumatic brain injury patients may allow for a continuous bedside brain imaging system. The embodiments herein include monitoring of other parts of the body 103, e.g. the trunk, abdomen and extremities, in case of suspected internal bleeding.
In that case the system is suitably designed, but the analysis could be done with the same equipment as for the brain monitoring. In such a system, electromagnetic radiation in the microwave region is injected into the body with the help of an antenna 105. Other antennas 105 are used to receive the radiation after it has propagated through the body. On its way through the body 103 the waves have been affected by the internal objects 100 and thus bear information about the tissue or material they propagated through. The embodiments herein are directed towards the estimation of an internal condition in an enclosed volume. The internal condition is represented by the internal object 100 and the enclosed volume is represented by the body 103. A few example applications have been presented above, such as medical diagnosis for obtaining information about internal objects 100 of a human or animal body 103. However, one of skill in the art would appreciate that the example embodiments discussed may be used in any type of application utilizing microwave scattering data for the purpose of monitoring, detection, and/or diagnosis. For example, the embodiments presented herein may be utilized for various bodies 103 such as trees, buildings, etc. Various different types of internal objects 100 may be monitored, for example the presence of a particular liquid 100 in the enclosed volume 103. The body 103 may be a patient and the internal object 100 may represent a medical condition, which manifests itself as a dielectric contrast with respect to the healthy tissue comprised in the body 103. The body 103 may also be a tree and the internal object 100 may represent tree health, which manifests itself as a dielectric contrast with respect to the healthy wood. In principle, all internal conditions of a body 103 which can be expressed as a dielectric contrast may be estimated, i.e. conditions where the dielectric properties of the internal condition are different from those of the healthy or normal background tissue. The various embodiments described herein are described in the general context of method steps or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), USB, Flash, HD, Blu-Ray, etc. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes. The embodiments herein are not limited to the above described embodiments. Various alternatives, modifications and equivalents may be used. Therefore, the above embodiments should not be taken as limiting the scope of the embodiments, which is defined by the appended claims. A feature from one embodiment may be combined with one or more features of any other embodiment.
It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components, but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof. It should also be noted that the words “a” or “an” preceding an element do not exclude the presence of a plurality of such elements. The terms “consisting of” or “consisting essentially of” may be used instead of the term “comprising”. Several “means” or “units” may be represented by the same item of hardware. The term “configured to” used herein may also be referred to as “arranged to”, “adapted to”, “capable of” or “operative to”. It should also be emphasized that the steps of the methods defined in the appended claims may, without departing from the embodiments herein, be performed in another order than the order in which they appear in the claims. | 67,184 |
11857306 | DETAILED DESCRIPTION OF THE INVENTION Detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention in virtually any appropriately detailed method, structure, or system. Further, the terms and phrases used herein are not intended to be limiting, but rather to provide an understandable description of the invention. MRSI is typically performed with water suppression to reduce the overwhelming water signal in the proton MR spectrum. The water signal is considered a nuisance signal and is often discarded in the water suppression modules. In one embodiment, the present invention makes use of the water signal that is available immediately after excitation, before it is suppressed, using spatial gradient encoding. Specifically, a readout module with or without radiofrequency (RF) refocusing pulses may be inserted between the water excitation RF pulse and the dephasing gradients. Readout modules can be inserted in any of the water suppression modules of a multi-pulse water suppression scheme to encode the water signal spatially. The elongation of the water suppression module(s) due to the insertion of the short readout module is minor. In case outer volume suppression is used, the minimum duration of the water suppression modules is dominated by the duration of the spatial outer volume suppression modules that are inserted into the last water suppression module. A preferred embodiment of the invention uses a spatial-spectral water excitation RF pulse that pre-localizes the water signal within a slice or a slab. The excited water signal can be mapped using the following methods: (a) 1D spatial encoding with phase encoding in one or two dimensions, (b) 2D single-shot spatial encoding with or without phase encoding in one dimension, or (c) 3D single-shot spatial encoding. Image encoding can be accelerated with established k-space undersampling methods, including parallel imaging, such as GRAPPA and SENSE, and compressed sensing, to achieve higher spatial and/or temporal resolution. The insertion of multiple spatial-spectral and/or spatial encoding modules into the water suppression modules enables encoding a multitude of MRI contrasts concurrently with the water-suppressed spectroscopic image. Examples include blood-oxygenation-level-dependent contrast, high-resolution proton-density and T2-weighted contrast, and diffusion tensor contrast. The acquisition of serial images enables mapping of blood-oxygenation-level-dependent signal changes during changes in brain activation, to map brain function concurrently with mapping metabolites. Embedding encoding modules into the water suppression modules results in an SNR penalty compared with a conventional MRI sequence due to (a) the short encoding duration of the embedded encoding module (Temb) relative to TR, which can be as large as Temb/TR depending on the acquisition duty cycle of the conventional MRI sequence, and (b) the amplitude of the residual water signal in a given water suppression module.
Further, the spatial uniformity of the signal may vary depending on the spatial uniformity of the water excitation RF pulse and the residual water signal in a given water suppression module, which depends on B0 and B1 inhomogeneity in the volume of interest. As a consequence, acquisitions that require high signal intensity and/or high image intensity uniformity (e.g. a water reference or high-resolution MRI scan) are preferably collected using the first water suppression module. A hybrid proton-echo-planar-spectroscopic-imaging (PEPSI) pulse sequence with concurrent acquisition of a water reference spectroscopic image using the first water suppression module, a volumetric fMRI scan in each of the second water suppression modules and a water-suppressed spectroscopic image was developed on a Siemens Trio 3 Tesla MRI scanner (operating system version: VB17A). The pulse sequence diagram is shown in FIG. 4A. Binomial spatial-spectral water excitation RF pulses are used to pre-localize the water signal within a slab. FIG. 3 shows the second water suppression module with the embedded single-shot echo-volumar-imaging (EVI) module with binomial excitation RF pulse and navigator acquisition modules. In vivo data in healthy controls were acquired on a Siemens Trio 3 Tesla MRI scanner using a 32 channel head array RF coil. A sensorimotor fMRI experiment was conducted concurrently during the MRSI acquisition. The block design paradigm consisted of 8 repetitions of simultaneous finger tapping and eyes open (8 seconds) versus rest and eyes closed (12 seconds). Water reference and water suppressed data were acquired using either TR/TE=1250/15 ms, 32×32×8 spatial matrix, 7×7×7 mm3 voxel size and 3 min scan time, or TR/TE=1160/38.5 ms, 64×64×8 spatial matrix, 4×4×7 mm3 voxel size and 5:26 min scan time. Water reference and water suppressed data were reconstructed online as described previously. Embedded EVI data (TEeff=35 ms) were acquired at every TR using either 3.5×7×7 mm3 voxel size and 64×24×6 raw data matrix, or 7×7×7 mm3 voxel size and 32×24×6 raw data matrix. Both ky and kz were encoded using 6/8 partial Fourier acquisition. The duration of the EVI encoding module was 63 ms. EVI data were extracted from the raw data and reconstructed offline, using zero-filling of the ky and kz dimensions into a 64×64×8 or 32×32×8 spatial matrix, using custom software routines written in MATLAB. Model-based, seed-based and data-driven fMRI analysis of the reconstructed EVI data was performed using the TurboFIRE fMRI analysis software and GIFT ICA analysis software tools (http://mialab.mrn.org/software/gift/). Examples of concurrently acquired and online reconstructed metabolite and water reference spectra using this hybrid PEPSI sequence are shown in FIG. 7. FIGS. 10a and 10b show reconstructed EVI images with brain activation in visual and motor cortex as well as resting state connectivity in major resting state networks. In yet other embodiments, the present invention concerns a WR acquisition that consists of a binomial RF pulse, a short navigator with a bipolar readout gradient along the slice direction to monitor water phase and frequency changes, conventional phase encoding along ky and kz, and a spatial-spectral encoding module using a train of echo-planar readout gradients. The spatial-spectral encoding module was shorter in duration (128 gradients vs. 2048 gradients) but was otherwise identical to the echo-planar spatial-spectral encoding module for metabolite signals as shown in FIG. 1.
A binomial RF pulse, which offers flexibility in spectral selectivity with tolerance to B0 inhomogeneity, was chosen for slab and spectrally selective water suppression. A simulation of the excitation profile (FIG. 2) showed that a 10 RF sub-pulse monopolar design with 2 ms inter-pulse spacing provides an acceptable compromise between the non-excitation spectral range (<0.5% suppression between 1.8 and 3.43 ppm), to encompass all metabolites of interest at 3T (including Inositol, Lactate and the 1.3 ppm Lipid peak), B0 offset tolerance, water suppression bandwidth (full width at half maximum (FWHM): 116 Hz) and RF pulse duration (21 ms). The EVI module using repeated EPI modules with interleaved kz phase encoding gradients for this embodiment is shown in FIG. 3. Two navigator signals were acquired before the first EPI module using a bipolar readout gradient. The EPI modules consisted of trapezoidal gradients (GRO) along the readout direction and a series of blipped primary phase encoding gradients (GPE1) that were rewound at the end of every partition. A blipped secondary phase encoding gradient (GPE2) encodes the third spatial dimension and may be applied after each EPI module. Ky and kz space was either fully sampled or encoded asymmetrically with 6/8 partial Fourier acquisition using a dephasing gradient before the first EPI module (kmax/2). The ky and kz space trajectories for each kz step were traversed in the same direction, using either full sampling or 4-fold acceleration for GRAPPA reconstruction. Sixteen GRAPPA auto-calibration signal (ACS) lines for in-plane GeneRalized Autocalibrating Partially Parallel Acquisition (GRAPPA) reconstruction were measured in a separate prescan using the same pulse sequence with segmented EPI acquisition. Multiple-slab EVI encoding in consecutive TRs of the MRSI acquisition was also supported. For concurrent acquisition of WS and WR data, the spatial-spectral echo-planar WR acquisition was integrated into the first WS module of a 3-pulse WET water suppression sequence. For concurrent acquisition of WS, WR and fMRI data, the EVI module was initially integrated into the second WS module of the 3-pulse WET WS sequence, as shown in FIG. 4a. In a subsequent implementation of concurrent fMRI and MRSI, the WR acquisition in the first WS module was shortened to a bipolar readout gradient and the EVI module was moved from the second to the first WS module, immediately behind the shortened WR acquisition. A second, longer WR data acquisition module was integrated into the second WS module to enable eddy current correction, as shown in FIG. 4b. This approach of concurrent fMRI and MRSI with integration of MS-EVI into the first WS module of PEPSI reduces fMRI signal intensity compared with conventional fMRI, since a flip angle close to 90 degrees is required for optimal water suppression and the water saturation recovery period is reduced by the duration of the WS module. The steady-state signal amplitude of this fPEPSI approach relative to conventional fMRI is therefore:

R_fPEPSI/fMRI = [(1 − e^(−(TR−TWS)/T1)) · sin(αWS) · (1 − cos(αErnst) · e^(−TR/T1))] / [(1 − cos(αWS) · e^(−(TR−TWS)/T1)) · (1 − e^(−TR/T1)) · sin(αErnst)]   (Eq. 1)

where αErnst is the Ernst angle, αWS is the flip angle of the first WET WS-RF pulse (89.2 degrees) and TWS is the duration of the WET sequence. A simulation of the fMRI signal amplitude in fPEPSI relative to conventional fMRI as a function of TR is shown in FIG. 5. Data were acquired on clinical 3T Siemens Trio scanners equipped with 12 and 32 channel array coils.
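The following is a minimal sketch of how Eq. 1 can be evaluated numerically to reproduce a simulation such as the one shown in FIG. 5. It computes the fPEPSI steady-state signal divided by the conventional Ernst-angle signal, following the reconstruction of Eq. 1 above. The T1 value (gray matter at 3T) and the WET duration TWS used in the example are illustrative assumptions, while αWS = 89.2 degrees is taken from the text; the function name is hypothetical.

```python
import numpy as np

def fpepsi_to_fmri_ratio(TR, T1, T_WS, alpha_WS_deg=89.2):
    """Evaluate Eq. 1: the steady-state fMRI signal of fPEPSI (flip angle
    alpha_WS, recovery time TR - T_WS) relative to a conventional fMRI scan
    run at the Ernst angle with full TR recovery. Times share one unit (ms)."""
    a_ernst = np.arccos(np.exp(-TR / T1))      # Ernst angle of the fMRI scan
    a_ws = np.deg2rad(alpha_WS_deg)
    E1 = np.exp(-TR / T1)                      # recovery over the full TR
    E1w = np.exp(-(TR - T_WS) / T1)            # recovery shortened by the WS module
    fpepsi = (1 - E1w) * np.sin(a_ws) / (1 - np.cos(a_ws) * E1w)
    fmri = (1 - E1) * np.sin(a_ernst) / (1 - np.cos(a_ernst) * E1)
    return fpepsi / fmri

# T1 ~ 1400 ms and T_WS ~ 75 ms are illustrative assumptions.
for TR in (500.0, 1160.0, 1500.0, 3000.0):
    print(f"TR = {TR:6.0f} ms  ->  R = {fpepsi_to_fmri_ratio(TR, 1400.0, 75.0):.3f}")
```

As expected from the discussion above, the ratio is below 1 and approaches 1 as TR grows long relative to the WS module duration.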
Seven healthy adults (4F, 3M, 24-59 yrs), a 3-month old infant (M) and a patient (F) with World Health Organization (WHO) grade III anaplastic astrocytoma were scanned using PEPSI with concurrent WR acquisition. Four healthy adults (1F, 3M, 19-57 yrs) were scanned using PEPSI with concurrent fMRI and WR acquisition. Three different pulse sequence versions and parameter settings, with manually prescribed 8-slice OVS, were used: (1) 2D PEPSI with integrated WR acquisition was performed with slice-selective spin-echo excitation or PRESS prelocalization using cardiac gating and: TR/TE: 2000/90 ms, spatial matrix: 32×32, FOV: 320 or 480 mm, slice thickness: 2 cm, nominal voxel size: 1×1×2 cc or 1.5×1.5×2 cc, spectral width: 1087 Hz, number of spectral points (WS/WR): 1024/64, digital spectral resolution: 1 Hz (metabolites) and 16 Hz (water), and scan time: 64 s using a single signal average. (2) 3D PEPSI with integrated WR acquisition was performed with slab-selective spin-echo excitation (TR/TE: 1250/15 ms) or semi-LASER slab-selective double spin-echo excitation with adiabatic refocusing RF pulses (TR/TE: 1320/32-36 ms) using spatial matrix: 32×32×8, elliptical sampling, in-plane FOV: 224 mm, slab FOV: 56 mm, slab thickness: 42 mm, nominal voxel size: 7×7×7 mm3 (0.34 cc), spectral width: 1087 Hz, number of spectral points (WS/WR): 1024/128, digital spectral resolution: 1 Hz (metabolites) and 8 Hz (water), and scan time using a single signal average: 2:56 or 3:08 min depending on TR. (3) Semi-LASER 3D PEPSI with integrated EVI and WR acquisition (fPEPSI, as discussed below) was performed using: TR/TE: 1160/38.5 ms or 1500/38.5 ms, spatial matrix: 64×64×8, elliptical sampling, in-plane FOV: 256 mm, slab FOV: 48 or 56 mm, slab thickness: 36 or 42 mm, nominal voxel size: 4×4×6 mm3 (0.096 cc) or 4×4×7 mm3 (0.112 cc), spectral width: 735 Hz, digital spectral resolution: 1.4 Hz (metabolites) and 12 Hz (water), and scan time using a single signal average: 5:26 or 7:02 min, depending on TR. Two different pulse sequence versions and parameter settings were used concurrently during high spatial resolution (64×64×8) PEPSI acquisitions: (1) Semi-LASER 3D fPEPSI with EVI inside the second WS module: TR/TE: 1160/50 ms, partial ky and kz acquisition (without using GRAPPA acceleration) with raw data matrix: 64×24×6, voxel size: 4×8×7 mm3, and scan time: 5:26 min. A simultaneous block-design motor-visual task (8 seconds of 2 Hz index finger tapping and eyes open vs. 12 seconds of rest with eyes closed) was performed. (2) Semi-LASER 3D fPEPSI with EVI inside the first WS module using 4-fold GRAPPA acceleration: TR/TE: 1500/39 ms, partial ky and kz acquisition with 4-fold undersampling along ky and raw data matrix: 64×12×6, voxel size: 4×4×6 mm3, and scan time: 7:02 min. ACS data were acquired in a short prescan using the same pulse sequence with segmented EPI acquisition and without using ky encoding in the PEPSI acquisition. A 3-minute block-design task (8 seconds of 2 Hz index finger tapping or eyes open vs. 12 seconds of rest with eyes closed) followed by 4 minutes of resting-state (eyes open with fixation of a cross-hair) was performed. WR and WS data were reconstructed online in the image calculation environment (ICE) using re-gridding to correct ramp sampling, separate spatial-spectral reconstruction of even-echo and odd-echo data, navigator-based phase correction and combination of even-echo and odd-echo data, as described previously. WR data were zero-filled to the same number of data points (1024) as the WS data.
Spectral quantification of WS data in reference to tissue water was performed using LCModel fitting with simulated basis sets containing 18 metabolites. LCModel fitting of the WR data was performed using a truncated and edited singlet (N-acetyl-aspartate (NAA) or choline (Cho)) basis set that was line-shape matched to the WR data by applying a boxcar filter in the time domain to shorten the signal to match the duration of the WR data. The results were scaled to account for the difference in protons between the edited basis set and water. Ratio images of metabolite/water were computed, and corrections for water content and age-specific water signal relaxation were applied as described in our recent study. The following thresholds were applied: Cramer-Rao Lower Bounds (CRLBs) <20% for singlets, Inositol, and Glutamate/Glutamine (Glx), and <40% for lower concentration multiplets; linewidth <0.08 ppm; SNR >2. For 3D data, the results were averaged across the 4 central slices. Correction was additionally applied for water content, partial volume effects, and tissue-specific water and metabolite relaxation times in one subject to compute fully quantitative metabolite levels in gray and white matter tissues. Single and multi-slab EVI data were reconstructed offline using MATLAB routines. The reconstruction pipeline extracted the fMRI data segments from the raw data file and applied regridding to correct ramp sampling. Partial Fourier data were zero-filled and navigator-based phase correction was applied. GRAPPA reconstruction was performed based on the ACS data acquired in the prescan. The final data matrix size was 64×64×8. Model-based and seed-based fMRI analyses of the reconstructed EVI data were performed using the TurboFIRE fMRI analysis software tool. Preprocessing included motion correction, spatial smoothing of raw images using a 5 mm3 Gaussian spatial filter, and an 8 s moving average time domain low pass filter with a 100% Hamming window width to reduce signal fluctuations due to cardiac and respiratory pulsations. Task-based fMRI data were processed using model-based correlation analysis with a reference vector that was convolved with a canonical hemodynamic response function. Resting-state fMRI data were processed using seed-based, moving average sliding-window (15 s width) correlational analysis with regression of motion parameters and signals from white matter and cerebrospinal fluid. Cluster analysis was applied to compute the spatial extent (number of voxels), peak and mean correlation of the activation and connectivity maps. Integration of the WR and EVI acquisition into the PEPSI pulse sequence with OVS prolonged the minimum TR by less than 50 ms. The integration of these modules had negligible impact on SNR and water suppression efficiency. The spectral quality (i.e., line-shape and width, baseline distortion, and lipid contamination) of 3D short TE PEPSI data acquired with integrated WR acquisition, 32×32×8 matrix size, 7×7×7 mm3 voxel size and 3 min scan time was comparable to conventional PEPSI data, as shown in FIGS. 6 and 7. LCModel fitting results of both WS and WR data were consistent with conventional water-suppressed and non-water-suppressed PEPSI data, as shown in FIG. 8. Metabolite concentration values were in the range of previous studies (Table 1a), with Cramer-Rao lower bounds ranging from 8.6-13.6 on average across subjects for major singlets, to 16.4 for Glu+Gln (Table 1b).
Table 1a: Volume-averaged metabolite concentrations, FWHM and SNR in short TE 3D PEPSI in 5 healthy controls.

Subject:      1      2      3      4      5      Mean   SD
Cho           1.7    1.9    1.7    1.8    1.8    1.8    0.1
Cr + PCr      5.8    7.1    5.4    6.4    6.0    6.1    0.6
Glu + Gln     12.3   11.2   10.4   10.8   13.5   11.7   1.1
NAA + NAAG    10.8   10.8   10.5   10.5   10.1   10.5   0.3
SNR           5.6    5.0    4.3    5.2    5.1    5.0    0.4
FWHM [Hz]     0.046  0.053  0.049  0.050  0.054  0.050  0.003

Table 1b: Volume-averaged Cramer-Rao lower bounds in short TE 3D PEPSI in 5 healthy controls.

Subject:      1      2      3      4      5      Mean   SD
Cho           14.1   12.5   13.7   13.5   14.0   13.6   0.6
Cr + PCr      11.5   9.2    12.5   10.8   10.7   10.9   1.1
Glu + Gln     15.5   16.1   17.9   18.1   14.2   16.4   1.5
NAA + NAAG    8.6    7.9    9.0    8.8    8.6    8.6    0.4

The SNR ranged from 4 in central voxels to 7 in lateral voxels, reflecting the sensitivity profile of the 32-channel array coil. The average SNR across voxels and subjects was 5. The FWHM varied between 0.04 and 0.06 ppm, depending on slice position, and was 0.05 ppm on average across subjects. Inter-subject variability reflects in part differences in slab location and angulation. Metabolite concentration values in white and gray matter, which were computed in one of the subjects using partial volume and tissue-specific relaxation correction (Table 2), were found to be within the range of a prior study.

Table 2: Metabolite concentration values in healthy control 1 after partial volume and relaxation corrections, averaged across slices.

              WM [mmol]        GM [mmol]        Slope [mmol]/% GM
              Mean    SD       Mean    SD       Mean     SD
Cho           1.6     0.3      1.4     0.2      −0.002   0.004
Cr + PCr      5.5     0.9      7.5     0.5      0.021    0.005
Glu + Gln     6.9     0.6      12.6    0.8      0.057    0.013
NAA + NAAG    9.1     0.8      10.2    1.1      0.012    0.017

Metabolite concentration values measured with single-slice 2D PEPSI at long TE, using either spin-echo slice/slab selection or PRESS prelocalization, were also in the range of results published in previous studies, as shown in Table 3.

Table 3: Slice-averaged metabolite concentrations and Cramer-Rao lower bounds in long TE 2D PEPSI using single-spin-echo and PRESS prelocalization with cardiac gating (TR/TE: 2000/90 ms, scan duration: 1:06 s). Metabolite values are concentration [mmol] (CRLB [%]).

Subject  Age/Gender  Prelocalization  Slice/slab location  Voxel size [mm]  Cho        Cr + PCr   Glu + Gln   NAA + NAAG
1        59 y/M      Slice            periventricular      10 × 10 × 20     1.4 (7.6)  7.5 (7.4)  6.8 (16.5)  8.1 (4.8)
2        58 y/M      PRESS            supraventricular     15 × 15 × 20     1.8 (7.8)  7.2 (8.1)  5.4 (22.4)  8.3 (4.8)
3        17 y/M      PRESS            supraventricular     15 × 15 × 20     1.4 (9.8)  6.3 (8.1)  N/A         8.8 (5.6)
4        3 mo/M      PRESS            supraventricular     15 × 15 × 20     1.5 (6.6)  5 (7.55)   6.2 (15.3)  7.5 (6.2)

Slice-averaged Cramer-Rao lower bounds in these single-slice 2D PEPSI scans were less than 11% for major singlet resonances (Table 3). The PRESS and spin-echo pre-localized 2D-PEPSI data were affected by differential chemical shift displacement between the 90 degree and 180 degree sinc RF pulses, which introduced chemical-shift-dependent signal attenuation in the frequency range of Glu+Gln and NAA; this may help to explain the lower Glu+Gln and NAA concentrations in the 2D PEPSI adult data sets compared with the 3D short TE PEPSI data. Regional differences in relaxation times and volume fraction of gray matter, not accounted for in the analyses, may also have contributed to the apparent differences in metabolite concentration measured in 2D and 3D PEPSI. The 2D PEPSI concentration values of Cr+PCr and NAA+NAAG in the infant were reduced compared to 2D PEPSI adult values, which is expected based on previous studies.
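As an illustration of the quality thresholds described above (CRLB, linewidth and SNR limits), the following sketch masks a metabolite map so that only voxels passing all thresholds are retained. The function name and the randomly generated maps are hypothetical; only the threshold values come from the text.

```python
import numpy as np

def quality_mask(crlb, linewidth_ppm, snr, is_multiplet=False):
    """Boolean mask of voxels passing the quality thresholds used above:
    CRLB < 20% (singlets, Inositol, Glx) or < 40% (lower-concentration
    multiplets), linewidth < 0.08 ppm, and SNR > 2."""
    crlb_limit = 40.0 if is_multiplet else 20.0
    return (crlb < crlb_limit) & (linewidth_ppm < 0.08) & (snr > 2.0)

# Illustrative 32x32 single-slice maps with random quality metrics.
rng = np.random.default_rng(1)
crlb = rng.uniform(5, 50, (32, 32))
lw = rng.uniform(0.03, 0.10, (32, 32))
snr = rng.uniform(1, 8, (32, 32))
conc = rng.uniform(0, 12, (32, 32))            # e.g. an NAA map in mmol
masked = np.where(quality_mask(crlb, lw, snr), conc, np.nan)
print(np.count_nonzero(~np.isnan(masked)), "voxels pass the thresholds")
```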
Metabolite maps acquired using Semi-LASER PEPSI with concurrent WR (TR/TE: 1350/36, 0.34 cc voxel, 3:10 min) in the patient with WHO grade III anaplastic astrocytoma show an unusual spectral profile with elevated Inositol and Creatine and decreased NAA in the tumor, but only minor elevation of Choline, as shown in FIG. 9. The effect of this spectral profile is not apparent in the Cho/NAA map, which displays focal enhancement in central regions of the tumor. Concurrent fMRI and MRSI in fPEPSI showed clearly detectable task activation signal changes and resting-state connectivity. The initial implementation of EVI into the second WS module using the residual water signal enabled detection of visual and motor activation, albeit at reduced signal intensity and image uniformity compared with conventional fMRI, as shown in FIGS. 6a-c. The corresponding WS and WR data were acquired at considerably higher spatial resolution (0.11 cc) than in the previous series of experiments and clearly delineated the ventricles, as shown in FIGS. 6d-f. The SNR of NAA in cortical gray matter was 3-4, which is consistent with previous studies using conventional PEPSI. The subsequent implementation of EVI into the first WS module and integration of GRAPPA acceleration strongly improved signal intensity and image quality (image uniformity, image distortion and ghosting), which were comparable to the multi-slab EVI method of the present embodiment. Activation in the visual cortex was detected after the first task block, and the resulting map shows an average correlation coefficient of 0.63 with a spatial extent of 71.1 cc and a peak correlation of 0.8, as shown in FIG. 6g. A 4% signal change with contrast-to-noise ratio >10 was measured in the visual cortex, as shown in FIG. 6h. Resting-state connectivity in the visual network was detected in the final 4 minutes of the scan, as shown in FIG. 10i. Similar results were obtained in motor cortex, as shown in FIGS. 10j-l. Motor activation (FIG. 10j) with signal changes on the order of 5% and high contrast-to-noise (FIG. 10k) was measured in the initial 3 minutes of the scan. Motor resting-state connectivity was detected in the final 4 minutes of the scan (FIG. 10l). In a preferred embodiment, the present invention concerns a method for acquiring a WR scan concurrently during MRSI acquisition, which provides maximum sensitivity for measuring both metabolites and tissue water in a single acquisition. The embodiment is efficient and minimizes the impact of prolonging TR that is encountered with traditional approaches of interleaving multiple scan acquisitions into an MRSI sequence. When using OVS, which itself increases time delays between the individual WS modules, the integration of the WR acquisition into the first WS module does not significantly increase these WS time delays. This approach requires shortening the readout of the WR signal to a duration on the order of the water T2 in gray matter. Although it may impact the precision of water referencing, any change in spectral quantification is expected to be minor given the high SNR of the water signal and the relatively short T2* of tissue water. For similar reasons, the effect of the short readout on eddy current correction is expected to be limited. Nor does interleaving the WR acquisition significantly reduce the performance of water suppression in PEPSI, which employs OVS, since the additional increase in WS delay times is minor.
The spectral bandwidth of the WS-RF pulse needs to be large enough to encompass all water frequency offsets in the imaging slab, at the expense of attenuating metabolite signals close to water. Given the known excitation profile of the binomial RF pulse, an intensity correction for off-resonance water signals and metabolite signals within the passband of the binomial RF pulse (e.g. Inositol) could be applied to improve metabolite quantification. Metabolite concentrations of major singlets and Glutamate/Glutamine that were measured with the embodiments of the present invention were in the range of results published in previous studies, both at short and long TE, for spin-echo slice selection and for PRESS prelocalization. Differences in slice profiles and chemical shift displacements between 90 degree and 180 degree RF pulses resulted in chemical shift dependent attenuation of metabolites in 2D data and at the edges of the imaging slab in 3D data, which may have contributed to the differences in NAA concentration between 2D and 3D acquisitions. Differences in slice profiles and chemical shift displacement artifacts between binomial and sinc RF pulses may also have affected the amplitudes of the WR signals relative to the metabolite signals and thus biased spectral quantification. Static magnetic field inhomogeneity across large volumes requires increased WS bandwidth, which can lead to local frequency-shift dependent decreases in water amplitude and suppression of metabolite resonances in the vicinity of the water peak. Mapping of B0 inhomogeneity will assist in predicting these signal changes to correct metabolite concentration values during postprocessing. Alternatively, when segmenting multi-slice/slab MRSI data, it is possible to mitigate the effects of B0 inhomogeneity by modulating the frequency offsets and water-excitation profiles of the binomial RF pulses on a slab/slice-by-slab/slice basis and by using slice/slab-specific shimming. This will also provide more consistent water suppression across larger volumes. The unusual Cho/Cr contrast in the patient with WHO grade III anaplastic astrocytoma, with reduced Cho peak amplitude and elevated Inositol with possible contributions of Glycine, was not an artifact of the data acquisition. Visual inspection of single-voxel spectra confirmed this metabolite contrast and showed acceptable data quality over the entire sensitive volume. A published case report has similarly described such a finding in a tumor with oligodendroglial neoplastic components, hypothesized to reflect a lesion with a low growth fraction. In other aspects, the present invention concerns a novel hybrid fMRI/MRSI sequence that integrates echo-volumar-imaging and a WR acquisition into the water suppression module of PEPSI to simultaneously acquire fMRI, WS and WR data in a single acquisition. The sensitivity of metabolite mapping was comparable to conventional PEPSI. While this approach slightly reduces the SNR of fMRI and limits the minimum TR compared with conventional fMRI, task-based activation and resting-state connectivity maps were similar to results obtained with conventional fMRI. An approach similar to the integration of simultaneous multi-slab encoding into multi-slab EVI may be used to increase the limited volume coverage of the various embodiments. This would also allow implementation of slab-specific shimming to mitigate the B0 offset sensitivity of water-excitation based fMRI.
The higher spatial resolution of the PEPSI acquisition in these fMRI/MRSI scans requires considerably longer scan times to achieve acceptable SNR. Task-based fMRI typically requires multiple scans to map different brain functions, which may take 15-30 minutes. Resting-state fMRI in single subjects also requires long scan times and multiple acquisitions, in excess of 15 minutes. Effective scan times of 15-20 minutes for PEPSI could thus easily be achieved by averaging across fMRI scans, which would support the high spatial resolution of the PEPSI sequence. While the short bipolar WR acquisition in the first WS module is adequate for spectral quantification, frequency-shift and eddy current correction require the second, longer WR acquisition embedded in the second WS module. Correction of the k-space signal amplitude and phase of the second, longer WR acquisition in reference to the first WR acquisition is currently under investigation. Future work is aimed at further improving quantification by integrating chemical shift displacement correction and real-time navigator-based correction of movement, frequency instability, and phase drifts. The fPEPSI approach of the present invention to concurrently acquire fMRI and MRSI is generalizable to other MRS acquisition methods, including spectral editing. It is therefore applicable to characterizing neurotransmitter and Lactate concentration changes in relation to BOLD signal changes, which has recently attracted considerable interest in neuroscience research. Of particular interest are region-specific measurements of GABA concentrations and concurrent fMRI experiments that map the amplitude of BOLD signal changes during cognitive tasks, which in the past have been acquired separately. The fPEPSI approach further opens up the potential integration of other imaging modalities, such as diffusion tensor imaging or perfusion imaging. While the foregoing written description enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The disclosure should therefore not be limited by the above described embodiments, methods, and examples, but by all embodiments and methods within the scope and spirit of the disclosure. | 30,039 |
11857307 | DETAILED DESCRIPTION OF EMBODIMENTS Overview Embodiments of the present invention initially generate a map of a heart chamber where a single-source arrhythmia is present, and in addition measure local activation times (LATs) over the surface of the chamber. The arrhythmia is assumed to radiate a conduction wave from an origin, so that at a given point on the surface of the chamber the LAT for that point is indicative of the time at which the wave from the origin passes the point. If a pair of points on the surface is selected, the points will typically have different LATs, and the difference in the LATs gives a measure of the difference in distance from the two points to the arrhythmia origin. The actual difference in distance is the product of the conduction wave velocity and the LAT difference, and this is termed herein an LAT-derived distance. (It will be understood that if the two points have the same LATs, they are equidistant from the arrhythmia origin.) From the selected pair of points, a locus of possible positions for the arrhythmia origin can be found, all the points on the locus having the actual difference in distance to the selected pair that is described above. If another pair of points on the surface is selected, the same procedure can be applied to the second pair of points to find a second locus of positions for the arrhythmia origin. The intersection of the loci corresponds to the position of the origin of the arrhythmia, and this may be displayed on the map. The loci themselves may also be displayed on the map. It will be understood that the origin location is such that a difference in distances over the surface from the origin location to a first pair of the points is equal to the LAT-derived distance for the first pair, and is also such that a difference in distances over the surface from the origin location to a second pair of the points is equal to the second LAT-derived distance for the second pair. Typically, the two pairs of points comprise four physically separated distinct points (in two pairs). However, in some embodiments one of the points is common to both pairs, so that in these embodiments three physically separated distinct points comprise the two pairs. System Description In the following description, like elements in the drawings are identified by like numerals, and like elements are differentiated as necessary by appending a letter to the identifying numeral. Reference is now made to FIG. 1, which is a schematic illustration of an arrhythmia origin locating system 20, according to an embodiment of the present invention. For simplicity and clarity, the following description, except where otherwise stated, assumes a medical procedure is performed by an operator 22 of system 20, herein assumed to be a medical practitioner, wherein the operator inserts a catheter 24 into a left or right femoral vein of a patient 28. The procedure is assumed to comprise investigation of a chamber of a heart 34 of the patient, and in the procedure the catheter is initially inserted into the patient until a distal end 32 of the catheter, also herein termed probe 32, reaches the heart chamber. System 20 may be controlled by a system processor 40, comprising a processing unit (PU) 42 communicating with an electromagnetic tracking module 36 and/or a current tracking module 37. PU 42 also communicates with an ablation module 39 and an ECG (electrocardiograph) module 43. The functions of the modules are described in more detail below. PU 42 also communicates with a memory 44.
Processor 40 is typically mounted in a console 46, which comprises operating controls 38, typically including a pointing device such as a mouse or trackball, that operator 22 uses to interact with the processor. The processor uses software stored in memory 44 to operate system 20. Results of the operations performed by processor 40 are presented to the operator on a display 48, which typically presents a map of heart 34. The software may be downloaded to processor 40 in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory. For tracking the path of probe 32 in a mapping region 30 containing heart 34, embodiments of the present invention use at least one of a current based tracking system 21 and an electromagnetic based tracking system 23. Both systems are described below. Tracking system 21 comprises a current measuring tracking system, similar to that described in U.S. Pat. No. 8,456,182 to Bar-Tal et al., whose disclosure is incorporated herein by reference. The Carto™ system produced by Biosense-Webster of 33 Technology Drive, Irvine, CA 92618 USA, also uses a current measuring tracking system. The current measuring tracking system is under control of current tracking module 37. Probe 32 has one or more probe electrodes 50, and in tracking system 21 module 37 injects currents to the one or more electrodes 50 being tracked. The currents are received by a plurality of generally similar patch electrodes 77, also herein termed patches, which are positioned on the skin of patient 28, and transferred back to the module. While conductive cabling to patch electrodes 77 and for other skin electrodes described herein is present for each of the electrodes, for clarity cabling is only shown in the figure for some of the electrodes. The currents between a given probe electrode 50 and skin patches 77 vary according to the location of the electrode, because, inter alia, of the different distances of the electrode from the patches, which cause different impedances between the given probe electrode and the different patches. Module 37 measures the different currents received by the different patches 77 on respective channels connected to the patches, and may be configured to generate an indication of the location of the given probe electrode from the different currents. Electromagnetic tracking system 23 is similar to that described in U.S. Pat. No. 6,690,963 to Ben-Haim et al., whose disclosure is incorporated herein by reference, and to that used in the Carto™ system produced by Biosense-Webster. The electromagnetic tracking system is under control of electromagnetic tracking module 36. The electromagnetic tracking system comprises a plurality of magnetic field generators, herein assumed to comprise three sets of generators 66, each set comprising three orthogonal coils, so that the plurality of generators comprises a total of nine coils. Generators 66 are placed in known locations beneath patient 28, the known locations defining a frame of reference of the generators. Module 36 controls, inter alia, the amplitude and frequency of the alternating magnetic fields produced by the generators. The alternating magnetic fields interact with a coil located in probe 32, so as to generate alternating electropotentials in the coil, and the electropotentials are received as a signal by tracking module 36.
The module, together with processing unit 42, analyzes the received signal, and from the analysis is able to determine a position, i.e., a location and an orientation, of the probe coil in the defined frame of reference. Typically the tracking by either or both of the systems may be presented visually on display 48, for example by incorporating an icon representing the probe into an image of heart 34, as well as a path taken by the icon. For clarity, in the following description only electromagnetic tracking system 23 is assumed to be in use, but the description may be adapted, mutatis mutandis, for cases where both system 23 and system 21 are used, or if only system 21 is used. Ablation module 39 comprises a radiofrequency (RF) generator which delivers RF power to a region of heart 34 that is selected by operator 22, so as to ablate the region. Operator 22 selects the region by positioning an ablation probe, with an ablation electrode, at the region. In some embodiments probe 32 and one of electrodes 50 may be used as an ablation probe and an ablation electrode. Alternatively, a separate ablation probe and ablation electrode may be used for the ablation provided by module 39. ECG module 43 receives ECG signals from electrodes 50, and together with PU 42 analyzes the signals to find, inter alia, local activation times (LATs) of the signals. The module typically measures the LAT values relative to a reference ECG signal, such as may be provided by an electrode positioned in the coronary sinus of heart 34. FIG. 2 is a schematic diagram illustrating results produced by electromagnetic tracking system 23, according to an embodiment of the present invention. During the procedure referred to above, probe 32 is moved within heart 34, herein assumed to be within a chamber of the heart, and as it is moved tracking module 36 acquires positional signals from the probe and uses the signals to find three-dimensional (3D) positions of the probe. The multiple positions found comprise a point cloud of locations on the surface of the chamber, as well as locations within the chamber. From the point cloud, processor 40 generates a triangular mesh of a 3D enclosing surface, i.e., a surface enclosing all the acquired points in the point cloud, corresponding to the surface of the heart chamber. The processor uses any method known in the art to produce the mesh. Typically processor 40 “covers” the triangular mesh to form a smooth continuous 3D surface, and the processor may display a graphic representation 49 of the smooth 3D surface on display 48. In addition, the processor typically covers triangles of the mesh with equally spaced sample points, and these points provide processor 40 with a method to perform discrete calculations on the continuous 3D surface. An example of the use of the sample points described herein is provided below, with reference to FIG. 6. FIG. 2 schematically shows a set of points 80, comprising some of the points in the point cloud referred to above, each of the points corresponding to respective points on the surface of the heart chamber. The figure also illustrates edges 84 connecting the points, the edges corresponding to line segments joining the points when processor 40 generates the triangular mesh. Points 80 and edges 84 are vertices and sides of triangles, and are herein also termed vertices 80 and sides 84. FIG. 3 is a schematic diagram of results generated by ECG module 43, according to an embodiment of the present invention.
The results are generated from ECG signals acquired by probe 32 as the probe contacts the surface of the heart chamber wherein the probe is moved. The ECG signals may be acquired concurrently with the positional signals referred to above, or alternatively at a different time, and possibly with a different probe. The figure illustrates four voltage V vs. time t graphs 90, 92, 94, 96, respectively corresponding to ECG signals acquired at four points 80A, 80B, 80C, 80D (FIG. 2) on the surface of the heart chamber. For each of the graphs the ECG module calculates the position (in time) of the local activation time (LAT), and the LAT value is shown schematically on each of the graphs as circles 90L, 92L, 94L, 96L. Each of the four graphs has been drawn with the V axis corresponding to a reference time of zero derived from the reference ECG signal referred to above. In embodiments of the present invention processor 40 selects pairs of points 80 and calculates the difference in LAT values of the ECG signals of the points. Thus, in FIG. 3 processor 40 may select points 80A and 80B, in which case it finds the difference in LAT values as Δt(AB). Similarly the processor may select points 80C and 80D, in which case it finds the difference in LAT values as Δt(CD). The following description assumes that for a general pair of points 80 selected by processor 40 the difference in LAT values of the ECG signals is Δt. Embodiments of the present invention assume that the ECG signals are generated from a single-source arrhythmia, and that the ECG signals traverse the surface of the heart chamber with a conduction velocity v. In this case, for the general pair of points assumed herein there is a difference in path length ΔP, from the single source to the two points of the general pair, given by equation (1):

ΔP = v·Δt   (1)

The difference in path length ΔP is a distance, and is also herein termed the LAT-derived distance. FIG. 4 illustrates how the difference in path lengths given by equation (1) is used for specific pairs of points 80 selected by processor 40, according to an embodiment of the present invention. The points illustrated in FIG. 4 correspond to points 80A, 80B, 80C, and 80D (FIG. 2), and are herein respectively termed A, B, C, D. For points A, B, a point Q can be the origin of the single source arrhythmia if equation (2) is valid:

BQ − AQ = v·Δt(AB)   (2)

where BQ is the distance on the surface of the heart chamber between points B and Q, AQ is the distance on the surface of the heart chamber between points A and Q, and Δt(AB) is the difference in LAT values between the ECG signals from points B and A. The LAT-derived distance for points A, B is the product v·Δt(AB). It will be understood that to satisfy equation (2) the point Q may be in a plurality of locations, i.e., Q may be on a locus, or line, 100, where any point on the locus obeys equation (2). Line 100 is a curved line that predicts where an origin of the single source arrhythmia may be, and lines such as line 100 are also referred to herein as prediction lines. For points C, D, a point R can be the origin of the single source arrhythmia if equation (3) is valid:

DR − CR = v·Δt(CD)   (3)

where DR is the distance on the surface of the heart chamber between points D and R, CR is the distance on the surface of the heart chamber between points C and R, and Δt(CD) is the difference in LAT values between the ECG signals from points D and C. The LAT-derived distance for points C, D is the product v·Δt(CD). As for equation (2), equation (3) generates a locus or line 102, and point R may be any point on the locus.
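As a numerical illustration of equation (1), the following sketch annotates LATs on two idealized electrograms and converts the LAT difference into an LAT-derived distance. Annotating the LAT as the time of steepest negative slope is a common convention and is an assumption here, as are the waveform shapes and function names.

```python
import numpy as np

def local_activation_time(t, v):
    """Annotate the LAT of a unipolar electrogram as the time of steepest
    negative slope (max -dV/dt) -- a common convention, assumed here."""
    dv_dt = np.gradient(v, t)
    return t[np.argmin(dv_dt)]

def lat_derived_distance(t, v_a, v_b, conduction_velocity=1.0):
    """Equation (1): delta_P = v * delta_t for a pair of surface points.
    With t in ms and v in mm/ms the result is in mm."""
    dt = local_activation_time(t, v_b) - local_activation_time(t, v_a)
    return conduction_velocity * dt

# Illustrative electrograms: the downstroke at B lags A by 12 ms.
t = np.linspace(0, 200, 2001)                      # ms
downstroke = lambda t0: -np.tanh((t - t0) / 2.0)   # idealized unipolar shape
print(lat_derived_distance(t, downstroke(80.0), downstroke(92.0)))  # ~12 mm
```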
Line 102 is also a prediction line for the position of the origin of the single-source arrhythmia. Loci 100 and 102 intersect at a region 104, and since region 104 is on both loci, region 104 corresponds to a predicted origin of the single-source arrhythmia generating the ECG signals acquired at points 80A, 80B, 80C, and 80D. FIG. 5 is a flowchart of steps performed by processor 40 and operator 22 in determining the location of an origin of a single-source arrhythmia using system 20, according to an embodiment of the present invention. In an initial step 150 operator 22 inserts probe 32 into a chamber of heart 34 of the patient. Probe 32 is typically a multi-electrode probe having electrodes 50, such as the Pentaray probe produced by Biosense-Webster. The operator may also set values of parameters to be used by the processor in performing its calculations for the origin location, such as an assumed conduction velocity in heart 34 and a margin of error to be used in evaluating results. In one embodiment the conduction velocity v in the heart chamber is set to be 1.0 mm/ms, and an acceptable error E in determining the origin location is set to be 1 mm. It will be understood that both these figures are by way of example, and operator 22 may use other values for the conduction velocity and for the acceptable error. Alternatively, the conduction velocity may be calculated after steps 154 and 156 below have been implemented. In this case a wavefront of a conduction wave originating from the arrhythmia is mapped from locations and LAT values of points found in step 156. The conduction velocity may be calculated from pairs of points that are selected to have respective line segments, joining the points, that are parallel to an assumed wavefront vector of the conduction wave. In a data acquisition step 154, ECG signal data and positional signals are acquired from each of electrodes 50 of the probe. In an analysis step 156, processor 40 uses the positional signals to first identify locations of points 80, and then construct a 3D triangular mesh of the surface of the heart chamber. The mesh is constructed by joining the points with edges, as described above with reference to FIG. 2, so forming a mesh with triangle vertices 80 and triangle sides 84. The mesh and/or the surface generated by the mesh may be presented to operator 22 on display 48. The processor also analyzes the ECG signals to find the LAT values for each point 80 on the heart chamber surface contacted by electrodes 50, as described above with reference to FIG. 3. In a pair selection step 160, a pair of locations on the surface of the heart chamber, i.e., two locations that the processor identifies in step 156, are selected. The locations are herein termed X1, X2. In the following description the selection is assumed to be performed by operator 22, typically using the map presented to the operator in step 156. Once the pair has been selected, the processor determines the LAT values for each of the ECG signals of the pair, and then finds the difference in LAT values Δt. The processor then uses equation (1), with the conduction velocity v value set in step 150, to determine a path length difference, the LAT-derived distance ΔP, from the single source to the selected pair of locations.
In a plurality of locations generation step 164, the processor finds possible locations for the origin of the single-source arrhythmia, herein assigned the label S, by finding locations of S that satisfy equation (4):

SX1 − SX2 = v·Δt   (4)

where SX2 is the distance on the surface of the heart chamber between points S and X2, and SX1 is the distance on the surface of the heart chamber between points S and X1. The LAT-derived distance for points X1, X2 is the product v·Δt. (Equation (4) has the same form as equations (2) and (3); only the point identifiers have been changed.) In order to check if equation (4) is satisfied, processor 40 calculates values of SX1 and SX2 separately, as follows: for point X1 the processor finds the shortest distances, along sides 84, to each of the vertices 80 acquired in step 156. The processor also finds the shortest distances, along sides 84, from X2 to each of these vertices. In one embodiment the processor uses Dijkstra's algorithm to find the shortest distances. For each of the vertices 80, the processor finds the difference Δ in the shortest distances. In a comparison step 168, for each of the vertices 80, the processor checks if the difference Δ is close to the path length difference value ΔP found in step 160, i.e., the processor checks if expression (5) is correct:

|Δ − ΔP| ≤ E   (5)

where E is the margin of error set in initial step 150. If comparison 168 returns negative, the vertex 80 being checked is not considered to be a possible origin of the arrhythmia. In this case control proceeds to a comparison step 172, where the processor checks if all the vertices have been checked for the locations selected in step 160. If comparison step 168 returns positive, the vertex 80 being checked is a possible origin of the arrhythmia. In this case processor 40 may mark the vertex on the map presented to the operator in a mark vertex step 176, and control continues to comparison step 172. As stated above, in comparison step 172 the processor checks if all vertices have been checked for the locations selected in step 160. If the comparison returns negative, i.e., all vertices have not been checked, control returns to step 164. If comparison step 172 returns positive, the processor continues to a further comparison step 180, wherein the processor checks if more than one pair of locations, chosen in step 160, has been selected and analyzed in the iteration of steps 164, 168, 176, and 172. If multiple pairs of locations have not been selected (comparison 180 returning negative), i.e., only one pair of locations has been selected in step 160, then the processor selects another pair of locations, from those found in step 156, in a select pair location step 184. Control then returns to step 160, where the new location pair is analyzed. Typically, the locations of each pair are selected to be distinct, so that for two pairs there are four distinct locations. However, in some embodiments any two pairs may have one common location, so that in these cases there are three distinct locations. If comparison 180 returns positive, then two or more pairs of locations have been selected and analyzed. For each pair, vertices have been marked, in step 176, as a locus of possible points for the arrhythmia origin, and so there are two or more loci marked on the map. As described above, the intersection of the multiple loci corresponds to the predicted origin of the arrhythmia, so that when comparison 180 returns positive, in a final step 188 of the flowchart, processor 40 may mark the loci intersection as the predicted arrhythmia origin on the map.
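The locus search of steps 164-176 can be sketched as follows: Dijkstra's algorithm (named in the text) gives the shortest distances along the mesh sides from X1 and from X2 to every vertex, and vertices satisfying expression (5) are returned as the locus. The toy two-triangle mesh and the helper names are illustrative assumptions; the tolerance test itself follows expression (5).

```python
import heapq
from collections import defaultdict

def dijkstra(adj, source):
    """Shortest distances along mesh edges from `source` to every vertex.
    `adj` maps vertex -> list of (neighbor, edge_length)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def locus_vertices(adj, x1, x2, delta_p, error=1.0):
    """Vertices S with |(SX1 - SX2) - delta_P| <= E, i.e. candidates for
    the arrhythmia origin per equation (4) and expression (5)."""
    d1, d2 = dijkstra(adj, x1), dijkstra(adj, x2)
    return [v for v in adj if abs((d1[v] - d2[v]) - delta_p) <= error]

# Tiny illustrative mesh: a strip of two triangles (0,1,2) and (1,2,3).
edges = [(0, 1, 7.0), (0, 2, 7.0), (1, 2, 9.0), (1, 3, 7.0), (2, 3, 7.0)]
adj = defaultdict(list)
for a, b, w in edges:
    adj[a].append((b, w))
    adj[b].append((a, w))
print(locus_vertices(adj, x1=0, x2=3, delta_p=0.0))  # vertices 1 and 2
```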
In comparison step 180, the processor may stop the return to step 160 once a predetermined number of pairs of locations has been checked, and the number may be set by operator 22. In one embodiment the predetermined number of pairs is 10, from 20 locations. Alternatively, rather than the processor checking if multiple pairs have been checked, the operator may stop the return to step 160 once two or more location pairs have been checked, and invoke final step 188 so that the processor marks the intersection. The description above has assumed that the processor measures distances, from each location selected in step 160, to triangle vertices along sides 84. The distances are measured so that the processor can find the shortest distance to a triangle vertex. FIG. 6 illustrates an alternative method for measuring distances to pairs of locations, according to an embodiment of the present invention. Rather than measuring to triangle vertices, the processor may measure distances from the selected locations, along edges 84, to a sample point P in a given triangle of the mesh generated in step 156. As described above with reference to FIG. 2, processor 40 typically covers triangles of the triangular mesh with equally spaced sample points. FIG. 6 illustrates a triangle 200 of the triangular mesh, and sample points 204 within the triangle. An example of the sample points has been assigned a label P. Thus, in FIG. 6, if point A is one of the selected pair of locations in step 160, the processor first calculates distances from A to vertices V1, V2, V3 of triangle 200, along edges 84 of triangles (not shown in the figure) connecting to triangle 200. The processor then adds the lengths V1P, V2P, and V3P to the respective vertex distances to find possible distances from A to point P. The processor performs a similar calculation for point B as the other location of the selected pair. The description of the flowchart of FIG. 5 may be adapted, mutatis mutandis, to use the alternative method for measuring distances to pairs of locations described with reference to FIG. 6, and thus find one or more sample points corresponding to a predicted origin of the arrhythmia. The inventors have tested an embodiment of the present invention using data from patients who have undergone successful ablation. The following summarizes the results of the test. FIG. 7 is a schematic diagram illustrating, from a conceptual point of view, how an origin of an arrhythmia may be determined, according to an embodiment of the present invention. The figure assumes that a selected pair of locations, such as locations A, B of FIG. 4, have been drawn on a plane surface, herein assumed to be an xy plane. The figure further assumes that points A, B are separated by a distance z, and that a point C in the plane is a possible origin of an arrhythmia. As for the system described above with reference to FIG. 4, there is a local activation time difference between locations A, B, in the figure assumed to be t. There is thus, again as for the system of FIG. 4, a path difference between CA and CB of CV·t, where CV is the conduction velocity of the wave from the arrhythmia, giving the equation:

CV·t = CA − CB   (6)

It will be appreciated that equation (6) corresponds to equations (2), (3), and (4) above. As shown in the figure, if C is a vertical distance y above AB and a horizontal distance x from B, then both CA and CB can be expressed in terms of x, y, and z.
FIG. 8 is a schematic diagram illustrating a result obtained by the inventors, according to an embodiment of the present invention. Prediction curves for two pairs of locations were plotted on a 3D electroanatomic map where premature ventricular contraction (PVC) was occurring. One prediction curve comprises sets of dark gray points; a second prediction curve comprises sets of light gray points. (The actual map was in color, and the curves were in different colors.) The intersection of the two curves is marked in the figure by a white ellipse, corresponding to the predicted origin of the arrhythmia. The true origin, corresponding to a region where ablation was performed successfully, is shown in the figure, and is very close to the predicted origin. FIG. 9 is a diagram of results obtained by the inventors, according to an embodiment of the present invention. Patients with successful ablation of a focal wavefront, as confirmed by traditional mapping and successful ablation with a focal ablation lesion, were retrospectively enrolled. Two or more pairs of prediction curves were generated in each patient. For each prediction curve, conduction velocity was assessed in each patient using point pairs parallel to a mapped wavefront vector. The main outcome was the distance between the predicted and the true origins for each prediction curve pair. The inventors produced prediction curves for 28 cases. As shown in the figure, the overall results for the 28 intersections gave the distance between the predicted and true origins of the wavefront generated by the arrhythmia as 6.4±7.8 mm. The figure also gives a breakdown of distances between predicted and true origins for different types of arrhythmia. In the figure "ORT" is orthodromic reciprocating tachycardia, "PVI" is pulmonary vein isolation, "AT" is atrial tachycardia, and "PVC/VT" is premature ventricular contraction/ventricular tachycardia. Using univariate analysis, accuracy was found to be related to chamber of origin, conduction velocity, the standard deviation of conduction velocity measurements, the distance between the point pairs, and the cycle length of the rhythm, but not to the average distance of the point pairs to the wavefront origin or to the activation timing between the point pairs. Using multivariate analysis, only chamber of origin was significant. It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
11857308

DETAILED DESCRIPTION

FIG. 1 illustrates a system 100 for respiratory monitoring of a subject. The system 100 may be configured to generate a current signal S1 that is to be applied to a subject and may comprise a bioimpedance measurement sensor 110 to measure a bioimpedance signal S2 providing information of the bioimpedance of the subject, which may be further processed for monitoring respiration of the subject. The bioimpedance signal S2 may be provided together with a reference signal S3 to a processing unit 120 which may divide the bioimpedance signal S2 into an effort component S4 and a flow component S5. As shown in FIG. 1, the system 100 comprises a current signal injection module 112. The current signal injection module 112 may be configured to generate and output the current signal S1, which is to be applied to the subject. The current signal injection module 112 may comprise a current source for generating a current signal S1. The current signal injection module 112 may be configured to output an AC current signal. The system 100 further comprises a bioimpedance measurement sensor 110. The bioimpedance measurement sensor 110 may be configured to receive voltage input signals representing a voltage generated by the current signal S1 applied to the subject. The bioimpedance measurement sensor 110 may be configured to extract a measured bioimpedance signal S2 from the received voltage input signals. The bioimpedance measurement sensor 110 may be configured to process the received voltage input signals, e.g. by filtering the input signals, in order to extract relevant information. The bioimpedance measurement sensor 110 may comprise two or more electrodes 114, which may be arranged to be in contact with skin of the subject. The electrodes 114 may be connected to the current signal injection module 112 to receive the current signal S1 and provide the current signal through tissue of the subject. The electrodes 114 may also be connected to the bioimpedance measurement sensor 110 for providing voltage input signals that may be used for measuring the bioimpedance signal S2. The electrodes 114 may be arranged in a bipolar arrangement, wherein the same electrodes 114 are used for providing the current signal S1 to the subject and for acquiring the voltage input signals. However, the electrodes 114 may alternatively be arranged in a tetrapolar arrangement, wherein two electrodes are used for providing the current signal S1 to the subject and two other electrodes are used for acquiring the voltage input signals. More than two (or four) electrodes 114 may be provided, which may allow selection of which electrodes 114 are to be used in a measurement, so that the electrodes 114 providing the highest quality bioimpedance signal S2 may be selected. The selection of which electrodes 114 are to be used may be performed in set-up of the system 100 or may be dynamically changed during signal acquisition, e.g. when conditions for acquiring the bioimpedance signal change. The bioimpedance measurement sensor 110 with electrodes 114 may be configured to be attached on a thorax region of the subject. The bioimpedance measurement sensor 110 may be arranged on a carrier 116 configured for being arranged on a thorax region of the subject, wherein the electrodes 114 may be mounted to be exposed on the carrier 116, such that the electrodes 114 may be arranged in contact with the skin of the subject. The carrier 116 may for instance comprise an adhesive patch, a textile/garment being worn by the subject, or a belt, which may be configured to be attached around the torso of the subject.
When a bioimpedance measurement is performed based on electrodes 114 arranged on the thorax of a subject, chest expansion may cause a change in a current path between the electrodes 114, such that the bioimpedance is changed in relation to a respiratory effort. Also, air has a different impedance than tissue. As an amount of air present in the lungs varies during a respiratory cycle, the bioimpedance is also changed in relation to respiratory airflow. Thus, the bioimpedance measurement sensor 110 may be configured for acquisition of a bioimpedance signal S2 which holds information of both respiratory effort and respiratory airflow. The processing unit 120 may be configured to receive the bioimpedance signal S2 from the bioimpedance measurement sensor 110. The processing unit 120 may further be configured to receive a reference signal S3 from a reference measurement sensor 130. The reference signal S3 may be acquired so as to isolate respiratory effort from respiratory airflow, e.g. by using a sensor which is placed or configured for acquiring a signal which is only affected by either respiratory effort or respiratory airflow. Hence, the reference signal S3 may represent respiratory effort or respiratory airflow. The processing unit 120 may be configured to process the bioimpedance signal S2 and the reference signal S3 so as to divide the bioimpedance signal S2 into an effort component S4 representing respiratory effort and a flow component S5 representing respiratory airflow. The processing unit 120 may be implemented in hardware, or as any combination of software and hardware. The processing unit 120 may, for instance, be implemented as software being executed on a general-purpose computer. The system 100 may thus comprise one or more physical processors, such as a central processing unit (CPU), which may execute the instructions of one or more computer programs in order to implement functionality of the processing unit 120. Thus, the system 100 may comprise a single processing unit, which may provide a plurality of functionalities, e.g. as separate threads within the processing unit 120. The processing unit 120 may alternatively be implemented as firmware arranged e.g. in an embedded system, or as a specifically designed processing unit, such as an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA). The reference measurement sensor 130 may be part of and may be delivered with the system 100. The system 100 may thus be set up for communication between the reference measurement sensor 130 and the processing unit 120. However, the reference measurement sensor 130 may alternatively be separately delivered, e.g. by a different vendor than the vendor providing the system 100. A user may thus connect the reference measurement sensor 130 to the processing unit 120, e.g. by attaching a wire between the reference measurement sensor 130 and a port in a housing in which the processing unit 120 is arranged, whereby the processing unit 120 and the reference measurement sensor 130 may then exchange set-up messages for automatically setting up communication between each other. Alternatively, a user may initiate a discovery procedure for allowing a wireless communication between the reference measurement sensor 130 and the processing unit 120 to be established, again for automatically setting up communication between the reference measurement sensor 130 and the processing unit 120.
In a further alternative, the reference measurement sensor 130 and the bioimpedance measurement sensor 110 may be configured to separately communicate the reference signal S3 and the bioimpedance signal S2 to a remotely arranged processing unit 120, e.g. a processing unit 120 arranged "in the cloud". The signals may be communicated after an entire period of gathering the signals, such as signals acquired during a night's sleep of the subject. The processing unit 120 may then synchronize the signals before processing. A reference measurement sensor 130 configured to acquire a reference signal representing a respiratory effort may be any sensor which may be configured to acquire a representation of the respiratory effort. For instance, the reference measurement sensor 130 may include an oesophageal manometer, a respiratory inductance plethysmography (RIP) belt, a thoracoabdominal polyvinylidene fluoride (PVDF) belt, an accelerometer, or an electromyograph (EMG) sensor. A reference measurement sensor 130 configured to acquire a reference signal representing a respiratory airflow may be any sensor which may be configured to acquire a representation of the respiratory airflow. For instance, the reference measurement sensor 130 may include an oro-nasal thermal sensor, such as a thermistor, a polyvinylidene fluoride sensor, or a thermocouple, a nasal pressure transducer, a pneumotachograph sensor, or a spirometer. The processing unit 120 may be configured to receive reference signals S3 from a plurality of reference measurement sensors 130. The plurality of reference measurement sensors 130 may comprise only sensors configured to acquire a reference signal S3 representing respiratory effort, only sensors configured to acquire a reference signal S3 representing respiratory airflow, or one or more sensors configured to acquire a reference signal S3 representing respiratory effort combined with one or more sensors configured to acquire a reference signal S3 representing respiratory airflow. To illustrate these options, reference measurement sensors 130 are indicated by dashed lines in FIG. 1. The system 100 may comprise one or more housings, in which the bioimpedance measurement sensor 110, the processing unit 120 and the reference measurement sensor 130 may be arranged. The housings may be connected by wires for allowing communication between the sensors and the processing unit 120. Alternatively, one or more of the sensors 110, 130 and the processing unit 120 may be set up for wireless communication. The system 100 may thus be delivered ready to use, e.g. in a single package with all parts of the system 100 already set up to communicate with each other. The processing unit 120 may be arranged in a housing on the carrier 116. The reference measurement sensor 130 may also be arranged on the same carrier 116. However, in an alternative embodiment, the processing unit 120 may be arranged in a central housing, which may be separate from the carrier 116. The central housing may further comprise an output port for connection to an external unit, which may receive the effort component S4 and the flow component S5 for further processing of the components. Alternatively or additionally, the central housing may comprise a communication unit for wireless communication of the effort component S4 and the flow component S5 to the external unit. The central housing may also be connected to a display for enabling the effort component S4 and the flow component S5 to be output on the display. Also, the reference signal S3 may be output on the display.
This may allow a physician, nurse or any other person to manually inspect signals representing respiration of the subject, e.g. for manual analysis of the respiration. Referring now to FIG. 2, processing of the bioimpedance signal S2 and the reference signal(s) S3 will be further described. The bioimpedance signal S2 may first be provided to a preprocessing unit 200. The preprocessing unit 200 may apply preprocessing of the bioimpedance signal S2, which may be configured to filter the bioimpedance signal S2, e.g. for noise removal and/or for removing contribution of cardiac activity in the bioimpedance signal S2. The preprocessing of the bioimpedance signal S2 may also or alternatively be configured to perform one or more of data cleaning, resampling, and shifting of the bioimpedance signal S2. The preprocessing unit 200 may output a cleaned bioimpedance signal S2′ which is a combined representation of respiratory effort and respiratory airflow. The cleaned bioimpedance signal S2′ may be provided to a signal separator 202. The signal separator 202 may also receive a reference signal S3 from a reference measurement sensor 130. The reference signal S3 may also have been subject to preprocessing, e.g. to remove noise, before being received by the signal separator 202. The signal separator 202 may or may not first apply a transformation to the bioimpedance signal S2′. As will be exemplified below, processing of the bioimpedance signal S2′ may be performed directly on the bioimpedance signal S2′, on a derivative Z′ of the bioimpedance signal S2′, or on a transform using the derivative Z′ and the square Z² of the bioimpedance signal S2′ (Z′/Z²). It should also be realized that transformation of the bioimpedance signal S2′ may also include multiplying by a constant K1 and adding another constant K2. The signal separator 202 may then apply an algorithm for dividing the, possibly transformed, bioimpedance signal S2′ into a contribution from respiratory effort and a contribution from respiratory airflow using the information in the reference signal S3. The signal separator 202 may possibly further process the signals after dividing of the bioimpedance signal S2′. The signal separator 202 may then output an effort component S4, indicating the contribution from respiratory effort, and a flow component S5, indicating the contribution from respiratory airflow. The contribution of respiratory airflow may optionally be provided to a signal transformer 204. The signal transformer 204 may process the flow component S5, by integrating the flow component S5 and possibly adding a constant, in order to provide an estimated measure of lung volume. The effort component S4 and the flow component S5 may further be provided to separate further processing steps, which may be specifically adapted for processing of the component received, e.g. for further cleaning the signals. The processing made by the signal separator 202 according to a first embodiment using a blind source separation (BSS) algorithm will now be described. In this example, a model used for representing relations between contributions of effort and airflow is described by:

1/Z = 1/Zl + 1/Zc + 1/Zn

where
Z: combined measured bioimpedance;
Zc: impedance of the chest wall;
Zl: impedance of the lungs; and
Zn: impedance of other tissues and heart/blood vessels.

In addition, applying the derivatives of Zc and Zl: Z′c represents the respiratory effort and Z′l represents the respiratory airflow.
A blind source separation algorithm may then use reference signal(s) as observable variables providing reference effort and/or reference flow. This may be used to estimate the underlying effort source and flow source signals. Assuming preprocessing filters out unwanted noise, e.g. the impedance of other tissues Zn, and interference from other physiological processes such as cardiac activity, the model may be described as:

Z = Zc·Zl/(Zc + Zl), where Z is the observed bioimpedance.

Applying a derivative, measures of flow and effort are obtained, Z′l and Z′c, which relate to a derivative of the observed bioimpedance Z′ as follows:

Z′/Z² = Z′l/Zl² + Z′c/Zc²

The measure of flow, Z′l, is a function of the flow component sfl, a source signal that the BSS algorithm targets to estimate and separate, i.e. Z′l = Fl(sfl), where Fl denotes the function relating Z′l to the flow component sfl. Similarly, the derivative measure of effort, Z′c, is a function of the effort component seff, another source signal that the BSS algorithm targets to estimate and separate, i.e. Z′c = Fc(seff), where Fc denotes the function relating Z′c to the effort component seff. Further, each reference signal is a transformation of the source signal, and may be represented as:

Xref,fl = Gfl(sfl); and
Xref,eff = Geff(seff),

where Xref,fl is a reference signal representing respiratory airflow and Gfl denotes the function relating Xref,fl to the flow component sfl, and where Xref,eff is a reference signal representing respiratory effort and Geff denotes the function relating Xref,eff to the effort component seff. Blind source separation algorithms may use different approaches to extract the source signals from the observed variables. The acquired bioimpedance signal as well as the effort component and the flow component are typically sinusoidal (having specific frequency and phase, time varying). This may be exploited by the blind source separation algorithm for simple parameterization of transformation functions to find the source signals. If several reference signals are available, each of these may be used by the blind source separation algorithm, with a respective function relating the reference signal to the source component. However, according to an alternative, a single reference signal may be formed based on a plurality of reference signals.
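The additive structure that the separation relies on can be verified numerically. The sketch below, with purely illustrative sinusoidal parameters, builds synthetic chest-wall and lung components, forms the observed impedance Z = Zc·Zl/(Zc + Zl), and checks that Z′/Z² = Z′c/Zc² + Z′l/Zl² holds to within the accuracy of a central-difference derivative.

import math

fs = 100.0  # assumed sample rate in Hz
t = [i / fs for i in range(1000)]
zc = [50.0 + 2.0 * math.sin(2 * math.pi * 0.25 * ti) for ti in t]        # effort
zl = [80.0 + 1.0 * math.sin(2 * math.pi * 0.30 * ti + 1.0) for ti in t]  # flow
z = [c * l / (c + l) for c, l in zip(zc, zl)]  # observed bioimpedance

def derivative(x):
    # Central differences (end points are crude and skipped below).
    return [(x[min(i + 1, len(x) - 1)] - x[max(i - 1, 0)]) * fs / 2.0
            for i in range(len(x))]

dz, dzc, dzl = derivative(z), derivative(zc), derivative(zl)
for i in range(1, len(t) - 1):
    lhs = dz[i] / z[i] ** 2
    rhs = dzc[i] / zc[i] ** 2 + dzl[i] / zl[i] ** 2
    assert abs(lhs - rhs) < 1e-6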
The processing made by the signal separator 202 according to a second embodiment using an adaptive filter will now be described. In this embodiment, surrogates of the effort component and the flow component are used (i.e. signals related to the effort component and the flow component, respectively). Then, an additive model for the surrogates of effort and flow components of the bioimpedance signal may be used and less complex signal processing may be used in order to divide the bioimpedance signal into the effort component and the flow component. Thus, the processing of the signal separator 202 may be faster and may require less computer resources. However, at least in some cases, the use of the blind source separation algorithm as described in the first embodiment above may more accurately extract the effort and flow components. In this embodiment, the relation Z′/Z² = Z′c/Zc² + Z′l/Zl², after appropriate preprocessing to form the bioimpedance signal Z as earlier described, is used. After measurement of Z, a transformation of the bioimpedance signal BioZt may be computed as BioZt = Z′/Z². Further, correspondingly transformed signals of the effort component and the flow component may be used as surrogates, i.e. Z′l/Zl² as a surrogate for the flow component and Z′c/Zc² as a surrogate for the effort component, instead of the direct flow estimate (Z′l) and the direct effort estimate (Z′c). Thus, it is possible to apply a simpler signal processing method, such as Kalman or Wiener filtering. For instance, Wiener filtering may be used, as is generally illustrated in FIG. 3. In the present case, the input signal (denoted x[n] in FIG. 3) is the transformation of the bioimpedance signal, BioZt. The reference signal (denoted d[n] in FIG. 3) is the reference signal received by the processing unit 120, which may be either Xref,fl (if a reference representing respiratory airflow is received) or Xref,eff (if a reference representing respiratory effort is received). The reference signal may alternatively be a transformed measure of the signal received by the processing unit, such as X′ref/Xref². For simplicity, only the reference signal Xref,fl or Xref,eff is considered below. Based on this model, the Wiener filter (denoted as f) needs to be computed, such that the Wiener filter will minimize a certain cost function of the error (denoted e[n] in FIG. 3). Typically, a Mean Square Error may be used as a cost function. Then, the Wiener filter is computed based on the autocorrelation (an estimate) of the input signal BioZt and the cross-correlation (an estimate of the cross-correlation with finite samples) between BioZt and Xref,eff or Xref,fl. Once the filter coefficients (arbitrary filter length) are computed, it is possible to obtain a component from the transformed bioimpedance signal after filtering, i.e. BioZt,f = BioZt * f. If the reference signal is a representation of respiratory effort, Xref,eff, the filter then provides the following surrogate estimation of respiratory effort: SurrBioZeff = BioZt,f, which is a measure of Z′c/Zc², and which is an effort component of the transformed bioimpedance signal. Then, it is also possible to compute the surrogate estimation of respiratory airflow as: SurrBioZfl = BioZt − BioZt,f. Similarly, if the reference signal is a representation of respiratory airflow, Xref,fl, the filter then provides the following surrogate estimation of respiratory airflow: SurrBioZfl = BioZt,f, which is a measure of Z′l/Zl², and which is a flow component of the transformed bioimpedance signal. Then, it is also possible to compute the surrogate estimation of respiratory effort as: SurrBioZeff = BioZt − BioZt,f. The computed surrogate estimations of respiratory effort and respiratory flow may be sufficient for signal representation, given that the surrogate estimations are proportional to the chest wall impedance variations (effort) and the lung impedance variations (flow). Thus, the surrogate estimations may be output as representations of the effort component and the flow component. However, it is also possible to generate the estimated Z′l and Z′c signals (starting from the surrogate estimations, Z′l/Zl² and Z′c/Zc², respectively). The generation may include the following steps: integrate the surrogate estimation, remove a DC component, apply a negative inversion and differentiate the signal, whereby the estimated Z′l and Z′c signals may be obtained. The extracted effort component and flow component may be used in detection of respiratory events. The effort component and the flow component may also be used in classifying of respiratory events based on an indication received that a respiratory event is occurring.
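As a concrete illustration of this second embodiment, the following minimal sketch computes an FIR Wiener filter from sample auto- and cross-correlations and splits the transformed bioimpedance into the two surrogate components. It assumes a mean-square-error cost, an arbitrary filter length, and the availability of numpy for the linear solve; it is not the patent's implementation.

import numpy as np

def wiener_coefficients(x, d, order=32):
    # Solve R f = p, where R is the (Toeplitz) autocorrelation matrix of the
    # input x and p the cross-correlation between x and the reference d.
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    n = len(x)
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(order)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    p = np.array([np.dot(d[k:], x[: n - k]) / n for k in range(order)])
    return np.linalg.solve(R, p)  # assumes R is nonsingular

def separate(bio_zt, ref_eff, order=32):
    # With an effort reference: BioZt*f estimates the effort surrogate and
    # the residual BioZt - BioZt*f the flow surrogate (the roles swap when
    # a flow reference is supplied instead).
    f = wiener_coefficients(bio_zt, ref_eff, order)
    surr_eff = np.convolve(bio_zt, f)[: len(bio_zt)]
    surr_flow = np.asarray(bio_zt, dtype=float) - surr_eff
    return surr_eff, surr_flow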
As illustrated in FIG. 4, the processing unit 120 may be configured to receive a respiratory event signal, in addition to receiving the reference signal(s) S3. The respiratory event signal may provide a classification into one or more categories of respiratory events and may also provide an indication of a time period during which the respiratory event occurs/occurred. The respiratory event signal may be received from a respiratory event detector 400. The respiratory event detector 400 may process one or more of the bioimpedance signal S2, the reference signal(s) S3 or other signals in order to determine respiratory events. The respiratory event signal may alternatively be provided through manual input, e.g. by a nurse providing manual annotation of an acquired signal during respiratory monitoring. Referring now to FIG. 5, the processing of the bioimpedance signal S2 in order to divide the bioimpedance signal S2 into an effort component and a flow component may be simplified. Thus, a selector 500 may select how the bioimpedance signal S2 is to be processed based on a type of respiratory event occurring. The selector 500 may transfer the bioimpedance signal S2 to a signal separator corresponding to the respiratory event. If an obstructive sleep apnea (OSA) event occurs, a signal separator 502 may operate on the bioimpedance signal. Then, the bioimpedance signal, BioZ, may be considered to be equivalent to the respiratory effort component within the OSA period, i.e. BioZ ≈ BioZeff. The flow component, BioZfl, is in this case 0, as no airflow occurs during OSA. The OSA periods may be used to parameterize the function feff of respiratory effort, as BioZeff = feff(Xref,eff). This may then be used outside OSA periods as well for estimating the effort component. Further, outside OSA periods, the flow component may then be estimated as BioZfl = BioZ − feff(Xref,eff). If a central sleep apnea (CSA) event occurs, neither respiratory effort nor respiratory airflow occurs. The signal separator 504 then represents the bioimpedance as BioZ = 0. This is not further used for estimating the flow component or effort component outside CSA periods. If an obstructive hypopnea (HA) event occurs, a signal separator 506 may operate on the bioimpedance signal. Then, the bioimpedance signal may be represented as BioZ = an·BioZeff + bn·BioZfl, where bn < b0, and where:

a0 is the weighting coefficient for the effort component in periods without respiratory events;
b0 is the weighting coefficient for the contribution of the flow component in periods without respiratory events;
an is the weighting coefficient for the effort component within obstructive HA periods; and
bn is the weighting coefficient for the contribution of the flow component within obstructive HA periods.

It may be possible to make assumptions that may be used in estimation of an and bn. For instance, the coefficient for the effort component an may be set to equal the coefficient a0. The coefficient for the flow component bn could be chosen e.g. based on a range assumption, e.g. 0.1·b0 ≤ bn ≤ 0.3·b0. A relation in this range may be used, e.g. bn = 0.2·b0. If a central hypopnea (HA) event occurs, a signal separator 508 may operate on the bioimpedance signal. Then, the bioimpedance signal may be represented as BioZ = am·BioZeff + bm·BioZfl, where bm < b0 and am < a0, and where am is the weighting coefficient for the effort component within central HA periods, and bm is the weighting coefficient for the contribution of the flow component within central HA periods. As for the discussion with regard to obstructive HA events, assumptions could be used for relating the coefficients for the effort component and the flow component in periods without respiratory events to the coefficients determined in the central HA periods. If there is no event occurring, the signal separator 202 described above with reference to FIG. 2 may operate on the bioimpedance signal.
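A minimal sketch of this event-gated selection is given below. The event label is assumed to be supplied by the respiratory event detector 400 or by manual annotation, the effort estimate feff(Xref,eff) is assumed to have been parameterized during OSA periods as described above, and the central-HA coefficients are purely illustrative values satisfying am < a0 and bm < b0; this is an interpretation for illustration, not the patent's implementation.

def separate_for_event(bio_z, event, effort_from_ref, a0=1.0, b0=1.0):
    # effort_from_ref: f_eff(Xref,eff), the effort estimate obtained from
    # the reference signal; bio_z: the (preprocessed) bioimpedance samples.
    n = len(bio_z)
    if event == "CSA":
        return [0.0] * n, [0.0] * n     # neither effort nor airflow present
    if event == "OSA":
        return list(bio_z), [0.0] * n   # BioZ ~ BioZeff, flow component is 0
    if event == "obstructive_HA":
        a, b = a0, 0.2 * b0             # assumed: an = a0 and bn = 0.2*b0
    elif event == "central_HA":
        a, b = 0.5 * a0, 0.5 * b0       # illustrative, with am < a0, bm < b0
    else:
        a, b = a0, b0                   # no event: coefficients of FIG. 2 case
    effort = list(effort_from_ref)
    # Invert BioZ = a*BioZeff + b*BioZfl for the flow component.
    flow = [(z_i - a * e_i) / b for z_i, e_i in zip(bio_z, effort)]
    return effort, flow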
In the above, the inventive concept has mainly been described with reference to a limited number of examples. However, as is readily appreciated by a person skilled in the art, other examples than the ones disclosed above are equally possible within the scope of the inventive concept, as defined by the appended claims.
11857309

DETAILED DESCRIPTION

It is to be understood that the following disclosure describes several exemplary embodiments for implementing different features, structures, or functions of the invention. Exemplary embodiments of components, arrangements, and configurations are described below to simplify the present disclosure; however, these exemplary embodiments are provided merely as examples and are not intended to limit the scope of the invention. Additionally, the present disclosure can repeat reference numerals and/or letters in the various embodiments and across the figures provided herein. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations. Moreover, the formation of a first feature over or on a second feature in the description that follows can include embodiments in which the first and second features are formed in direct contact, and can also include embodiments in which additional features can be formed interposing the first and second features, such that the first and second features are not in direct contact. Finally, the embodiments presented below can be combined in any combination of ways, i.e., any element from one embodiment can be used in any other embodiment, without departing from the scope of the disclosure. Additionally, certain terms are used throughout the following description and claims to refer to particular components. As one skilled in the art will appreciate, various entities can refer to the same component by different names, and as such, the naming convention for the elements described herein is not intended to limit the scope of the invention, unless otherwise specifically defined herein. Further, the naming convention used herein is not intended to distinguish between components that differ in name but not function. Additionally, in the following discussion and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to." All numerical values in this disclosure can be exact or approximate values unless otherwise specifically stated. Accordingly, various embodiments of the disclosure can deviate from the numbers, values, and ranges disclosed herein without departing from the intended scope. Furthermore, the term "or" is intended to encompass both exclusive and inclusive cases, i.e., "A or B" is intended to be synonymous with "at least one of A and B," unless otherwise expressly specified herein. The indefinite articles "a" and "an" refer to both singular forms (i.e., "one") and plural referents (i.e., one or more) unless the context clearly dictates otherwise. The terms "up" and "down"; "upward" and "downward"; "upper" and "lower"; "upwardly" and "downwardly"; "above" and "below"; and other like terms as used herein refer to relative positions to one another and are not intended to denote a particular spatial orientation since the apparatus and methods of using the same can be equally effective at various angles or orientations. A detailed description of the respiration monitoring device and methods for using the same will now be provided. Each of the appended claims defines a separate invention, which for infringement purposes is recognized as including equivalents to the various elements or limitations specified in the claims. Depending on the context, all references to the "invention" may in some cases refer to certain specific embodiments only.
In other cases, it will be recognized that references to the "invention" will refer to subject matter recited in one or more, but not necessarily all, of the claims. Each of the inventions will now be described in greater detail below, including specific embodiments, versions and examples, but the inventions are not limited to these embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the inventions, when the information in this disclosure is combined with publicly available information and technology. FIG. 1 depicts a schematic of an exterior of an illustrative respiration monitoring device 100, according to one or more embodiments. The respiration monitoring device 100 can include a case or enclosure 101 that provides a housing for a display screen 102, power button 103, and/or reset button 104. The case 101 can include an adhesive, clip or other mechanism for attaching the device 100 to a patient to be monitored. Exemplary locations include direct contact to the patient's body (e.g., skin, hair, fur, scales, feathers, or the like) or indirect contact via clothing (e.g., shirts, undershirts, gowns, blouses, or the like), badges, necklaces, wrappings, or the like, on or near the torso of the patient. For example, the case 101 can be attached to the breast pocket of a shirt. The term "patient" as used herein refers to humans as well as any other breathing life form, including dogs, cats, cows, sheep, pigs, reptiles, fish, whales, sharks, and all other animals. The case 101 can be readily detached from the patient after the monitoring period ends. The respiration monitoring device 100 can be made of any one or more disposable materials or sterilizable materials (e.g., plastics, stainless steel, glass, rubber, or the like) for reuse. The respiration monitoring device 100 can be disposable and intended for a single use or for a single patient and then be thrown away. The respiration monitoring device 100 can also be sterilized after each use or after each patient. The respiration monitoring device 100 can be re-used multiple times for the same patient or for multiple patients, or be intended for single use only. The power button 103 can initiate a monitoring process and the reset button 104 can restart an ongoing monitoring process. The monitoring process can be performed automatically by the respiration monitoring device 100. The respiration monitoring device 100 can perform the monitoring process hands-free from any operator (e.g. holding the device, manual counting, manual timing, manual calculations, or the like). Hands-free operation has a distinct advantage over manual operation because it can minimize the opportunity for operator error. The display screen 102 can display the rate of respiration. The monitoring display 102 can also be used to display the date, time, unit ID, and/or battery status of the device 100. The size of the display screen 102 can vary and is typically as big as the case 101 will allow. The respiration monitoring device 100 can also include RF, IR, WiFi, cellular, or other suitable modes of communication for transmitting output information to a receiver or other device that is located remotely from the patient, such as a nurse's station or call center or the like. FIG. 2 depicts a schematic of an interior of the respiration monitoring device 100, according to one or more embodiments. The interior case 105 can include a controller 106, two or more accelerometers 107A,B, at least one acoustic sensor 108, and a power supply 109.
The accelerometers 107A,B can be any suitable motion detection device capable of detecting motion in the x-, y-, and z-axis direction, and/or rotation relative to the z-axis (i.e. "tilt"). As used herein, the "x-axis" refers to the axis that goes side to side in a horizontal plane, the "y-axis" refers to the axis that is up to down in a vertical plane and is orthogonal to the x-axis, and the "z-axis" is forward to backward in the same horizontal plane as the x-axis. Each accelerometer 107A,B can contain a cantilever beam attached to a mass and a fixed beam. Under the influence of external accelerations, i.e. the patient's breath, the mass deflects from its neutral position and this deflection can be measured in an analog or digital manner. The accelerometers 107A,B can measure deflection by measuring capacitance between a set of fixed beams and a set of beams attached to the mass. The measured acceleration can be output to other devices from the accelerometers 107A,B as a motion signal containing analog or digital data. In operation, the device 100 is placed on the patient, preferably near or proximate the heart or lungs. The first or motion accelerometer 107A within the device 100 measures or otherwise determines the x-, y-, and z-axis accelerations based on relative displacement of the device 100, which is caused by the patient's breathing. The second or rotation accelerometer 107B within the device 100 measures or otherwise determines the rotation of the device 100 relative to the z-axis (i.e. "tilt"), which may also be caused by the patient's breathing. As used herein, the term "tilt" refers to rotation relative to the z-axis. Suitable accelerometers 107A,B are capable of detecting accelerations greater than 40 cm/sec², greater than 60 cm/sec², and greater than 80 cm/sec². For example, suitable accelerometers 107A,B can detect accelerations of about 40 cm/sec², about 60 cm/sec², or about 80 cm/sec² to about 120 cm/sec², about 180 cm/sec², or about 240 cm/sec². The acoustic sensor 108 can be any sensor that is capable of detecting sound between 0 Hz and 1500 Hz. The acoustic sensor 108 can contain a flexible membrane and a fixed plate with perforations. Under the influence of external sound waves, the flexible membrane deflects due to changes in pressure, and this deflection can be measured in an analog or digital manner. In a preferred embodiment, two or more acoustic sensors 108 are used. Any two acoustic sensors 108 can be spaced apart to provide stereophonic data. For example, the sensors 108 can be spaced 1 mm to 5 mm apart, 5 mm to 20 mm apart, or 20 mm to 75 mm apart. Stereophonic data is sound data produced by a pair of acoustic sensors in which specific sounds can be located and isolated through triangulation. Monophonic data is sound data produced by a single acoustic sensor. Stereophonic data has a distinct advantage over monophonic data because it provides more accurate sound detection. The measured sound can be output to other devices from the acoustic sensor 108 as a sound signal containing analog or digital data. FIG. 3 depicts a schematic of an illustrative respiration monitoring device controller 106, according to one or more embodiments. The controller 106 can include a memory 201, a processor 205, and a network adapter 206. The controller 106 can receive a motion signal from the accelerometers 107A,B, which provide respiratory motion data 207, and a sound signal from the acoustic sensors 108, which provide respiratory sound data 208.
The controller 106 can provide output to the display screen 102 or one or more external devices 300. The memory 201 can store respiratory motion data 207 and respiratory sound data 208. The memory 201 can include one or more lookup tables 202 containing distance values and/or sound values, a real-time clock 203, and processor instructions 204. The processor instructions 204 direct the processor 205 to filter respiratory motion data 207 using the lookup table 202 in order to remove motion (e.g., standing, sitting, sitting up, laying down, talking, or the like) that is not associated with breathing. The processor instructions 204 can contain an algorithm to convert accelerations detected by the accelerometers 107A,B to distance values using time data from the real-time clock 203. The processor instructions 204 can contain a filter to remove distance values greater than 5 cm, greater than 7.5 cm, and greater than 10 cm. The processor instructions 204 can further direct the processor 205 to calculate local maximum distance values in order to determine when the patient is between inhalation and exhalation based upon the accelerometers 107A,B. The processor instructions 204 can direct the processor 205 to filter respiratory sound data 208 using the lookup table 202 in order to remove sound values (e.g., heartbeat, intestinal, stomach, talking, coughing, sneezing, standing, sitting, or the like) that are not associated with breathing. The processor instructions 204 can contain a filter to remove sound values lower than 600 Hz, lower than 500 Hz, and lower than 400 Hz, and greater than 1100 Hz, greater than 1200 Hz, and greater than 1300 Hz. The processor instructions 204 can further direct the processor 205 to calculate local minimum sound values in order to determine when the patient is between inhalation and exhalation based upon the acoustic sensors 108. The processor instructions 204 can then direct the processor 205 to count and store a completed breath by comparing the time of a local maximum distance value and the time of a local minimum sound value. Local minimum respiratory sound data can occur between every inhalation and exhalation. Local maximum distance values can only occur after inhalation and before exhalation. The processor 205 can count completed breaths by counting every local maximum distance value that occurs during a local minimum sound value. The processor 205 can then calculate a respiration rate by dividing the breath count by the time elapsed according to the real-time clock 203. In a preferred embodiment, the processor 205 can send the respiration rate to the display screen 102. In some cases it can be necessary to account for input disruptions in the respiratory motion data 207 or respiratory sound data 208. In such cases, the recording process can be reset using the reset button 104. The processor 205 cannot count completed breaths without comparing the times of distance values and sound values. Exemplary input disruptions include nearby conversation, environmental noises, sudden movement, repositioning of the respiration monitoring device, or the like. The network adapter 206 can receive the calculated respiration rate from the processor 205. The network adapter 206 can transmit data from the respiration monitoring device 100 to an external device 300. In one example, the network adapter 206 transmits data to an external computer system wirelessly.
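The counting logic described above can be sketched as follows, assuming the motion and sound data arrive as lists of (time stamp, value) pairs that have already been filtered against the lookup table; the matching window is an illustrative parameter, not a value from the disclosure.

def local_extrema(samples, kind):
    # Indices of local maxima ("max") or minima ("min") in (time, value) pairs.
    indices = []
    for i in range(1, len(samples) - 1):
        prev_v, v, next_v = samples[i - 1][1], samples[i][1], samples[i + 1][1]
        if kind == "max" and v > prev_v and v > next_v:
            indices.append(i)
        elif kind == "min" and v < prev_v and v < next_v:
            indices.append(i)
    return indices

def count_breaths(distance_data, sound_data, window=0.3):
    # A breath is counted when a local-maximum distance value occurs during
    # a local-minimum sound value, i.e. their time stamps nearly coincide.
    max_times = [distance_data[i][0] for i in local_extrema(distance_data, "max")]
    min_times = [sound_data[i][0] for i in local_extrema(sound_data, "min")]
    return sum(1 for tm in max_times
               if any(abs(tm - ts) <= window for ts in min_times))

def respiration_rate(breath_count, elapsed_seconds):
    # Breaths per minute: breath count divided by the elapsed time from the
    # real-time clock.
    return breath_count * 60.0 / elapsed_seconds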
FIG. 4 depicts a graph illustrating a relationship between respiration sounds and respiration motion, according to one or more embodiments. The graph illustrates the filtered distance values and filtered sound values from the processor 205. The actual distance values and sound values during operation can be different than shown. Embodiments of the present disclosure further relate to any one or more of the following paragraphs:

1. A device for determining respirations in a patient, the device comprising: a first accelerometer for detecting a motion in the x, y and z-axis; a second accelerometer for detecting a rotation about the z-axis; at least one acoustic sensor for detecting sound related to the patient's breath; a memory comprising a lookup table, a real-time clock, and one or more instructions; a processor for receiving one or more motion signals from each accelerometer and one or more sound signals from each acoustic sensor, wherein the processor: receives the one or more signals from each accelerometer and correlates the one or more signals to a distance value; receives the one or more signals from the acoustic sensors and correlates the one or more signals to a sound value; filters the distance value and the sound value according to limitations stored in the lookup table; assigns a time stamp to each filtered value using the real-time clock within the memory; determines local maximums for the filtered motion values and local minimums for the filtered sound values; and matches the local maximums for the filtered motion values to the local minimums for the filtered sound values to confirm a completed breath from the patient.

2. The device of paragraph 1, wherein the memory further comprises instructions that cause the processor to disregard the local minimums for the filtered sound values that do not match the local maximums for the filtered motion values.

3. The device of paragraph 1 or 2, wherein the memory further comprises instructions that cause the processor to count the completed breaths and calculate a respiration rate using the real-time clock within the memory.

4. The device according to any paragraph 1 to 3, further comprising a display screen configured to display the respiration rate.

5. The device according to any paragraph 1 to 4, wherein the memory further comprises instructions that cause the processor to wirelessly transfer the complete breath count to an external device.

6. The device according to any paragraph 1 to 5, wherein the device is attachable to a surface on the body.

7. The device according to any paragraph 1 to 6, wherein the device is made of disposable materials.
8. A method for monitoring respirations in a patient, comprising: locating a device about a surface of the patient, wherein the device comprises: a first accelerometer for detecting a motion in the x, y and z-axis; a second accelerometer for detecting a rotation about the z-axis; at least one acoustic sensor for detecting sound related to the patient's breath; a memory comprising a lookup table, a real-time clock, and one or more instructions; and a processor that receives one or more signals from each accelerometer, wherein the one or more signals correlate to a distance value for the motion detected by the accelerometers; receives one or more signals from the acoustic sensors, wherein the one or more signals correlate to a sound value for the sound detected by the acoustic sensors; filters the distance value and sound value according to limitations stored in the lookup table; assigns a time stamp to each filtered value using the real-time clock within the memory; determines local maximums for the filtered motion values and local minimums for the filtered sound values; and matches the local maximums for the filtered motion values to the local minimums for the filtered sound values to confirm a completed breath from the patient.

9. The method according to paragraph 8, wherein the memory further comprises instructions that cause the processor to disregard the local minimums for the filtered sound values that do not match the local maximums for the filtered motion values.

10. The method according to paragraph 8 or 9, wherein the memory further comprises instructions that cause the processor to count the completed breaths and calculate a respiration rate using the real-time clock within the memory.

11. The method according to any paragraph 8 to 10, wherein the device further comprises a display screen, wherein the display screen displays the respiration rate.

12. The method according to any paragraph 8 to 11, wherein the memory further comprises instructions that cause the processor to wirelessly transfer the counted breath count to an external device.

13. The method according to any paragraph 8 to 12, wherein the device is attachable to a surface on the body.

14. The method according to any paragraph 8 to 13, wherein the device is disposable after a single use or a single patient.

Certain embodiments and features have been described using a set of numerical upper limits and a set of numerical lower limits. It should be appreciated that ranges including the combination of any two values, e.g., the combination of any lower value with any upper value, the combination of any two lower values, and/or the combination of any two upper values, are contemplated unless otherwise indicated. Certain lower limits, upper limits and ranges appear in one or more claims below. All numerical values are "about" or "approximately" the indicated value, meaning the values take into account experimental error, machine tolerances and other variations that would be expected by a person having ordinary skill in the art. The foregoing has also outlined features of several embodiments so that those skilled in the art can better understand the present disclosure. Those skilled in the art should appreciate that they can readily use the present disclosure as a basis for designing or modifying other methods or devices for carrying out the same purposes and/or achieving the same advantages of the embodiments disclosed herein.
Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they can make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure, and the scope thereof is determined by the claims that follow. Various terms have been defined above. To the extent a term used in a claim is not defined above, it should be given the broadest definition persons in the pertinent art have given that term as reflected in at least one printed publication or issued patent. Furthermore, all patents, test procedures, and other documents cited in this application are fully incorporated by reference to the extent such disclosure is not inconsistent with this application and for all jurisdictions in which such incorporation is permitted. | 20,913 |
11857310

DETAILED DESCRIPTION OF EMBODIMENTS

Hereinafter, an embodiment of the physiological parameter processing apparatus of the disclosure will be described with reference to the drawings. FIG. 1 is a conceptual diagram of a physiological parameter processing apparatus 1 of the embodiment of the disclosure. The physiological parameter processing apparatus 1 is an apparatus for analyzing in real time a measurement result of a respiratory gas. As shown in FIG. 1, for example, the physiological parameter processing apparatus 1 is configured by including an input interface section 2, an analyzing section 3, a storage section 4, a display 5, an alarm controller 6, and a speaker 7. The display 5 is an example of a displaying section, and the alarm controller 6 and the speaker 7 are examples of a notifying section.

<Input Interface Section>

The input interface section (hereinafter, referred to as "input I/F") 2 is configured so as to acquire respiration data that are produced based on a respiratory gas which is obtained from at least one of the mouth and nose of the subject. The respiratory gas consists of an expiration gas and inspiration gas of the subject, and exhibits vital signs information useful in analyzing the presence/absence of apnea, hypopnea, and upper airway occlusion. The respiration data acquired by the input I/F 2 contain detection results relating to a respiratory gas, such as the expiratory volume, the inspiratory volume, and detection time periods of these volumes. The respiration data are data which are produced from the respiratory gas. The respiratory gas is an example of measurement results of the subject. For example, the respiration data may be data relating to the air flow component, snore component, or the like which is produced by signal processing the respiratory pressure of the respiratory gas. The input I/F 2 may be configured so as to acquire respiration data from various media. From a respiratory gas sensor using a cannula attached to at least one of the mouth and nose of the subject, the input I/F 2 may acquire respiration data which are analyzed and produced by the sensor, through a connector that connects the sensor with the physiological parameter processing apparatus 1. Alternatively, the input I/F 2 may be configured so as to acquire respiration data which are to be analyzed by the physiological parameter processing apparatus 1, through a wired or wireless communication network. The input I/F 2 is configured so as to output the acquired respiration data to the analyzing section 3. The input I/F 2 may be configured so as to include various wired connection terminals for communicating with various media through a communication network, and various processing circuits for wireless connections, and meet communication standards for communicating through the communication network. Here, the communication network may be a LAN (Local Area Network), a WAN (Wide Area Network), the Internet, or the like. The input I/F 2 may be wirelessly connected to various media through access points, or in ad-hoc mode.

<Analyzing Section>

The analyzing section 3 is configured so as to analyze in real time the respiration data acquired from the input I/F 2, to produce analysis result data that may be displayed in real time. The analyzing section 3 is configured so as to perform various analyzations relating to respiration data. FIG. 2 is a diagram showing an example of the internal structure of the analyzing section 3.
As shown in FIG. 2, the analyzing section 3 includes a filtering section 31, a respiration determining section 32, an inspiratory flow limitation determining section 33, and an apnea/hypopnea determining section 34. Results of determinations which are performed by the filtering section 31, respiration determining section 32, inspiratory flow limitation determining section 33, and apnea/hypopnea determining section 34 in the analyzing section 3 are acquired and stored in the storage section, and supplied to the display 5 to be displayed thereon in real time. For example, the analyzing section 3 may be configured by a controller which includes a memory and a processor. The memory is configured so as to store computer-readable commands (programs), and consists of a ROM (Read Only Memory) which stores various programs and the like, a RAM (Random Access Memory) having work areas in which various programs to be executed by the processor, and the like are stored, etc. The processor consists of, for example, a CPU (Central Processing Unit), an MPU (Micro Processing Unit), and/or a GPU (Graphics Processing Unit), and is configured so as to develop a designated one of the various programs incorporated in the ROM, in the RAM, and execute various processes in cooperation with the RAM.

<Filtering Section>

The filtering section 31 is configured so as to perform waveform processing on the respiration data. The waveform processing includes a process such as removal of components which are not required in analysis of the respiration data. For example, the waveform processing which is performed in the filtering section may be a process of removing noises from the respiration data, that of removing high-frequency components, that of removing low-frequency components, or that of performing the square root amplification or square root amplification correction on an AD value contained in the respiration data. Alternatively, the filtering section 31 may be a digital filter. The digital filter may be configured so as to acquire a digital signal of the respiration data from the input I/F 2, and perform waveform processing on the signal. In the case where analog data are acquired through the input I/F 2, an A/D converter may be disposed in the filtering section 31. The respiration data which have undergone the waveform processing are output to the storage section 4 and the display 5, and used in various analyses in the analyzing section 3.

<Respiration Determining Section>

The respiration determining section 32 is configured so as to compare and analyze the respiration data and a first value relating to the respiratory gas in an expiration gas scan mode, an inspiration gas scan mode, and an expiration gas search mode. FIG. 5 is a view showing an example of analysis of respiration data in the expiration gas scan mode, the inspiration gas scan mode, and the expiration gas search mode. As shown in FIG. 5, the expiration gas scan mode is a mode in which a second value P3 that is on the minus side of the first value P1 is detected, and the respiration data are scanned until expiration gas is detected. In a respiration waveform that is the respiration data, for example, a predetermined reference value of the respiration waveform is set as the first value P1, and a value which is equal to or smaller than an arbitrary value P2 that is on the minus side of the first value P1 is set as the second value P3. The expiration gas scan mode may be configured so that expiration gas is detected by detecting the second value P3.
The inspiration gas scan mode is a mode which is executed after the expiration gas scan mode to detect inspiration gas. For example, the inspiration gas scan mode may be configured so that, after the process of the expiration gas scan mode, a value which is equal to or larger than an arbitrary value P4 that is on the plus side of the first value P1 (the reference value of the respiration waveform) in the expiration gas scan mode is set as a third value P5, and inspiration gas is detected by detecting the third value P5. The expiration gas search mode is a mode in which, after the process of the inspiration gas scan mode, expiration gas is detected, and the end point of the respiration waveform that is used in analyzation is searched. The expiration gas search mode may be configured so that, after the process of the inspiration gas scan mode, a value which is equal to or smaller than an arbitrary value P6 that is on the minus side of the first value P1 (the reference value of the respiration waveform) in the expiration gas scan mode is set as a fourth value P7, and expiration gas indicating the end point of the respiration waveform is detected by detecting the fourth value P7. In an example of determination performed by the respiration determining section 32, with respect to the respiration waveform of respiration data in which expiration gas is minus and inspiration gas is plus, the first value P1 is set to 0 mmH2O, the arbitrary value P2 that is in the expiration gas scan mode and that is on the minus side of the first value P1 is set to −0.05 mmH2O, and −0.06 mmH2O, which is the second value P3, is detected, thereby detecting expiration gas. The arbitrary value P4 that is in the inspiration gas scan mode and that is on the plus side of the first value P1 is set to +0.2 mmH2O, and +0.25 mmH2O, which is the third value P5, is detected, thereby detecting inspiration gas. The arbitrary value P6 that is in the expiration gas search mode and that is equal to or smaller than the first value P1 is set to 0 mmH2O, and −0.01 mmH2O, which is the fourth value P7, is detected, thereby detecting expiration gas indicating the end point of the respiration waveform. The respiration determining section 32 can calculate the respiration rate of the subject based on the numbers of expirations and inspirations which are detected during a predetermined period of time in the expiration gas scan mode, the inspiration gas scan mode, and the expiration gas search mode. In the case where the predetermined period of time is set to 30 seconds, for example, the number of sets of an expiration gas and inspiration gas that are detected in the expiration gas scan mode, the inspiration gas scan mode, and the expiration gas search mode is doubled, whereby the respiration rate for one minute can be calculated.
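The three modes behave like a small state machine over the sampled waveform. The following minimal sketch uses the example thresholds quoted above (P2 = −0.05 mmH2O, P4 = +0.2 mmH2O, P6 = 0 mmH2O) for a waveform in which expiration is minus and inspiration is plus; the return to the expiration gas scan mode after each end point is an assumption made for illustration.

def count_breaths(samples, p2=-0.05, p4=0.2, p6=0.0):
    # One breath = expiration (scan) -> inspiration (scan) -> expiration (search).
    mode = "expiration_scan"
    breaths = 0
    for value in samples:
        if mode == "expiration_scan" and value <= p2:
            mode = "inspiration_scan"   # second value P3 detected
        elif mode == "inspiration_scan" and value >= p4:
            mode = "expiration_search"  # third value P5 detected
        elif mode == "expiration_search" and value <= p6:
            breaths += 1                # fourth value P7: end of the waveform
            mode = "expiration_scan"
    return breaths

def respiration_rate_per_minute(samples_30s):
    # Double the number of sets detected in a 30-second window, as described
    # above, to obtain the respiration rate for one minute.
    return 2 * count_breaths(samples_30s)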
<Inspiratory Flow Limitation Determining Section> The inspiratory flow limitation determining section33is configured so as to determine whether an inspiratory flow limitation is present or absent, based on the waveform of the inspiration gas in the respiration data. The inspiratory flow limitation is an index showing the obstructive apnea/hypopnea condition. The inspiratory flow limitation determining section33is configured so as to determine the inspiratory flow limitation based on: the presence/absence of a predetermined shape in the inspiration gas waveform in the respiration data; and the ratio between the width of the inspiration gas waveform containing the predetermined shape and the width of the waveform of the predetermined shape. For example, the width of the inspiration gas waveform is the zone between the value P8 at which the inspiration gas waveform indicated inFIG.5rises, and P9 at which the inspiration gas waveform ends. In the case where an inspiration gas waveform which is partly recessed is detected, for example, the inspiratory flow limitation determining section33determines whether an inspiratory flow limitation is present or absent from the waveform width occupied by the recess within the width of the inspiration gas waveform containing the recess. For example, the above-described detection of a recess can be performed in the following manner. The second derivative of the inspiration gas waveform is calculated. When the calculation result (amplitude) of the second derivative has a negative value, it is determined that there is a recess. In the case where, with respect to the width of the inspiration gas waveform, the waveform width of the detected recess exceeds a predetermined threshold, it can be determined that the inspiratory flow limitation is present. The predetermined threshold can be set appropriately by the user in consideration of individual variation among subjects and the operational standards of the facility. The result of the determination performed by the inspiratory flow limitation determining section33is supplied to the storage section4and the display5.
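The recess test just described lends itself to a compact numerical check. The sketch below is illustrative only: the width-ratio threshold of 0.3 is an invented placeholder (the disclosure leaves the threshold to the user and facility), and the input is assumed to be the samples between the rise point P8 and the end point P9.

```python
import numpy as np

def flow_limitation_present(inspiration, width_ratio_threshold=0.3):
    """Detect a recess in one inspiration waveform via its second derivative.

    inspiration: 1-D array of samples between P8 and P9. A negative second
    derivative is treated as belonging to a recess, as described above.
    """
    d2 = np.diff(np.asarray(inspiration, dtype=float), n=2)  # discrete 2nd derivative
    recess_width = np.count_nonzero(d2 < 0)                  # samples in the recess
    return recess_width / len(inspiration) > width_ratio_threshold
```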
<Apnea/Hypopnea Determining Section> When the amplitude of the respiration data during a predetermined period of time is equal to or lower than a predetermined ratio of the average amplitude, the apnea/hypopnea determining section34determines that apnea or hypopnea occurs. Alternatively, the apnea/hypopnea determining section34may make the determination depending on whether or not the amplitude of the respiration data in a time zone in the immediate vicinity of the measurement time of the respiration data is equal to or lower than the predetermined ratio of the average amplitude. For example, the time zone in the immediate vicinity of the measurement time of the respiration data extends from 120 seconds before the measurement time to the measurement time. An example of the amplitude at which it is determined that apnea occurs is an amplitude which is equal to or lower than 10% of the average amplitude, and an example of the amplitude at which it is determined that hypopnea occurs is an amplitude which is equal to or lower than 50% of the average amplitude. <Storage Section> The storage section4is configured so as to store analysis result data. The storage section4is configured so as to sequentially store the respiration data output from the filtering section31, and the determination results of the determining sections32to34, as analysis result data. The analysis result data stored in the storage section4are sent, as required, to the display5to be displayed thereon, and supplied to the alarm controller6to be used in determining the necessity of notification. The storage section4may also store information relating to notification, such as the contents of an abnormality notified by the notifying section and the manner of the notification, as a notification history. The storage section4may be a memory configured by a ROM (Read Only Memory) which stores various programs and the like, a RAM (Random Access Memory) having work areas in which various programs to be executed by the processor and the like are stored, etc. <Displaying Section> As shown inFIG.1, the physiological parameter processing apparatus includes the displaying section which acquires at least real-time analysis result data from the analyzing section3, and which displays the data.FIG.3is a view showing a display example of the display5which is an example of the displaying section. The respiration waveform51of the respiration data, "40" (referenced by52) which is a numerical value indicating the respiration rate, and the inspiratory flow limitation waveform53are shown in the display5ofFIG.3. The display5may display in real time analysis result data which are obtained by acquiring the respiration data immediately after the measurement and performing a real-time analysis on the data, or display past (120 seconds prior to the display timing) analysis result data which are stored in the storage section4. For example, the past analysis result data are in the form of a trend graph or a list of respiration data.FIG.4shows an example in which the respiration waveform51that is formed by displaying the respiration data in real time, and a respiration waveform54consisting of the past analysis result data, are displayed on the display5. Although, inFIG.4, a screen of the past analysis result data is displayed so as to overlap the real-time display screen, the display manner is not limited to this example, and a plurality of screens may be displayed so as not to overlap with one another. The display5may be configured so as to read out past notification information stored in the storage section4, and display the information (review display). For example, the past notification information which is to be displayed on the display5includes a trend graph of respiration information, an alert condition, a vital list, or the like. Data other than the analysis result data relating to the respiration data may also be displayed on the display5. The data to be displayed may be a plurality of kinds of vital signs information. For example, the plurality of kinds of vital signs information may be data of at least one of the transcutaneous arterial oxygen saturation, the heart rate, the blood pressure, and an electrocardiogram. The display may be configured so as to display these data in a state (in-phase) where a plurality of kinds of vital signs information acquired from the same subject are synchronized in phase with one another.FIG.3shows, in addition to the respiration waveform51of the respiration data, the waveform55of an electrocardiogram, "80" (referenced by56) which indicates the heart rate, the waveform57of the blood pressure, "123/81(97)" (referenced by58) which indicates the blood pressure values, the waveform59of the transcutaneous arterial oxygen saturation, and "98" (referenced by60) which indicates the value of the transcutaneous arterial oxygen saturation. InFIG.3, the reference numeral61indicates that the respiration waveform51, the electrocardiogram waveform55, the waveform57of the blood pressure, and the waveform59of the transcutaneous arterial oxygen saturation are in-phase. <Notifying Section> The physiological parameter processing apparatus1is configured so as to include the notifying section which notifies of the notification information based on the analysis result data. The notifying section is configured so as to notify that an abnormality occurs in the case where notification is necessary, such as when a measurement result is abnormal.
The notifying section includes the alarm controller6which is shown inFIG.1, and the speaker7which notifies of the contents of the notification by voice. The notification of the notification information is not limited to that performed by sound produced by the speaker7. In the case of visual notification, the notification information may be displayed on the display5, or, when notification is necessary, a lamp may be lit. As an example of visual notification, a notification displaying section71which displays the notification information by using characters is shown in the display5ofFIG.3. The contents notified by the notifying section may be a determination by the apnea/hypopnea determining section34that apnea or hypopnea occurs, or a determination that an inspiratory flow limitation is present. The contents to be notified may also include an abnormality relating to a measurement, such as that the cannula is detached and therefore respiration data cannot be acquired. <Operation Example> Next, an operation example of the physiological parameter processing apparatus1will be described. The input I/F2of the physiological parameter processing apparatus1is connected to a sensor which detects the respiratory gas from the mouth or nose of the subject, and acquires respiration data from the sensor. The input I/F2supplies the acquired respiration data to the filtering section31of the analyzing section3. The filtering section31performs waveform processing on the acquired respiration data. The filtering section31outputs the respiration data which have undergone the waveform processing to the storage section4. The storage section4stores the respiration data acquired from the filtering section31. The analyzing section3performs various analyses based on the respiration data acquired from the input I/F2. The filtering section31of the analyzing section3outputs the respiration data which have undergone the waveform processing to the respiration determining section32, the inspiratory flow limitation determining section33, and the apnea/hypopnea determining section34. The respiration determining section32performs analyses relating to the respiration data and the respiratory gas in the expiration gas scan mode, the inspiration gas scan mode, and the expiration gas search mode, and determines the respiration rate. The inspiratory flow limitation determining section33determines the presence/absence of an inspiratory flow limitation. The apnea/hypopnea determining section34determines whether the apnea condition is present or not, and whether the hypopnea condition is present or not. The results of the determinations are supplied to the storage section4to be stored therein. Moreover, the determination results are displayed in real time on the display5. The alarm controller6acquires analysis result data from the analyzing section3, and determines whether notification is necessary or not, based on the analysis result data. If it is determined that notification is necessary, the alarm controller6outputs notification information to the speaker7. The speaker7converts the acquired notification information to sound, and outputs the sound. The output notification information is also output to the storage section4to be stored therein.
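As a rough illustration of this flow, the sketch below strings together a stand-in filter, an amplitude-based apnea/hypopnea check using the 10%/50% ratios from the example above, and an alarm step. The function names and the moving-average filter are assumptions, not the disclosed implementation.

```python
import numpy as np

def apnea_hypopnea_status(amplitudes):
    """Classify the newest breath amplitude against the recent average.

    amplitudes: peak-to-peak respiration amplitudes over roughly the last
    120 seconds, newest last.
    """
    average = float(np.mean(amplitudes))
    if amplitudes[-1] <= 0.10 * average:
        return "apnea"
    if amplitudes[-1] <= 0.50 * average:
        return "hypopnea"
    return "normal"

def run_cycle(raw_samples, recent_amplitudes):
    """One pass of the flow above: filter, analyze, store/display, alarm."""
    filtered = np.convolve(raw_samples, np.ones(5) / 5, mode="same")  # stand-in filter
    recent_amplitudes.append(float(filtered.max() - filtered.min()))  # history
    status = apnea_hypopnea_status(recent_amplitudes)
    if status != "normal":
        print(f"ALARM: {status}")            # alarm controller -> speaker
    return filtered, status                   # stored and displayed in real time
```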
As described above, the physiological parameter processing apparatus1of the disclosure includes: the input interface section2which acquires respiration data that are produced based on the respiratory gas obtained from at least one of the mouth and nose of the subject; and the analyzing section3which analyzes in real time the respiration data acquired from the input interface section2, and which produces analysis result data that can be displayed in real time. Therefore, the apparatus can analyze the respiration data in real time, and moreover display the result of the analysis in real time. Conventionally, respiration monitoring is performed using an EtCO2 (End Tidal CO2) capnometer or the thoracic impedance method. An EtCO2 capnometer is a device which is used for measuring carbon dioxide contained in the respiratory gas, and has the problem that, when an airway is not secured, the measurement may be incorrect. An EtCO2 capnometer acquires a result of measurement of the respiratory gas in the form of a respiratory gas CO2 concentration curve (capnogram). With an EtCO2 capnometer, therefore, it is difficult to distinguish between hypopnea and hyperventilation, and the respiration depth also cannot be measured. An EtCO2 capnometer is insufficient for monitoring the respiration during the non-intubation period, and cannot perform a real-time analysis. Among the conventional respiration monitoring methods, the thoracic impedance method is a method in which, when the respiration is to be measured, an AC current is passed through a portion where no action potential exists, and a resistance change in the body of the subject is detected as a voltage change. Therefore, the method is easily affected by noise due to motion artifacts of the subject. In the thoracic impedance method, moreover, it is impossible to distinguish a difference between the respiratory motion under the normal condition and that under an abnormal condition. Consequently, the method cannot be sufficiently used in a respiration monitoring apparatus, and cannot perform a real-time analysis. Most postoperative respiratory complications are complications relating to upper airway occlusion. Confirming the presence/absence of upper airway occlusion is important for the prevention of complications. With the conventional respiration monitoring methods (an EtCO2 capnometer, the thoracic impedance method), however, it is not possible to confirm the presence/absence of upper airway occlusion. In order to check the presence/absence of upper airway occlusion, conventionally, a screening test using a sleep apnea test apparatus has been performed. In the screening test using a sleep apnea test apparatus, values of intranasal pressures are collected and analyzed to check the presence/absence of upper airway occlusion in the subject. In the conventional screening test using a sleep apnea test apparatus, however, the collected values of intranasal pressures are not analyzed in real time, and there is no test apparatus for monitoring the presence/absence of upper airway occlusion in real time. According to the physiological parameter processing apparatus1of the disclosure, a real-time analysis, which is impossible in the prior art, may be performed on a result of measurement of the respiratory gas, and moreover it is possible to check the presence/absence of upper airway occlusion in real time. Furthermore, a result of the analysis may be displayed in real time.
According to the physiological parameter processing apparatus1of the disclosure, moreover, the analyzing section3includes the filtering section31which performs waveform processing on the respiration data, and therefore analysis result data which may be displayed in real time are produced by using the filtering section31. The analyzing section3analyzes the respiration data which have undergone the waveform processing in the filtering section31. When the respiration data which have undergone the waveform processing in the filtering section31are analyzed, therefore, the accuracy of the real-time analysis of the respiration data is enhanced. The analyzing section3produces the respiration data which have undergone the waveform processing in the filtering section31as at least a part of the analysis result data, and therefore the respiration data which have undergone the waveform processing in the filtering section31may be displayed in real time. The physiological parameter processing apparatus1of the disclosure includes the respiration determining section32which analyzes respiration data in the expiration gas scan mode, the inspiration gas scan mode, and the expiration gas search mode, and therefore can simply perform a real-time analysis on the respiration data. Moreover, the analysis is performed in the expiration gas scan mode, the inspiration gas scan mode, and the expiration gas search mode, and hence it is possible to determine the respiration rate. The physiological parameter processing apparatus1of the disclosure has the inspiratory flow limitation determining section33which determines an inspiratory flow limitation, and therefore may determine the presence/absence of an inspiratory flow limitation in real time. Moreover, a result of the determination of an inspiratory flow limitation may be displayed in real time. In the physiological parameter processing apparatus1of the disclosure, the analyzing section3includes the apnea/hypopnea determining section34which determines at least one of apnea and hypopnea, and therefore it is possible to analyze in real time whether apnea occurs or not, and whether hypopnea occurs or not. Moreover, a result of the real-time analysis may be displayed in real time. The physiological parameter processing apparatus1of the disclosure determines whether apnea or hypopnea occurs or not based on the amplitude of the respiration data in the immediate vicinity of the measurement time of the respiration data, and hence may analyze more accurately in real time whether apnea or hypopnea occurs. A result of the real-time analysis is displayed. The physiological parameter processing apparatus1of the disclosure includes the alarm controller6and the speaker7as the notifying section which notifies of the notification information, and hence reliably notifies of notification information such as apnea, hypopnea, and an inspiratory flow limitation, so that it is possible to appropriately cope with changes in the condition of the subject. The physiological parameter processing apparatus1of the disclosure includes the display5which displays analysis result data, and hence may display an analysis result in real time. Therefore, it is possible to rapidly perform medical decision support in respiratory management. The physiological parameter processing apparatus1of the disclosure includes the storage section4which stores analysis result data, and therefore past analysis result data of the subject may be displayed.
According to the physiological parameter processing apparatus1of the disclosure, moreover, an analysis result and other vital signs information such as an electrocardiogram are displayed in phase with each other, and hence in-phase monitoring of the analysis result and the other vital signs information is enabled. With respect to various kinds of vital signs information including respiratory management, therefore, medical decision support is rapidly performed. A configuration may be possible where, in addition to the physiological parameter processing apparatus1of the disclosure that displays an analysis result and other vital signs information such as an electrocardiogram in phase with each other, an EtCO2 capnometer or the thoracic impedance method, which is conventionally used, is employed. In the configuration where the physiological parameter processing apparatus1of the disclosure, and an EtCO2 capnometer or the thoracic impedance method, which is conventionally used, are employed in combination as described above, respiration monitoring during the non-intubation period, postoperative respiratory management, respiratory management during endoscopic gastrointestinal surgery, or respiratory management during nasal high flow is performed. Therefore, a configuration including respiratory monitoring of the pressure of the respiratory gas (the intranasal pressure) is realized, and it is possible to provide optimum respiratory monitoring according to the situation of the subject. Although, in the above configuration, the embodiment in which the input I/F2acquires respiration data from a sensor that is not shown has been described, the method of acquiring respiration data is not limited to the above-described configuration. The input I/F2may read past respiration data (measurement results) which are stored on an external medium such as a CD-R and acquire the past respiration data, or may alternatively acquire in real time respiration data which are measured in real time at a remote place, by using a communication network. According to this configuration, an external apparatus may be connected to the physiological parameter processing apparatus1through the Internet, and a measurement result of a respiratory gas may be displayed in real time on the display5. Although, in the above configuration, the embodiment in which the physiological parameter processing apparatus1has the analyzing section3has been described, the invention is not limited to the configuration where the analyzing section3is included in the physiological parameter processing apparatus1. A configuration may be employed where an analyzing section is disposed outside the physiological parameter processing apparatus1, and respiration data produced in the external analyzing section are sent to the physiological parameter processing apparatus1through an input I/F to be displayed on the display5. The above-described configuration may include: an input interface section which acquires respiration data on which a real-time analysis has been performed, the respiration data being produced based on a respiratory gas obtained from at least one of a mouth and nose of a subject; and a displaying section which may display in real time the respiration data acquired from the input interface section, so that a result of the real-time analysis of the respiration data may be displayed in real time. The invention is not limited to the above-described embodiment and modifications, and may be adequately subjected to modifications, improvements, and the like.
In addition, the materials, shapes, dimensions, values, forms, numbers, places, and the like of the components of the above-described embodiment are arbitrary and are not limited as long as the invention can be achieved. | 31,477
11857311 | DETAILED DESCRIPTION OF THE INVENTION FIGS.1to2illustrate a preferred embodiment of the multi-purpose video monitoring camera of the present invention, which comprises a microprocessor1, a CMOS video camera2, a thermographic video camera3, a display unit4and a mic array5. The CMOS video camera2is a low lux CMOS video camera operably coupled to the microprocessor1. The thermographic video camera3is operably coupled to the microprocessor1and aligned with the CMOS video camera2so that the thermographic video camera3and the CMOS video camera2have overlapping fields of view. More specifically, the CMOS video camera2and the thermographic video camera3are calibrated to have the same point of view in any rotational position. In other words, the center points of the video data and the thermographic data coincide. In this embodiment, the thermographic video camera3has a resolution of 32×32 pixels and can measure human body temperature with an accuracy of up to ±0.2° C. at a distance of 1.5 meters from a human body. The display unit4is operably coupled to the microprocessor1. The mic array5is operably coupled to the microprocessor1. The multi-purpose video monitoring camera is fixedly mounted next to a crib where a baby sleeps. In this embodiment, the multi-purpose video monitoring camera further comprises a rotate-pan-tilt mount6operably coupled to the microprocessor1and controlled by the microprocessor1to adjust the CMOS video camera2and the thermographic video camera3such that the baby in the crib is always in the center of the fields of view of the CMOS video camera2and the thermographic video camera3. The microprocessor1comprises a baby presence detection module101, a warning module102, a motion detection module103, a breathing detection module104, a hot air exhalation detection module105, a chest movement detection module106, a breathing sound detection module107, a suffocation detection module108, a temperature measurement module109, a urination and defecation detection module110, an air conditioning module111and a baby growth rate module112. The baby presence detection module101is configured to:
(1) obtain input from the CMOS video camera2;
(2) perform face recognition on the input obtained in (1) to determine if a baby is present in the field of view of the CMOS video camera2;
(3) if it is determined from (2) that a baby is present in the field of view of the CMOS video camera2, confirm baby presence;
(4) if it is determined from (2) that a baby is not present in the field of view of the CMOS video camera2, obtain input from the thermographic video camera3to identify a human head region in the field of view of the thermographic video camera3;
(5) perform face recognition on the input obtained in (1) at a region corresponding to the human head region in the field of view of the thermographic video camera3to determine if a baby is present in the field of view of the CMOS video camera2;
(6) if it is determined from (5) that a baby is present in the field of view of the CMOS video camera2, confirm baby presence;
(7) if it is determined from (5) that a baby is not present in the field of view of the CMOS video camera2, confirm baby absence;
(8) if baby presence is confirmed, update check-in/check-out status as check-in;
(9) if baby absence is confirmed, update check-in/check-out status as check-out.
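For illustration, the two-level check in steps (1)-(9) can be summarized as the following control flow. The face_detector and head_region_finder callables are hypothetical placeholders for whatever face recognition and heat-zone analysis the device actually uses, and the region is assumed to index the CMOS frame directly.

```python
def detect_presence(cmos_frame, thermo_frame, face_detector, head_region_finder):
    """Two-level baby presence check mirroring steps (1)-(9) above (sketch)."""
    if face_detector(cmos_frame):                  # level 1: full-frame face search
        return "check-in"
    region = head_region_finder(thermo_frame)      # level 2: thermographic hint
    if region is not None and face_detector(cmos_frame[region]):
        return "check-in"
    return "check-out"
```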
In operation, when the parent or helper carries the baby close to the crib and puts the baby into the crib, the head of the baby should be visible to both the CMOS video camera2and the thermographic video camera3, and thus the baby presence detection module101should be able to confirm baby presence and update the check-in/check-out status as check-in. The check-in process starts the monitoring of the baby. If the parent or helper remembers to bring the baby to the area in front of the CMOS video camera2and the thermographic video camera3, the check-in process can be made more reliable. Also, a higher resolution snapshot of the baby head, including any cloth, hat, or other objects, can be captured and used at a later stage to monitor the movement of the baby. Of course, from a baby caring point of view, it is not recommended for a baby to wear things on the head. Thus, in most cases, the baby head is clear, without any cover. Similarly, when an adult comes to the crib and takes the baby away from the crib for another activity such as meal time or play time, the baby presence detection module101should be able to confirm baby absence and update the check-in/check-out status as check-out. At this time, the face of the parent or helper may also be visible to the CMOS video camera2. Face recognition can be applied to the adult who takes the baby away from the crib. The person and time can be recorded, and the parent is able to check who was the last person to carry the baby and when the baby left the crib. Moreover, the baby presence detection module101may also detect that the baby is moving up into the air, which is not possible for the baby to do by himself/herself. This is a clear signal for the check-out process, after which the baby is no longer under monitoring. Using the check-in and check-out mechanism, the presence of the baby in the crib is definitely confirmed, without any false trigger on baby activity when the baby is not in the crib. Conversely, after the check-in process, the baby should be visible in the fields of view of the CMOS video camera2and the thermographic video camera3without any exception, since the baby cannot leave the crib by himself/herself. Therefore, by means of the rotate-pan-tilt mount6, the CMOS video camera2and the thermographic video camera3may change and enlarge the monitoring area to search for the baby if the monitoring area cannot cover the whole crib. In other words, the CMOS video camera2and the thermographic video camera3should follow the baby's movement and keep the center of the monitoring area around the baby. If the baby is not detected between check-in and check-out, the baby may be covered by a blanket, toy or other things in the crib. In this situation, the parent should be warned unless baby movement is indirectly confirmed by checking the movement of the object which covers the baby. For example, if the baby is playing inside the blanket, the baby's movement couples to the blanket. It is then possible to confirm baby presence, since the blanket movement can be detected by the CMOS video camera2. Apart from detecting baby presence with the CMOS video camera2, there is a second level of baby presence detection by the thermographic video camera3. The thermographic video camera3captures every position in the crib and creates a heat zone image of the crib region. Then, the image is filtered by temperature to the range of 34-37 degrees Celsius. Due to the heat radiation from the baby, the filtered areas of the heat zone image are likely to be the locations of the baby head or limbs.
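A minimal sketch of the temperature filtering just described, assuming the thermographic frame arrives as a 2-D array of temperatures in degrees Celsius; the synthetic example frame is invented for illustration.

```python
import numpy as np

def body_heat_regions(thermo_image, low=34.0, high=37.0):
    """Mask a heat-zone image to the human body temperature band.

    Returns a boolean mask of candidate head/limb pixels.
    """
    return (thermo_image >= low) & (thermo_image <= high)

# Example with a synthetic 32x32 frame at 25 C ambient and a warm spot:
frame = np.full((32, 32), 25.0)
frame[10:14, 12:16] = 36.5                 # stand-in for the baby head region
mask = body_heat_regions(frame)
print(mask.sum(), "candidate pixels")      # -> 16 candidate pixels
```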
Normally, the crib is placed in a room where the ambient temperature is not higher than 30 degrees Celsius. So, the area filtered from the heat zone image should be accurate. If the thermographic video camera3has a higher resolution, it is even possible to obtain a head contour due to the blood circulation in the forehead. Then, this thermal fingerprint can be used to recognize the head. Once the potential location of the baby head is identified, the CMOS video camera2can be used to perform face recognition on the area identified by the thermographic video camera3. If the check-in mechanism has somehow failed for an unknown reason, this second level will trigger the monitoring process on the baby automatically when baby presence is detected. Using these two levels, the accuracy of presence detection is ensured. The warning module102is configured to output warning signals; the warning signals output from the microprocessor1are in the form of warning messages displayed on the display unit4. The multi-purpose video monitoring camera is wirelessly connected to an external mobile device7which is installed with an app for receiving and outputting the warning messages output from the microprocessor1. The motion detection module103is activated after the baby presence detection module101updates the check-in/check-out status as check-in and is configured to:
(1) obtain input from the CMOS video camera2and the thermographic video camera3;
(2) detect positions of the head and limbs of the baby in the fields of view of the CMOS video camera2and the thermographic video camera3from the input obtained in (1);
(3) determine the posture of the baby by measuring the height levels of the detected head and limbs, the curvature and orientation of the detected limbs, and the separation among the detected head and limbs;
(4) determine the motion of the baby based on frame by frame analysis of the posture determined in (3) over a period of time;
(5) build a history of the motion determined in (4);
(6) determine whether the baby is idle without significant movement for a predetermined period of time from the history built in (5).
In operation, after the baby presence detection module101updates the check-in/check-out status as check-in, the monitoring with the CMOS video camera2and the thermographic video camera3is continuous. Apart from the baby head, the heat radiation of the two arms and two legs makes these other baby parts clearly visible in the thermographic image within the body temperature range. Using the movement of the head, arms and legs, the baby's motion can be recognized. For example, when the baby is standing, the head is in a higher position, two limbs with larger separation will be in the middle, and another two limbs with smaller separation will be lower in position; when the baby is crawling, the four limbs will have similar height positions while two of them are curved with the knees touching down. By measuring the height levels of the different body parts, the curvature and orientation of the limbs, as well as the separation among the head, arms and legs, the motion detection module103can determine the posture of the baby. Using frame by frame transition tracking, the motion is detected. Similarly, the head, arms and legs can be recognized by the CMOS video camera2too. Then, motion detection through the CMOS video camera2provides another level of confirmation of the baby's motion. This second level is useful when the resolution of the thermographic video camera3is low.
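For illustration only, the posture rules above might be coded as crude heuristics like the following; the coordinate convention and numeric thresholds are invented and would have to be tuned on real data.

```python
def classify_posture(head, limbs):
    """Rough posture heuristic from detected part positions (sketch).

    head: (x, y) with y increasing upward, in normalized image units;
    limbs: list of four (x, y) positions for the arms and legs.
    """
    limb_heights = sorted(p[1] for p in limbs)
    if head[1] > max(limb_heights) + 0.3:      # head clearly highest
        return "standing"
    if max(limb_heights) - min(limb_heights) < 0.1:  # limbs at similar height
        return "crawling"
    return "lying"
```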
The motion detection module103allows understanding of the activities of the baby in the crib and clearly identifies the periods when the baby is idle without significant movement. In a period when the baby is idle, it is possible for the baby to be having a nap or sucking a doll/pacifier. Then, the image changes in either the CMOS video camera2or the thermographic video camera3are very small from frame to frame. From a monitoring point of view, this poses a challenge in identifying whether the baby is safe or not, so additional processing should be done. By having the motion detection module103build a history of the motion of the baby, it is possible to generate the activity pattern or favorite behavior of the baby when the baby is alone in the crib. The baby may do certain activities in sequence or based on the length of time the baby is alone. Each baby is different, but somehow there is a trend or track record to follow. If the motion detected by the motion detection module103deviates from the history a lot, some issue may have occurred. Secondly, based on the history built by the motion detection module103, an indicator of activity energy level can be calculated based on the amount of certain activities the baby performed. Some activities need more energy than others, so the transitions of activity energy should be reasonable. For example, it requires a lot of energy for the baby to push around on its stomach, so the duration should not be too long. If such activity is detected for longer than normal and is followed by an idle position, other indicators should be cross-checked to determine whether an alert needs to be generated. Thirdly, the posture of an idle position is correlated with the previous motion the baby performed. When the baby transitions from sitting to lying, it is highly likely that the baby lies on its back. When the baby transitions from pushing around on its stomach to lying, it is highly likely that the baby lies on its stomach. With the history built by the motion detection module103, the aforementioned posture information can also be used to analyze whether the baby is idle without significant movement for a predetermined period of time. When the baby is idle without significant movement for a predetermined period of time, the other detections in the following sections are activated to further identify the baby's situation and generate an alert to the parent when it is determined that the baby may be in danger. The breathing detection module104calculates the breathing rate of the baby based on outputs from the hot air exhalation detection module105, the chest movement detection module106and the breathing sound detection module107if the motion detection module103determines that the baby is idle without significant movement for a predetermined period of time. The hot air exhalation detection module105is configured to:
(1) obtain the position of the head of the baby in the fields of view of the CMOS video camera2and the thermographic video camera3from the motion detection module103;
(2) detect the position of the nose and mouth of the baby in the fields of view of the CMOS video camera2and the thermographic video camera3from the position of the head of the baby obtained in (1);
(3) detect the temperature variation pattern at the position of the nose and mouth of the baby in the field of view of the thermographic video camera3detected in (2);
(4) calculate a first breathing rate of the baby based on the output obtained in (3).
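As a sketch of step (4), the cyclic temperature pattern can be reduced to a rate by counting threshold crossings. The midpoint threshold, helper name, and sampling assumptions are invented for illustration and are not part of the disclosure.

```python
import numpy as np

def breathing_rate_from_heat(temps, fps):
    """First breathing rate from the nose/mouth temperature cycle (sketch).

    temps: mean temperature of the nose/mouth region per thermographic
    frame; fps: thermographic frame rate. Counts upward crossings of the
    midpoint between the coolest and warmest observed values.
    """
    temps = np.asarray(temps, dtype=float)
    mid = (temps.min() + temps.max()) / 2.0
    above = temps > mid
    cycles = np.count_nonzero(~above[:-1] & above[1:])  # rising crossings
    minutes = len(temps) / fps / 60.0
    return cycles / minutes                              # breaths per minute
```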
In operation, when the baby is idle because of lying, sleeping or sucking a doll/pacifier, some hot air is exhaled from the mouth or nose if the baby is breathing. The temperature of the exhaled hot air is close to the body temperature. In an air conditioned environment, the ambient air temperature is 25 degrees Celsius, which is much lower than the human body temperature. The exhaled air from the baby may be lower than 37 degrees Celsius, but it is still higher than the room temperature. With the high resolution of the thermographic video camera3, the exhaled air is clearly seen as a temperature changing pattern near the nose or mouth. The pattern is very distinctive, since the area will increase from room temperature to a temperature close to the baby's body temperature in a short time. Then, the temperature will decrease naturally. The affected area of the heat zone due to hot air exhalation remains constant as long as there is no movement of the baby. The variation of temperature in the area near the nose and mouth forms a cyclic pattern. By measuring the number of cycles, or the frequency of the periodic signal, the first breathing rate of the baby can be obtained. The chest movement detection module106is configured to:
(1) obtain the positions of the head and limbs of the baby in the fields of view of the CMOS video camera2and the thermographic video camera3from the motion detection module103;
(2) estimate the position of the chest based on the positions of the head and limbs of the baby obtained in (1);
(3) detect the pixel-wise spatial variation of the position of the chest from frame to frame and convert the spatial variation into the frequency domain;
(4) determine the largest magnitude in the frequency domain;
(5) calculate a second breathing rate of the baby based on the output obtained in (4).
In operation, when the baby is idle, there is still some slight movement. When the baby is breathing, the chest volume increases and decreases periodically. In the motion detection module103, the head, arms and legs are already identified. Using triangulation among these visible body parts, the chest area is estimated. This area of interest from both the CMOS video camera2and the thermographic video camera3is fed into a special algorithm in order to detect the breathing rate. The algorithm can be Eulerian Video Magnification or another method for recognizing subtle movement in an almost still video stream from the CMOS video camera2and the thermographic video camera3. When the area of interest is focused on the chest area, the major component of the subtle movement is caused by breathing. The subtle movement of any object on top of the chest is analyzed to extract the subtle movement due to the increase and decrease of the chest volume. The object on top of the chest may experience up/down or left/right movement depending on the facing direction of the baby's face. The pixel-wise spatial changes of the area of interest around the chest area from frame to frame are converted into the frequency domain. The largest magnitude in the frequency domain is mainly caused by the breathing frequency. Therefore, the frequency corresponding to the breathing rate is significant in the frequency domain of the result of these algorithms, and the breathing rate is calculated from this frequency.
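A minimal sketch of the frequency-domain step, assuming a per-frame motion signal for the chest region has already been extracted; the 0.2-2.0 Hz search band is an assumed plausible range for infant breathing, not a value given in the disclosure, and a window long enough to resolve it is assumed.

```python
import numpy as np

def breathing_rate_from_chest(motion, fps):
    """Second breathing rate from subtle chest-area movement (sketch).

    motion: per-frame pixel-wise change of the chest region (for example,
    summed absolute frame difference); fps: video frame rate. The dominant
    frequency of this signal is taken as the breathing frequency.
    """
    motion = np.asarray(motion, dtype=float)
    motion -= motion.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(motion))
    freqs = np.fft.rfftfreq(len(motion), d=1.0 / fps)
    band = (freqs > 0.2) & (freqs < 2.0)         # assumed breathing band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                           # breaths per minute
```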
In most cases, the baby chest is not visible, since the baby's body is covered by cloth or a blanket. The detection of chest movement is still possible, because in most cases the covering cloth or blanket has some weight and lies on the chest. When the baby is breathing in, the chest movement pushes the covering material up. When the baby is breathing out, the chest volume reduces and the covering material goes down too, due to its weight. Thus, the chest movement couples to the covering material. This indirect detection of the covering material over the chest area implies the same movement of the chest. Then, the breathing rate can be calculated as in the previous description. The breathing sound detection module107is configured to:
(1) obtain input from the mic array5;
(2) perform beamforming on the input obtained in (1) to focus on sound produced near the mouth and nose of the baby;
(3) detect the sound pressure level of the output from (2);
(4) build a history of the sound pressure level detected in (3);
(5) calculate a third breathing rate of the baby by counting the number of cycles per minute between sound pressure levels higher than a predetermined threshold value and sound pressure levels lower than the predetermined threshold value in the history of sound pressure levels from (4).
In operation, the breathing sound detection module107offers a third level of breathing detection. Wheezing is a high-pitched whistling sound made while the baby breathes. It is heard most clearly in exhalation, but in some cases it can be heard when the baby inhales. It is caused by narrowed airways. Using the mic array5consisting of 4 mic components, or even more, it is possible to apply beamforming to the audio reception in order to focus on the sound produced near the baby's mouth and nose. The baby's breathing sound may be whistling, snoring, stridor or grunting. Also, an irregular breathing pattern can be produced. The detection algorithm therefore needs to use the sound pressure level and the silent period, instead of audio characteristics, to detect the breathing rate. In this embodiment, the threshold is dynamically adjusted so that it is 30% higher than the level in the silent period. This threshold selection eliminates the variation among breathing sound patterns due to the conditions of the nose, mouth and lungs. The breathing detection module104is configured to determine the breathing rate of the baby by comparing the first breathing rate obtained from the hot air exhalation detection module105, the second breathing rate obtained from the chest movement detection module106and the third breathing rate obtained from the breathing sound detection module107and selecting the maximum amongst the first, second and third breathing rates.
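For illustration, the cycle counting in step (5) with the dynamically adjusted threshold might look like the following; the signal representation and function name are assumptions.

```python
import numpy as np

def breathing_rate_from_sound(spl, silent_level, seconds):
    """Third breathing rate from sound pressure level (sketch).

    spl: beamformed sound pressure level samples near the nose/mouth;
    silent_level: level measured during silent periods. The threshold is
    set 30% above the silent level, as described above.
    """
    spl = np.asarray(spl, dtype=float)
    threshold = 1.3 * silent_level
    loud = spl > threshold
    cycles = np.count_nonzero(~loud[:-1] & loud[1:])  # silent -> loud transitions
    return cycles * 60.0 / seconds                     # breaths per minute

# The breathing detection module then takes the maximum of the three rates:
# rate = max(rate_heat, rate_chest, rate_sound)
```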
The suffocation detection module108is activated after the breathing rate of the baby determined by the breathing detection module104falls below a predetermined threshold and is configured to:
(1) obtain input from the CMOS video camera2and the thermographic video camera3;
(2) detect the position of the head of the baby in the field of view of the thermographic video camera3and perform face recognition on the input obtained in (1) to determine if the forehead of the baby is present in the field of view of the CMOS video camera2;
(3) if the output from (2) indicates that the forehead of the baby is not present, control the warning module102to output a warning signal;
(4) if the output from (2) indicates that the forehead of the baby is present, perform face recognition on the input obtained in (1) to determine if the nose and mouth of the baby are present in the field of view of the CMOS video camera2;
(5) if the output from (4) indicates that the nose and/or mouth of the baby is not present, determine the head position of the baby from the input obtained in (1), measure the head length and head width of the baby, and determine if the ratio of the head length to the head width deviates from a predetermined ratio;
(6) if the output from (5) indicates that the ratio of the head length to the head width deviates from the predetermined ratio, control the warning module102to output a warning signal.
In operation, it should be possible to estimate the breathing rate of the baby in all possible circumstances using the hot air exhalation detection module105, the chest movement detection module106and the breathing sound detection module107. The breathing rate can be as low as 20 times per minute during sleep, but it will not reduce to zero. If a zero breathing rate is detected for more than 10 seconds, there is a high chance that the baby is suffocating. There are two common situations in which the baby may suffocate. The first is caused by an object blocking the baby's nose and mouth. The other occurs when the baby is sleeping on its stomach. When the forehead of the baby is visible to the present invention without any breathing rate detected, face recognition and facial feature extraction are applied to the video data to identify the locations of the nose and mouth. If the extraction is successful, there is little or no chance that the baby's nose and mouth are covered by an object. Otherwise, the normal video data cannot recognize the mouth and nose, or cannot recognize that the mouth and nose are covered. Then, additional processing should be applied to compensate for the failure of face recognition due to the visibility of only a partial face or other reasons. There are two approaches for this additional processing. Firstly, the thermographic video camera3can determine which part of the baby's head is visible in the heat zone image. If the baby is sleeping sideways, the left face or right face can be captured by the thermographic video camera3. The curvature of the head skull in the heat zone image indicates whether the whole head is exposed to the air or part of the head is covered by an object such as a blanket or cloth. When the nose and mouth are covered by an object, the temperature of that area drops significantly. Thus, the head region at body temperature will not form a complete head skull curvature, since the temperature of the lower region, where the nose and mouth are covered by the object, drops significantly. Secondly, the activity history constructed by the motion detection module103is used to estimate the length, width and orientation of the head.
The length is defined from the cheek to the top of the head, while the width is the dimension measured perpendicular to the length. When the nose and mouth are not covered by an object, the aspect ratio between the length and the width remains similar. Otherwise, the length is much shorter. Based on the monitoring history, the range of the aspect ratio between length and width is calculated. When there is a suspicious situation suggesting suffocation, the estimated length is compared with the actually measured length of the head. If the measured length is shorter than the estimated length, there is a high possibility that the nose and mouth are covered by an object. There is another cause of suffocation, when the baby is sleeping on its stomach. In this situation, the forehead is fully facing down. In other words, the whole face is covered by the mattress. In this situation, face recognition fails, since no facial feature is visible. The thermographic video camera3will pick up the curvature of the full head skull, but the maximum temperature of the head region is 2 degrees Celsius lower than the historical forehead temperature. Either way, when the mouth and nose are covered by an object or the baby is lying on its stomach, the parent should be warned to take some precautionary action in order to prevent further suffocation of the baby. In 2017, there were about 1,400 deaths due to SIDS, about 1,300 deaths due to unknown causes, and about 900 deaths due to accidental suffocation and strangulation in bed. Thus, the alert feature in the present invention is very important for keeping the baby safe. The temperature measurement module109is configured to:
(1) determine a body temperature of the baby by selecting the highest temperature in the field of view of the thermographic video camera;
(2) build a history of the body temperature obtained in (1);
(3) control the warning module to output a warning signal if the body temperature in the history exceeds a predetermined level for longer than a predetermined period of time.
In operation, the temperature measurement module109is activated whenever the baby's forehead is visible to the thermographic video camera3, so that the baby's body temperature can be measured. The measured body temperature is recorded in the history of the body temperature for the parent or medical personnel to review. If the baby is undergoing any medical treatment, the temperature history is useful information for understanding the effect of the medicine on the baby. Normally, when the forehead is visible in the monitoring area, the temperature of the area under monitoring is captured; the maximum temperature is reported to the local screen of the device and reported to the system for recording. The maximum temperature usually comes from the two sides of the forehead, where blood flows through the blood vessels. In babies and children, the average body temperature ranges from 97.9° F. (36.6° C.) to 99° F. (37.2° C.). Among adults, the average body temperature ranges from 97° F. (36.1° C.) to 99° F. (37.2° C.). In older adults over age 65, the average body temperature is lower than 98.6° F. (36.2° C.). In order to further improve the measurement accuracy of the temperature, a thermistor8is provided for calibrating the input of the thermographic video camera3by compensating for the difference between the ambient temperature and the operating temperature of the multi-purpose video monitoring camera. The temperature measurement module109measures the body temperature continuously and keeps the measurements in the history for later review.
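A minimal sketch of the duration-gated warning in step (3); the temperature limit, hold time, and sampling period are invented placeholders, and the grace-period behavior anticipated here is elaborated in the next passage.

```python
from collections import deque

def fever_alarm(history, limit_c=37.5, hold_seconds=600, sample_period=10):
    """Warn only after a sustained high temperature (sketch).

    history: recent forehead temperatures sampled every sample_period
    seconds, newest last. The alarm fires only if every sample in the last
    hold_seconds exceeds limit_c, so a brief spike passes without warning.
    """
    needed = hold_seconds // sample_period
    recent = list(history)[-needed:]
    return len(recent) == needed and all(t > limit_c for t in recent)

# history = deque(maxlen=360); append one measured temperature per cycle
```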
At the same time, the temperature variation during a day, and the records at similar time periods over several days, are analyzed to see if the baby has a fever. In general, the baby's body temperature is in the range from 97.9° F. (36.6° C.) to 99° F. (37.2° C.). For some reasons the baby's temperature may rise above the aforementioned range while the baby does not have a fever, for example, if the baby is wrapped up tightly in a blanket, or the baby is in a very warm room, or the baby is very active, or the baby is cuddling a hot water bottle, or the baby is wearing a lot of clothes, or the baby has just had a bath, etc. In some cases, the body temperature will return to the normal range after some time. In some other cases, the body temperature will remain above the normal range. Therefore, the present invention has a grace period for high body temperature. In other words, if the temperature measurement module109detects that the body temperature in the history exceeds the predetermined level for only a short period of time, it does not control the warning module102to output a warning signal; but if the temperature measurement module109detects that the body temperature in the history exceeds the predetermined level for longer than the predetermined period of time, it controls the warning module102to output a warning signal to alert the parent that either the baby really has a fever or the baby is wearing too many clothes in a very warm room. In both cases, the parent should take some precaution in order to comfort the baby. The urination and defecation detection module110is configured to:
(1) obtain the positions of the legs of the baby in the fields of view of the CMOS video camera and the thermographic video camera from the motion detection module;
(2) estimate the position of the diaper based on the positions of the legs of the baby obtained in (1);
(3) detect and build a history of the temperature at the position of the diaper estimated in (2);
(4) control the warning module to output a warning signal if the temperature in the history built in (3) indicates a rapid increase followed by a gradual decrease.
In operation, if the baby urinates or defecates, the diaper temperature goes up suddenly and then gradually decreases. This suggests that the baby has urinated or defecated, and the urination and defecation detection module110can give an alert to the parents for a diaper change. The diaper has different temperatures in the dry and wet states. In the dry state, the diaper is a good thermal insulator and the body temperature is not visible in the diaper area. However, when it is wet due to the baby's urination or defecation, the diaper temperature increases, since the temperature of urine or wet defecation is close to the body temperature. Thus, the temperature of the diaper increases from room temperature towards body temperature when it is wet. The location of the diaper in the dry state is a relative location compared to the locations of the head and legs. Depending on the baby's posture, the location of the diaper can be identified by triangulation among the head, arms and legs. So, the wet status of the diaper should be determined by a comparison of the diaper temperature in a similar body posture.
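The rapid-rise-then-gradual-decay signature in step (4) could be tested as follows; the jump size, window length, and decay check are invented for illustration and would need tuning against real diaper-region data.

```python
def diaper_wet(temps, rise_c=5.0, rise_frames=3):
    """Detect a rapid temperature jump followed by a gradual decline (sketch).

    temps: diaper-region temperature per frame, oldest first.
    """
    for i in range(len(temps) - rise_frames):
        jump = temps[i + rise_frames] - temps[i]
        if jump >= rise_c:                            # rapid increase
            tail = temps[i + rise_frames:]
            if len(tail) >= 2 and tail[-1] < tail[0]: # gradual decrease after it
                return True
    return False
```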
The air conditioning module111is configured to:
(1) measure the ambient temperature near the baby in the fields of view of the CMOS video camera and the thermographic video camera;
(2) control the IR transmitter to send a control signal to an air conditioner to adjust the control temperature of the air conditioner if the ambient temperature measured in (1) exceeds or falls below a predetermined range.
In real life, some air conditioners are inaccurate in terms of temperature control. As a result, the room temperature, or specifically the temperature at the baby's crib, is not a comfortable setting for the baby. In the present invention, when the air conditioning module111detects that the difference between the body temperature and the room temperature is too high, it can control the room temperature by controlling the air conditioner via an IR transmitter9coupled to the microprocessor1. If the room temperature is higher than a comfortable temperature, the air conditioning module111decreases the target temperature of the air conditioner. If the room temperature is lower than a comfortable temperature, the air conditioning module111increases the target temperature of the air conditioner. In most cases, the air conditioner controls the condenser operating time by detecting the room temperature with reference to the outdoor environment. When the outdoor environment is very hot, the resultant room temperature is high too. When the outdoor environment is very cold, the resultant room temperature is low too. In contrast, the air conditioner does not sense the human body temperature in its operation. Therefore, we may feel discomfort. As the present invention can detect the body temperature and the room temperature with an accuracy of up to ±0.2° Celsius, it is possible to provide a comfortable environment for the baby no matter how the air conditioner works. The methods and processes by which the IR transmitter9sends a control signal to an air conditioner are well known in the prior art and are not detailed herein. The baby growth rate module112is configured to:
(1) obtain the positions of the head and limbs of the baby in the field of view of the CMOS video camera from the motion detection module;
(2) determine a baby length by measuring the distance between the top of the head and the toes if the posture determined by the motion detection module is a straight posture;
(3) calculate the actual baby length based on the baby length determined in (2) and a scale factor which is based on either the dimension of an object near the baby, manual measurement of the actual baby length on a regular basis, or the proximity distance between the baby and the multi-purpose video monitoring camera; specifically:
(3.1) perform object recognition on the input obtained from the CMOS camera to identify an object positioned near the baby;
(3.2) measure the dimension of the object identified in (3.1);
(3.3) recall the actual dimension of the object identified in (3.2) from a database which stores actual dimensions of objects;
(3.4) calculate the actual baby length based on the baby length determined in (2), the dimension measured in (3.2) and the actual dimension recalled in (3.3);
(4) build a history of the actual baby length calculated in (3).
In operation, during motion detection, there are some situations where the head and four limbs are clearly visible to the CMOS video camera2. The baby growth rate module112can measure the length of the baby from the top of the head to the toes and build a history of the baby's growth rate. In most cases, the parent puts some toys or dolls inside the crib for the baby's entertainment during alone time. These objects are fixed in dimension after they appear in the crib. In the worst case, the crib itself is the last object that can be used as a reference. When the baby growth rate module112uses object recognition to find any new object in the crib, a snapshot of the object is sent to the parent to request that the parent enter the dimension of the object.
For the crib, the dimension of the mattress can be used for this purpose. The provided dimension is used as a reference to estimate the baby length. When the baby is clearly visible to the CMOS video camera2, the measurement is performed on the baby length and the dimension of the closest object. The ratio between the actual dimension of the object and the measured dimension of the object is used to scale the measured baby length to the actual baby length. There is another way to obtain the actual baby length by using the video data. A proximity sensor10which is coupled to the microprocessor1is used to measure the distance between the baby and the CMOS video camera2. Then, a scale factor is created to derive the actual length from the measured length and the proximity distance. For example, at a proximity distance of 1 meter, the actual length is equal to 10 times the measured length. The closer the proximity distance is, the smaller the scaling factor is. The last method is to request that the parent measure the actual baby length on a regular basis for calibration. In the motion detection process, the common positions and postures with clear visibility of the head and four limbs can be determined. A snapshot of such a situation is sent to the parent for a manual measurement of the baby length. The input from the parent and the baby length measured through the CMOS video camera2will act as a reference in the near future for automatically calculating the actual baby length. The reason for regular manual measurement is to minimize the error in scaling the measured length to the actual length. Then, the growth rate history will be more realistic.
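For illustration, the reference-object scaling reduces to one line of arithmetic; the names and numbers are hypothetical.

```python
def actual_baby_length(measured_baby_px, measured_ref_px, actual_ref_cm):
    """Scale a pixel measurement to real units using a reference object.

    measured_baby_px: head-to-toe length in pixels; measured_ref_px and
    actual_ref_cm: the same known object (toy, mattress) in pixels and in
    centimetres.
    """
    scale = actual_ref_cm / measured_ref_px   # centimetres per pixel
    return measured_baby_px * scale

# e.g. a 120 cm mattress spanning 600 px gives a 0.2 cm/px scale:
print(actual_baby_length(300, 600, 120.0))    # -> 60.0 cm
```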
When the baby is not present in the monitoring area, the baby growth rate module112is activated to calculate the actual baby length and build a history of the actual baby length. Before and after check-in and check-out, any adult faces captured by the baby presence detection module101could be used to determine when, and by whom, the baby is taken into or away from the monitoring area. Face recognition applied to adult faces could generate a database of common persons, such as the baby's mother or a helper, who appear in the monitoring area. It is assumed that the person who brings in the baby is a trusted person, so this person is able to take the baby away from the monitoring area without any special attention. When a stranger is found to take away the baby, the present invention can alert the parent to this situation through the warning module102.

The embodiment described above is a preferred embodiment of the present invention. It is understood that the present invention should not be limited to the embodiment as described. Any changes, modifications, replacements, combinations and simplifications that do not deviate from the essence and principle of the present invention should be considered alternative configurations that are equally effective and should also fall within the scope of protection of the present invention. | 38,498 |
11857312 | DETAILED DESCRIPTION Described embodiments generally relate to methods, devices and systems for hearing assessment. In particular, described embodiments are directed to methods, devices and systems for hearing assessment using measures of a patient's brain activity and/or cardiac activity.

FIG.1shows a system100for hearing assessment using fNIRS. fNIRS is a brain imaging technique that uses light in the near-infrared spectrum to evaluate neural activity in the brain via changes in blood oxygenation. This is possible due to a range of wavelengths of near-infrared light over which skin, tissue, and bone are mostly transparent but in which blood is a stronger absorber of the light. Differences in the light absorption levels of oxygenated and deoxygenated blood allow the measurement of relative changes in blood oxygenation in response to brain activity. fNIRS raw data captures changes in blood oxygenation, from which neural activity can be extracted using a series of processing steps. As well as neural activity, cardiac information signals can be extracted from the fNIRS raw data, since fNIRS raw data is sensitive to cardiac information. Cardiac information in this context may include respiratory information, and may include information such as heart beat pulses, breathing and blood pressure changes. These cardiac information signals are often separated and rejected in fNIRS analyses, in order to prevent these additional signals from interfering with the measurement of relative changes in blood oxygenation in response to brain activity. According to some embodiments, system100may filter fNIRS data to remove cardiac information signals. According to some alternative embodiments, system100may use the cardiac information signals as additional or alternative sources of data for the hearing assessment.

System100is made up of a hearing assessment device110, a sound generator140, a stimulation member145, and an external processing device195. According to some embodiments, system100also comprises headgear160. According to some embodiments, system100also comprises a cardiac monitor165. According to some embodiments, system100may comprise only one of headgear160and cardiac monitor165. According to some embodiments, system100may comprise both headgear160and cardiac monitor165. Hearing assessment device110has a processor120, which communicates with a sound output module130, memory150, a light output module170, a data input module180and a communications module190. In the illustrated embodiment, sound generator140is a separate unit from assessment device110. However, in some embodiments, sound generator140may be part of hearing assessment device110.

Stimulation member145may be a speaker, earphone, hearing aid, hearing instrument, implantable auditory prosthesis comprising implantable electrodes, cochlear implant, brain stem implant, auditory midbrain implant, or other component used to provide aural stimulation to a patient. According to some embodiments, two stimulation members145may be used, to provide binaural stimulation. According to some embodiments, stimulation member145may be an audiometric insert earphone, such as the ER-3A insert earphone by E-A-RTONE™ 165 GOLD, US. In some embodiments, stimulation member145may interface with another component, such as a hearing aid or cochlear implant, in order to provide aural stimulation to the patient. Sound generator140causes the stimulation member145to produce a range of aural stimulation signals to assess the patient's hearing.
When the patient has a cochlear implant, stimulation member145may be a computer and pod that interfaces directly with a coil of the cochlear implant, to cause the implant to produce electrical pulses that evoke sound sensations. In this case, sound generator140generates and transmits instructions for the patterns of electrical pulses to stimulation member145.

Headgear160includes a number of optodes162/164, having at least one source optode162and at least one detector optode164. Source optodes162are configured to receive signals via transmission channels168, and detector optodes164are configured to provide output signals via measurement channels166. Headgear160may be a cap, headband, or other head piece suitable for holding optodes162/164in position on a patient's head. Optodes162/164may be arranged on headgear160to be positioned in the region of the auditory cortex of the patient when headgear160is worn correctly. In some cases, headgear160may have between 1 and 32 source optodes162and between 1 and 32 detector optodes164. Source optodes162and their paired detector optodes164may be spaced at between 0.5 and 5 cm from one another on headgear160. In some embodiments, headgear160may be an Easycap 32 channel standard EEG recording cap, and optodes162/164may be attached using rivets or grommets. According to some embodiments, headgear160may be an NIRScout system NIRScap by NIRX Medical technologies LLC, Germany. In some embodiments, headgear160may have 16 source optodes162and 16 detector optodes164, making up to 256 channels or source-detector pairs.

According to some embodiments, headgear160may be arranged so that source optodes162and detector optodes164are positioned in proximity to source positions710and detector positions720, respectively, of a brain700, as illustrated inFIG.7, when headgear160is worn correctly on a patient's head. InFIG.7, source positions710are illustrated in white, and detector positions720are illustrated in black. Headgear160may comprise sixteen source optodes162and sixteen detector optodes164. According to some embodiments, optodes162/164may be arranged to be positioned over at least one of the posterior temporal lobe and the anterior temporal lobe/pre-frontal lobe of the patient's brain. Optodes162/164may be arranged to be positioned over the left hemisphere730, the right hemisphere740, or both hemispheres730/740. According to some embodiments, source/detector pairs of source optodes162and detector optodes164may be located around 0.5 to 5 cm apart.

Optodes162/164as arranged inFIG.7allow for a number of different channels810of data to be obtained. According to some embodiments, twelve channels810of data are obtained from each hemisphere730/740, being a total of 24 channels810, as shown inFIG.8. Each channel810comprises a source optode162and a detector optode164, although each source optode162and detector optode164may belong to more than one channel810, as described below with reference toFIG.3. According to some embodiments, some of the channels810may be overlapping channels. Overlapping channels may allow noise in the data signals to be reduced by averaging the data from two overlapping channels. Furthermore, overlapping channels may be used as a backup for one another in case one of the channels stops working or produces unacceptable data. According to some embodiments, at least some optode source/detector pairs162/164may be arranged to operate as short channels, while some optode source/detector pairs162/164may be arranged to operate as long channels.
Short channels may comprise pairs of optodes162/164located around 5 mm to 15 mm apart, and may be used to collect data from the scalp region only, which may include at least one signal that is not related to brain activity, such as cardiac signals, noise and other signals. According to some embodiments, short channels may comprise pairs of optodes162/164located around 11 mm apart. The short channels may be configured so as not to sense any brain activity. Long channels may be configured to be around 2 cm to 5 cm apart, and may be configured to sense brain activity as well as scalp activity. According to some embodiments, long channels may comprise pairs of optodes162/164located around 3 cm apart. Data received from the short channels may be removed from the data received by the long channels in order to separate the data related to brain activity from other signals, including cardiac data and noise. According to some embodiments, where only cardiac information is being used for a hearing assessment, all optodes162/164may be arranged to operate as short channels.

Channels810may be grouped into one or more regions of interest (ROIs). For example, as illustrated inFIG.8, channels810may be divided into regions811,812,813,814,815,816,817and818. Regions811,812,813and814may be located in the left hemisphere730, while regions815,816,817and818may be located in the right hemisphere740. Regions811and815may comprise channels810located in the middle orbital gyrus, middle frontal gyrus and inferior frontal gyrus pars triangularis. Regions812and816may comprise channels810located in the inferior frontal gyrus pars orbitalis, inferior frontal gyrus pars opercularis, and superior temporal gyrus. Regions813and817may comprise channels810located in the precentral gyrus, Heschl's gyrus and middle temporal gyrus. Regions814and818may comprise channels810located in the postcentral gyrus, supramarginal gyrus and superior temporal gyrus.

According to some embodiments, headgear160may comprise a subset of optodes162/164as illustrated inFIG.7. For example, in some embodiments headgear160may comprise optodes162/164located only in regions811,814,815and818. As regions812and813are physically located between regions811and814, and regions816and817are physically located between regions815and818, the response in the ‘middle’ regions812,813,816and817is often a combination of the responses from the ‘end’ regions811,814,815and818. As a result, measuring the response of ‘middle’ regions812,813,816and817may not produce any significant additional information that is not available from ‘end’ regions811,814,815and818.

Referring again to system100ofFIG.1, cardiac monitor165may comprise one or more devices configured to measure cardiac information of a patient. The cardiac information may include heartbeat, respiration rhythm, systemic blood pressure and Mayer waves. Cardiac monitor165may comprise one or more of a heart rate monitor, a respiratory monitor, a blood pressure monitor and a Mayer wave monitor. Although only one external processing device195is shown, assessment device110may be in communication with more than one external processing device195, which may in some embodiments be desktop or laptop computers, mobile or handheld computing devices, servers, distributed server networks, or other processing devices. According to some embodiments, external processing device195may be running a data processing application such as Matlab 2016b (Mathworks, USA), for example.
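The short-channel correction described above amounts to estimating the scalp contribution present in a long channel and subtracting it. The following Python sketch is one possible realisation and not the described embodiment itself; it assumes numpy and one short channel paired with each long channel, and a direct subtraction corresponds to a fitted coefficient of 1.

```python
import numpy as np

def subtract_short_channel(long_ch, short_ch):
    """Remove the scalp-only (short-channel) component from a long channel.

    long_ch, short_ch: 1-D arrays of equal length containing one channel
    of fNIRS samples each. The short-channel signal is scaled by its
    least-squares fit to the long channel before subtraction, so only the
    shared (scalp) component is removed.
    """
    x = short_ch - short_ch.mean()
    y = long_ch - long_ch.mean()
    beta = np.dot(x, y) / np.dot(x, x)   # least-squares scaling factor
    return y - beta * x
```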
Filtering of received data signals may be done by external processing device195running Homer2functions in some embodiments. Processor120may include one or more data processors for executing instructions, and may include one or more of a microprocessor, a microcontroller-based platform, a suitable integrated circuit, and one or more application-specific integrated circuits (ASICs).

Sound output module130is arranged to receive instructions from processor120and send signals to sound generator140, causing sound generator140to provide signals to stimulation member145. Where stimulation member145comprises a speaker or earphone, the signals may include an acoustic signal delivered via the earphone or speaker in the sound field. Where stimulation member145comprises a hearing instrument, the signals may comprise a digital sound file delivered via direct audio input to the hearing instrument. Where stimulation member145comprises an implantable auditory prosthesis, the signals may comprise instructions for an electrical signal to be delivered by implanted electrodes in the implantable auditory prosthesis.

Memory150may include one or more memory storage locations, either internal or external to assessment device110, and may be in the form of ROM, RAM, flash or other memory types. Memory150is arranged to be accessible to processor120, and to contain program code that is executable by processor120, in the form of executable code modules. These may include sound generation module152, pre-processing module154, and automatic processing module156.

Light output module170is configured to receive instructions from processor120and send signals to source optodes162via transmission channels168, causing source optodes162to generate near infra-red light. Data input module180is configured to receive data signals from detector optodes164via measurement channels166, the data signals being generated based on the near infra-red light detected by detector optodes164. Communications module190may allow for wired or wireless communication between assessment device110and external processing device195, and may utilise Wi-Fi, USB, Bluetooth, or other communications protocols.

User input module112may be configured to accept input from a number of user input sources, such as a touchscreen, keyboard, buttons, switches, electronic mice, and other user input controls. User input module112is arranged to send signals corresponding to the user input to processor120. Display114may include one or more screens, which may be LCD or LED screen displays in some embodiments, and be caused to display data on the screens based on instructions received from processor120. In some embodiments, assessment device110may further include lights, speakers, or other output devices configured to communicate information to a user.

System100may be used to determine the range of sound stimulus levels that elicit sound percepts in patients, between their threshold of hearing and uncomfortably loud sounds. Processor120may be configured to execute instructions read from sound generation module152of memory150, causing processor120to send instructions to sound output module130. Sound output module130may consequently communicate with sound generator140, to cause sound generator140to generate a sound signal based on the instructions received. Sound generator140may output the sound signal to stimulation member145to cause stimulation member145to produce one or more sounds.
According to some embodiments, sound generator140may be configured to generate alternating periods of sounds and silence. Periods of sound may be 1 to 30 seconds in duration, and the periods of silence may be between 4 and 40 seconds in duration according to some embodiments. Sound generator140may be configured to generate sounds with varying levels of intensity or loudness. For example, the sounds may be adjustable within the dynamic range of the person being tested. For a person with normal hearing, the sounds may be adjustable between approximately 10 and 120 dB sound pressure level (SPL), for example. The characteristics of the sound (for example, bandwidth, frequency, amplitude or frequency modulation) may be adjustable depending on the person being tested and the purpose of the testing. In some embodiments the alternating time periods may have sounds of different intensity or different type, instead of being periods of sounds and silence.

An example series of sounds generated by sound generator140is illustrated inFIG.9, which shows the progression of example test period900. According to some embodiments, a test session may include 1 or more test periods, such as 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 test periods, for example. Each test period may last for several minutes, such as between 5 and 10 minutes, for example. According to some embodiments, the test period may last for 7 minutes. According to some embodiments, 5 test periods of 7 minutes each may be carried out, so that the total test session may be around 35 minutes long.

Test period900includes eight blocks of rest910. In the illustrated embodiment, each period of rest lasts for 25, 30 or 35 seconds, with the length of time applied at random. According to some embodiments, rest periods may last for between 5 and 180 seconds. In some embodiments, rest periods may last between 20 and 40 seconds. According to some embodiments, rest periods may last between 10 and 60 seconds. According to some embodiments, the rest periods may be any other suitably selected length of time. Test period900further includes 8 stimulation periods920, corresponding to times when stimulation would be delivered to a patient. According to some embodiments, each stimulation period920may last for between 1 and 30 seconds. For example, the stimulation period may last for 18 seconds in some embodiments. According to some embodiments, the length of each stimulation period920within a test period900may be equal. In the illustrated embodiment, a sound of 15 dB, 40 dB, 65 dB or 90 dB was played in each stimulation period920, with the stimulation levels being applied at random. According to some embodiments, each stimulation level may be repeated a set number of times within a test period900. For example, in the illustrated embodiment, each stimulation level is repeated twice within test period900.

Stimulation member145may be positioned on or near a patient, in order to aurally stimulate the patient. Where headgear160is being used, headgear160may be positioned on the patient so that optodes162/164are positioned in proximity to the temporal lobe of the patient. Where cardiac monitor165is being used, cardiac monitor165may be positioned to measure cardiac information of the patient. When the patient hears a sound due to the stimulation provided by stimulation member145, the neural activity in the patient's brain in the measured area, which may be at or around the auditory cortex, changes.
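The randomised structure of test period900can be illustrated with a short sketch. The Python code below is an assumed implementation, not taken from the disclosure: eight rest blocks of 25, 30 or 35 seconds interleaved with eight 18-second stimulation blocks, with each of the four levels presented twice in random order.

```python
import random

def build_test_period(levels_db=(15, 40, 65, 90), repeats=2,
                      rest_choices_s=(25, 30, 35), stim_s=18, seed=None):
    """Generate one randomised test period as a list of blocks.

    Returns tuples of (block_type, duration_s, level_db). Rest blocks
    carry a level of None. With the defaults this yields eight rest and
    eight stimulation blocks, matching the structure of FIG. 9.
    """
    rng = random.Random(seed)
    levels = [lvl for lvl in levels_db for _ in range(repeats)]
    rng.shuffle(levels)
    schedule = []
    for lvl in levels:
        schedule.append(("rest", rng.choice(rest_choices_s), None))
        schedule.append(("stim", stim_s, lvl))
    return schedule
```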
According to some embodiments, the patient's heart rate, heart rate variability, blood pressure and/or breathing rate may also increase or decrease when the patient hears a sound. Optodes162/164are used to measure the changes in blood oxygenation in the auditory cortex region, which may be a result of changes in neural activity, and/or changes in heart rate, heart rate variability, blood pressure and/or breathing.

Processor120sends instructions to light output module170, which controls the light emitted by source optodes162by sending signals along transmission channels168. This light passes through the measured region of the patient's brain, and some of the light is reflected back to detector optodes164. Data collected by detector optodes164is carried by measurement channels166to data input module180, which communicates with processor120. Cardiac monitor165may also be used to measure changes in heart rate, heart rate variability, blood pressure and/or breathing, and data signals collected by cardiac monitor165may also be carried by measurement channels to data input module180, which communicates with processor120.

In some cases, the data may be stored in memory150for future processing by assessment device110or external computing device195. In some embodiments, the data may be processed by assessment device110in real time. Processor120may execute pre-processing module154to pre-process the data as it is captured. Pre-processing module154may process the data by removing noise and unwanted signal elements. According to some embodiments, these may include signal elements such as those caused by breathing of the patient, the heartbeat of the patient, a Mayer wave, a motion artefact, brain activity of the patient, and the data collection apparatus, such as measurement noise generated by the hardware. In some embodiments, the signal elements caused by breathing or heartbeats may be kept for further analysis, as described below.

In some embodiments, pre-processing module154may pass the captured data through a low-pass filter to remove noise signals, such as a 0.1 Hz, 0.2 Hz, 0.3 Hz, 0.4 Hz, or 0.5 Hz low-pass filter, for example. In some embodiments, pre-processing module154may pass the captured data through a high-pass filter or a band-pass filter to remove noise signals. In some embodiments, the filter may be a high-pass filter, such as a 0.01 Hz high-pass filter, for example. Pre-processing module154may additionally or alternatively use a transform to process the captured data, using a technique such as principal component analysis (PCA), for example. Pre-processing module154may transform the captured data to another domain, and then remove unwanted components of the data to retain only the desired data components. In some embodiments, pre-processing module154may model the response signal using an autoregressive integrative model fit of the data, as described in Barker et al. 2013 (Barker, Jeffrey W., Ardalan Aarabi, and Theodore J. Huppert. ‘Autoregressive Model Based Algorithm for Correcting Motion and Serially Correlated Errors in FNIRS’. Biomedical Optics Express 4, no. 8 (1 Aug. 2013): 1366. https://doi.org/10.1364/BOE.4.001366), or a real-time implementation of an adaptive general linear model, as described in Abdelnour et al. 2009 (Abdelnour, A. Farras, and Theodore Huppert. ‘Real-Time Imaging of Human Brain Function by near-Infrared Spectroscopy Using an Adaptive General Linear Model’. NeuroImage 46, no. 1 (15 May 2009): 133-43.
https://doi.org/10.1016/j.neuroimage.2009.01.033). A method of pre-processing data that may be performed by pre-processing module154is described below with reference toFIG.14.

As described in further detail below with reference toFIG.6, processor120may subsequently execute automatic processing module156, which may determine whether the aural stimulation provided by stimulation member145correlates to a change in activity in the auditory region as measured by the source-detector pair of optodes162and164. This information may be determined by measuring the changes in attenuation of the light received by detector optode164compared to the light emitted by source optode162. As described in further detail below with reference toFIG.15, automatic processing module156may also determine whether the aural stimulation provided by stimulation member145was associated with a change in heart rate, heart rate variability, blood pressure, or breathing as measured by the source-detector pair of optodes162and164. In some embodiments, automatic processing module156may also determine whether the aural stimulation provided by stimulation member145was associated with a change in heart rate, heart rate variability, blood pressure, or breathing as measured by the cardiac monitor165.

Sounds generated by sound generator140may include sounds within the human hearing range. These may include pure tones in the range of 125 Hz to 16 kHz, for example. In some embodiments, frequency or amplitude modulated tones may be used. According to some embodiments, the sounds may include varying intensities of broadband modulated sound, with the intensities ranging from near-threshold to comfortably loud levels. According to some embodiments, four sound intensities may be used. For example, sounds may be played at 15 dB, 40 dB, 65 dB and 90 dB, according to some embodiments. Where the patient is an infant, band-passed infant-directed sounds may be used, such as infant-directed speech sounds. According to some embodiments, the sounds may include ICRA noise, as developed for the International Collegium of Rehabilitative Audiology. ICRA noise is a speech-like signal with long term average speech spectra and modulation characteristics like natural speech. Each sound may have a linear ramp of 10 ms applied at the start and end.

Source optodes162may generate near-infrared (NIR) light, being light having a wavelength of between 650 and 1000 nm. In some embodiments, light may be generated at two or more different wavelengths, with one wavelength being absorbed more by the oxygenated haemoglobin (HbO) in the blood than by deoxygenated haemoglobin (HbR), and one wavelength being absorbed more by HbR than by HbO. In such embodiments, one wavelength may be chosen to be below 810 nm, and the other may be chosen to be above 810 nm. For example, according to some embodiments, one wavelength may be around 760 nm, and the other wavelength may be around 850 nm. In this document, these wavelengths will be referred to as the first wavelength and the second wavelength, respectively.

FIG.2shows a diagram200of source optode162and detector optode164being used to perform fNIRS on a patient. fNIRS is an optical imaging technology that measures the light attenuation of the brain in the NIR spectrum, using light with a wavelength of around 650 to 1000 nm. Optodes162and164are placed on a scalp208of a patient, in the region of the temporal lobe tissue206.
Source optode162receives a signal202and emits NIR light into region212. The NIR light passes through tissue206and is partially absorbed and partially reflected by blood vessels210. The NIR light is mainly absorbed by the oxygenated haemoglobin (HbO) and the deoxygenated haemoglobin (HbR) in blood flow. The reflected light is captured by detector optode164, and the data204is output, to be received by data input module180. By measuring the change of light attenuation from a baseline state at two or more wavelengths, the changes of HbO and HbR concentrations (ΔHbO and ΔHbR) can be quantified. Haemodynamic changes in the brain have been demonstrated to be tightly coupled with changes in neuronal activations, as described in Logothetis N. K., Wandell B. A. (2004). “Interpreting the BOLD signal”, Annu. Rev. Physiol. 66, 735-769 DOI:10.1146/annurev.physiol.66.082602.092845 [PubMed]. The raw signals and the HbO and HbR signals also contain information about changes in heart rate, heart rate variability, blood pressure and breathing rate. A method1500of determining cardiac information is described below with reference toFIG.15.

FIGS.3ato3cshow various arrangements310,320and330of optodes162/164showing how multiple data channels can be derived from each optode162/164. Although particular arrangements are illustrated, it is envisaged that any arrangement of at least one source optode162and at least one detector optode164may be used.

FIG.3ashows an arrangement310having one source optode162, with two detector optodes164. Light emitted by source optode162is captured by both detector optodes164, giving data about two regions of the auditory cortex.

FIG.3bshows an arrangement320having two source optodes162, with one detector optode164. Light emitted by source optodes162is captured by detector optode164. By having each source optode162emit light sequentially or modulated at a different frequency, detector optode164can determine which source optode162the received data came from, and this arrangement therefore also allows for data about two regions of the auditory cortex to be captured.

FIG.3cshows an arrangement330having two source optodes162, with two detector optodes164. Light emitted by each source optode162is captured by each detector optode164. By having each source optode162emit light sequentially or modulated at a different frequency, each detector optode164can determine which source optode162the received data came from, and this arrangement therefore allows for data about four regions of the auditory cortex to be captured.

FIG.4shows a diagram400showing headgear160in position on a patient401, where headgear160is an elastic cap with optodes positioned according to standard 10-5 system locations as described in Oostenveld R & Praamstra P (2001), “The five percent electrode system for high-resolution EEG and ERP measurements”, Clinical neurophysiology: official journal of the International Federation of Clinical Neurophysiology 112(4):713-719. 18 regions of interest402are labelled, showing where 32 optodes162/164may be placed. Areas403show other areas of the 10-5 system cap where optodes162/164may be placed, but which are empty in the illustrated arrangement.FIGS.7and8, as described above, show an alternative arrangement of optodes162/164on a brain700of a patient401. According to some embodiments, at least some of optodes162/164may be located over the temporal lobe region of the patient.
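The two-wavelength quantification described above is conventionally performed with the modified Beer-Lambert law (also used at step1410below): the optical-density change at each wavelength is modelled as a weighted sum of ΔHbO and ΔHbR scaled by the effective optical pathlength, and the resulting 2×2 system is inverted. The Python sketch below assumes numpy; the extinction coefficients shown are placeholders for illustration only and would in practice be taken from published tables.

```python
import numpy as np

def mbll(dod_w1, dod_w2, ext, sd_distance_cm=3.0, dpf=6.0):
    """Invert the modified Beer-Lambert law for one source-detector pair.

    dod_w1, dod_w2: optical-density changes at the two wavelengths.
    ext: 2x2 matrix of extinction coefficients, rows ordered by
    wavelength and columns as (HbO, HbR). dpf is the differential
    pathlength factor. Returns (dHbO, dHbR).
    """
    path = sd_distance_cm * dpf              # effective optical pathlength
    E = np.asarray(ext, dtype=float)
    dod = np.array([dod_w1, dod_w2], dtype=float)
    return tuple(np.linalg.solve(E * path, dod))

# Placeholder coefficients purely for illustration (not from this document):
ext_example = [[1.5, 3.8],    # first wavelength (e.g. 760 nm): (HbO, HbR)
               [2.5, 1.8]]    # second wavelength (e.g. 850 nm): (HbO, HbR)
```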
FIG.5shows an example graph500of response signals510and540received from a detector optode164situated in a left hand auditory cortex area of a patient401when auditory stimulation was administered to patient401. Axis520shows the time scale in seconds, while axis530shows the amplitude of the response as a change in concentration of HbO or HbR. Response signal510shows the change in the concentration of HbO over time, and has a number of key features, including a peak magnitude501, a width502, and a time-to-peak (or lag time of the peak magnitude)503measured from the time the auditory stimulation was started. According to some embodiments, key features may also include values associated with modelling the response signal using an autoregressive integrative (ARI) model fit of the data, changes in inter-peak intervals of the signal, measures of reliability of parameters of the signal, measures of the magnitude of the auditory brainstem or cortical response potentials, or beta-values obtained from modelling the response signal using a general linear model. These key features may be derived by a method such as that described in Picton T W “Human auditory evoked potentials” Plural Publishing inc, San Diego, 2011; Rance et al. “Hearing threshold estimation in infants using auditory steady-state responses” J Am Ac Audiol, 2005, 16:291-300; or Visram et al “Cortical auditory evoked potentials as an objective measure of behavioral thresholds in cochlear implant users,” Hear Res, 2015, 327: 35-42. Response signal540shows the change in the concentration of HbR over time. In the illustrated example, the auditory stimulation was provided at time=0.FIGS.10Ato12, described in further detail below, show alternative graphs of response signals obtained via system100. FIG.6shows a flow diagram600illustrating a method of assessing the hearing of a patient using system100. At step602, processor120may execute sound generation module152, causing processor120to instruct sound output module130to generate an auditory stimulation signal having parameters as determined by sound generation module152. This step may be initiated based on an input signal received by user input module112and communicated to processor120. Parameters of the stimulation to be supplied, such as the frequency and duration of the stimulation, are stored in memory150. According to some embodiments, the parameters of the stimulation may be pseudo-randomly determined as described above with reference toFIG.1. In some alternative embodiments, sound generator140may determine the stimulation parameters, rather than receiving them from assessment device110. In these embodiments, sound generator140may communicate the parameters of stimulation to device110. At step604, sound generator140may receive instructions from sound output module130, and cause stimulation member145to deliver an auditory stimulation signal. This may be in the form of a sound, where stimulation member145is a speaker or headphone, or it may be in the form of an electrical signal, where stimulation member145is an interface to a cochlear implant, for example. At step605, processor120instructs light output module170to cause source optodes162to emit NIR light. In some embodiments, light output module170may cause source optodes162to emit NIR light continuously, independent of stimulation provided by stimulation member145. At step606, detector optodes164record the intensity of any NIR light received, and transmit the data to data input module180via measurement channels166. 
In some embodiments, method600may include step607. Step607may involve cardiac monitor165generating data signals related to cardiac information of the patient, and transmitting the data to data input module180. At step608, data received by data input module180is stored by processor120in memory150. In some embodiments, at this point, the data may also or alternatively be transmitted to external processing device195for storage and/or processing.

In the illustrated embodiment, at step610processor120executes pre-processing module154. According to some embodiments, pre-processing module154may perform a pre-processing method such as method1400, described below with reference toFIG.14. In some embodiments, pre-processing module154checks that data is being received, and instructs processor120to cause a warning or alert to be displayed on display114if no data signals or data signals of poor quality are detected. This may indicate that optodes162/164or cardiac monitor165are faulty, for example, or are incorrectly positioned. Pre-processing module154may also exclude data received from channels810that lack a predetermined degree of correlation between the signals in the first wavelength and the second wavelength. For example, according to some embodiments, signals with a correlation of less than 0.75 may be discarded. In other embodiments, channels810may be discarded based on the detected light intensity being too high or too low (which may indicate poor optode/skin contact).

Pre-processing module154may also process the incoming data signals received from data input module180to extract just the data relating to rates of change of HbO and HbR that are due to changes in brain activity, and to remove unwanted aspects of the signal, such as drift, broadband noise, motion artefacts, and signals due to heartbeat and respiration. The unwanted aspects of the incoming data signals may be removed by wavelet analysis and/or by applying one or more bandpass filters. According to some embodiments, a bandpass filter with cut-off frequencies of 0.01 and 0.5 Hz may be applied. According to some embodiments, the rates of change of HbO and HbR may be estimated by applying the modified Beer-Lambert law. According to some embodiments, unwanted aspects of the signal may also be removed by subtracting short channel data from long channel data, as described below with reference to step1412ofFIG.14. In some embodiments, pre-processing module154may model the response signal using an autoregressive integrative model fit of the data as described in Barker et al. 2013, or a real-time implementation of an adaptive general linear model as described in Abdelnour et al. 2009, as described below with reference to step1426ofFIG.14.

In some embodiments, method600may include step612. At step612, cardiac signals may also be extracted from the incoming data signals received from data input module180. The cardiac signals may include signals relating to changes in heart rate, heart rate variability, blood pressure and/or breathing of the patient. The cardiac signals may be extracted from fNIRS data generated by channels810, or from data generated by cardiac monitor165. According to some embodiments, a cardiac information processing method such as method1500may be performed. Method1500is described in further detail below with reference toFIG.15. According to some embodiments, step610may be performed at the same time as or in parallel with step612. According to some embodiments, step612may be performed before step610.
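One common realisation of the band-pass filtering described above is a zero-phase Butterworth filter. The sketch below, assuming scipy and a regularly sampled channel, is an illustration rather than the actual implementation of pre-processing module154.

```python
from scipy.signal import butter, filtfilt

def bandpass(signal, fs_hz, low_hz=0.01, high_hz=0.5, order=3):
    """Zero-phase Butterworth band-pass filter for one fNIRS channel.

    signal: 1-D array of samples; fs_hz: sampling rate in Hz. The default
    cut-offs match the 0.01-0.5 Hz band mentioned in the text, which
    passes the slow haemodynamic response while rejecting drift and
    cardiac components.
    """
    nyq = fs_hz / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal)
```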
After pre-processing, processor120may execute automatic processing module156, at step614, to process the response signals relating to the change of HbO and HbR and/or the cardiac signals, if any. This may cause processor120to analyse the shape of the response signal510, as illustrated inFIG.5, and how response signal510varies over time, to extract predetermined parameters from the data, and associate these with the parameters of the stimulation signal, as determined and stored at step602. Processor120may also analyse the shape of any extracted cardiac response signal data, such as signal1230as illustrated inFIG.12andFIG.18. Parameters such as peak magnitude501, width502, time-to-peak503, inter-peak intervals, and other features of the shape of response signal510and/or1230may be compared to predetermined ranges of values that may indicate whether the patient heard the stimulation, or whether the stimulation was of an uncomfortably high level, for example. Other measurements that may be associated with the stimulus parameters include the time taken to reach the peak magnitude after the stimulus onset, the duration of the response, and the difference in relative response magnitudes measured in different regions, which may be measured either at fixed times after the stimulus onset or at the peak response magnitude. The functional connectivity between different regions of the brain may also be measured.

In some embodiments, automatic processing module156may cause processor120to perform mathematical analysis of response signal510/1230and any other extracted signals, such as by performing statistical tests on the extracted signals in the temporal and frequency domains. In some embodiments, automatic processing module156may compare response signals from different areas of the patient's brain, to determine functional connectivity between the brain regions using correlation techniques. Automatic processing module156may also compare response signals from different brain regions with the cardiac response signals.

In some embodiments, only a single stimulation signal might be generated, in which case the process moves through step616to step620, at which point the results of the data processing may be displayed on display114. In some embodiments, the results may also be stored in memory150and/or communicated to external processing device195for further processing, viewing or storing. In some other embodiments, further stimulation may be required to collect further data, at which stage the method may continue through to step618.

At step618, processor120may execute sound generation module152to adjust the parameters of the stimulation signal. In some cases, the results of automatic processing at step614may be used to adjust the parameters of the subsequent stimulation signal. In some embodiments, these steps may be part of an automatic threshold-seeking process, used to determine a patient's hearing range. In some embodiments, these steps may be part of an automatic process used to determine the limit of sound levels above which a patient considers the sound to be too loud. In some embodiments, performing the automatic threshold-seeking process may include adjusting the stimulation signal at step618to play a range of levels of sounds, mixing amplitudes and frequencies in a random or pseudo-random way, as further described with reference toFIGS.9and19. The order of presentation of different sound intensities and the range and number of intensities presented can vary according to the particular application.
A fixed range of levels and level step size can be determined from already-existing information if available. For example, if a hearing aid validation is required, and sounds are presented acoustically via the hearing aid, then the sounds should cover the input dynamic range of the hearing aid, which may include at least one sound expected to be below an aided hearing threshold in some embodiments. Alternatively, an adaptive procedure can be undertaken, in which the level of the next sound is chosen based on the parameters derived from the response data evoked by the previous sound. This procedure may be used when programming the threshold and comfortable level currents in a cochlear implant patient, for example. The parameters of stimulation may be adjusted at step618in an incremental way, increasing and decreasing the parameters in turn until the targeted response parameter values are attained, which may be when the patient no longer exhibits a response to the stimulation being provided, for example.

To find the sound level that corresponds to hearing threshold, the sound may be started at a low intensity and increased in pre-defined steps until a statistically significant response is determined in at least one parameter. The sound intensity may then be decreased in smaller steps, to find the lowest sound intensity that satisfies the pre-determined criterion for hearing threshold. Other adaptive procedures to determine hearing threshold or comfortably loud levels could use a statistical procedure that estimates the likely level-versus-loudness function and chooses the next level to test based on optimising the information to be gained. Examples of such procedures include the QUEST+ procedure, as described at http://jov.arvojournals.org/article.aspx?articleid=2611972, and the QUEST procedure, as described at https://link.springer.com/article/10.3758/BF03202828.

In some embodiments, performing the automatic process for determining uncomfortably loud sounds may include incrementally adjusting the stimulation signal at step618, and waiting for peak magnitude501of response signal510to reach a threshold value that is known to correlate with uncomfortably loud sounds. According to some embodiments, an uncomfortable level of sound may be defined as a sound that evokes a strong response in either one or both of heart rate and anterior response, for example a peak response magnitude of approximately 1×10⁻⁷ HbO concentration change relative to baseline from optodes162/164in the anterior area, or a significant increase in heart rate (more than 3% increase, approximately). According to some embodiments, a comfortable-level sound may be defined as a sound that is the lowest level of intensity to evoke a positive response in HbO in the anterior regions, such as regions811,812,815and816. According to some embodiments, a comfortable sound may be defined as a sound that is the lowest level of intensity to evoke an increase in heart rate.

Statistical methods such as Taylor's change-point analysis, statistical classification via machine learning, fuzzy logic, or other known techniques may be used to process the response signals and determine hearing thresholds, comfortable loudness or uncomfortable loudness. According to some embodiments, using more than one response signal to determine the signal loudness may be more reliable, as it may reduce the influence of noise on the result. Once the parameters of the stimulation signal are appropriately adjusted at step618, the method may move back to step602.
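The ascending/descending threshold search described above can be summarised as a simple staircase. In the Python sketch below, respond(level_db) is an assumed stand-in for one complete stimulate-record-analyse cycle that returns True when a statistically significant response is detected; the step sizes are illustrative rather than prescribed.

```python
def seek_threshold(respond, start_db=10, max_db=90, up_step=10, down_step=5):
    """Staircase sketch of the threshold-seeking idea described above.

    Ascend in coarse steps until a level evokes a significant response,
    then descend in finer steps to the lowest level that still responds.
    Returns the estimated threshold level, or None if no response was
    found within the tested range.
    """
    level = start_db
    while level <= max_db and not respond(level):
        level += up_step
    if level > max_db:
        return None                      # no response in the tested range
    while level - down_step >= start_db and respond(level - down_step):
        level -= down_step               # refine downwards
    return level
```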
FIG.14shows a flow diagram1400illustrating a method of pre-processing fNIRS data retrieved using system100. At step1402, channels810of data are excluded from further analysis if they are considered to be bad channels. According to some embodiments, bad channels may be the result of channels810in which the scalp of the patient and the optodes162/164are not well coupled. The identification and removal of bad channels may be done in a number of ways.

According to some embodiments, channels810with high gains may be considered bad channels and excluded from analysis, as high gains may correspond to low light intensity received by detector optodes164. For example, if the connection between a detector optode164and the scalp of a patient is blocked by hair, or if the optode164is otherwise not in good contact with the skin, then the light received by detector optode164will have a relatively low intensity. Device110may be configured to automatically increase the gain for detector164where the signal being generated by detector164is low in magnitude. If this gain value is too high, this may indicate that there is poor coupling between detector164and the scalp, and that the data from that detector164should be discarded. Based on this, according to some embodiments step1402may include discarding channel values where the gain for the channel810is above a predetermined threshold value. Similarly, if the automatically-set gain is very low, it may indicate that the source optode162may not be correctly placed against the scalp, and needs to be repositioned or the channel discarded. According to some embodiments, channels with gains over 7 may be discarded, as this may indicate an inadequate scalp-optode connection. According to some embodiments, channels with a gain under a predetermined threshold, or equal to a predetermined threshold, may also be discarded. For example, according to some embodiments, channels with a gain of 0 may be discarded.

According to some embodiments, channels with low correlation between the first wavelength and the second wavelength may also be considered bad channels and discarded, as described in Pollonini, L., Olds, C., Abaya, H., Bortfeld, H., Beauchamp, M. S., & Oghalai, J. S. (2014), “Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy”,Hearing research,309, 84-93. Low correlation between the first wavelength and the second wavelength may be another indication of poor coupling between a detector164and the scalp of the patient. Data may first be filtered using a narrow bandpass filter, which may be used to filter out all signals apart from those in the heartbeat range, which may be signals between 0.5 and 1.5 Hz, or between 0.5 Hz and 2.5 Hz, for example. The remaining signal is dominated by the heartbeat signal, which is commonly the strongest signal in the raw fNIRS data received from detectors164, and therefore should show up strongly in the signals for both the first wavelength and the second wavelength if both source162and detector164are well-coupled with the skin of the patient. If the first wavelength and the second wavelength are strongly correlated, this indicates that the coupling between the scalp and detector164is sufficiently strong. If the coupling is poor, then the channel810may be excluded. Poor coupling may be defined as a case where the correlation coefficient is less than 0.75, for example.
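The gain and correlation checks described above can be illustrated as follows, assuming scipy and numpy; the thresholds follow the examples given in the text and all names are illustrative, not part of the described embodiments.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def scalp_coupling_index(raw_w1, raw_w2, fs_hz, band_hz=(0.5, 2.5), order=3):
    """Correlate the two wavelengths inside the heart-beat band.

    Well-coupled optodes see the same strong cardiac pulsation at both
    wavelengths, so a high correlation indicates good scalp contact.
    """
    nyq = fs_hz / 2.0
    b, a = butter(order, [band_hz[0] / nyq, band_hz[1] / nyq], btype="band")
    f1 = filtfilt(b, a, raw_w1)
    f2 = filtfilt(b, a, raw_w2)
    return np.corrcoef(f1, f2)[0, 1]

def channel_is_good(raw_w1, raw_w2, fs_hz, gain, sci_min=0.75, gain_max=7):
    """Apply both exclusion rules from the text: an extreme gain value,
    or a wavelength-to-wavelength correlation below threshold."""
    if gain == 0 or gain > gain_max:
        return False
    return scalp_coupling_index(raw_w1, raw_w2, fs_hz) >= sci_min
```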
Based on this, according to some embodiments step1402may include discarding channel values where the correlation coefficient between the HbO wavelength and the HbR wavelength signals is below a predetermined threshold value. According to some embodiments, the correlation between the first and the second wavelength may be determined to be the scalp coupling index (SCI). The SCI may be calculated as the correlation between the two detected signals at the first wavelength and at the second wavelength, and filtered to a range that would mainly include heart beat data, as described above. For example, the SCI may be calculated as the correlation between the two detected signals at 760 and 850 nm and band-pass filtered between 0.5 and 2.5 Hz, in some embodiments. According to some embodiments, channels with SCIs lower than a predetermined threshold may be rejected. For example, according to some embodiments, channels with an SCI of less than 0.8 may be rejected. According to some embodiments, channels with an SCI of less than 0.75 may be rejected. According to some embodiments, channels with an SCI of less than 0.7 may be rejected. At step1404, the first wavelength raw data and the second wavelength raw data of the remaining channels are converted into a unit-less measure of changes in optical density over time. This step may be performed as described in Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009), “HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain”,Appl Opt,48(10), D280-298. At step1406, motion artefacts in the optical density data may be removed. According to some embodiments, motion artefacts may manifest as spike-shaped artefacts in the data. Motion artefacts may be removed using wavelets, as described in Molavi, B., & Dumont, G. A. (2012), “Wavelet-based motion artefact removal for functional near-infrared spectroscopy”,Physiological measurement,33(2), 259. In some embodiments, motion artefacts may be removed using threshold-crossing detection and spline-interpolation. According to some embodiments, motion artefacts may also or alternatively be removed using techniques such as outlier detection using analysis of studentised residuals, use of principal component analysis (PCA) to remove signals with high covariance across multiple source-detector pairs and across optical wavelengths, Wiener filtering and autoregression models, as described in Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009), “HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain”,Appl Opt,48(10), D280-298. At step1408, the signals generated at step1406may be passed through a bandpass filter to remove drift, broadband noise and/or systemic physiological responses such as heartbeat, respiration rhythm, systemic blood pressure and low frequency waves known as Mayer waves. According to some embodiments, the bandpass filter may be a 0.01 to 0.5 Hz bandpass filter. According to some embodiments, step1408may also or alternatively involve the removal of physiological signals in other ways, such as using other filtering methods, adaptive filtering or remote measurement of the signals to subtract them, as described in Kamran, M. A., Mannan, M. M. N., & Jeong, M. Y. (2016), “Cortical Signal Analysis and Advances in Functional Near-Infrared Spectroscopy Signal: A Review”, Front Hum Neurosci, 10, and Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. 
(2009) “HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain”,Appl Opt,48(10), D280-298.

At step1410, the signals generated at step1408may be converted to HbO and HbR concentration change signals, using the modified Beer-Lambert law as described in Delpy, D. T., Cope, M., van der Zee, P., Arridge, S., Wray, S., & Wyatt, J. (1988), “Estimation of optical pathlength through tissue from direct time of flight measurement”, Physics in medicine and biology, 33(12), 1433. Step1410may involve converting the optical density data as derived from the signals received from optodes164to concentration change units, taking into account the channel length, being the distance between the source optode162and the detector optode164.

At step1412, in order to remove the contribution of skin and scalp signals from the long channels, short channel data may be removed from the long channel data, either directly or by using a general linear model (GLM). In general, the shorter the distance between an optode pair162/164, the shallower the area from which the signal is recorded. Therefore, very short channels measure activity only from the blood vessels in the skin and scalp. Very short channels may comprise source and detector pairs positioned around 1.5 cm or less apart. The skin and scalp signals may include signals relating to heartbeat, breathing and blood pressure. According to some embodiments, principal component analysis (PCA) may be carried out across the short channels only. The first principal component (PC) across the short channels may represent activity common to all the short channels, which can then be included as a term in the general linear model of the long channel data and then effectively removed. According to some embodiments, this step may be carried out based on the methods outlined in Sato, T., Nambu, I., Takeda, K., Aihara, T., Yamashita, O., Isogaya, Y., . . . Osu, R. (2016), “Reduction of global interference of scalp-hemodynamics in functional near-infrared spectroscopy using short distance probes”,NeuroImage,141, 120-132.

At step1414, the time series of HbO and HbR concentration change data determined at step1412may be epoched. Each epoch may be from around −5 to 30 seconds relative to the onset time of the stimulus. According to some embodiments, the stimulus may be 18 seconds long, leaving 12 seconds after the stimulus finishes for the signal to return to baseline. According to some embodiments, other epoch time values may be used depending on stimulus length and silent period length.

At step1416, epochs with statistically unlikely concentration change values may be excluded. For example, epochs with early stimulation phase values within the range of the mean plus 2.5 standard deviations (across trials) may be included, and all other epochs may be excluded. The early stimulation phase may be defined as from −5 to +2 seconds in some embodiments. According to some embodiments, step1416may be performed as described in Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009), “HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain”,Appl Opt,48(10), D280-298. The excluded epochs may relate to movement artefacts and noise.

At step1418, where multiple different stimuli have been presented during the measurements, data resulting from each of the stimulations may be separately averaged.
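Steps1414and1416reduce to array slicing plus a simple statistical screen. The numpy sketch below assumes a regularly sampled concentration-change signal; the symmetric mean ± 2.5 standard deviation screen is one plausible reading of the inclusion rule described above, not the method's definitive form.

```python
import numpy as np

def epoch_signal(signal, fs_hz, onset_times_s, pre_s=5.0, post_s=30.0):
    """Cut a continuous signal into epochs of -pre_s..+post_s seconds
    around each stimulus onset (step1414)."""
    pre, post = int(pre_s * fs_hz), int(post_s * fs_hz)
    epochs = [signal[i - pre:i + post]
              for i in (int(t * fs_hz) for t in onset_times_s)
              if i - pre >= 0 and i + post <= len(signal)]
    return np.asarray(epochs)

def screen_epochs(epochs, fs_hz, pre_s=5.0, early_end_s=2.0, n_sd=2.5):
    """Keep epochs whose early-phase samples (-5 to +2 s here) stay
    within n_sd standard deviations of the across-trial mean (step1416)."""
    stop = int((pre_s + early_end_s) * fs_hz)
    early = epochs[:, :stop]
    mu, sd = early.mean(), early.std()
    keep = np.all(np.abs(early - mu) <= n_sd * sd, axis=1)
    return epochs[keep]
```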
At step1420, where overlapping channels810of optodes162/164were used, the averaged responses from the overlapping channels may be averaged. Averaging data across overlapping channels may reduce noise in the data.

At step1422, regions of interest (ROIs) may be constructed based on the positions of the optodes162/164, as described above with reference toFIG.8. According to some embodiments, two or more neighbouring channels may be combined into one ROI. Channels to group as ROIs may be selected according to similar response waveform patterns, for example. Channels to group as ROIs may also be selected according to pre-determined anatomical or functional considerations.

In an alternative embodiment, after step1404, step1410is performed. At step1410, the signals generated at step1404may be converted to HbO and HbR concentration change signals, using the modified Beer-Lambert law as described in Delpy, D. T., Cope, M., van der Zee, P., Arridge, S., Wray, S., & Wyatt, J. (1988), “Estimation of optical pathlength through tissue from direct time of flight measurement”, Physics in medicine and biology, 33(12), 1433. Step1410may involve converting the optical density data as derived from the signals received from optodes164to concentration change units, taking into account the channel length, being the distance between the source optode162and the detector optode164. The method may then proceed to step1426, during which the response signal may be modelled using either an autoregressive integrative model fit of the data as described in Barker et al. 2013, or a real-time implementation of an adaptive general linear model as described in Abdelnour et al. 2009. After step1426, the method may proceed to step1422.

At step1424, measures may be automatically extracted from the response signals. These measures may include a calculated magnitude of the peak of the signal, if the response shows a single peak, or a calculated mean magnitude in an early and/or late window of the signal. According to some embodiments, an early window may be a window of around 3 to 9 seconds from the stimulation onset time, and a late window may be a window of around 14 to 20 seconds from the stimulation onset time. According to some embodiments, the response magnitude may be averaged over multiple time windows of various durations and centre times covering all or part of the epoched time window. According to some embodiments, an early window may be a window of around 0 to 6 seconds from the stimulation onset time, and a late window may be a window of around 24 to 30 seconds from the stimulation onset time. According to some embodiments, the measures may also or alternatively include a calculated time to the peak of the signal, and/or a width of the peak of the signal. According to some embodiments, the measures may include values associated with modelling the response signal using an autoregressive integrative (ARI) model fit of the data as described in Barker et al. 2013, or a real-time implementation of an adaptive general linear model as described in Abdelnour et al. 2009. In some embodiments, the measures may include the beta-value obtained from modelling the response signal using a general linear model.

FIG.15shows a flow diagram1500illustrating a method of determining cardiac data from fNIRS data generated by channels810of system100. As described below with reference toFIG.12, cardiac data may be used to assess hearing of a patient.
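The measures of step1424above reduce to simple window statistics over an averaged epoch. The numpy sketch below is an assumed implementation; the window edges follow the examples given above and the epoch layout follows the earlier epoching sketch.

```python
import numpy as np

def extract_measures(avg_epoch, fs_hz, pre_s=5.0,
                     early_s=(3.0, 9.0), late_s=(14.0, 20.0)):
    """Extract the simple response measures of step1424 from one
    averaged epoch: peak magnitude, time-to-peak and window means."""
    onset = int(pre_s * fs_hz)
    post = avg_epoch[onset:]              # samples after stimulus onset
    peak_idx = int(np.argmax(np.abs(post)))

    def window_mean(start_s, end_s):
        return float(post[int(start_s * fs_hz):int(end_s * fs_hz)].mean())

    return {
        "peak_magnitude": float(post[peak_idx]),
        "time_to_peak_s": peak_idx / fs_hz,
        "early_window_mean": window_mean(*early_s),
        "late_window_mean": window_mean(*late_s),
    }
```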
Cardiac signals may be used in conjunction with or independent of HbO and HbR data to perform a hearing assessment. Cardiac signals may be determined using fNIRS as described here with reference toFIG.15, or may alternatively or in addition be generated by cardiac monitor165. At step1502, after channels810have been used to generate fNIRS data, bad channels of data are excluded from further analysis. According to some embodiments, bad channels may be the result of channels in which the scalp of the patient and the optodes162/164are not well coupled. The identification and removal of bad channels may be done in a number of ways. According to some embodiments, channels with high gains may be considered to be bad channels and may be excluded from analysis, as high gains may correspond to low light intensity received by detector optodes164. For example, if the connection between a detector optode164and the scalp of a patient is blocked by hair, then the light received by detector optode164will have a relatively low intensity. Device110may be configured to automatically increase the gain for a detector164if the detector164detects a low intensity of light. If this gain value is too high, this may indicate that there is poor coupling between detector164and the scalp, and that the data from that detector164should be discarded. Based on this, according to some embodiments step1402may include discarding channel values where the gain for the channel is above a predetermined threshold value. Similarly, if the automatically-set gain is very low, it may indicate that the source optode may not be correctly placed against the scalp, and needs to be repositioned or the channel discarded. According to some embodiments, channels with gains over 7 may be discarded, as this may indicate inadequate scalp-electrode connection. According to some embodiments, channels with a gain under a predetermined threshold may also be discarded. For example, according to some embodiments, channels with a gain of 0 may be discarded. According to some embodiments, channels with low correlation between the first wavelength and the second wavelength may also be considered to be bad channels and be discarded, as described in Pollonini, L., Olds, C., Abaya, H., Bortfeld, H., Beauchamp, M. S., & Oghalai, J. S. (2014), “Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy”,Hearing research,309, 84-93. Low correlation between the first wavelength and the second wavelength may be another indication of poor coupling between a detector164and the scalp of the patient. Data may first be filtered using a narrow bandpass filter, which may be used to filter out all signals apart from those in the heartbeat range, which may be signals between 0.5-1.5 Hz, or between 0.5 and 2.5 Hz, for example. The remaining signal is dominated by the heartbeat signal, and is commonly the strongest signal in the raw fNIRS data received from detectors164, and therefore should show up strongly in the signals for both the first wavelength and the second wavelength if both source162and detector164are well-coupled with the skin of the patient. If the first wavelength and the second wavelength are strongly correlated, this indicates that the coupling between the scalp and detector164is sufficiently strong. If the coupling is poor, then the channel may be excluded. Poor coupling may be defined as a case where the correlation coefficient is less than 0.75, for example.
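A minimal sketch of these coupling checks, assuming SciPy is available, might look like the following. The band edges and the thresholds (gain over 7, gain of 0, correlation below 0.75) mirror the values given in the text, while the function names and the filter order are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

def scalp_coupling_index(sig_w1, sig_w2, fs, band=(0.5, 2.5)):
    # Band-pass both wavelength signals to the heartbeat range and
    # return their correlation (the coupling measure described above).
    b, a = butter(3, band, btype="bandpass", fs=fs)
    return float(np.corrcoef(filtfilt(b, a, sig_w1),
                             filtfilt(b, a, sig_w2))[0, 1])

def channel_is_bad(gain, sig_w1, sig_w2, fs, max_gain=7, min_corr=0.75):
    # Reject channels with a zero or excessive gain (poor light
    # coupling), or with poorly correlated wavelength signals.
    if gain == 0 or gain > max_gain:
        return True
    return scalp_coupling_index(sig_w1, sig_w2, fs) < min_corr

In practice the gain and correlation thresholds would be tuned to the particular device and optode cap.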
Based on this, according to some embodiments step1402may include discarding channel values where the correlation coefficient between the HbO wavelength and the HbR wavelength signals is below a predetermined threshold value. According to some embodiments, the correlation between the first and the second wavelength may be determined to be the scalp coupling index (SCI). The SCI may be calculated as the correlation between the two detected signals at the first wavelength and at the second wavelength, and filtered to a range that would mainly include heart beat data, as described above. For example, the SCI may be calculated as the correlation between the two detected signals at 760 and 850 nm and band-pass filtered between 0.5 and 2.5 Hz, in some embodiments. According to some embodiments, channels with SCIs lower than a predetermined threshold may be rejected. For example, according to some embodiments, channels with an SCI of less than 0.8 may be rejected. According to some embodiments, channels with an SCI of less than 0.75 may be rejected. According to some embodiments, channels with an SCI of less than 0.7 may be rejected. At step1504, the remaining channels of first wavelength raw data and the second wavelength raw data are converted into a unit-less measure of changes in optical density over time. This step may be performed as described in Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009), “HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain”,Appl Opt,48(10), D280-298. At step1506, motion artefacts in the optical density data signals may be removed. According to some embodiments, motion artefacts may manifest as spike-shaped artefacts in the fNIRS data. Motion artefacts may be removed using wavelet analysis, as described in Molavi, B., & Dumont, G. A. (2012), “Wavelet-based motion artifact removal for functional near-infrared spectroscopy”,Physiological measurement,33(2), 259. In some embodiments, motion artefacts may be removed using threshold-crossing detection and spline-interpolation. According to some embodiments, motion artefacts may also or alternatively be removed using techniques such as outlier detection using analysis of studentised residuals, use of principal component analysis (PCA) to remove signals with high covariance across multiple source-detector pairs and across optical wavelengths, Wiener filtering and autoregression models, as described in Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009), “HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain”,Appl Opt,48(10), D280-298. At step1508, the signals generated at step1506may be passed through a bandpass filter to obtain only the part of the signal dominated by the heartbeat signal. According to some embodiments, filtering the signals may remove drift, broadband noise and unwanted physiological responses. According to some embodiments, the bandpass filter may be a 0.5 to 1.5 Hz bandpass filter. According to some embodiments, the bandpass filter may be determined for each person based on their pre-determined approximate average resting heart rate. According to some embodiments, step1508may also or alternatively involve the removal of unwanted signals in other ways, such as using other filtering methods, adaptive filtering or remote measurement of the signals to subtract them, as described in Kamran, M. A., Mannan, M. M. N., & Jeong, M. Y. 
(2016), “Cortical Signal Analysis and Advances in Functional Near-Infrared Spectroscopy Signal: A Review”, Front Hum Neurosci, 10, and Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009) “HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain”,Appl Opt,48(10), D280-298. Optionally, at step1510, the signals generated at step1508may be converted to HbO and HbR concentration change, using the modified Beer-Lambert law as described in Delpy, D. T., Cope, M., van der Zee, P., Arridge, S., Wray, S., & Wyatt, J. (1988), “Estimation of optical pathlength through tissue from direct time of flight measurement”,Physics in medicine and biology,33(12), 1433. Step1510may involve converting the unit-less optical density data as derived from the signals received from optodes164to concentration change units, taking into account the channel length, being the distance between a paired source optode162and detector optode164. According to some embodiments, step1510may be excluded, and method step1512may be performed on the filtered signal derived at step1508. At step1512, the signal determined at step1508or1510is up-sampled to around 100 Hz, as outlined in Perdue, K. L., Westerlund, A., McCormick, S. A., & Nelson, C. A., 3rd. (2014), “Extraction of heart rate from functional near-infrared spectroscopy in infants”,J Biomed Opt,19(6), 067010. The up-sampled signal may then be used to find peaks in the data, which may correspond to heart beats. At step1514, unwanted peaks determined at step1512are rejected. Peaks may be rejected from the data if the width of the peak is determined to be larger than the mean plus 1.5 standard deviations of the peak widths. Peaks that are too wide may be a result of noise rather than heartbeats, and should therefore be removed from the signal data. According to some embodiments, peaks may also be rejected if the time between peaks is too small, or below a predetermined threshold. At step1516, the times between the peaks that were not rejected at step1514are calculated. The time between peaks may be known as the inter-peak interval, or IPI. At step1518, unwanted IPIs may be rejected. Unwanted IPIs may be IPIs greater than the mean plus 2 standard deviations across all determined IPIs, in some embodiments. According to some embodiments, IPIs larger or smaller than a predetermined threshold may also be deleted. At step1520, beats per minute (BPM) is calculated for the remaining IPIs by finding the inverse of each IPI and multiplying by 60. This results in a time series of beats per minute versus time. However, at this point the time has non-uniform steps, as it corresponds to time points where peaks were detected in the signal. In order to get uniform time intervals for later averaging across epochs, the signal is then resampled to 20 Hz. The signal may also be passed through a low-pass filter to remove abrupt changes in heart beat rate which are likely not physiological in origin. Step1520may be performed as described in Perdue, K. L., Westerlund, A., McCormick, S. A., & Nelson, C. A., 3rd. (2014), “Extraction of heart rate from functional near-infrared spectroscopy in infants”,J Biomed Opt,19(6), 067010. In an alternative embodiment, step1518is followed by step1528. At step1528, IPIs for a predetermined number of beats before and after stimulation onset may be recorded. For example, as described in further detail below with reference toFIG.18, IPIs for 5 heart beats before and 5 heart beats after stimulation onset may be recorded.
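A compact sketch of steps1512to1520, assuming SciPy, is shown below. The peak-detection settings and the helper name are assumptions, and a full implementation would also resample the resulting BPM series to uniform 20 Hz time steps and low-pass filter it, as described above.

import numpy as np
from scipy.signal import resample, find_peaks, peak_widths

def bpm_from_filtered_signal(signal, fs, target_fs=100):
    # Step 1512: up-sample to ~100 Hz and locate candidate heartbeats.
    up = resample(signal, int(len(signal) * target_fs / fs))
    peaks, _ = find_peaks(up)
    # Step 1514: reject peaks wider than mean + 1.5 SD of peak widths.
    widths = peak_widths(up, peaks)[0]
    peaks = peaks[widths < widths.mean() + 1.5 * widths.std()]
    # Step 1516: inter-peak intervals (IPIs) in seconds.
    ipi = np.diff(peaks) / target_fs
    # Step 1518: reject IPIs greater than mean + 2 SD.
    ipi = ipi[ipi < ipi.mean() + 2 * ipi.std()]
    # Step 1520: convert the surviving IPIs to beats per minute.
    return 60.0 / ipi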
From step1528, the method may proceed to step1526. At step1522, the average and the standard deviation of the heart rate versus time as determined at step1520are calculated across all channels810. If a channel810has values outside predetermined thresholds, the channel810may be rejected. According to some embodiments, channels810with values outside a range of mean heart rate plus/minus 20 beats/min are rejected. According to some embodiments, channels810with values outside a range of mean heart rate plus/minus a predetermined number of standard deviations are rejected. According to some embodiments, channels with IPIs outside of the range of mean IPI plus-or-minus a predetermined number of standard deviations are rejected. According to some embodiments, the calculations in step1522may be performed on a single channel810. According to some embodiments, the calculations in step1522may be averaged over a group of channels810chosen to have low noise. At step1524, the data determined at step1520or step1528may be epoched. Each epoch may be from around −5 to 30 seconds relative to the onset time of the stimulus. According to some embodiments, other time values may also be used according to stimulus length and length of silent periods. In cases where multiple different stimuli have been applied in different epochs, epochs may be averaged based on the stimulus identity, to result in one average for each separate stimulus. This results in data according to the graph shown inFIG.12. At step1526, measures may be automatically extracted from the data determined at step1524. These measures may include a percentage change in heart rate compared to average heart rate in the 5 seconds before stimulus onset, being the baseline heart rate; the peak change in heart rate from the baseline; the time to reach the peak from the onset time; and other parameters such as the width of the peak. These measures may be used to determine hearing thresholds or comfortable loudness or uncomfortable loudness, as described above with reference toFIG.6. FIGS.10A,10B,11A and11Bshow graphs1000,1050,1100and1150, respectively, illustrating an example set of changes of HbO in response to an audio stimulus in regions811and814(as illustrated inFIG.8). In graphs1000and1050, shown inFIGS.10A and10B, x-axes1020and1070define the time from the stimulus onset in seconds, where the stimulus begins at 0 seconds. Y-axes1010and1060define the HbO concentration change, where 0 is the average concentration of HbO while no sound stimulus is being delivered. Responses1030and1080relate to a stimulus of 90 dB sound pressure level (SPL), responses1032and1082relate to a stimulus of 65 dB SPL, responses1034and1084relate to a stimulus of 40 dB SPL and responses1036and1086relate to a stimulus of 15 dB SPL. Graph1000illustrates the rate of change of HbO in region811, while graph1050shows the rate of change of HbO in region814. In graphs1100and1150, shown inFIGS.11A and11B, x-axes1120and1170define the intensity in dB SPL of an audio stimulation delivered to a patient. Y-axes1110and1160define the HbO concentration change, where 0 is the concentration of HbO while no sound stimulus is being delivered. Graph1100illustrates the response amplitude of HbO in region811, and the amplitude of HbO concentration change was calculated as a mean amplitude in a time window extending 24 to 30 seconds after the onset of the audio stimulation.
Graph1150shows the response amplitude of HbO in region814, and the amplitude of HbO concentration change was calculated as a mean amplitude in a time window extending 0 to 6 seconds after the onset of audio stimulation. FIG.12shows a graph1200illustrating an example set of changes of heart rate in response to an audio stimulus, with the heart rate having been calculated by fNIRS, as described above with reference to step612of method600and method1500. According to some embodiments, changes in heart rate may also be determined based on data generated by cardiac monitor165. X-axis1220defines the time from the stimulus onset in seconds, where the stimulus begins at 0 seconds. Y-axis1210defines the percentage change in heart rate from the heart rate when there is no stimulation, being a baseline heart rate. In the illustrated example, the baseline heart rate was around 70 beats per minute. Response1230relates to a stimulus of 90 dB SPL, response1232relates to a stimulus of 65 dB SPL, response1234relates to a stimulus of 40 dB SPL and response1236relates to a stimulus of 15 dB SPL. For responses1234and1236, an initial drop and subsequent rise in heart rate can be seen following the stimulus onset at 0 seconds. Response1236shows an average drop in heart rate of 4% over 4.6 seconds after stimulus onset, and response1234shows an average drop in heart rate of 1.1% over 1.6 seconds after stimulus onset. Responses1232and1230show heart rate increasing following stimulus onset, with both responses reaching similar average levels. It can be seen fromFIG.12that a soft sound of 15 dB SPL induces a slight reduction in heart rate early after the stimulus onset. The reduction in heart rate for a 15 dB SPL sound may be up to around 3%. In contrast, a loud sound of around 90 dB induces a strong increase in heart rate that persists while the stimulus is on. The increase in heart rate for a 90 dB SPL sound may be up to around 10%. To quantify the immediate change in responses1230,1232,1234and1236after stimulus onset, the mean heart rate change between 0 seconds and 8 seconds may also be calculated, and the results of an example of such a calculation are shown inFIG.16. FIG.16shows a graph1600having a y-axis1210showing the percentage change in heart rate of the recorded signal ofFIG.12for a predetermined post-stimulation period, and an x-axis1620showing the level of sound intensity of the stimulation in dB SPL. In the illustrated embodiment, the post-stimulation period is between 0 and 8 seconds after the stimulation onset. An 8 second period was chosen to cover the initial peak seen after stimulation onset in the averaged data as shown inFIG.12, but a different period may be chosen in some embodiments. Pairings1630show comparisons between intensity levels where a significant effect on heart rate change was found, with a significant effect defined as p<0.001. As illustrated, a significant effect of intensity level on heart rate change was found in pairwise comparisons1630showing a significant difference between all sound intensity levels except at 65 dB and 90 dB. At the higher stimulus levels of 65 and 90 dB, a bi-phasic response with peaks at 4 and 14.5 seconds post-stimulus onset can be seen inFIG.12. As seen inFIG.12, at stimulus offset, an initial decrease in heart rate at all stimulation intensity levels was observed, following which the heart rate at all stimulus intensity levels returned to the baseline measurement.
Boxes1640represent the median, interquartile range and largest/smallest non-outliers from a sample of 27 tested patients. Crosses1645represent outliers, defined as values greater than 1.5 times the interquartile range. FIG.17shows a graph1700with a y-axis1710showing inter-beat intervals (or time between heart beats) in seconds for one example patient during a seven minute recording with stimulus levels of 15 dB, 40 dB, 65 dB and 90 dB presented in a random order. The x-axis1720shows time in seconds. Lines1730correspond to a stimulus of 15 dB being delivered, lines1740correspond to a stimulus of 40 dB being delivered, lines1750correspond to a stimulus of 65 dB being delivered, and lines1760correspond to a stimulus of 90 dB being delivered. A drop in inter-beat intervals (corresponding to an increased heart rate) following the 65 and 90 dB SPL levels at lines1750and1760is clearly seen. To illustrate the immediate change in inter-beat intervals following sound onset in more detail, the percentage change in the first five intervals after sound onset relative to baseline (defined as the averaged five intervals before stimulus onset) may be calculated. An example of this calculation is shown inFIG.18, described in further detail below. FIG.18shows a graph1800having a y-axis1810showing the percentage of inter-beat interval change relative to a baseline measurement for inter-beat intervals, and an x-axis1820showing the inter-beat interval number relative to the stimulus onset. Graph1800shows inter-beat intervals starting from 5 intervals before the stimulus onset, up to 5 intervals after the stimulus onset. Four sets of data are shown. Data1is shown by line1830and corresponds to a stimulus of 15 dB. Data2is shown by line1840and corresponds to a stimulus of 40 dB. Data3is shown by line1850and corresponds to a stimulus of 65 dB. Data4is shown by line1860and corresponds to a stimulus of 90 dB. Table 1 below shows the percentage change in inter-beat intervals averaged across the first two intervals and also across intervals three to five. For both these ranges, a significant stimulus level x time interaction was found (P<0.001), indicating changes in intervals after stimulus onset were dependent on stimulus levels.

TABLE 1
Mean (SEM) percentage change in inter-beat intervals from baseline, across participants.

Stimulus level (dB SPL) | Mean (SEM) inter-beat intervals 1 to 2 (%) | P value (change from baseline) | Mean (SEM) inter-beat intervals 3 to 5 (%) | P value (change from baseline)
15 | 0.82 (0.41) | 0.529 | 2.17 (0.37) | <0.001***
40 | 1.48 (0.41) | 0.011* | 0.03 (0.37) | 1.0
65 | 0.66 (0.43) | 0.779 | −2.63 (0.40) | <0.001***
90 | −0.34 (0.45) | 0.993 | −3.43 (0.34) | <0.001***
*P < 0.05, ***P < 0.001

Post-hoc comparison shows that across the first two beats, the average change from the baseline measurement was only significant in data1840, being the 40 dB SPL stimulus. At this sound level, after the first two inter-beat intervals, values returned toward baseline and were not significantly different from baseline when averaged across intervals three to five (seeFIG.18). Averaged inter-beat intervals three to five were significantly higher than baseline at 15 dB SPL as shown by data1830, and significantly lower at stimulus levels 65 and 90 dB SPL, as shown by data1850and data1860. Following stimulus onset, the change in inter-beat intervals three to five from baseline was significantly different between all stimulus levels except 65 and 90 dB SPL. Similar responses to those shown inFIGS.12and16to18may be determined for breathing rate, blood pressure, and other cardiac responses.
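The FIG.18-style calculation can be sketched in a few lines of Python. The indexing convention (each inter-beat interval is assigned to the time of the beat that ends it) and the function name are assumptions, not part of the disclosure.

import numpy as np

def ipi_percent_change(peak_times, onset_time, n_beats=5):
    # peak_times: detected heartbeat times in seconds.
    peak_times = np.asarray(peak_times)
    ipis = np.diff(peak_times)
    ends = peak_times[1:]                 # time at which each IPI ends
    first_post = np.searchsorted(ends, onset_time)
    # Baseline: mean of the n_beats intervals immediately before onset.
    baseline = ipis[first_post - n_beats:first_post].mean()
    post = ipis[first_post:first_post + n_beats]
    return 100.0 * (post - baseline) / baseline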
These response signals may be used alone or in combination with neural activity responses to determine threshold levels of hearing for the patient or loudness of above-threshold sounds. According to some embodiments, the hearing threshold values for a patient may be obtained by fitting a function to the values at different sound intensities and interpolating or extrapolating a parameter value designated as corresponding to threshold. FIG.13shows a graph1300having an x-axis1320of intensity in dB SPL and a y-axis1310of fNIRS response in HbO change. Graph1300illustrates a number of data points1330, being the peak magnitude of the response in a patient as measured in region814. Line1340shows the magnitude of the response in region814extrapolated to zero. The sound intensity at which this occurs is the fNIRS-estimated threshold. In the illustrated embodiment, this is around 18 dB SPL. Using a 3-alternative forced-choice adaptive procedure, the behaviourally-determined threshold for detecting the sound was determined to be 12.5 dB SPL in this patient. Alternatively, the hearing threshold for a patient may be determined as the lowest sound intensity that satisfies one or more parameter values. For example, in the scenario illustrated byFIGS.10A to11B, the hearing threshold may be determined as the lowest sound intensity for which region811shows a suppression or a negative response, or in the scenario illustrated byFIG.12, the lowest sound intensity for which the heart rate decreases after stimulus onset, or a combination of these and other parameters. FIG.19shows a graph1900having an x-axis1920of intensity in dB SPL and a y-axis1910of a beta value fNIRS measure. Graph1900illustrates a number of data points1950, being the beta values of the fNIRS response in a patient. Line1940shows the magnitude of the response extrapolated to zero dB SPL. The sound intensity at which line1940reaches the beta value measured in the rest period is the fNIRS-estimated threshold. In the illustrated embodiment, this is around 12.4 dB SPL, as indicated by point1960and the dotted lines. Using a 3-alternative forced-choice adaptive procedure, the behaviourally-determined threshold for detecting the sound was determined to be 10 dB SPL in this patient, as illustrated by line1930. Alternatively, the hearing threshold for a patient may be determined as the lowest sound intensity that satisfies one or more parameter values. For example, in the scenario illustrated byFIGS.10A to11B, the hearing threshold may be determined as the lowest sound intensity for which region811shows a suppression or a negative response, or an increase or positive response, or in the scenario illustrated byFIG.12, the lowest sound intensity for which the heart rate decreases after stimulus onset, or a combination of these and other parameters. According to some embodiments, the system and methods described above can be used in combination with simultaneously-collected EEG measures of electrical brain responses to auditory stimulation, using standard methods such as ABR (auditory brainstem response), CAEP (cortical auditory evoked potentials), and ASSR (auditory steady-state responses). The simultaneous use of multi-dimensional data that includes both fNIRS and/or cardiac data along with EEG data may optimise the accuracy and/or reliability of the estimates of the clinical parameters of threshold and comfortable loudness levels.
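The extrapolation used inFIGS.13and19can be sketched as a straight-line fit of the response measure against intensity, solved for the intensity at which the fit meets the rest-period (or zero) value. The linear model, the function name, and the example numbers below are illustrative assumptions; other functions could equally be fitted.

import numpy as np

def fnirs_threshold(intensities_db, responses, rest_value=0.0):
    # Linear fit of response measure vs. stimulus intensity, then
    # solve response(intensity) == rest_value for the intensity.
    slope, intercept = np.polyfit(intensities_db, responses, 1)
    return (rest_value - intercept) / slope

# Illustrative use with made-up values:
# fnirs_threshold([15, 40, 65, 90], [0.05, 0.21, 0.38, 0.55])
# gives an estimated threshold of about 8 dB SPL.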
According to some embodiments, the methods described above may be used in combination with other objective measures of hearing, such as EEG, physiological responses such as skin conductance, respiration rate, blood pressure changes, and with any available behavioural measures or observations. It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments, without departing from the broad general scope of the present disclosure. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive. | 81,100 |
11857313 | DEFINITIONS As used herein, “non-sweat biofluid” or “biofluid that is not sweat” means a fluid source of analytes that is not sweat. For example, a non-sweat biofluid could be a solution that bathes and surrounds tissue cells such as interstitial fluid. Embodiments of the disclosed invention may focus on interstitial fluid found in the skin and, particularly, interstitial fluid found in the dermis. However, interstitial fluid in other body compartments may also apply. Sensors could also be implanted in large arteries, the bladder, or other biofluid cavities, such that the term non-sweat biofluid may also apply to biofluids such as blood, urine, saliva, or other suitable biofluids for analyte sensing that are not sweat. As used herein, “sweat” is a fluid source of analytes that is sweat from eccrine or apocrine glands. Sweat from eccrine glands may be easier to sense as apocrine glands are harder to access in their locations on the body, less controlled in sweat generation rate, and contain confounding challenges such as high bacterial counts, which can skew analyte readings. As used herein, “continuous monitoring” means the capability of a device to provide at least one measurement of a biofluid, such as interstitial fluid, determined by a continuous or multiple collection and sensing of that measurement or to provide a plurality of measurements of that biofluid over time. As used herein, “chronological assurance” is an assurance of the sampling rate for measurement(s) of a biofluid, such as sweat or a non-sweat biofluid like interstitial fluid, or solutes therein in terms of the rate at which measurements can be made of new biofluid or its new solutes as originating from the body. Chronological assurance may also include a determination of the effect of sensor function, potential contamination with previously generated biofluids, previously generated solutes, other fluid, or other measurement contamination sources for the measurement(s). As used herein, “sampling rate” is the effective rate at which new biofluid or new solute concentrations reach a sensor that measures a property of the biofluid such as sweat or a non-sweat biofluid or the solutes therein. Sampling rate therefore could be the rate at which new biofluid is refreshed at the one or more sensors and therefore old fluid is removed as new fluid arrives. The inverse of sampling rate (1/s) could also be interpreted as a “sampling interval”. Sampling rates or intervals are not necessarily regular, discrete, periodic, discontinuous, or subject to other limitations. As used herein, “measured” can imply an exact or precise quantitative measurement and can include broader meanings such as, for example, measuring a relative amount of change of something. Measured can also imply a binary measurement, such as ‘yes’ or ‘no’ type measurements. As used herein, “advective transport” is a transport mechanism of a substance or conserved property by a fluid due to the fluid's bulk motion. As used herein, “diffusion” is the net movement of a substance from a region of high concentration to a region of low concentration. This is also referred to as the movement of a substance down a concentration gradient. As used herein, “convection” is the concerted, collective movement of groups or aggregates of molecules within fluids and rheids, either through advection or through diffusion or a combination of both. 
DETAILED DESCRIPTION Embodiments of the disclosed invention apply at least to any type of sensor device that measures biofluid or analyte in a biofluid such as sweat, a non-sweat biofluid, or a combination of both sweat and a non-sweat biofluid. Further, embodiments of the disclosed invention apply to sensing devices, which can take on forms including adhesive patches, bands, straps, implants, transdermal patches, portions of clothing, wearables, or any suitable mechanism that reliably brings sensing technology into intimate proximity with a non-sweat biofluid, sweat, or both a non-sweat biofluid and sweat. Certain illustrated embodiments of the disclosed invention show sensors as simple individual elements. It is understood that many sensors require two or more electrodes, reference electrodes, or additional supporting technology or features which are not captured in the description herein. In embodiments, sensors are preferably electrical in nature, but may also include optical, chemical, mechanical, or other known biosensing mechanisms. Sensors can be in duplicate, triplicate, or more, to provide improved data and readings. Sensors may be referred to by what the sensor is sensing, for example: a sweat sensor; an impedance sensor; a sample volume sensor; a sample generation rate sensor; and a solute generation rate sensor. Certain embodiments of the disclosed invention show sub-components of what would be sensing devices with more sub-components needed for use of the device in various applications, which are obvious (such as a battery, antenna, adhesive), and for purposes of brevity and focus on inventive aspects, such components are not explicitly shown in the diagrams or described in the embodiments of the disclosed invention. Embodiments of the disclosed invention provide biofluid sensing systems capable of providing superior biosensing by coupling higher precision or accuracy concentration sensing of analytes in a first biofluid with more rapid trending and continuous data provided by sensing analytes in a second biofluid. In an exemplary embodiment, the first biofluid may be a non-sweat biofluid and the second biofluid may be sweat. With reference toFIG.1, in an embodiment of the disclosed invention, a biofluid sensing system comprises a single device100that includes two subsystems102,104for sensing analytes in different types of biofluids such as sweat or a non-sweat biofluid or both. The device100is placed on or near skin12, which includes the epidermis12aand dermis12b. The dermis12bcontains interstitial fluid. The device100may utilize any suitable substrate or material to hold it together, such as a plastic nylon casing110. The device100contains sensors120,122,124,126, each of which could be any sensor capable of measuring a property of a biofluid or an analyte in a biofluid. For example, each of the sensors120,122,124,126may be an ion-selective sensor, an amperometric (enzymatic) sensor, an electrochemical aptamer sensor, a fluorometric sensor, an antibody-based sensor, or may involve other suitable sensing modalities. As shown, each subsystem has a group of sensors, and each sensor in the group may be configured to sense the same or different analytes in a single type of biofluid. In the illustrated embodiment, sensors120,122are grouped together and sensors124,126are grouped together. It should be recognized that the number of sensors in each group of sensors may vary.
Each of the sensors120,122,124,126may be for sensing the same or different analytes in one or more than one type of biofluid. In the illustrated embodiment, sensors120,122are for sensing analytes in a non-sweat biofluid such as interstitial fluid, and sensors124,126are for sensing analytes in sweat. In that regard, the device100includes a first subsystem102for sensing analytes in interstitial fluid that comprises a microneedle array180that provides a pathway190for diffusion of analytes between the dermis12band the sensors120,122. If the pathway190is initially dry, biofluid may also enter into the pathway190through the microneedle array180such that the analyte diffuses through the biofluid inside pathway190to the sensor. In another embodiment, pathway190may be preloaded with a fluid that allows analytes in the biofluid to diffuse from the microneedle array180through the fluid in pathway190to the sensors120,122. The microneedle array180can comprise any suitable material used for fabrication of microneedle arrays, such as glass, silicon, skin-compatible metals, polymers, etc. The device100also includes a subsystem104for sensing analytes in sweat comprising a wicking component or microfluidic channel130to transport sweat generated on the skin12to the sensors124,126. Suitable materials for the wicking component or microfluidic channel130include paper, rayon, and a polymer microchannel. The subsystem104for sensing sweat further includes a reservoir132for storage of old/waste sweat, which could be, for example, a hydrogel. Further included in the subsystem104for sensing sweat is a sweat stimulating component comprising a membrane170, a sweat stimulation gel or solution140, and an iontophoresis electrode150to drive sweat stimulants from the stimulation gel or solution140into the skin12. The stimulation gel140may be, for example, agar containing sweat stimulants such as pilocarpine or carbachol. Suitable materials for the membrane170include a forward osmosis membrane or a dialysis membrane. The membrane170serves to decrease fluid and/or solution contamination and mixing between the stimulation gel or solution140, the skin12, and the microfluidic component130. Still referring toFIG.1, the device100may be applied to the skin12. The microneedle array180pierces the skin12to provide the pathway190for diffusion of analytes in a non-sweat biofluid between the dermis12band the sensors120,122. The sensors120,122sense the same or different analytes that diffuse through the microneedle array180into the pathway190. Additionally, the sweat sensing subsystem104may stimulate sweat thereunder by iontophoretically driving the sweat stimulant from the stimulation gel or solution140through the membrane170and microfluidic channel130into the skin12using the iontophoresis electrode150. As sweat emerges from the skin12, the microfluidic channel130transports sweat across the sensors124,126and into the reservoir132. Where the subsystems are configured to sense analytes in at least two different biofluids, an effect of lag time on the measurements of an analyte in one biofluid may be determined at least in part by the measurements of an analyte in a different biofluid. For example, sensors120and122and sensors124and126could communicate with an electronic microcontroller that could be part of the device100. The microcontroller could communicate with a wireless communication component, such as Bluetooth, and then with a smartphone with software that could analyze the data and lag times.
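As an illustration of the kind of lag analysis such software could perform (elaborated in Example 1 below), the rate of change sensed in sweat can be used to correct a lagging interstitial-fluid value. The Python fragment below is purely illustrative: the function name, its arguments, and the linear compounding of the rate are assumptions rather than the disclosed implementation.

def lag_corrected_value(isf_value, rate_per_5min, lag_min):
    # Scale a lagging interstitial-fluid reading by the rate of change
    # observed in sweat over the known sensor lag time. For example, a
    # 10% rise per 5 minutes with a 15 minute lag gives a ~30% increase.
    return isf_value * (1.0 + rate_per_5min * (lag_min / 5.0))

# lag_corrected_value(100.0, 0.10, 15) returns 130.0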
Applications of the device100are provided in the Examples below. With reference toFIG.2, in an embodiment of the disclosed invention, a biofluid sensing system200is comprised of a plurality of separate subsystems or devices200a,200b,200c. The subsystem200ais a transdermal sensor that includes a sensor220at the end of a needle or wire282that extends into the skin12. The needle or wire282electrically connects the sensor220to the subsystem200a, which may also include electronics and a housing, such as a plastic housing. The needle or wire282may be, for example, like that used for continuous wearable glucose monitors. The subsystem200bis an implantable sensor system that can be electronic or optical. In the illustrated embodiment, the subsystem200bis electronic and comprises, for example, a sensing and transmission component222, which includes a microcircuit and sensor and communication antenna, embedded in a biocompatible casing272. The biocompatible casing272may be made of, for example, a hydrogel, block co-polymer, or other suitable material that is porous to the analyte of interest. In an embodiment, an optical subsystem200bcould be, for example, a fluorometric or colorimetric sensor that is optically interrogated through the epidermis12a. The subsystem200cis for sensing sweat and comprises a sweat sensor224in a casing210. In various embodiments, sweat may be stimulated before the subsystem200cis applied to skin, or naturally induced sweat may be utilized. Alternately, sweat stimulation could be integrated into subsystem200cusing suitable approaches such as the approach shown for the device100. While the embodiment illustrated inFIG.2includes three types of sensor subsystems200a,200b,200c, embodiments of the invention may include two of the three types of illustrated sensor subsystems. For example, an embodiment of the invention may include subsystem200aand subsystem200bor subsystem200c, and another embodiment may include subsystem200band subsystem200c. It is further contemplated that one or more of the subsystems200a,200b,200cmay be included in duplicate or triplicate or more. In order to facilitate a more complete understanding of the embodiments of the invention, the following non-limiting examples of the device100are provided below. Example 1 The physiological lag between glucose levels in the blood and in interstitial fluid can be problematic in continuous glucose monitoring if the lag is not considered when calibrating the monitors. Patients with diabetes who want to use continuous glucose monitors need to be instructed to calibrate the devices when their glucose levels are in a steady state rather than during a period of changing glucose levels. Finger-stick monitors and the electrochemical sensors in continuous glucose monitors (CGMs) work on the same principle, based on glucose oxidase breaking down glucose and generating electrons, which are measured by the monitor's sensors. Finger-stick monitors measure serum glucose, and continuous monitors measure glucose in the interstitial fluid. When glucose levels are changing, such as rising glucose levels seen particularly after meals, there can be as much as a 30-minute delay before a changed glucose level in blood is reflected in interstitial fluid. If patients calibrate the continuous glucose monitoring devices when their glucose is changing (i.e., not in steady state), their sensor could be calibrated inaccurately and not give them reliable readings.
With reference toFIG.1, the sensor120senses glucose in interstitial fluid whereas sensor124senses glucose in sweat. In an embodiment, both sensors120,124include glucose oxidase enzymatic electrodes. Additionally, or alternatively, one or both of the sensors120,124may be an electrochemical aptamer-based sensor for glucose. Additionally, or alternatively, one or both of the sensors122,126may sense pH to account for the effects of pH on operation of the sensors120,124, respectively. The sensor124can alert the user that glucose levels are changing rapidly and warn of possible inaccurate calibration or even feed into an algorithm that provides a potential accuracy level for the calibration in numeric form, or as ‘good-fair-poor’ indications. The sensor124could also be used to detect the rate of change in glucose during calibration, and then couple that information with the calibration data to correct for potential error in calibration. For example, if the sensor124measures glucose rising at 10% every 5 minutes, and the lag time for the sensor120is 15 minutes, then the device100calibration value used based on the sensor120can be increased by approximately 30% due to the information provided by the sensor124. Devices and subsystems inFIG.2could provide similar advantages. In this example, two or more of the sensors sense the same 1stanalyte, such as glucose, with one sensor sensing the 1stanalyte in a biofluid that is not sweat and the other sensor sensing the 1stanalyte in sweat. Example 2 With reference toFIG.2, in an embodiment, the sensors220,222,224may each sense a different analyte. For example, the sensor220could be for sensing an inflammatory marker, such as a cytokine, that changes slowly in the body and slowly in interstitial fluid, the sensor222could be a fluorometric sensor for glucose in interstitial fluid, and the sensor224could be a sweat sensor for cortisol. The sensor220could measure the longer-term effects of stressors on the body (e.g., inflammation), whereas the sensor224could measure the short term effects of stress on the body. For example, if a patient had a panic attack, cortisol levels could rise rapidly, and the rate of rise of cortisol as sensed by the sensor224could provide an indication of the severity of the panic attack. The prolonged effect of the panic attack could also be measured by sensor220by measuring at least one cytokine level. The glucose sensor222could measure the effect of diet and health on the causality of the panic attack(s). Thus, in an embodiment, two or more of the sensors are for sensing a 1stanalyte and 2ndanalyte that are different, one sensor sensing the 1stanalyte in a biofluid that is not sweat and the other sensor sensing the 2ndanalyte in sweat. Example 3 With reference toFIG.1, the sensor120senses vasopressin, and the sensor124senses Na+, which is an indicator of sweat generation rate. The sensor124could therefore provide a leading warning of possible dehydration before dehydration occurs as recorded by sensor120, which measures changes in levels of vasopressin. Example 4 With reference toFIG.1, the sensors120,124sense glucose, the sensor122senses insulin, and the sensor126senses adrenaline or cortisol, which are released when the body senses low glucose.
This combined system could measure not only glucose like that of Example 1, but also circulating concentrations of delivered insulin, and the body's rapid response to the effects of low or high glucose, for a complete monitoring system that helps a patient or user avoid hypoglycemic shock. While specific embodiments have been described in detail to illustrate the disclosed invention, the description is not intended to restrict or in any way limit the scope of the appended claims to such detail. The various features discussed herein may be used alone or in any combination. Additional advantages and modifications will readily appear to those skilled in the art. The invention in its broader aspects is therefore not limited to the specific details, representative apparatus and methods and illustrative examples shown and described. Accordingly, departures may be made from such details without departing from the scope of the general inventive concept. | 18,088 |
11857314 | Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Diabetic hospital patients who eat meals often have poor appetites; consequently, co-ordination of meal boluses and meals is difficult. Meal boluses without meals cause hypoglycemia; meals without meal boluses cause hyperglycemia. Different providers may use different methods of adjusting doses: some may use formulas of their own; some may use paper protocols that are complex and difficult for the nurse to follow, leading to a high incidence of human error; and some may use heuristic methods. There is no guarantee of consistency. Moreover, for diabetic patients who do not eat meals, there is currently no computerized method of tracking the patient's status. For non-diabetic patients who receive insulin due to “stress hyperglycemia” when they are very sick or undergoing surgery, there is no current method of monitoring their recovery when the stress subsides and their need for insulin rapidly decreases. If the dose regimen does not decrease rapidly also, hypoglycemia may result. Therefore, it is desirable to have a clinical support system100(FIGS.1A and1B) that monitors patients' blood glucose levels. Referring toFIGS.1A and1B, in some implementations, a clinical decision support system100analyzes inputted patient condition parameters for a patient10and calculates a personalized dose of insulin to bring and maintain the patient's blood glucose level into a target range BGTR. Moreover, the system100monitors the glucose levels of a patient10and calculates a recommended intravenous or subcutaneous insulin dose to bring the patient's blood glucose into the preferred target range BGTRover a recommended period of time. A qualified and trained healthcare professional40may use the system100along with clinical reasoning to determine the proper dosing administered to a patient10. Therefore, the system100is a glycemic management tool for evaluating a patient's current and cumulative blood glucose value BG while taking into consideration the patient's information such as age, weight, and height. The system100may also consider other information such as carbohydrate content of meals, insulin doses being administered to the patient10, e.g., long-acting insulin doses for basal insulin and rapid-acting insulin doses for meal boluses and correction boluses. Based on those measurements (that may be stored in non-transitory memory24,114,144), the system100recommends an intravenous dosage of insulin, glucose, or saline, or a subcutaneous basal and bolus insulin dosing recommendation or prescribed dose, to adjust and maintain the blood glucose level towards a configurable (based on the patient's information) physician's determined blood glucose target range BGTR. The system100also considers a patient's insulin sensitivity for improved glycemic management and outcomes. The system100may take into account pertinent patient information such as demographics and previous results, leading to a more efficient use of healthcare resources. Finally, the system100provides a reporting platform for reporting the recommendations or prescribed dose(s) to the user40and the patient10. In addition, for diabetic patients who eat meals, the system100provides faster, more reliable, and more efficient insulin administration than a human monitoring the insulin administration.
The system100reduces the probability of human error and ensures consistent treatment, due to the system's capability of storing and tracking the patient's blood glucose levels BG, which may be used for statistical studies. As for patients who are tube-fed or do not eat meals, the system100provides dedicated subprograms, which in turn provide basal insulin and correction boluses but no meal boluses. Patients who are tube-fed or who do not eat usually have a higher basal insulin level than patients who eat, because the carbohydrates in the nutritive formula are accounted for in the basal insulin. The system100provides a meal-by-meal adjustment of Meal Boluses without carbohydrate counting, by providing a dedicated subprogram that adjusts meal boluses based on the immediately preceding meal bolus and the BG that followed it. The system100provides a meal-by-meal adjustment of Meal Boluses with carbohydrate counting by providing a dedicated subprogram that adjusts meal boluses based on a Carbohydrate-to-Insulin Ratio (CIR) that is adjusted at each meal, based on the CIR used at the immediately preceding meal bolus and the BG that followed it. Hyperglycemia is a condition that exists when blood sugars are too high. While hyperglycemia is typically associated with diabetes, this condition can exist in many patients who do not have diabetes, yet have elevated blood sugar levels caused by trauma or stress from surgery and other complications from hospital procedures. Insulin therapy is used to bring blood sugar levels back into a normal range. Hypoglycemia may occur at any time when a patient's blood glucose level is below a preferred target. Appropriate management of blood glucose levels for critically ill patients reduces co-morbidities and is associated with a decrease in infection rates, length of hospital stay, and death. The treatment of hyperglycemia may differ depending on whether or not a patient has been diagnosed with Type 1 diabetes mellitus, Type 2 diabetes mellitus, gestational diabetes mellitus, or non-diabetic stress hyperglycemia. The blood glucose target range BGTRis defined by a lower limit, i.e., a low target BGTRLand an upper limit, i.e., a high target BGTRH. Stress-related hyperglycemia: Patients often get “stress hyperglycemia” if they are very sick or undergoing surgery. This condition requires insulin. In diabetic patients, the need for insulin is visibly increased. In non-diabetic patients, the stress accounts for the only need for insulin, and as the patients recover, the stress subsides, and their need for insulin rapidly decreases. For non-diabetic patients, the concern is that their need for insulin decreases faster than their dose regimen, leading to hypoglycemia. Diabetes Mellitus has been treated for many years with insulin. Some recurring terms and phrases are described below: Injection: Administering insulin by means of a manual syringe or an insulin “pen,” a portable syringe named for its resemblance to the familiar writing implement. Infusion: Administering insulin in a continuous manner by means of an insulin pump for subcutaneous insulin or an intravenous apparatus123a, both of which are capable of continuous administration. Intravenous Insulin Therapy: Intravenous infusion of insulin has been approved by the U.S. Food and Drug Administration as an acceptable indication for use. Intravenous infusion is the fastest of all insulin administration routes and, typically, only available in the hospital setting.
For instance, in intensive care units, the patients may be fed by intravenous glucose infusion, by intravenous Total Parenteral Nutrition (TPN), or by a tube to the stomach. Patients are often given insulin in an intravenous infusion at an insulin infusion rate IIR. The IIR is regulated by the frequent testing of blood glucose, typically at intervals between about 20 minutes and 2 hours. This is combined with a protocol in which a new IIR is computed after each blood glucose test. Basal-Bolus Therapy: Basal-bolus therapy is a term that collectively refers to any insulin regimen involving basal insulin and boluses of insulin. Basal Insulin: Insulin that is intended to metabolize the glucose released by a patient's liver during a fasting state. Basal insulin is administered in such a way that it maintains a background level of insulin in the patient's blood, which is generally steady but may be varied in a programmed manner by an insulin pump123a. Basal insulin is a slow, relatively continuous supply of insulin throughout the day and night that provides the low, but present, insulin concentration necessary to balance glucose consumption (glucose uptake and oxidation) and glucose production (glycogenolysis and gluconeogenesis). A patient's Basal insulin needs are usually about 10 to 15 mU/kg/hr and account for 30% to 50% of the total daily insulin needs; however, considerable variation occurs based on the patient10. Bolus Insulin: Insulin that is administered in discrete doses. There are two main types of boluses, Meal Bolus and Correction Bolus. Meal Bolus: Taken just before a meal in an amount which is proportional to the anticipated immediate effect of carbohydrates in the meal entering the blood directly from the digestive system. The amounts of the Meal Boluses may be determined and prescribed by a physician40for each meal during the day, i.e., breakfast, lunch, and dinner. Alternatively, the Meal Bolus may be calculated in an amount generally proportional to the number of grams of carbohydrates in the meal. The amount of the Meal Bolus is calculated using a proportionality constant, which is a personalized number called the Carbohydrate-to-Insulin Ratio (CIR) and calculated as follows: Meal Insulin Bolus={grams of carbohydrates in the meal}/CIR (1) Correction Bolus CB: Injected immediately after a blood glucose measurement; the amount of the correction bolus is proportional to the error in the BG (i.e., the bolus is proportional to the difference between the blood glucose measurement BG and the patient's personalized Target blood glucose BGTarget). The proportionality constant is a personalized number called the Correction Factor, CF, and is calculated as follows: CB=(BG−BGTarget)/CF (2) A Correction Bolus CB is generally administered in a fasting state, after the previously consumed meal has been digested. This often coincides with the time just before the next meal. There are several kinds of Basal-Bolus insulin therapy including Insulin Pump therapy and Multiple Dose Injection therapy: Insulin Pump Therapy: An insulin pump123ais a medical device used for the administration of insulin in the treatment of diabetes mellitus, also known as continuous subcutaneous insulin infusion therapy. The device includes: a pump, a disposable reservoir for insulin, and a disposable infusion set.
The pump123ais an alternative to multiple daily injections of insulin by insulin syringe or an insulin pen and allows for intensive insulin therapy when used in conjunction with blood glucose monitoring and carbohydrate counting. The insulin pump123ais a battery-powered device about the size of a pager. It contains a cartridge of insulin, and it pumps the insulin into the patient via an “infusion set”, which is a small plastic needle or “cannula” fitted with an adhesive patch. Only rapid-acting insulin is used. Multiple Dose Injection (MDI): MDI involves the subcutaneous manual injection of insulin several times per day using syringes or insulin pens123b. Meal insulin is supplied by injection of rapid-acting insulin before each meal in an amount proportional to the meal. Basal insulin is provided as a once, twice, or three times daily injection of a dose of long-acting insulin. Other dosage frequencies may be available. Advances continue to be made in developing different types of insulin, many of which are used to great advantage with MDI regimens: Long-acting insulins are non-peaking and can be injected as infrequently as once per day. These insulins are widely used for Basal Insulin. They are administered in dosages that make them appropriate for the fasting state of the patient, in which the blood glucose is replenished by the liver to maintain a steady minimum blood glucose level. Rapid-acting insulins act on a time scale shorter than natural insulin. They are appropriate for boluses. In some examples, critically ill patients are ordered nil per os (NPO), which means that oral food and fluids are withheld from the patient10. Typically, these patients are unconscious, have just completed an invasive surgical procedure, or generally have difficulty swallowing. Intravenous insulin infusion is typically the most effective method of managing blood glucose levels in these patients. A patient10may be NPO and receiving a steady infusion of intravenous glucose, Total Parenteral Nutrition, tube feeding, regular meals that include carbohydrates, or not receiving any nutrition at all. In cases where the patient10is not receiving any nutrition, blood glucose is typically replaced by endogenous production by the liver. As a patient's condition improves, an NPO order may be lifted, allowing the patient10to commence an oral caloric intake. In patients10with glycemic abnormalities, additional insulin may be needed to cover the consumption of carbohydrates. These patients10generally receive one-time injections of insulin in the patient's subcutaneous tissue. Subcutaneous administration of mealtime insulin in critically ill patients10can introduce a patient safety risk if, after receiving the insulin injection, the patient10decides not to eat, is unable to finish the meal, or experiences emesis. Continuous intravenous infusion of mealtime insulin, over a predetermined time interval, allows for an incremental fulfillment of the patient's mealtime insulin requirement, while minimizing patient safety risks. If a patient10decides he/she is unable to eat, the continuous intravenous infusion may be stopped or, if a patient10is unable to finish the meal, the continuous intravenous infusion rate may be decreased to compensate for the reduction in caloric intake.
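For illustration, the bolus arithmetic of equations (1) and (2) above reduces to two one-line functions. The Python below is a sketch only; the function names and the example numbers are assumptions, and actual dosing parameters would be set by a clinician.

def meal_bolus(carb_grams, cir):
    # Equation (1): grams of carbohydrate divided by the patient's
    # Carbohydrate-to-Insulin Ratio (CIR).
    return carb_grams / cir

def correction_bolus(bg, bg_target, cf):
    # Equation (2): blood glucose error divided by the patient's
    # Correction Factor (CF).
    return (bg - bg_target) / cf

# Illustrative numbers: 60 g of carbohydrate with CIR = 10 gives a
# 6 unit meal bolus; a BG of 180 mg/dL against a 120 mg/dL target with
# CF = 40 gives a 1.5 unit correction bolus.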
The pharmacokinetic (what the body does to a drug over a period of time, which includes the processes of absorption, distribution, localization in tissues, biotransformation, and excretion) and pharmacodynamic (what a drug does to the body) actions of insulin significantly improve when administering insulin via an intravenous route, which is a typical method of delivery for hospitalized patients10. The management of prandial insulin requirements using an intravenous route can improve patient safety, insulin efficiency, and the accuracy of insulin dosing. The majority of patients who require continuous intravenous insulin infusion therapy may also need to be transitioned to a subcutaneous insulin regimen for ongoing control of blood glucose, regardless of diabetes mellitus (DM) diagnosis. Moreover, the timing, dosing, and process to transition patients10from a continuous intravenous route of insulin administration to a subcutaneous insulin regimen are complex and should be individualized based on various patient parameters. Failure to individualize this approach could increase the risk of severe hypoglycemia during the transition process. If not enough insulin is given, the patient10may experience acute post-transition hyperglycemia, requiring re-initiation of a continuous intravenous insulin infusion. Therefore, the clinical decision support system100calculates a personalized dose of insulin to bring and maintain the patient's blood glucose level into a target range BGTR, while taking into consideration the condition of the patient10. The clinical decision support system100includes a glycemic management module50, an integration module60, a surveillance module70, and a reporting module80. Each module50,60,70,80is in communication with the other modules50,60,70,80via a network20. In some examples, the network20(discussed below) provides access to cloud computing resources that allows for the performance of services on remote devices instead of the specific modules50,60,70,80. The glycemic management module50executes a process200(e.g., an executable instruction set) on a processor112,132,142or on the cloud computing resources. The integration module60allows for the interaction of users40with the system100. The integration module60receives information inputted by a user40and allows the user40to retrieve previously inputted information stored on a storage system (e.g., one or more of cloud storage resources24, a non-transitory memory144of a hospital's electronic medical system140, a non-transitory memory114of the patient device110, or other non-transitory storage media in communication with the integration module60). Therefore, the integration module60allows for the interaction between the users40and the system100via a display116,146. The surveillance module70considers patient information208areceived from a user40via the integration module60and information received from a glucometer124that measures a patient's blood glucose value BG and determines if the patient10is within a threshold blood glucose value BGTH. In some examples, the surveillance module70alerts the user40if a patient's blood glucose values BG are not within a threshold blood glucose value BGTH. The surveillance module70may be preconfigured to alert the user of other discrepancies between expected values and actual values based on preconfigured parameters (discussed below). For example, when a patient's blood glucose value BG drops below a lower limit of the threshold blood glucose value BGTHL.
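A minimal sketch of such a threshold check follows. The callback-based design and all names are assumptions for illustration, not the disclosed implementation of the surveillance module70.

def check_bg(bg, bg_th_low, bg_th_high, alert):
    # Alert when a blood glucose measurement BG falls outside the
    # configured threshold range [BGTHL, BGTHH].
    if bg < bg_th_low:
        alert("BG %s below lower threshold %s" % (bg, bg_th_low))
    elif bg > bg_th_high:
        alert("BG %s above upper threshold %s" % (bg, bg_th_high))

# Example: check_bg(58, 70, 180, print) would flag a low reading.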
The reporting module80may be in communication with at least one display116,146and provides information to the user40determined using the glycemic management module50, the integration module60, and/or the surveillance module70. In some examples, the reporting module80provides a report that may be displayed on a display116,146and/or is capable of being printed. The system100is configured to evaluate a glucose level and nutritional intake of a patient10. The system100also evaluates whether the patient10is transitioning to a subcutaneous insulin regimen. Based on the evaluation and analysis of the data, the system100calculates an insulin dose, which is administered to the patient10to bring and maintain the blood glucose level of the patient10into the blood glucose target range BGTR. The system100may be applied to various devices, including, but not limited to, intravenous infusion pumps123a, subcutaneous insulin infusion pumps123a, glucometers, continuous glucose monitoring systems, and glucose sensors. In some implementations, as the system100is monitoring the patient's blood glucose values BG and the patient's insulin intake, the system100notifies the user40if the patient10receives more than 500 units/hour of insulin because the system100considers these patients10to be insulin resistant. In some examples, the clinical decision support system100includes a network20, a patient device110, a dosing controller160, and a service provider130. The patient device110may include, but is not limited to, a desktop computer or a portable electronic device (e.g., cellular phone, smartphone, personal digital assistant, barcode reader, personal computer, or a wireless pad) or any other electronic device capable of sending and receiving information via the network20. The patient device110includes a data processor112(e.g., a computing device that executes instructions), non-transitory memory114, and a display116(e.g., touch display or non-touch display) in communication with the data processor112. In some examples, the patient device110includes a keyboard118, speakers122, a microphone, a mouse, and a camera. The service provider130may include a data processor132in communication with non-transitory memory134. The service provider130provides the patient10with a process200(seeFIG.2) (e.g., a mobile application, a web-site application, or a downloadable program that includes a set of instructions) executable on a processor112,132,142of the dosing controller160and accessible through the network20via the patient device110, intravenous infusion pumps123a, hospital electronic medical record systems140, or portable blood glucose measurement devices124(e.g., glucose meter or glucometer). Intravenous infusion pumps infuse fluids, medication, or nutrients into a patient's circulatory system. Intravenous infusion pumps123aare generally used intravenously, although subcutaneous, arterial, and epidural infusions are used in some instances. Intravenous infusion pumps123atypically administer fluids that are expensive or unreliable if administered manually (e.g., using a pen123b) by a nurse or doctor40. Intravenous infusion pumps123acan administer a 0.1 ml per hour injection, injections every minute, injections with repeated boluses requested by the patient, up to a maximum number per hour, or fluids whose volumes vary by the time of day.
In some implementations, an electronic medical record system140is located at a hospital42(or a doctor's office) and includes a data processor142, a non-transitory memory144, and a display146(e.g., touch display or non-touch display). The non-transitory memory144and the display146are in communication with the data processor142. In some examples, the hospital electronic medical system140includes a keyboard148in communication with the data processor142to allow a user40to input data, such as patient information208a(FIGS.2A and2B). The non-transitory memory144maintains patient records capable of being retrieved, viewed, and, in some examples, modified and updated by authorized hospital personnel on the display146. The dosing controller160is in communication with the glucometer124and includes a computing device112,132,142and non-transitory memory114,134,144in communication with the computing device112,132,142. The dosing controller160executes the process200. The dosing controller160stores patient-related information retrieved from the glucometer124to determine an insulin dose rate IIR based on the received blood glucose measurement BG. The network20may include any type of network that allows sending and receiving communication signals, such as a wireless telecommunication network, a cellular telephone network, a time division multiple access (TDMA) network, a code division multiple access (CDMA) network, a Global System for Mobile Communications (GSM) network, a third generation (3G) network, a fourth generation (4G) network, a satellite communications network, and other communication networks. The network20may include one or more of a Wide Area Network (WAN), a Local Area Network (LAN), and a Personal Area Network (PAN). In some examples, the network20includes a combination of data networks, telecommunication networks, and a combination of data and telecommunication networks. The patient device110, the service provider130, and the hospital electronic medical record system140communicate with each other by sending and receiving signals (wired or wireless) via the network20. In some examples, the network20provides access to cloud computing resources, which may be elastic/on-demand computing and/or storage resources24available over the network20. The term ‘cloud service’ generally refers to a service performed not locally on a user's device, but rather delivered from one or more remote devices accessible via one or more networks20. Referring toFIGS.1B and2A-2C, the process200receives parameters (e.g., patient condition parameters) inputted via the patient device110, the service provider130, and/or the hospital system140, analyzes the inputted parameters, and determines a personalized dose of insulin to bring and maintain a patient's blood glucose level BG into a preferred target range BGTR. In some implementations, before the process200begins to receive the parameters, the process200may receive a username and a password (e.g., at a login screen displayed on the display116,146) to verify that a qualified and trained healthcare professional40is initiating the process200and entering the correct information that the process200needs to accurately administer insulin to the patient10. The system100may customize the login screen to allow a user40to reset their password and/or username. Moreover, the system100may provide a logout button (not shown) that allows the user40to log out of the system100. The logout button may be displayed on the display116,146at any time during the execution of the process200.
The clinical decision support system100may include an alarm system120that alerts a user40when the patient's blood glucose level BG is outside the target range BGTR. The alarm system120may produce an audible sound via speaker122in the form of a beep or some like audio sounding mechanism. In some examples, the alarm system120displays a warning message or other type of indication on the display116of the patient device110. The alarm system120may also send the audible and/or visual notification via the network20to the hospital system140(or any other remote station) for display on the display146of the hospital system140or played through speakers152of the hospital system140. The process200prompts a user40to input patient information208aat block208. The user40may input the patient information208a, for example, via the user device110or via the hospital electronic medical record systems140located at a hospital42(or a doctor's office). The user40may input new patient information208aas shown inFIG.2Bor retrieve previously stored patient information208aas shown inFIG.2C. In some implementations, the process200provides the user40with a patient list209(FIG.2C) where the user40selects one of the patient names from the patient list209, and the process200retrieves that patient's information208a. The process200may allow the user40to filter the patient list209, e.g., alphabetically (first name or last name), by location, or by patient identification. The process200may retrieve the patient information208afrom the non-transitory memory144of the hospital's electronic medical system140or the non-transitory memory114of the patient device110(e.g., where the patient information208awas previously entered and stored). The patient information208amay include, but is not limited to, a patient's name, a patient's identification number (ID), a patient's height, weight, date of birth, diabetes history, physician name, emergency contact, hospital unit, diagnosis, gender, room number, and any other relevant information. In some examples, the diagnosis may include, but is not limited to, burn patients, coronary artery bypass patients, stroke patients, diabetic ketoacidosis (DKA) patients, and trauma patients. After the user40completes inputting the patient information208a, the process200at block202determines whether the patient10is to be treated with an intravenous treatment module by prompting the user40(e.g., on the display116,146) to input whether the patient10will be treated with an intravenous treatment module. If the patient10will not be treated with the intravenous treatment module, the process200determines at block210whether the patient10will be treated with a subcutaneous treatment module by asking the user40(e.g., by prompting the user on the display116,146). If the user40indicates that the patient10will be treated with the subcutaneous treatment, the process200flows to block216, where the user40enters patient subcutaneous information216a, such as bolus insulin type, target range, basal insulin type and frequency of distribution (e.g., 1 dose per day, 2 doses per day, 3 doses per day, etc.), patient diabetes status, subcutaneous type ordered for the patient (e.g., Basal/Bolus and correction that is intended for patients on a consistent carbohydrate diet, or Basal and correction that is intended for patients who are NPO or on continuous enteral feeds), frequency of patient blood glucose measurements, or any other relevant information.
In some implementations, the patient subcutaneous information216ais prepopulated with default parameters, which may be adjusted or modified. When the user40enters the patient subcutaneous information216a, the subcutaneous program begins at block226. The process200may determine whether the patient10is being treated with an intravenous treatment or a subcutaneous treatment by prompting the user40to select between two options (e.g., a button displayed on the display116,146), one being the intravenous treatment and the other being the subcutaneous treatment. In some implementations and referring back to block202, if the process200determines that the patient10will be treated with the intravenous treatment module, the process200prompts the user40at block204for setup data204a, such as patient parameters204arelevant to the intravenous treatment module. In some examples, the patient parameters204arelating to the intravenous treatment may be prepopulated, for example, with default values that may be adjusted and modified by the user40. These patient parameters204amay include an insulin concentration (i.e., the strength of insulin being used for the intravenous dosing, which may be measured in units/milliliter), the type of insulin and rate being administered to the patient, the blood glucose target range BGTR, the patient's diabetes history, a number of carbohydrates per meal, or any other relevant information. In some implementations, the type of insulin and the rate of insulin depend on the BG of the patient10. For example, the rate and type of insulin administered to a patient10when the blood glucose value BG of the patient10is greater than or equal to 250 mg/dl may be different from the rate and type of insulin administered to the patient10when the blood glucose value BG of the patient is less than 250 mg/dl. The blood glucose target range BGTRmay be a configurable parameter, customized based on various patient factors. The blood glucose target range BGTRmay be limited to a span of 40 mg/dl (e.g., 100-140 mg/dl, 140-180 mg/dl, or 120-160 mg/dl). After the user40inputs patient parameters204afor the intravenous treatment at block204, the process200prompts the user40to input the blood glucose value BG of the patient10at block206. The blood glucose value BG may be manually inputted by the user40, sent via the network20from a glucometer124, sent electronically from the hospital information or laboratory system140, or sent from another wireless device. The process200determines a personalized insulin dose rate, referred to as an insulin infusion rate IIR, using the blood glucose value BG of the patient10and a dose calculation process300. FIG.3provides a dose calculation process300for calculating the insulin infusion rate IIR of the patient10for intravenous treatment after the process200receives the patient information208adiscussed above (including the patient's blood glucose value BG). At block301the dose calculation process300determines if the patient's blood glucose BG is less than a stop threshold value BGTHstop. If not, then at block303the dose calculation process300goes to block304without taking any action. If, however, the patient's blood glucose BG is less than the stop threshold value BGTHstop, then the dose calculation process300sets the patient's regular insulin dose rate IIR to zero at block302and goes to block322. The dose calculation process300determines at decision block304if the inputted blood glucose value BG is the first inputted blood glucose value.
The patient's regular insulin dose rate IIR is calculated at block320in accordance with the following equation: IIR=(BG−K)*M (3A) where K is a constant, known as the Offset Target, with the same unit of measure as blood glucose, and M is a unit-less multiplier. In some examples, the Offset Target K is lower than the blood glucose target range of the patient10. The Offset Target K allows the dose calculation process300to calculate a non-zero stable insulin dose rate even when a blood glucose result is in the blood glucose target range BGTR. The initial multiplier MI, determined by the physician40, approximates the sensitivity of a patient10to insulin. For example, the initial multiplier MIequals 0.02 for adults ages 18 and above. In some examples, the initial multiplier MIequals 0.01 for frail elderly patients10who may be at risk for complications arising when their blood glucose level BG falls faster than 80 mg/dl/hr. Moreover, the physician40may order a higher initial multiplier MIfor patients10with special needs. For example, CABG patients (i.e., patients who have undergone coronary artery bypass grafting) with a BMI (Body Mass Index, a measure of human body shape based on the individual's mass and height) less than 30 might typically receive an initial multiplier MIof 0.05, whereas a patient with a BMI greater than 30 might receive an initial multiplier MIof 0.06. In addition, a patient's weight may be considered in determining the value of the initial multiplier MI. For example, in pediatric treatments, the system100calculates a patient's initial multiplier MIusing the following equation: MI=0.0002×Weight of patient (in kilograms) (3B) In some implementations, K is equal to 60 mg/dl. The dose calculation process300determines the blood glucose target range BGTRusing two limits inputted by the user40, a lower limit of the target range BGTRLand an upper (high) limit of the target range BGTRH. These limits are chosen by the user40so that they contain the desired blood glucose target as the midpoint. Additionally, the Offset Target K may be calculated dynamically in accordance with the following equation: K=BGTarget−Offset (4) where BGTargetis the midpoint of the blood glucose target range BGTRand Offset is the preconfigured distance between the target center BGTargetand the Offset Target K. In some implementations, the insulin dose rate IIR may be determined by the following process on a processor112,132,142. Other processes may also be used.
function IIR($sf, $current_bg, $bg_default = 60, $insulin_concentration = 1.0, $ins_units_of_measure = 'units/hr') {
    /*
     * @param $sf                    sensitivity factor (multiplier) from db
     * @param $current_bg            the current BG value being submitted
     * @param $bg_default            the default "Stop Insulin When" value; defaults to 60
     * @param $insulin_concentration the default insulin concentration from settings
     */
    settype($sf, 'float');
    settype($bg_default, 'float');
    settype($current_bg, 'float');
    settype($insulin_concentration, 'float');
    if ($current_bg > 60) {
        $iir = array();
        // EQ. 3A: IIR = (BG - K) * M, with K supplied as $bg_default
        $iir[0] = round(($current_bg - $bg_default) * $sf, 1);
        if ($ins_units_of_measure != 'units/hr') {
            // Convert to volume units using the insulin concentration
            $iir[1] = round(($current_bg - $bg_default) * $sf / $insulin_concentration, 1);
        }
        return $iir;
    } else {
        // Below the stop threshold: no insulin is dosed
        return 0;
    }
}
Referring to decision block304, when the dose calculation process300determines that the inputted blood glucose value BG is the first inputted blood glucose value, then the dose calculation process300sets the value of the current multiplier M equal to the initial multiplier MIat block306.
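By way of illustration only, EQs. 3B and 4 might be sketched in the same style as the process above; the function names, and the default Offset of 60 mg/dl, are assumptions made for this sketch rather than fixed parameters of the system100.
function initial_multiplier_pediatric($weight_kg) {
    // EQ. 3B: MI = 0.0002 x weight of patient (in kilograms)
    return 0.0002 * $weight_kg;
}

function offset_target($target_low, $target_high, $offset = 60) {
    // EQ. 4: K = BGTarget - Offset, where BGTarget is the midpoint
    // of the blood glucose target range BGTR
    $bg_target = ($target_low + $target_high) / 2;
    return $bg_target - $offset;
}
For a target range of 100-140 mg/dl and an Offset of 60 mg/dl, offset_target(100, 140) returns 60 mg/dl, in which case EQ. 3A reduces to IIR=(BG−60)*M, consistent with the implementations noted above in which K is equal to 60 mg/dl.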
The dose calculation process300then calculates, at block320, the Insulin Infusion Rate in accordance with the IIR equation (EQ. 3A) and returns to the process200(seeFIG.2). However, referring back to decision block304, when the dose calculation process300determines that the inputted blood glucose value BG is not the first inputted blood glucose value, the dose calculation process300determines if the Meal Bolus Module has been activated at decision block308. If the dose calculation process300determines that the Meal Bolus Module has been activated, then the dose calculation process300begins a Meal Bolus process500(seeFIG.5). Referring back to decision block308, if the Meal Bolus Module has not been activated, the dose calculation process300determines, at decision block310, if the current blood glucose value BG is greater than the upper limit BGTRHof the blood glucose target range BGTR. If the blood glucose value BG is greater than the upper limit BGTRHof the blood glucose target range BGTR, the dose calculation process300determines, at block314, a ratio of the current blood glucose value BG to the previous blood glucose value BGP, where BGPwas measured at an earlier time than the current BG. The dose calculation process300then determines if the ratio of the blood glucose to the previous blood glucose, BG/BGP, is greater than a threshold value LA, as shown in the following equation: (BG/BGP)>LA (5) where BG is the patient's current blood glucose value; BGPis the patient's previous blood glucose value; and LAis the threshold ratio of BG/BGPfor blood glucose values above the upper limit of the blood glucose target range BGTRH. If the ratio BG/BGPexceeds the threshold ratio LA, then the Multiplier M is increased. In some examples, the threshold ratio LAequals 0.85. If the dose calculation process300determines that the ratio (BG/BGP) of the blood glucose value BG to the previous blood glucose value BGPis not greater than the threshold ratio LAfor a blood glucose value BG above the upper limit BGTRHof the blood glucose target range BGTR, then the dose calculation process300sets the value of the current multiplier M to equal the value of the previous multiplier MP, see block312. M=MP (6) Referring back to block314, if the dose calculation process300determines that the ratio (BG/BGP) of the blood glucose value BG to the previous blood glucose BGPis greater than the threshold ratio LAfor a blood glucose value above the upper limit BGTRHof the blood glucose target range BGTR, then the dose calculation process300multiplies the value of the current multiplier M by a desired Multiplier Change Factor (MCF) at block318. The dose calculation process300then calculates the insulin infusion rate at block320using the IIR equation (EQ. 3A) and returns to the process200(seeFIG.2). Referring back to block310, when the dose calculation process300determines that the current blood glucose value BG is not greater than the upper limit BGTRHof the blood glucose target range BGTR, the dose calculation process300then determines if the current blood glucose concentration BG is below the lower limit BGTRLof the blood glucose target range BGTRat decision block311.
If the current blood glucose value BG is below the lower limit BGTRLof the blood glucose target range BGTR, the dose calculation process300at block316divides the value of the current multiplier M by the Multiplier Change Factor (MCF), in accordance with the following equation: M=MP/MCF (7) and calculates the current insulin infusion rate IIR using EQ. 3A at block320and returns to the process200(seeFIG.2). At block311, if the dose calculation process300determines that the blood glucose value BG is not below the lower limit of the blood glucose target range BGTRL, the dose calculation process300sets the value of the current multiplier to be equal to the value of the previous multiplier MPat block312(see EQ. 6). Referring again toFIG.3, at block311, if the current blood glucose value BG is below the lower limit of the target range BGTRL, logic passes to decision block322, where the process300determines if the current blood glucose concentration BG is below a hypoglycemia threshold BGHypo. If the current blood glucose BG is below the hypoglycemia threshold BGHypo, logic then passes to block324, where the process300recommends hypoglycemia treatment, either by calculating an individualized dose of intravenous glucose or by recommending oral hypoglycemia treatment. Referring back toFIG.2A, after the dose calculation process300calculates the insulin infusion rate IIR, the process200proceeds to a time calculation process400(FIG.4A) for calculating a time interval TNextuntil the next blood glucose measurement. FIG.4Ashows the time interval calculation process400for calculating a time interval TNextbetween the current blood glucose measurement BG and the next blood glucose measurement BGnext. The time-duration of blood glucose measurement intervals TNextmay vary, and the starting time interval can either be inputted by a user40at the beginning of the process200,300,400, or defaulted to a predetermined time interval TDefault(e.g., one hour). The time interval TNextis shortened if the blood glucose concentration BG of the patient10is decreasing excessively, or it may be lengthened if the blood glucose concentration BG of the patient10becomes stable within the blood glucose target range BGTR. The time-interval calculation process400determines a value for the time interval TNextbased on several conditions. The time-interval process400checks for the applicability of several conditions, where each condition has a value for TNextthat is triggered by a logic-test (except TDefault). The process400selects the lowest value of TNextfrom the values triggered by logic tests (not counting TDefault). If no logic test was triggered, the process selects TDefault. This is accomplished inFIG.4Aby the logic structure that selects the lowest values of TNextfirst. However, other logic structures are possible as well. The time calculation process400determines at decision block416if the current blood glucose BG is below the lower limit BGTRL(target range low limit) of the blood glucose target range BGTR. If the current blood glucose BG is below the lower limit BGTRLof the blood glucose target range BGTR, then the time calculation process400determines, at decision block418, if the current blood glucose BG is less than a hypoglycemia-threshold blood glucose level BGHypo. If the current blood glucose BG is less than the hypoglycemia-threshold blood glucose level BGHypo, the time calculation process400sets the time interval TNextto a hypoglycemia time interval THypo, e.g., 15 or 30 minutes, at block426.
Then the time calculation process400is complete and returns to the process200(FIG.2) at block428. If the current blood glucose BG is not less than (i.e., is greater than) the hypoglycemia-threshold blood glucose level BGHypoat block418, the time calculation process400determines at block422if the most recent glucose percent drop BG%Dropis greater than the threshold glucose percentage drop %DropLowLimit(for a low BG range) using the following equations: BG%Drop>%DropLowLimit (8A) since BG%Drop=(BGP−BG)/BGP (8B) then (BGP−BG)/BGP>%DropLowLimit (8C) where BGPis a previously measured blood glucose value. If the current glucose percent drop BG%Dropis not greater than the limit for glucose percent drop (for the low BG range) %DropLowLimit, the time calculation process400passes the logic to block412. In some examples, the low limit %DropLowLimitequals 25%. Referring back to block422, if the current glucose percent drop BG%Dropis greater than the limit for glucose percent drop (for the low BG range) %DropLowLimit, the time calculation process400at block424sets the time interval to a shortened time interval TShort, for example 20 minutes, to accommodate the increased drop rate of the blood glucose BG. Then the time calculation process400is complete and returns to the process200(FIG.2) at block428. Referring back to decision block416, if the time calculation process400determines that the current blood glucose BG is not below the lower limit BGTRLof the blood glucose target range BGTR, the time calculation process400determines at block420if the blood glucose BG has decreased by a percent of the previous blood glucose that exceeds a limit %DropRegular(for the regular range, i.e., blood glucose value BG>BGTRL), using the formula: (BGP−BG)/BGP>%DropRegular (9) If the blood glucose BG has decreased by a percentage that exceeds the regular threshold glucose percent drop (for the regular BG range) %DropRegular, the time calculation process400, at block425, sets the time interval to the shortened time interval TShort, for example 20 minutes. A reasonable value for %DropRegularfor many implementations is 66%. Then the time calculation process400is complete and returns to the process200(FIG.2) at block428. If, however, the glucose has not decreased by a percent that exceeds the threshold glucose percent drop %DropRegular(for the regular BG range), the time calculation process400routes the logic to block412. The process400determines, at block412, a blood glucose rate of descent BGDropRatebased on the following equation: BGDropRate=(BGP−BG)/(TCurrent−TPrevious) (10) where BGPis the previous blood glucose measurement, TCurrentis the current time, and TPreviousis the previous time. Moreover, the process400at block412determines if the blood glucose rate of descent BGDropRateis greater than a preconfigured drop rate limit BGdropRateLimit. If the time calculation process400at block412determines that the blood glucose rate of descent BGDropRatehas exceeded the preconfigured drop rate limit BGdropRateLimit, the time interval TNextuntil the next blood glucose measurement is shortened at block414to a glucose drop rate time interval TBGDR, which is a relatively shorter time interval than the current time interval TCurrent, in consideration of the fast drop. The preconfigured drop rate limit BGdropRateLimitmay be about 100 mg/dl/hr. The glucose drop rate time interval TBGDRmay be 30 minutes, or any other predetermined time. In some examples, a reasonable value for TDefaultis one hour.
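Consolidating the drop tests of EQs. 8-10, a minimal sketch might read as follows; the function name, the boolean argument, and the parameter defaults (25%, 66%, and 100 mg/dl/hr, taken from the example values above) are assumptions of this sketch, and process400applies the rate test of block412only when the percent-drop tests do not trigger.
function bg_drop_tests($bg, $bg_prev, $t_current, $t_previous, $below_target_low,
                       $drop_low_limit = 0.25, $drop_regular = 0.66,
                       $drop_rate_limit = 100) {
    // EQ. 8B: fractional drop since the previous measurement BGP
    $pct_drop = ($bg_prev - $bg) / $bg_prev;
    // EQ. 10: rate of descent in mg/dl/hr (times expressed in hours)
    $drop_rate = ($bg_prev - $bg) / ($t_current - $t_previous);
    if ($below_target_low) {
        // EQ. 8C: low-range test against %DropLowLimit (block 422)
        $excessive_drop = $pct_drop > $drop_low_limit;
    } else {
        // EQ. 9: regular-range test against %DropRegular (block 420)
        $excessive_drop = $pct_drop > $drop_regular;
    }
    // Block 412: compare the rate of descent to BGdropRateLimit
    $rate_exceeded = $drop_rate > $drop_rate_limit;
    return array('excessive_drop' => $excessive_drop, 'rate_exceeded' => $rate_exceeded);
}
In this sketch, a true 'excessive_drop' corresponds to shortening TNextto TShortat blocks424and425, and a true 'rate_exceeded' corresponds to shortening TNextto TBGDRat block414.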
The time calculation process400is then complete and returns to the process200(FIG.2) at block428. If the time calculation process400determines at block412that the glucose drop rate BGDropRatedoes not exceed the preconfigured rate limit BGdropRateLimit, the time calculation process400determines, at block408, if the patient's blood glucose concentration BG has been within the desired target range BGTR(e.g., BGTRL<BG<BGTRH) for a period of time TStable. The criterion for stability in the blood glucose target range BGTRis a specified time in the target range BGTRor a specified number of consecutive blood glucose measurements in the target range BGTR. For example, the stable period of time TStablemay be one hour, two hours, two and a half hours, or up to 4 hours. If the stability criterion is met, then the time interval TNextuntil the next scheduled blood glucose measurement BG may be set at block410to a lengthened time interval TLong(such as 2 hours) that is generally greater than the default time interval TDefault. Then the time calculation process400is complete and returns to the process200(FIG.2) at block428. If the time calculation process400determines that the patient10has not met the criteria for stability, the time calculation process400sets the time interval TNextto a default time interval TDefaultat block406. Then the time calculation process400is complete and returns to the process200(FIG.2) at block428. Referring toFIGS.4B and4C, once the time calculation process400calculates the recommended time interval TNext, the process200provides a countdown timer430that alerts the user40when the next blood glucose measurement is due. The countdown timer430may be on the display116of the patient device110or displayed on the display146of the hospital system140. When the timer430is complete, a “BG Due!” message might be displayed as shown inFIG.4B. The countdown timer430may include an overdue time432indicating how late the measurement is if a blood glucose value is not entered as scheduled. In some implementations, the countdown timer430connects to the alarm system120of the user device110. The alarm system120may produce an audible sound via the speaker122in the form of a beep or some like audio sounding mechanism. The audible and/or visual notification may also be sent via the network to the hospital system140(or any other remote station) and displayed on the display146of the hospital system140or played through speakers152of the hospital system140, or routed to the cell phone or pager of the user. In some examples, the audible alarm using the speakers122is turned off by a user selection434on the display116or it is silenced for a preconfigured time. The display116,146may show information230that includes the patient's intravenous treatment information230aor the patient's subcutaneous treatment information230b. In some examples, the user40selects the countdown timer430when the timer430indicates that the patient10is due for his or her blood glucose measurement. When the user40selects the timer430, the display116,146allows the user40to enter the current blood glucose value BG as shown inFIG.4D. For intravenous patients10, the process200may ask the user40(via the display116,146) if the blood glucose is a pre-meal blood glucose measurement (as shown inFIG.4D).
When the user40enters the information230(FIG.4D), the user40selects a continue button to confirm the entered information230, which leads to the display116,146displaying blood glucose information230cand a timer430showing when the next blood glucose measurement BG is due (FIG.4E). In addition, the user40may enter the patient's blood glucose measurement BG at any time before the timer430expires, if the user40selects the ‘enter BG’ button436. Therefore, the user40may input blood glucose values BG at any time, or the user40may choose to start the Meal Bolus module process500(seeFIG.5) by selecting the start meal button438(FIG.4E), transition the patient to SubQ insulin therapy600(seeFIG.6), or discontinue treatment220. Referring toFIGS.5A-5F, in some implementations, the process200includes a process where the patient's blood glucose level BG is measured prior to the consumption of caloric intake and that calculates the recommended intravenous mealtime insulin requirement necessary to control the patient's expected rise in blood glucose levels during the prandial period. When a user40chooses to start the Meal Bolus process500(e.g., when the user40positively answers that this is a pre-meal blood glucose measurement inFIG.4D, or when the user40selects the start meal button438inFIG.4E), the Meal Bolus process500, at decision block504, requests the blood glucose BG of the patient10(as shown inFIG.5C). The user40enters the blood glucose value BG at block501or the system100receives the blood glucose BG from a glucometer124. This blood glucose measurement is referred to herein as the Pre-Meal BG or BG1. In some examples, where the user40enters the information, the user40selects a continue button to confirm the entered information230c. In some examples, the intravenous meal bolus process500is administered to a patient10over a total period of time TMealBolus. The total period of time TMealBolusis divided into multiple time intervals TMealBolus1to TMealBolusN, where N is any integer greater than zero. In some examples, a first time interval TMealBolus1runs from a Pre-Meal blood glucose value BG1measured at time T1to a second blood glucose value BG2measured at time T2. A second time interval TMealBolus2runs from the second blood glucose value BG2measured at time T2to the third blood glucose value BG3measured at time T3. A third time interval TMealBolus3runs from the third blood glucose value BG3measured at time T3to a fourth blood glucose value BG4measured at time T4. In some implementations where the time intervals TMealBolusNare smaller than TDefault, the user40should closely monitor and control changes in the blood glucose of the patient10. For example, a total period of time TMealBolusequal to 2 hours may comprise TMealBolus1=30 minutes, TMealBolus2=30 minutes, and TMealBolus3=1 hour. This example ends on the fourth blood glucose measurement. When the Meal Bolus process500has been activated, an indication440is displayed on the display116,146informing the user40that the process500is in progress. The Meal Bolus process500asks the user40whether the entered blood glucose value BG is the first blood glucose value prior to the meal by displaying a question on the patient display116. If the Meal Bolus process500determines that the entered blood glucose value BG is the first blood glucose value (BG1) prior to the meal, then the Meal Bolus process500freezes the current multiplier M from being adjusted and calculates a regular intravenous insulin rate IIR at block512.
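To make the interval arithmetic concrete, the example schedule above can be expressed as a simple list whose entries sum to TMealBolus; the array layout is an assumption made only for illustration.
// Example schedule: TMealBolus1 = 0.5 h, TMealBolus2 = 0.5 h, TMealBolus3 = 1 h
$meal_bolus_intervals = array(0.5, 0.5, 1.0);      // hours
$t_meal_bolus = array_sum($meal_bolus_intervals);  // 2.0 hours total
// Blood glucose is measured at the start (BG1) and at the end of each
// interval (BG2, BG3, BG4), i.e., N intervals imply N+1 measurements.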
The regular intravenous insulin rate IIR may be determined using EQ. 3A. Meanwhile, at block502, the Meal Bolus process500loads preconfigured meal parameters, such as meal times, insulin type, default number of carbohydrates per meal, the total period of time of the meal bolus process TMealBolus, interval lengths (e.g., TMealBolus1, TMealBolus2, . . . , TMealBolusN), and the percent, “C”, of the estimated meal bolus to be delivered in the first interval TMealBolus1. In some examples, when the system100includes a hospital electronic medical record system140, nutritional information and number of grams of carbohydrates are retrieved from the hospital electronic medical record systems140automatically. The Meal Bolus process500allows the user40to select whether to input a number of carbohydrates from a selection of standard meals (ActualCarbs) or to use a custom input to input an estimated number of carbohydrates (EstimatedCarbs) that the patient10is likely to consume. The Meal Bolus process500then flows to block506, where the estimated meal bolus rate for the meal is calculated. The calculation process in block506is explained in two steps. The first step is calculation of a meal bolus (in units of insulin) in accordance with the following equation: Estimated Meal Bolus=EstimatedCarbs/CIR (11A) where CIR is the Carbohydrate-to-Insulin Ratio, previously discussed. The Meal Bolus process500then determines the Estimated Meal Bolus Rate based on the following equation: Estimated Meal Bolus Rate=Estimated Meal Bolus*C/TMealBolus1(11B) where TMealBolus1is the time duration of the first time interval of the Meal Bolus total period of time TMealBolus, and C is a constant adjusted to infuse the optimum portion of the Estimated Meal Bolus during the first time interval TMealBolus1. For instance, if Estimated Meal Bolus=6 units, TMealBolus1=0.5 hours, and C=25%, then applying EQ. 11B as an example: Estimated Meal Bolus Rate=(6 units)*25%/(0.5 hours)=3 units/hour (11C) The Meal Bolus process500calculates the Total Insulin Rate at block508as follows: Total Insulin Infusion Rate=Estimated Meal Bolus Rate+Regular Intravenous Rate (12) The Meal Bolus process500flows to block510where it sets the time interval for the first interval TMealBolus1to its configured value (e.g., usually 30 minutes), which ends at the second meal bolus blood glucose measurement (BG2). After the first time interval TMealBolus1expires (e.g., after 30 minutes elapse), the Meal Bolus process500prompts the user40to enter the blood glucose value BG once again at block501. When the Meal Bolus process500determines that the entered blood glucose value BG is not the first blood glucose value BG1entered at block504(i.e., the pre-meal BG, BG1, as previously discussed), the process500flows to block514. At block514, the Meal Bolus process500determines if the blood glucose value BG is the second value BG2entered by the user40. If the user40confirms that the entered blood glucose value BG is the second blood glucose value BG2entered, the Meal Bolus process500uses the just-entered blood glucose BG2to calculate the intravenous insulin rate IIR at block516and flows to block524. Simultaneously, if the blood glucose is the second blood glucose BG2, the Meal Bolus process500prompts the user40to enter the actual amount of carbohydrates that the patient10received at block518.
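A minimal sketch of EQs. 11A-12 follows, under the assumption that CIR, C, and TMealBolus1are supplied by configuration; the function and argument names are illustrative only.
function estimated_meal_bolus_rate($est_carbs, $cir, $c, $t_interval1) {
    // EQ. 11A: Estimated Meal Bolus (units of insulin) = EstimatedCarbs / CIR
    $meal_bolus = $est_carbs / $cir;
    // EQ. 11B: deliver the portion C of the bolus over the first interval
    return $meal_bolus * $c / $t_interval1;  // units/hour
}

// EQ. 12: Total Insulin Infusion Rate during the first interval
// $total_rate = estimated_meal_bolus_rate(60, 10, 0.25, 0.5) + $regular_iir;
// 60 g of carbohydrates with CIR = 10 g/unit gives a 6 unit bolus, so the
// meal bolus rate is 3 units/hour, matching EQ. 11C.
Note that 60 g and CIR=10 g/unit are hypothetical values chosen only to reproduce the 6 unit Estimated Meal Bolus of the example above.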
The Meal Bolus process500then determines at decision block520, based on the inputted amount of actual carbohydrates, if the patient10did not eat, i.e., if the amount of carbohydrates is zero (seeFIG.5C). If the Meal Bolus process500determines that the patient did not eat, the Meal Bolus process500then flows to block540, where the meal bolus module process500is discontinued, the multiplier is no longer frozen, and the time interval TNextis restored to the appropriate time interval TNext, as determined by process400. If, however, the Meal Bolus process500determines that the patient10ate, i.e., the actual carbohydrate amount is not zero (seeFIG.5D), then the Meal Bolus process500flows to block522, where it calculates a Revised Meal Bolus Rate according to the following equations. First, the Revised Meal Bolus (in units of insulin) is calculated: Revised Meal Bolus=ActualCarbs/CIR (13A) The process at block522then determines the amount (in units of insulin) of estimated meal bolus that has been delivered to the patient10so far: Estimated Meal Bolus Delivered=Estimated Meal Bolus Rate*(T2−T1) (13B) where T1is the time when the first blood glucose value BG1is measured and T2is the time when the second blood glucose value BG2is measured. The process at block522then calculates the portion of the Revised Meal Bolus remaining to be delivered (i.e., the Meal Bolus that has not yet been delivered to the patient10) as follows: Revised Meal Bolus Remaining=Revised Meal Bolus−Estimated Meal Bolus Delivered (13C) The process at block522then calculates the Revised Meal Bolus Rate as follows: Revised Meal Bolus Rate=Revised Meal Bolus Remaining/Time Remaining (14A) where Time Remaining=TMealBolus−TMealBolus1. Since the total time interval TMealBolusand the first time interval TMealBolus1are preconfigured values, the Time Remaining may be determined. The Meal Bolus process500calculates the total insulin rate at block524by adding the Revised Meal Bolus Rate to the regular Intravenous Rate (IIR), based on the blood glucose value BG: Total Insulin Rate=Revised Meal Bolus Rate+IIR (14B) The Meal Bolus process500flows to block526where it sets the time interval TNextto the second interval TMealBolus2, which ends at the third meal bolus blood glucose measurement BG3(e.g., usually after 30 minutes). After the second interval TMealBolus2expires (e.g., after 30 minutes), the Meal Bolus process500prompts the user40to enter the blood glucose value BG once again at block501. The Meal Bolus process500determines that the entered blood glucose value BG is not the first blood glucose value entered at block504(previously discussed) and flows to block514. The Meal Bolus process500determines that the entered blood glucose value BG is not the second blood glucose value entered at block514(previously discussed) and flows to block528. At block528, the Meal Bolus process500determines if the blood glucose value BG is the third value entered. If the entered blood glucose value BG is the third blood glucose value BG3entered, the Meal Bolus process500calculates the intravenous insulin rate IIR at block530and flows to block532. At block532the process determines the Total Insulin Rate by adding the newly-determined Regular Intravenous Insulin Rate (IIR) to the Revised Meal Bolus Rate, which was determined at BG2and remains effective throughout the whole meal bolus time, TMealBolus.
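Similarly, EQs. 13A-14B can be sketched as follows; the names are illustrative, and a positive Time Remaining is assumed.
function revised_meal_bolus_rate($actual_carbs, $cir, $est_bolus_rate,
                                 $t1, $t2, $t_meal_bolus) {
    // EQ. 13A: Revised Meal Bolus (units) from the carbohydrates actually eaten
    $revised = $actual_carbs / $cir;
    // EQ. 13B: estimated bolus already delivered between T1 and T2
    $delivered = $est_bolus_rate * ($t2 - $t1);
    // EQ. 13C: portion of the revised bolus still to be delivered
    $remaining = $revised - $delivered;
    // EQ. 14A: spread the remainder over the rest of the meal bolus period,
    // where Time Remaining = TMealBolus - TMealBolus1 and T2 - T1 = TMealBolus1
    $time_remaining = $t_meal_bolus - ($t2 - $t1);
    return $remaining / $time_remaining;  // units/hour
}

// EQ. 14B: Total Insulin Rate = revised_meal_bolus_rate(...) + IIR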
The Meal Bolus process500flows to block534where it sets the time interval TNextto the third interval TMealBolus3, which ends at the fourth meal bolus blood glucose measurement BG4(e.g., usually after 60 minutes). In some implementations, more than 3 intervals (TMealBolus1, TMealBolus2, TMealBolus3) may be used. Additional intervals TMealBolusNmay also be used and the process handles the additional intervals TMealBolusNsimilarly to how it handles the third time interval TMealBolus3. As discussed in the current example, the third interval TMealBolus3is the last time interval, which ends with the fourth blood glucose measurement BG4. After the third time interval TMealBolus3expires (e.g., after 60 minutes), the Meal Bolus process500prompts the user40to enter the blood glucose value BG once again at block501. The Meal Bolus process500determines that the entered blood glucose value BG is not the first blood glucose value entered at block504(previously discussed) and flows to block514. The Meal Bolus process500determines that the entered blood glucose value BG is not the second blood glucose value entered at block514(previously discussed), nor the third blood glucose level entered at block528, and flows to block536. At block536, the Meal Bolus process500determines that the inputted blood glucose is the fourth blood glucose value BG4. In this example, the fourth blood glucose value BG4is the last one. The process500then flows to block538where the multiplier is no longer frozen, and the time interval TNextis restored to the appropriate time interval TNext, as determined by the Timer Adjustment process400(FIG.4A). At this time, the Meal Bolus process500ends and the user40is prompted with a message indicating that the Meal Bolus process500is no longer active. As shown inFIG.5D, and previously discussed with respect toFIGS.4B-4E, the process200provides a countdown timer430that alerts the user40when the next blood glucose measurement is due. The countdown timer430may be on the display116of the patient device110or displayed on the display146of the hospital system140. When the timer430is complete, a “BG Due!” message might be displayed as shown inFIG.4B. Moreover, the timer430may be a countdown timer or a meal timer indicating a sequence of mealtime intervals (e.g., breakfast, lunch, dinner, bedtime, mid-sleep). In some implementations, a Meal Bolus process500may be implemented by the following process on a processor112,132,142. Other processes may also be used.
function PreMealIIR($PatientID, $CurrentBG, $Multiplier, $InsulinConcentration,
                    $EstCarbs, $ActualCarbs, $TimeInterval,
                    $InsulinUnitsOfMeasure, $MealBolusCount) {
    $iir = array();
    $CarbInsulinRatio = CIR($PatientID);
    // Regular intravenous rate per EQ. 3A, with K = 60 mg/dl
    $NormalInsulin = ($CurrentBG - 60) * $Multiplier;
    if ($MealBolusCount == 0) {
        // First run -- Premeal Bolus (EQ. 11A); C = 0.5 in this implementation
        $MealBolus = ($EstCarbs / $CarbInsulinRatio);
        if ($MealBolus < 0) { $MealBolus = 0; }
        $iir[0] = $NormalInsulin + ($MealBolus * 0.5);
        $iir[2] = ($MealBolus * 0.5);
    } else if ($MealBolusCount == 1) {
        // Second run -- Post Meal Bolus; $TimeInterval is the time between
        // the premeal BG and the first post-meal BG
        $MealBolus = ($ActualCarbs / $CarbInsulinRatio);     // EQ. 13A
        $OldMealBolus = ($EstCarbs / $CarbInsulinRatio);
        // EQs. 13B-14A, with a 1.5 hour remainder of the meal bolus period
        $CurrentMealBolus = ($MealBolus - ($OldMealBolus * 0.5 * $TimeInterval)) / 1.5;
        if ($CurrentMealBolus < 0) { $CurrentMealBolus = 0; }
        $iir[0] = $NormalInsulin + $CurrentMealBolus;
        $iir[2] = $CurrentMealBolus;
    } else {
        // Subsequent runs -- the Revised Meal Bolus Rate remains in effect
        $MealBolus = ($ActualCarbs / $CarbInsulinRatio);
        $OldMealBolus = ($EstCarbs / $CarbInsulinRatio);
        $CurrentMealBolus = ($MealBolus - ($OldMealBolus * 0.5 * $TimeInterval)) / 1.5;
        if ($CurrentMealBolus < 0) { $CurrentMealBolus = 0; }
        $iir[0] = $NormalInsulin + $CurrentMealBolus;
        $iir[2] = $CurrentMealBolus;
    }
    // Convert to volume units if insulin is not dosed in units/hr
    if ($InsulinUnitsOfMeasure != "units/hr") {
        $iir[0] = $iir[0] / $InsulinConcentration;
    }
    return $iir;
}
Referring toFIGS.2A and6A, if the user40elects to initiate the SubQ Transition process600, the SubQ Transition process600determines at decision block604if the current blood glucose BG is within a preconfigured stability target range BGSTR, e.g., 70-180 mg/dl, which is usually wider than the prescribed Target Range BGTR. If the blood glucose BG is not within the preconfigured stability target range BGSTR(i.e., the condition BGLow<BG<BGHighis not satisfied), the SubQ Transition process600at block606displays a warning notification on the patient display116. Then, at block610, the SubQ Transition process600is automatically discontinued. Referring back to block604, if the blood glucose BG is within the preconfigured stability target range BGSTR(e.g., 70-180 mg/dl), the SubQ Transition process600at decision block608determines if the patient's blood glucose measurement BG has been in the patient's personalized prescribed target range BGTRfor the recommended stability period TStable, e.g., 4 hours. If the SubQ Transition process600determines that the blood glucose value BG has not been in the prescribed target range BGTRfor the recommended stability period TStable, the SubQ Transition process600moves to block614where the system100presents the user40with a warning notification on the patient display116, explaining that the patient10has not been in the prescribed target range for the recommended stability period (seeFIG.6C). The SubQ Transition process600continues to decision block618where it determines whether the user40wants the patient10to continue the SubQ Transition process or to discontinue the SubQ Transition process. The SubQ Transition process600displays the question to the user40on the display116of the patient device110, as shown inFIG.6D. If the user40chooses to discontinue the SubQ Transition process, the SubQ Transition process600flows to block624, where the SubQ Transition process is discontinued. Referring back to block618, if the user40chooses to override the warning and continue the SubQ Transition process, the process600prompts the user40to enter SubQ information617. The SubQ Transition process600flows to block616, where the patient's SubQ Transition dose is calculated as a patient's total daily dose TDD.
In some implementations, TDD is calculated in accordance with the equation: TDD=QuickTransitionConstant*MTrans (15A) where QuickTransitionConstant is usually 1000, and MTransis the patient's multiplier at the time of initiation of the SubQ transition process. Referring again to block616, in some implementations TDD is calculated by a statistical correlation of TDD as a function of body weight. The following equation is the correlation used: TDD=0.5*Weight (kg) (15B) The SubQ Transition process600continues to block620, where the recommended SubQ dose is presented to the user40(on the display116) in the form of a Basal recommendation and a Meal Bolus recommendation (seeFIG.6F). Referring again to decision block608, if the SubQ Transition process600determines that the patient10has been in the prescribed target range BGTRfor the recommended stability period TStable, the SubQ Transition process600continues to block612, where the patient's total daily dose TDD is calculated in accordance with the following equation: TDD=(BGTarget−K)*(MTrans)*24 (16) where MTransis the patient's multiplier at the time of initiation of the SubQ transition process. In some implementations, the patient's total daily dose TDD may be determined by the following process on a processor112,132,142. Other processes may also be used.
function getIV_TDD($PatientID) {
    // Alternative correlation by body weight (EQ. 15B):
    // $weight = getOneField("weight", "patients", "patientID", $PatientID);
    // return $weight / 2;
    $CI = get_instance();
    $CI->load->model('options');
    $d = $CI->options->GetIVTDDData($PatientID);
    $TargetHigh = $d["TargetHigh"];
    $TargetLow = $d["TargetLow"];
    $Multiplier = $d["Multiplier"];
    $MidPoint = ($TargetHigh + $TargetLow) / 2;
    // EQ. 16: TDD = (BGTarget - K) * MTrans * 24, with K = 60 mg/dl
    $Formula = ($MidPoint - 60) * $Multiplier * 24;
    return $Formula;
}
When the patient's total daily dose TDD is calculated, the SubQ Transition process600continues to block620where the recommended SubQ dose is presented to the user40as described above. The SubQ Transition process600continues to block622, where the SubQ Transition process600provides information to the user40including a recommended dose of Basal insulin. The user40confirms that the Basal insulin has been given to the patient10; this starts a transition timer using the TransitionRunTime interval, usually 4 hours. At this point, normal calculation rules governing the IIR are still in effect, including the intravenous IIR timer (Timer Adjustment process400), which continues to prompt for blood glucose tests at time intervals TNextas described previously. The SubQ Transition process600passes to decision block626, which determines whether the recommended time interval TransitionRunTime has elapsed, e.g., 4 hours, after which time the SubQ Transition process600continues to block630, providing the user with subcutaneous insulin discharge orders and exiting the IV Insulin process in block634. FIG.7provides an arrangement of operations for a method700of administering intravenous insulin to a patient10. The method700includes receiving702blood glucose measurements BG on a computing device (e.g., a processor112of a patient device110, a processor142of a hospital electronic medical record system140, or a data processor132of a service provider130) of a dosing controller160from a blood glucose measurement device124(e.g., glucose meter or glucometer). The blood glucose measurements BG are separated by a time interval TNext. The method700includes determining704, using the computing device112,132,142, an insulin dose rate IIR based on the blood glucose measurements BG.
In some implementations, the method700determines the insulin dose rate IIR based on a current blood glucose measurement BG, a constant K, and a multiplier M (see EQ. 3A above). The constant K may equal 60 mg/dl. The method700includes leaving the multiplier M unchanged between time intervals TNextwhen the current blood glucose measurement BG is greater than an upper limit BGTRHof the blood glucose target range BGTRand the blood glucose percent drop BG%Dropfrom the previous blood glucose value BGPis greater than or equal to a desired percent drop %DropM(see EQ. 5). The method700also includes multiplying the multiplier M by a change factor MCFwhen the current blood glucose measurement BG is greater than the upper limit BGTRHof the blood glucose target range BGTRand the blood glucose percent drop BG%Dropis less than the desired percent drop %DropM. Additionally or alternatively, the method700includes leaving the multiplier M unchanged between time intervals TNextwhen the current blood glucose measurement BG is in the target range BGTR, i.e., when BG is less than the upper limit BGTRHand greater than the lower limit BGTRLof the target range BGTR. The method700also includes dividing the multiplier M by a change factor MCFwhen the current blood glucose measurement BG is less than the lower limit BGTRLof the blood glucose target range BGTR. The method700may include setting the time interval TNextto a hypoglycemia time interval THypoof between about 15 minutes and about 30 minutes, when the current blood glucose measurement BG is below a hypo-threshold blood glucose level BGHypo. The method700includes determining706a blood glucose drop rate BGDropRatebased on the blood glucose measurements BG and the time interval TNext. The method700includes determining707a blood glucose percent drop BG%Drop, using the computing device112,132,142, from a previous blood glucose measurement BGP. When the blood glucose drop rate BGDropRateis greater than a threshold drop rate BGDropRateLimit, the method700includes decreasing708the time interval TNextbetween blood glucose measurements measured by the glucometer124. The method700also includes decreasing710the time interval TNextbetween blood glucose measurements BG when the percent drop BG%Dropof the blood glucose BG is greater than a threshold percent drop, where the applicable threshold (%DropRegularor %DropLowLimit) depends on whether the current blood glucose measurement BG is below the lower limit BGTRLof the blood glucose target range BGTR. In some implementations, the method700includes decreasing the time interval TNextwhen the current blood glucose measurement BG is greater than or equal to the lower limit BGTRLof the blood glucose target range BGTRand the blood glucose percent drop BG%Dropexceeds a threshold percent drop %DropRegular. In some implementations, the method700includes decreasing the time interval TNextwhen the current blood glucose measurement BG is below the lower limit BGTRLof the blood glucose target range BGTRand above the hypo-threshold blood glucose level BGHypo, and the blood glucose percent drop BG%Dropis greater than or equal to a threshold percent drop %DropLowLimit. In some examples, the method700includes leaving the multiplier M unchanged for at least two subsequent time intervals TNextwhen the current blood glucose measurement BG is a pre-meal measurement.
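Consolidating the multiplier rules of EQs. 5-7, a sketch might read as follows; the function name is an assumption of this sketch, and the default threshold ratio LA=0.85 mirrors the example value given earlier.
function update_multiplier($m_prev, $bg, $bg_prev, $target_low, $target_high,
                           $mcf, $la = 0.85) {
    if ($bg > $target_high && ($bg / $bg_prev) > $la) {
        // EQ. 5 satisfied: BG above range and not falling fast enough,
        // so increase the multiplier (block 318)
        return $m_prev * $mcf;
    }
    if ($bg < $target_low) {
        // EQ. 7: BG below range, so decrease the multiplier (block 316)
        return $m_prev / $mcf;
    }
    // EQ. 6: otherwise carry the previous multiplier forward (block 312)
    return $m_prev;
}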
In some examples, the method700includes receiving, on the computing device112,132,142, a number of carbohydrates for a meal as well as a blood glucose measurement, and determining, using the computing device112,132,142, an intravenous insulin rate IIR based on the blood glucose (this IIR may be calculated using EQ. 3A). In addition, the method700includes determining, using the computing device112,132,142, a meal bolus insulin rate based on the number of carbohydrates. The method700then calculates a total insulin rate as the sum of the meal bolus rate and the regular intravenous rate as shown in EQ. 12. The method700may further include setting the time interval TNextto about 30 minutes; for example, if the blood glucose measurement BG is the second consecutive measurement after (but not including) an initial pre-meal blood glucose measurement, the method700includes setting the time interval TNextto about 30 minutes. In some implementations, the method700includes electronically displaying on a display116,146a warning and blocking transition to a subcutaneous administration of insulin when the current blood glucose measurement BG is outside a stability target range BGSTR. In addition, the method700includes electronically displaying on the display116,146a warning when the current blood glucose measurement BG is within the patient's personalized target range BGTRfor less than a threshold stability period of time TStable. In some examples, the method700includes determining a total daily dose of insulin TDD based on the multiplier M when the current blood glucose measurement BG is within a stability target range BGSTRfor a threshold stability period of time TStable. Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
Moreover, subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter affecting a machine-readable propagated signal, or a combination of one or more of them. The terms “data processing apparatus”, “computing device” and “computing processor” encompass all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus. A computer program (also known as an application, program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. 
Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. One or more aspects of the disclosure can be implemented in a computing system that includes a backend component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a frontend component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such backend, middleware, or frontend components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks). The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some implementations, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server. While this specification contains many specifics, these should not be construed as limitations on the scope of the disclosure or of what may be claimed, but rather as descriptions of features specific to particular implementations of the disclosure. 
Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multi-tasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. | 84,584 |
11857315 | DETAILED DESCRIPTION The present disclosure has applicability to medical probes in general and is directed toward patient monitors, cabling, sensors, and the like. As discussed above, a patient monitor comprises signal processing capable of monitoring whether a caregiver or user is attaching authorized cabling and/or sensors. Such quality control systems aid monitor manufacturers in ensuring that caregivers such as doctors obtain accurate data from patient monitors used in applications ranging from general ward, athletic, or personal monitoring to surgical and other potentially life-threatening environments, as well as in any other use of noninvasive monitoring of patient physiology. Although the present disclosure is applicable to many different types of patient monitors, some of this discussion will focus on pulse oximeters, as representative embodiments only. In general, a patient monitor may advantageously read a first information element on a first accessory to obtain first quality control information. The first information may advantageously allow the signal processor to identify the first accessory, such as a cable, as an authorized cable. In an embodiment, the patient monitor may advantageously read a second information element on a second accessory to obtain second quality control information. In an embodiment, the first information element provides an indication of what the second quality control information should be. When the first and second information correlate, the patient monitor can be more assured of the quality of the attached accessories. On the other hand, when there is a mismatch, various remedial measures may be taken, including displaying a message identifying one or more unauthorized accessories, actuating an indicator light on one or more of the accessories, or providing other audible or visual indications of the mismatch. For example, in an embodiment, a signal processor of a patient monitor communicates with a first information element associated with a first accessory, and uses the information stored or coded therein to determine a type of information, such as a resistance value, expected to be stored or coded into a second information element associated with a second accessory. Specifically, the information gained from the first accessory, such as a cable, may provide specific resistance value(s) or a range of values expected on the second accessory, such as a sensor. Such resistance values may be found in parallel with one or more emitters (such as, for example, those disclosed in the foregoing '644 patent) or on separate conductors (such as, for example, those disclosed in the foregoing '643 patent). In other embodiments, the information gained from the first accessory provides information usable to access the second information element. Communication with the second information element on the second accessory advantageously provides the specific resistance value(s) or range of values expected on the sensor. In another embodiment, the patient monitor may advantageously additionally acquire information indicative of the lifespan, amount of use, or age of one or more accessories, including the cable and/or the sensor. In an embodiment, if the patient monitor determines that one or more accessories have expired, it will inform the user with an appropriate audio or visual message. Much of this discussion utilizes pulse oximeters and oximeter cable and sensor accessories in explaining the disclosure and for ease of understanding. However, the disclosure herein is not limited thereby.
Patient monitors other than oximeters may similarly utilize the ideas disclosed. Similarly, labeling the first and second accessories as a cable and sensor more clearly differentiates the two accessories; however, a skilled artisan will recognize, from the disclosure herein, a wide range of uses of cascading security devices for linked or nonlinked monitor accessories. To facilitate a complete understanding of the disclosure, the remainder of the detailed description describes the disclosure with reference to the drawings. Corresponding numbers indicate corresponding parts, and the leading digit of any number indicates the figure in which that element is first shown. FIG.1Ashows sensor and cable elements of an oximeter system as is generally known in the prior art. The system comprises cable104connecting sensor106to an oximeter102(not shown). As shown here, the sensor106includes a reusable portion108, generally including expensive electronics, and a disposable portion110, generally including positioning mechanisms such as tape. Male connection housing112at one end of sensor106connects sensor106to female cable connection150of cable104. The operation and construction of reusable and disposable sensors are disclosed in U.S. Pat. No. 6,920,345 entitled “Optical Sensor Including Disposable and Reusable Elements” awarded to Al-Ali and owned by the assignee of the present disclosure, the full disclosure of which is incorporated herein by reference. Other disclosure may be found in U.S. Application No. 60/740,541, filed Nov. 29, 2005, also entitled “Optical Sensor Including Disposable and Reusable Elements,” incorporated herein by reference. FIG.1Billustrates a patient monitor102and attached accessories in accordance with an embodiment of the disclosure. Specifically, cable104and sensor106each include an information element housed within them (cable information element116and second sensor information element134, respectively). The placement of these information elements need not be as shown in the figure, as will be described in more detail below.FIG.1Balso illustrates the signal flow of an embodiment of a process for controlling the quality of attached accessories. First, the quality control process may be initiated when one or more new accessories are attached to the monitor102; similarly, the process may initiate when a monitor is turned on. Recognizing that an accessory is attached, the monitor searches for cable information element116(step2). The information element116then returns a cable authentication code, which may be used by the monitor to determine that the cable104is a quality, authorized cable (step3). Based on the cable authentication code, the monitor102then searches for a specific sensor information element134(step4). If the correct type of information element is found, the monitor retrieves a sensor authorization code (step5). The monitor can then compare the cable authentication code and the sensor authorization code to determine whether the cable104and sensor106are matching, quality accessories. If the codes do correlate, the monitor may enable the system for monitoring of a patient (step6). FIGS.2and2Ashow a block diagram of embodiments of oximeter systems including improved security technologies. Oximeter102uses port252to connect to cable104at connector114. Cable104in turn uses cable connector150to connect to sensor106at connection housing112. Cable104includes an information element116, which may be located anywhere therein, but is pictured in the figures in port connector114.
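Before turning to the circuit details ofFIGS.2and2A, the cascaded check ofFIG.1B(steps 2 through 6 above) can be summarized in a short, hedged Python sketch. The figure specifies only the order of operations, so the classes, field names, and code values below are hypothetical:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class InfoElement:
        auth_code: str                  # authentication/authorization code
        expected_sensor_code: str = ""  # what the cable says the sensor should hold

    def authorize(cable_ie: Optional[InfoElement],
                  sensor_ie: Optional[InfoElement]) -> bool:
        """Steps 2-6 of FIG. 1B: read the cable code, use it to locate the
        sensor's information element, and require the two codes to correlate."""
        if cable_ie is None:
            return False  # step 2 failed: no cable information element found
        if sensor_ie is None:
            return False  # step 4 failed: expected sensor element not found
        return sensor_ie.auth_code == cable_ie.expected_sensor_code  # step 6

    # Example: a matching cable/sensor pair enables monitoring.
    cable = InfoElement(auth_code="CABLE-OK", expected_sensor_code="SENSOR-OK")
    sensor = InfoElement(auth_code="SENSOR-OK")
    assert authorize(cable, sensor)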
Cable information element116is preferably an EEPROM with encrypted data. In an embodiment, sensor106includes LEDs222and224. The first LED222has a first corresponding electrical connection220; the second LED224has a second corresponding electrical connection228; and the photodetector226has a corresponding electrical connection232. In the configuration shown inFIG.2, the LEDs222,224are connected at their outputs to a common ground electrical connection230; however, other configurations may advantageously be implemented, such as, for example, back-to-back (seeFIG.2A), anode, cathode, common anode, common cathode, or the like. The photodetector226is connected to an electrical connection233. In accordance with this aspect of the present disclosure, one of the LED electrical connections220can also be used for a first sensor information element218, placing first sensor information element218in parallel with one of LEDs222,224. In an embodiment, the first sensor information element218may comprise a coding resistor or other passive element. According to an embodiment, oximeter102may communicate with cable information element116, which returns data to oximeter102. In at least one embodiment, such data may be encrypted, and oximeter102is able to decrypt the information. In an embodiment, the information designates additional information that oximeter102may read from attached sensor106, generally from first sensor information element218. The value of the first sensor information element218and/or its placement across an LED may be used to help indicate that the probe is configured properly for the oximeter. The first sensor information element218may be utilized to indicate that the probe is from an authorized supplier such as a “Masimo” standard probe, “Patient Monitoring Company 1” probe, “Patient Monitoring Company 2” probe, etc. In another embodiment, the first sensor information element218may be used to indicate LED wavelengths for the sensor or other parameters of the sensor106. In an embodiment, reading of the first sensor information element218may advantageously be accomplished according to the disclosure of U.S. Pat. No. 6,397,091, entitled “Manual and automatic probe calibration,” awarded to Diab and owned by the assignees of the present disclosure, incorporated herein by reference. In addition, it should be noted that the cable information element or first sensor information element need not be passive elements. Coding information could also be provided through an active circuit such as a transistor network, memory chip, or other identification device, for instance a Dallas Semiconductor DS 1990 or DS 2401 or other automatic identification chip. It is also possible to place the first sensor information element218in series or in parallel with one of the LEDs222,224or with the photodetector226on transmission line233, or to place the first sensor information element218apart from all of the LEDs222,224and photodetector226on its own transmission lines. Other placements of the first sensor information element218would also be obvious to one of ordinary skill in the art, so long as the coded value or other data from first sensor information element218can be determined by oximeter102. Another embodiment of an oximeter system having improved security technologies is shown inFIG.3. In embodiments such as that pictured inFIG.3, sensor106of the oximeter system additionally has a second sensor information element134.
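A decoding step for the coding values just described might look like the following sketch. The passage states only that the resistance value or its placement indicates the supplier and/or LED wavelengths; the particular resistor values, wavelengths, and tolerance below are invented for illustration:

    # Hypothetical table mapping a coding-resistor value (ohms) to sensor
    # parameters (supplier, red wavelength nm, IR wavelength nm).
    CODING_TABLE = {
        1000: ("Masimo standard probe", 660, 905),
        2200: ("Patient Monitoring Company 1 probe", 660, 940),
    }

    def decode_coding_resistor(measured_ohms: float, tol: float = 0.05):
        """Match a measured resistance to a known coding value within tol."""
        for nominal, params in CODING_TABLE.items():
            if abs(measured_ohms - nominal) <= tol * nominal:
                return params
        return None  # unrecognized coding value: treat the probe as unauthorized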
In a preferred embodiment, second sensor information element134is an EEPROM with encrypted data, but it may be any of the wide variety of active or passive solutions discussed in relation to the first sensor information element and/or the cable information element. The second sensor information element134is attached to the sensor through line336. Line336may preferably be a serial cable or other type of cable that allows two-way transfer of data. In such an embodiment, cable information element116of the cable may provide information to oximeter102that indicates both a first sensor information element218and a second sensor information element134should be found and provide information to the oximeter102. Second sensor information element134may then provide data, encrypted or not, to oximeter102, such that the data indicates to oximeter102information about coding values of, or other data stored on, first sensor information element218. Oximeter102may then obtain and compare the information from first sensor information element218and second sensor information element134to determine the security and reliability of sensor106. If the elements do not correctly designate a single approved sensor, an audible and/or visual warning may be triggered. The addition of this second information element may serve to tie various portions of a single accessory, such as a sensor, together, thereby making it more difficult for a knock-off manufacturer to scavenge parts, particularly if the parts are discarded separately. Alternatively, information from the cable information element116may indicate that an attached oximeter102should look for second sensor information element134. Information contained in second information element134may then indicate whether or not a first sensor information element218is present and/or what data should be included thereon to indicate an authorized sensor. In various embodiments, second sensor information element134may advantageously store some or all of a wide variety of information, including, for example, sensor type designation, patient information, sensor characteristics, software such as scripts or executable code, oximeter or algorithm upgrade information, or many other types of data. In a preferred embodiment, the second sensor information element134may also store useful life data indicating whether some or all sensor components have expired and should be replaced. In such an embodiment, the oximeter102may compare the information it received from first sensor information element218and second sensor information element134as before. Further, it may also aid in determining that sensor elements have not been used longer than their useful life, based on the life data retrieved from second sensor information element134. In such an embodiment, the oximeter102may also produce an audible or visual alarm if sensor life data from second sensor information element134indicates that some or all of sensor106's components are out of date. Similarly, cable information element116may also include useful life data. This data can be used by oximeter102to help reduce the risk that cable104might be used longer than its safe life. At least some embodiments including second information element134may include further protection against cannibalization of parts. Once a sensor including second information element134is attached and authorized, the LEDs should be immediately accessible for measurement by the patient monitor102.
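The useful-life check described above can be sketched as follows. The passage does not specify how life data is encoded in the information element, so the hours-based representation here is an assumption made for illustration:

    def within_useful_life(hours_used: float, rated_life_hours: float) -> bool:
        """Return True if the accessory's life data indicates it has not expired."""
        return hours_used < rated_life_hours

    # Example: an expired sensor component triggers an audible or visual alarm.
    if not within_useful_life(hours_used=600.0, rated_life_hours=500.0):
        print("ALARM: sensor component has exceeded its useful life")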
In an embodiment, if at any time the second information element134is accessible but the LEDs are not, the patient monitor102may trigger an alert or an alarm and/or may disable the use of the component including the second information element134. This may help to provide additional quality control protection because, if the first and second information elements218,134are cannibalized from old sensors, they are often placed in a generic cable or generic sensor adaptor. This generic adaptor often remains connected while generic sensors are replaced. FIG.4illustrates one potential general layout of the first sensor information element218, cable information element116, and LEDs222,224. In such an embodiment, oximeter board440is the portion of the oximeter102that communicates with the cable104and sensor106. In an embodiment, oximeter board440may preferably communicate with cable information element116via a serial transmission line446. InFIG.4, cable information element116is located in port connector114of the cable104. Once oximeter board440determines that it is connected to cable104providing information indicating that it should look for first sensor information element218, it sends and receives signals over transmission lines442,444. Transmission lines442,444pass the length of cable104into sensor106, where first sensor information element218and LEDs222,224are connected in parallel as described in more detail with respect toFIG.2A. FIG.4shows a possible distribution of the first sensor information element218and LEDs222,224in the sensor. In the embodiment shown, first sensor information element218is located in the connection housing112, where space is generally more readily available (as it is generally desirable to keep the sensor volume near the LED emitters222,224and photodetector226as low as possible). Other placements for the elements, such as the first sensor information element218and LEDs222,224on sensor106, are also contemplated by this disclosure. Those of ordinary skill in the art would know that first sensor information element218, for example, could be located anywhere in the sensor106or on separate transmission lines from those connecting the LEDs222,224to the oximeter board440. FIG.5illustrates an embodiment of the layout for the cable104whose cable information element116indicates that a first sensor information element218and a second sensor information element134should be found in the sensor. In an embodiment, serial transmission line446connects the oximeter board440to the cable information element116as above. However, serial transmission line446also runs the length of cable104and connects to second sensor information element134located in sensor106in a multi-drop memory configuration. Oximeter board440may access cable information element116and second sensor information element134while running relatively few transmission lines. If cable104is connected to a sensor106that does not have second sensor information element134, the oximeter board440may advantageously determine that the sensor is unauthorized and may decline to enable the sensor. The rest of the circuits (i.e., transmission lines442,444; first sensor information element218; and LEDs222,224) are the same as inFIG.4. It is to be noted thatFIGS.4and5are representative embodiments only. These figures are not meant to be read as the exact or only possible locations of the elements discussed.
For example, first sensor information element218and/or second information element134may or may not be located in the same portion of the sensor. One or both or neither may be placed in or near the connection housing112. It is also possible for them to be at other positions in the sensor. The roles of each may also be switched, with either one or both containing information about data stored on the other. The numbering and discussion of the information elements is merely for ease of reference. It is also important to note that the functionality of serial transmission line446, as well as transmission lines442,444, may be accomplished through other means, such as, for example, public or private communications networks or computing systems, or various wired or wireless communications.
Requirement Tables
In an embodiment, an information element116includes data allowing the connection of both types of sensors depicted inFIG.2andFIG.3. Thus, either a sensor106with only first information element218or one with both first information element218and second information element134could be connected as authorized sensors. In an embodiment, cable information element116may include a sensor requirement table as illustrated in Table 1 below. A sensor requirement table may list different types of attachable accessories (such as the sensors generally discussed) and designate which version of such sensors can be authorized. This may be accomplished through a single bit for each type. For example, as shown in Table 1, cable information element116may include a table with a list of bits designating whether or not an attached sensor must have a second information element134: here a 1 indicates the second information element134is required, while a 0 indicates an attached accessory may have either the first information element218alone or both information elements. As shown in this example, disposable sensors must include the second information element134, but reusable or combination sensors may include one or both sensor information elements. Any of a number of sensor or other accessories may be allowed or disallowed in such a manner. In such an embodiment, it is understood that the first sensor information element218must be capable of identifying the type of sensor of which it is a part, for comparison to the requirement table.
TABLE 1
Disposable      1
Reusable        0
Combination     0
Adult           1
Neonatal        0
. . .           . . .
Override        0
Furthermore, in an embodiment, the requirement table may include an override bit or entry. The override bit preferably allows the attachment of both kinds of accessories for all types, regardless of the current values listed in the rest of the table. In such an embodiment, the override bit may allow diagnostics, testing, and the like without having to separately keep track of or lose the settings for the various accessory types. Those of skill in the art will understand from this disclosure that the requirement table functionality may be implemented in a number of ways. For example, the table may be stored in an accessory information element, such as cable information element116, or may be included in the monitor102, and the like. Additionally, the requirement table may be implemented as a table, linked list, array, or single- or multi-bit variable, or the like, and each entry may comprise one or more bits to store the information. In one embodiment, the requirement table may be stored on an EPROM, which may allow the table entries to be set only once.
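As a hedged illustration of the Table 1 lookup, including the override entry, consider the following sketch. Storing the table as a Python dict is purely illustrative, since the passage contemplates tables, linked lists, arrays, or bit fields held in an EPROM or EEPROM:

    # A 1 means an attached sensor of that type must include the second
    # information element 134; a 0 means either sensor variant is acceptable.
    REQUIREMENT_TABLE = {
        "Disposable": 1, "Reusable": 0, "Combination": 0,
        "Adult": 1, "Neonatal": 0,
        "Override": 0,
    }

    def second_element_required(sensor_type: str) -> bool:
        if REQUIREMENT_TABLE["Override"]:
            return False  # override: both kinds of accessories allowed for all types
        # Unknown types are conservatively assumed to require the element.
        return bool(REQUIREMENT_TABLE.get(sensor_type, 1))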
In another embodiment, an EEPROM or other rewritable memory may allow each table entry to be altered more than once.
Site Licenses
The transfer of accessories from location to location, the sale of used accessories, and the like can also make quality control more difficult, such as by making accessory use hard to track. As such, it is also possible to help maintain quality control by recording or maintaining site licenses, so that accessories, once used, can be tracked to their first use location or maintained at a specific location. Many patient monitors have an associated device ID; typically this is a software ID, but IDs coded into hardware are also possible. In an embodiment of the present disclosure where the monitor has such an ID, accessory use may be tracked or controlled through use of the monitor ID. A general example will be set forth before turning to a specific embodiment according to the figures. When an accessory having an information element is plugged into a monitor having a monitor ID, the monitor may check to see if a monitor ID has been written to a portion of the information element. If not, the monitor may cause its own monitor ID to be written to the information element. From this point on, any monitor connected to that accessory will be able to determine the monitor of first use. If the accessory should later fail, an accessory or patient monitor manufacturer may then be able to determine where it was first used and whether it was transferred to another location. In an embodiment, accessories may be tied to specific monitors or sets of monitors, such as to aid in keeping an accessory at a particular site or location. Once an accessory is used with a specific monitor, each monitor to which it is subsequently attached can read the monitor ID and determine if the monitor with which it was first used is part of the current monitor's grouping (e.g., a site license). Monitors can be programmed to recognize monitor IDs from a specific site (such as one hospital, a health system, etc.), a geographic area (such as by country), an Original Equipment Manufacturer (OEM), combinations of the same, and the like: anywhere from a single recognized monitor (itself) to any number of monitors. In an embodiment, the information element may include at least a portion with write-once capability, such as an EPROM, so that the monitor ID that is first written to the information element cannot be changed. A specific embodiment utilizing an oximeter example will now be discussed in reference to the Figures. In looking toFIGS.5and7, oximeter board440has a monitor ID (not shown). When, for example, cable104, having cable information element116, is connected to oximeter board440, the oximeter board may query cable information element116(block760). If cable information element116has not been used before, in an embodiment, it will have free space to which data may be written (block762, branching with no monitor ID found). Oximeter board440will then cause the monitor ID to be written to the cable information element (block764). (In an embodiment, a similar process may take place with sensor106and second sensor information element134.) The monitor ID written to the cable information element116is preferably persistent, so as to remain when the cable104is disconnected from oximeter board440. During each subsequent use of the cable104, oximeter board440will be able to read the monitor ID from cable information element116(blocks760,762, branching with a monitor ID found).
In an embodiment, the patient monitor then compares the monitor ID found with a list accessible by the oximeter board440(block768). The oximeter board may respond according to the results of that ID comparison. For example, if the monitor ID found on the cable104is not acceptable, a warning may be generated or the oximeter board may not allow readings using the cable (block770). Alternatively, if the cable contains an acceptable monitor ID, the oximeter may perform monitoring using the cable104(block772). For example, a hospital may have a site license that allows the cables it purchases to be used on any of its own oximeters. Each oximeter board440has its own monitor ID, but also has a list of monitor IDs of the other monitors the hospital owns or licenses. Once a cable is used with one of the hospital's oximeters, the cable104may only be able to work with that hospital's other oximeters. In one embodiment, connecting such a cable104to another hospital's oximeter will trigger a visual or audible warning. In another embodiment, use of the cable may be disabled. This type of quality control can help both the original hospital and the subsequent hospital in this example. If a cable fails, the first hospital can report it to the supplier, which may be able to determine whether the first hospital's oximeters are the source of an underlying problem. On the other hand, the second hospital may be alerted to used accessories that may be more likely to fail. There are numerous alternatives for such “site license” quality control. For example, oximeters or other patient monitors may have specific lists of acceptable monitor IDs, monitor IDs may be the same for all patient monitors in a group, patient monitors may have a range of acceptable monitor IDs, patient monitors may have a specific equation or algorithm that determines acceptable monitor IDs, and the like. In some embodiments, accessories may record monitor IDs from all monitors to which they are connected, allowing manufacturers, suppliers, end users, and the like to track the accessory's use.
Upgrade Tool
One specific accessory that may be utilized in a patient monitor system such as that described in the previous “Requirement Tables” and “Site Licenses” sections is an upgrade tool. Upgrade tools connect to an accessory port of a patient monitor to aid in reprogramming or updating the patient monitor without the need for an additional port, taking the patient monitor apart, returning it to the manufacturer, and the like. Upgrade tools and a method for their use are generally disclosed in U.S. application Ser. No. 10/898,680, titled “Multipurpose Sensor Port” and filed on Jul. 23, 2004, incorporated herein by reference and made a part of this specification. Oftentimes a patient monitor or a specific control board will be made by an OEM and be capable of monitoring a host of patient parameters. Making all its boards the same can often reduce costs for an OEM. The OEM, however, may license only certain aspects of the patient monitor or control board to various users. For example, one hospital may obtain the equipment and license it to monitor SpO2, while another may license only CO monitoring, and the like. Should a user wish to change its monitoring capabilities, the OEM does not need to sell it new equipment; instead, it can simply enable or disable various features of the patient monitor or control board that it has already provided to that user through use of an upgrade tool.
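The first-use recording and site-license check ofFIG.7(blocks760-772above) can be restated as a short sketch. The write-once attribute models the EPROM region mentioned earlier, and all class, method, and variable names are hypothetical:

    from typing import Optional

    class Accessory:
        """Accessory whose information element has a write-once ID region."""
        def __init__(self) -> None:
            self.first_monitor_id: Optional[str] = None

    def connect(accessory: Accessory, monitor_id: str,
                licensed_ids: set) -> bool:
        if accessory.first_monitor_id is None:        # blocks 760-762: no ID found
            accessory.first_monitor_id = monitor_id   # block 764: record first use
            return True
        if accessory.first_monitor_id in licensed_ids:
            return True                               # block 772: allow monitoring
        return False                                  # block 770: warn or disable

    # Example: a cable first used on monitor "A1" works on any monitor whose
    # site license lists "A1", and is rejected elsewhere.
    cable = Accessory()
    assert connect(cable, "A1", {"A1", "A2"})
    assert not connect(cable, "B7", {"B7"})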
It is important, however, that such an upgrade tool only be enabled for specific patient monitors. For example, if hospital A pays for upgrades to its licenses, the OEM would like to ensure that the upgrade tool provided to A is not used to upgrade hospital B's patient monitors. The monitor ID recording discussed above is one way that this restriction can be accomplished. For example, an upgrade tool may record the monitor ID of the first monitor to which it is attached. In most instances, this will be a patient monitor from the proper upgrade group. Once this monitor ID is recorded, the upgrade tool may then only be enabled by any other patient monitor in the correct group, like any other accessory. In other embodiments, an upgrade tool may contain an information element that stores the monitor IDs of all patient monitors for which an upgrade has been paid. The upgrade tool and patient monitor can then compare IDs to determine if the patient monitor qualifies for the upgrade. As another alternative, an upgrade tool may have a predetermined ID, and all OEM patient monitors or boards that may utilize that upgrade tool may be loaded with an ID or software sufficient to match the upgrade tool's ID during or sometime after manufacture. In other embodiments, a patient monitor may be upgraded by connection to a network, such as by telephone, cable, DSL, USB, FireWire, and the like. Additionally, in an embodiment, a patient monitor may allow a user to enter the monitor ID, such as via a keypad, keyboard, or touch screen interface. An upgrade tool may be used to alter one or more requirement tables as well. However, it is also possible, in an embodiment, to program one or more accessories themselves to amend requirement tables or upgrade other programming. For example, a sensor information element134may include programming to alter a requirement table stored in a cable information element116once the components are connected and readied for monitoring.
Wireless Identification
Embodiments of the foregoing information elements use electrical connections to facilitate communication between the patient monitors and the information elements. This is also true in patient monitors that utilize disposable and reusable elements (such as those pictured inFIG.1A). In sensors such as that ofFIG.1A, it is often advantageous to control the quality of the disposable portions to reduce problems that may arise from inferior disposable portions, such as faulty attachment, improper alignment of sensor components, contamination of the measurement site through ambient light or physical contaminants, and the like. However, maintaining an electrical connection across the reusable/disposable mating point may complicate quality control efforts. Wireless communications may offer additional advantages to help reduce reliance on electrical contacts and advantageously allow communication between disposable and other system elements. Wireless solutions include passive and active radio frequency identification (RF ID). Passive solutions are given their broad ordinary meaning known to one skilled in the art, including solutions that rely on induction from surrounding electromagnetic waves, such as radio waves, to power the RF ID tag. Active solutions are given their broad ordinary meaning known to one skilled in the art, including solutions that have an internal or external power source, such as a battery, photovoltaic cell, or electrical transmission lines to an exterior source of power.
An RF ID solution suitable for the purposes discussed here is generally commercially available. However, a brief discussion of the general technology is instructive. A basic RF ID tag includes an information element, such as an integrated circuit, coupled with an antenna. The antenna receives signals from a reader device capable of acquiring data from the integrated circuit of the tag. In passive RF ID, the incoming radio frequency energy from the reader device induces sufficient electrical current to power the information element and transmit a response indicative of the information stored on the information element. In active RF ID, a battery or other power source may be used to supplement or provide the power for transmitting the response. FIG.6illustrates an exemplary patient monitoring system incorporating wireless authentication utilizing radio frequency identification in relation to cable information element116and sensor information element134. In one embodiment of this disclosure, the RF ID configuration is passive, thereby simplifying a disposable portion of a sensor according to this disclosure. In another embodiment of this disclosure, the RF ID configuration may be active. While this creates a slightly more complicated cable, sensor, or other accessory, there are advantages that may offset the complications. For example, active RF ID tags typically allow for greater memory and the ability to store data received from the reader. An active RF ID tag may also provide greater transmission distances. Specifically looking to the differences inFIG.6, oximeter board440further comprises or is in communication with a reader650capable of sending radio frequency signals to, and receiving radio frequency signals from, attached accessories. In the cable104, information element116is now connected to a radio frequency antenna652to form a cable RF ID tag660. Similarly, in the sensor106, second information element134is also connected to a radio frequency antenna654to form a sensor RF ID tag662. Because cable information element116and information element134may now communicate with each other and/or with oximeter board440(via reader650) through radio frequency signals, there is no need for the serial transmission line446that previously connected these elements. To enable attached accessories in an embodiment utilizing this technology, oximeter board440directs reader650to send out a radio frequency signal. In the cable104, antenna652receives this signal and redirects the energy to reply with a signal indicative of the information stored on cable information element116. Incoming radio frequency signals induce a current in cable information element116and provide the power to transmit a response. Often this is done by backscattering the carrier signal from the reader650. Oximeter board440's reader650may also send out a radio frequency signal received by antenna654in sensor106. Antenna654likewise redirects the energy received in accepting the signal to reply with a signal indicative of the information stored on information element134. Reader650receives each of the signals generated by cable RF ID tag660and sensor RF ID tag662and communicates them to oximeter board440. Oximeter board440compares the received information and enables usage of cable104and sensor106for patient monitoring if it recognizes each as an approved accessory. It is notable that the workings of the RF ID system as inFIG.6have been discussed in relation to passive RF ID elements.
It would be straightforward for one of ordinary skill to modify either or both of cable RF ID tag660and sensor RF ID tag662to work as active RF ID tags by the addition of a power source such as a battery or electrical transmission lines from the oximeter's power source. This may be necessary if the RF ID element needs to transmit more than an identification code or other small amount of data. It should also be understood that the site license and upgrade tool concepts may also utilize wireless technology as described herein to read and write monitor IDs. In an embodiment, this may allow a patient monitor to update associated accessories without the need to attach the accessory to the patient monitor. Although the patient monitor capable of maintaining quality control in an optical sensor is disclosed with reference to its preferred embodiments, the disclosure is not intended to be limited thereby. Rather, a skilled artisan will recognize from the disclosure herein a wide range of alternatives for such a patient monitor. For example, the elements used to code and identify the sensor may be passive or active, such as resistors, transistor networks, memory chips, or other identification devices like Dallas Semiconductor DS 1990 or DS 2401 or other automatic identification chips. As described above, first and second sensor information elements may be switched in various embodiments, and one or the other may be included. Additionally, RF ID solutions are not the only wireless solutions available; other passive or active wireless communications may also be used, such as those conforming to IEEE or Bluetooth® standards. It is also possible to alter the connections between various accessories; for example, the male connection housing112of sensor106and the female cable connection150of cable104may be reversed or may each have a male and female component. Furthermore, any of a number of accessories may include elements as described herein. Such accessories may be disposable or reusable or may have portions that are disposable and others that are reusable. Accessories may include, for example, cables, sensors, battery packs, data storage such as hard drives and flash drives, computer boards, and the like. It is also noted that the disclosure herein discusses only a two-LED, one-photodetector configuration for simplicity of the disclosure. One skilled in the art would know that more complex or varied data may be retrieved through the addition of more LEDs or other emitting devices and/or more photodetectors or other detecting devices. Such devices may continue to utilize a single first sensor information element218or multiple information elements, corresponding to various sensor components, with or without a second sensor information element134. Additionally, other combinations, omissions, substitutions and modifications will be apparent to the skilled artisan in view of the disclosure herein. Accordingly, the present disclosure is not intended to be limited by the recitation of the preferred embodiments, but is to be defined by reference to the appended claims. Additionally, all publications, patents, and patent applications mentioned in this specification are herein incorporated by reference and made a part of the specification hereof to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference. | 38,369 |
11857316 | DETAILED DESCRIPTION OF THE EMBODIMENTS Referring toFIG.6, a non-invasive optical detection system10constructed in accordance with one embodiment of the present inventions is designed to detect an optical parameter of a first volume of interest14in a scattering medium12(examples of volumes of interest and different types of scattering mediums are defined below). Significantly, the optical detection system10uses ultrasound32to create an “optical masking zone”13in a second volume of non-interest16that masks out photons passing through the second volume of non-interest16from contributing to the detected optical parameter of the first volume of interest14, thereby minimizing or at least lessening background noise, and as a result, maximizing or at least increasing the signal-to-noise ratio of the detected optical parameter of the first volume of interest14while also increasing the spatial resolution. The fundamental principle is that light that intersects the optical masking zone13is rejected due to its interaction with the ultrasound32in the masking zone13, while light that does not intersect the optical masking zone13is accepted for subsequent detection, as described in further detail below. Throughout the specification, the “first volume of interest14” within the scattering medium12will also be referred to as “volume of interest14”; and the “second volume of non-interest16” within the scattering medium12will be referred to as “volume of non-interest16.” Although the optical detection system10can be used for any application where it is desirable to detect an optical parameter in a volume of interest14of a scattering medium12, as will be described in further detail below, the optical detection system10particularly lends itself well to anatomical detection applications, and in particular, the detection or imaging of anatomical parts of a human body, an animal body, and/or biological tissue. In this case, the scattering medium12may be an anatomical structure, such as the intact head of a person, including the scalp, skull, and brain, with the volume of interest14being the brain, and the volume of non-interest16being the scalp and skull. When used to detect neural activity within a head, the optical detection system10essentially renders the skull optically transparent by confining the optical masking zone13to the scalp and skull to mask out the photons that do not penetrate through the skull into the brain, but instead wander around in the scalp and skull. Thus, as will be described in further detail below, the optical detection system10may take the form of an anatomical detection system, in which case, the detected optical parameter may be a physiologically-dependent optical parameter. As will be described herein, the optical detection system10can be used as an optical coherence tomography (OCT) system, although the optical detection system10can be used in other systems, such as an ultrasound modulated optical tomography (UOT) system, e.g., as described in U.S. patent application Ser. No. 15/844,370, entitled “Pulsed Ultrasound Modulated Optical Tomography Using Lock-In Camera,” which is expressly incorporated herein by reference; a holography system, e.g., as described in U.S. patent application Ser. No. 16/299,067, entitled “Non-Invasive Optical Detection Systems and Methods in Highly Scattering Medium,” which is expressly incorporated herein by reference; and off-axis holography systems, etc.
Information and acquired neural data related to the detected physiologically-dependent optical parameter may be used (e.g., computed, processed, stored, etc.) internally within the anatomical detection system to adjust the detection parameters of the detection system, such as increasing or decreasing the strength of the optical source and/or data compression and/or analysis, such as a Fast Fourier Transform (FFT) and/or statistical analysis; or may be transmitted to external programmable devices for use therein, e.g., medical devices, entertainment devices, neuromodulation stimulation devices, lie detection devices, alarm systems, educational games, brain interface devices, etc. In a practical implementation, the optical detection system10will acquire data from multiple target voxels (“data voxels”) spatially separated from each other within the volume of interest14, as will be described in further detail below. A “voxel” may be defined as a contiguous sub-volume of space that is targeted for imaging or detecting within the scattering medium12. For purposes of brevity, the optical detection system10is primarily described herein as acquiring one data voxel (i.e., data representative of an optical parameter of the data voxel), e.g., by using a single paired source-detector arrangement, although it should be understood that the optical detection system10may be capable of acquiring more than one data voxel from the volume of interest14of the scattering medium12, e.g., by using a multiple paired source-detector arrangement, by moving the single paired source-detector arrangement between the acquisition of data voxels, or by having multiple detectors for a single source, as will be described in further detail with respect toFIGS.27and28. Returning toFIG.6, the optical detection system10generally includes an interferometer20, an acoustic assembly22, a detector24, a controller26, and a processor28, which uniquely interact with each other to detect the optical parameter of the volume of interest14while masking out the undesirable photons passing through the volume of non-interest16from the detected optical parameter of the volume of interest14. The interferometer20is, for example, a Mach-Zehnder-type interferometer, comprising a sample arm that passes through the scattering medium12and a reference arm (described in further detail below with respect toFIG.7) that operate together to create an interference light pattern48. In the illustrated embodiment, the interference light pattern48takes the form of a speckle light pattern, which can be defined as an intensity pattern produced by the mutual interference of a set of scattered wavefronts. That is, a speckle light pattern results from the interference of many waves having different phases and amplitudes, which add together to give a resultant wave whose amplitude, and therefore intensity and phase, varies randomly. The interferometer20is configured for delivering sample light40into the scattering medium12along the sample arm during a measurement period. As the sample light40scatters diffusively through the scattering medium12, various portions of the sample light40will take different paths through the scattering medium12.
For purposes of brevity, only a first sample light portion40atraveling along one optical path through the volume of interest14, and a second sample light portion40btraveling along another optical path exclusively through the volume of non-interest16, are illustrated, although it should be appreciated that the diffused sample light40will travel along many more paths through the scattering medium12. The first sample light portion40apassing through the volume of interest14will exit the scattering medium12as signal light44, and the second sample light portion40bpassing through the volume of non-interest16will exit the scattering medium12as background light46. The signal light44and background light46combine to create a sample light pattern47, which is encoded with optical parameters of the volume of interest14by the signal light44for detection by the optical detector24, as will be described in further detail below. It should be appreciated that, although not all of the sample light pattern47exiting the scattering medium12will be detected, it is only important that enough of the sample light pattern47be detected, such that the optical parameters encoded in the signal light44within the sample light pattern47can be extracted. It should also be appreciated that, as illustrated inFIG.6, because the depth of the volume of interest14is greater than the depth of the volume of non-interest16within the scattering medium12in this particular example, as a practical matter, some of the scattered sample light40may pass through both the volume of interest14and the volume of non-interest16to create the signal light44, in which case, it is desirable that such scattered sample light40be treated as the first sample light portion40athat exits the volume of interest14as signal light44. That is, once the sample light40passes into the volume of interest14from the volume of non-interest16, it is desirable that the light that exits the scattering medium from such sample light40be treated as signal light44. Thus, only the sample light40that is exclusively confined to the volume of non-interest16, without ever passing into or out of the volume of interest14, will be treated as background light46. As will be described in further detail below, in such a case, the ultrasound32is delivered into the scattering medium12, such that one or more optical ports17(shown inFIGS.10a-10band11a-11b) are created within the volume of non-interest16adjacent the optical masking zone13to allow both ingress and egress of the first sample light portion40ato and from the volume of interest14, without the first sample light portion40abeing undesirably masked by the optical masking zone13from the detected optical parameter within the volume of interest14. The interferometer20combines the sample light pattern47with reference light42(shown inFIG.7) to create the interference light pattern48, which has a holographic beat component that can be detected by the optical detector24as the signal component during the measurement period, as will be discussed in further detail below with respect toFIGS.20-24. The interferometer20amplifies the signal light44in the sample light pattern47by combining the signal light44and the reference light42using, depending on the particular implementation, a homodyne technique or a heterodyne technique.
For the purposes of this specification, the term "homodyne," when referring to the combination of signal light44and reference light42, means that the signal light44and reference light42have the same frequency when combined to generate interference terms having DC holographic beat components, as opposed to the term "heterodyne," which means that the signal light44and reference light42have different frequencies when combined to generate interference terms with AC holographic beat components. Thus, if the signal light44and reference light42have the same frequency (i.e., they are combined using a homodyne technique), the holographic beat component of the interference light pattern48will be constant. In contrast, if the signal light44and reference light42have different frequencies (i.e., they are combined using a heterodyne technique), the holographic beat component of the interference light pattern48will have a frequency equal to the difference between the frequencies of the signal light44and reference light42. It should be noted that, although the interferometer20, for purposes of brevity, is described inFIG.6as only creating one interference light pattern48from the sample light pattern47and reference light42for each measurement period, and the optical detection system10is further described as only having one detector24for detecting such interference light pattern48, the interferometer20may create multiple interference light patterns48(typically phase-modulated) from the sample light pattern47and reference light42for each measurement period, in which case, the optical detection system10may have an equal number of detectors24for detecting such interference light patterns48, as will be described in further detail below with respect toFIGS.23and24. With reference now toFIG.7, one embodiment of an interferometer20that can be used in the optical detection system10ofFIG.6will now be described. The interferometer20includes an optical source50, an optical beam splitter52, an optical beam splitter/combiner58, a path length adjustment mechanism60, and a mirror arrangement62(which comprises, e.g., mirrors62a,62b,62c,62d,62e, and62f). Depending on the specific implementation of the detecting techniques of the optical detection system10, the interferometer20may comprise an optical frequency shifter (not shown) for shifting the frequency of the sample light40and reference light42relative to each other, as further described in U.S. patent application Ser. No. 15/844,370, entitled "Pulsed Ultrasound Modulated Optical Tomography Using Lock-In Camera," and U.S. patent application Ser. No. 16/299,067, entitled "Non-Invasive Optical Detection Systems and Methods in Highly Scattering Medium," which are both expressly incorporated herein by reference. The optical source50is configured for generating source light38, and may take the form of, e.g., a super luminescent diode (SLD), a light emitting diode (LED), a Ti:Saph laser, a white light lamp, a diode-pumped solid-state (DPSS) laser, a laser diode (LD), a super luminescent light emitting diode (sLED), a titanium sapphire laser, and/or a micro light emitting diode (mLED), or a distributed feedback (DFB) laser or similar laser to achieve very narrow linewidths and extremely high amplitude stability, among other optical sources. The wavelength of light generated by the optical source50may be, e.g., in the range of 350 nm-1500 nm, and/or may be ultraviolet (UV) light, visible light, and/or near-infrared and infrared light.
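Returning to the homodyne/heterodyne distinction defined above, the behavior of the holographic beat component can be illustrated with a brief sketch (hypothetical, RF-scale stand-in frequencies are used to keep the numerics simple; the helper name is an assumption, not part of the disclosure). The interference (cross) term of the combined intensity is constant when the two frequencies are equal (homodyne) and oscillates at the difference frequency when they are not (heterodyne):

    import numpy as np

    t = np.linspace(0.0, 1e-6, 2000)  # 1 us observation window

    def beat_component(f_signal, f_reference):
        # Cross term of |E_s + E_r|^2 for unit-amplitude fields:
        # 2*cos(2*pi*(f_s - f_r)*t) -- constant if homodyne, AC if heterodyne.
        e_s = np.exp(1j * 2.0 * np.pi * f_signal * t)
        e_r = np.exp(1j * 2.0 * np.pi * f_reference * t)
        return np.abs(e_s + e_r) ** 2 - 2.0  # subtract the two DC self terms

    homodyne = beat_component(1.00e8, 1.00e8)    # same frequency: DC beat
    heterodyne = beat_component(1.01e8, 1.00e8)  # 1 MHz difference: AC beat
    print(np.ptp(homodyne), np.ptp(heterodyne))  # ~0 versus ~4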
The optical source50may generate monochromatic light comprising a single-wavelength light, or light having multiple wavelengths (e.g., white light). In some variations, the optical source50can emit a broad optical spectrum or emit a narrow optical spectrum that is then rapidly swept (e.g., changed over time) to functionally mimic or create an effective broad optical spectrum. In alternative embodiments, multiple optical sources may be used to generate the source light38at multiple distinct wavelengths, e.g., one generating source light38within the range of 605 nm to 800 nm, and another generating source light38within the range of 800 nm to 1300 nm. The optical source50may be a continuous wave (CW) or a pulsed wave (PW) optical source with either a predefined coherence length or a variable coherence length. Preferably, the optical source50is a high-coherence optical source (i.e., a laser), although in alternative embodiments, the optical source50may be a low-coherence light source. If the optical detection system10utilizes OCT techniques, as will be described in further detail below, the optical source50may be configured for generating source light38having a coherence length selected to correspond to the desired level of path-length selectivity, e.g., from about 75 μm to about 200 μm, e.g., about 100 μm for detecting optical properties at depths of 6-10 mm below the surface of the scattering medium12(in the case illustrated below, through the scalp and skull and into the brain). The optical source50may receive power from a drive circuit (not shown). The optical source50, itself, may include control inputs, or a separate acousto-optic modulator (not shown) may include control inputs, for receiving control signals from the controller26that cause the optical source50to emit the source light38at a selected time, duration, and intensity, and if variable, a coherence length. Thus, the controller26(shown inFIG.6) may selectively pulse the source light38, and thus the sample light40and reference light42. It should be noted that, because the optical detection system10does not rely solely on heterodyne suppression of the background light46(as, e.g., compared to UOT), the interferometer20is highly tolerant to instability in the optical source50and waveform shape within the measurement period. The optical beam splitter52is configured for splitting the source light38into the sample light40that propagates along a sample arm of the interferometer20and the reference light42that propagates along a reference arm of the interferometer20. In the illustrated embodiment, the optical beam splitter52(e.g., a partially transparent mirror) splits the source light38via amplitude division by reflecting a portion of the source light38as the sample light40, and transmitting the remaining portion of the source light38as the reference light42, although the optical beam splitter52may alternatively reflect a portion of the source light38as the reference light42, and transmit the remaining portion of the source light38as the sample light40. In alternative embodiments, the optical beam splitter52may split the source light38via wavefront division by splitting a portion of the wavefront into the sample light40and splitting the remaining portion of the wavefront into the reference light42.
In either case, the optical beam splitter52may not necessarily split the source light38equally into the sample light40and reference light42, and it may actually be more beneficial for the optical beam splitter52to split the source light38unevenly, such that the amplitude of the sample light40is less than the amplitude of the reference light42(e.g., a 10/90 power ratio) in order to comply with tissue safety standards. That is, the amplitude of the sample light40will preferably be relatively low to avoid damaging the tissue, whereas the amplitude of the reference light42, which will be used to boost the sample light pattern47in the interference light pattern48, will be relatively high. The optical beam splitter/combiner58is configured for combining the reference light42with the sample light pattern47via superposition to generate the interference light pattern(s)48. The optical beam splitter/combiner58can take the form of, e.g., a combiner/splitter mirror. Variations of the optical beam splitter/combiner58will be described in further detail below with respect toFIGS.21-24. The path length adjustment mechanism60is configured for adjusting the optical path length of the reference arm to nominally match the expected optical path length of the sample arm. The path length adjustment mechanism60may include control inputs for receiving control signals from the controller26to cause the path length adjustment mechanism60to adjust the optical path length of the reference arm. The path length adjustment mechanism60includes an optical beam splitter/combiner64and an adjustable mirror66that can be displaced relative to the optical beam splitter/combiner64. The optical beam splitter/combiner64is configured for redirecting the reference light42at a ninety-degree angle towards the mirror66, and redirecting the reference light42reflected back from the mirror66at a ninety-degree angle towards the optical beam splitter/combiner58. Thus, adjusting the distance between the mirror66and the optical beam splitter/combiner64will adjust the optical path length of the reference arm to match the optical path length of the sample arm. Referring further toFIG.8, in the case where the optical detection system10takes the form of an OCT system, the path length adjustment mechanism60may be adjusted to select the path length of the sample light40for detection of optical parameters within a tissue voxel15within the volume of interest14of the scattering medium12, as illustrated inFIG.8. In particular, the system10uses path-length selection to distinguish between a first sample light portion40a′ and a second sample light portion40a″, the first sample light portion40a′ having a first optical path length and being backscattered by the tissue voxel15as signal light44′, and thus encoded with the optical parameters of the tissue voxel15, and the second sample light portion40a″ having a second optical path length different from the first optical path length and being backscattered by a region of the volume of interest14not coincident with the tissue voxel15, and thus not encoded with the optical parameters of the target tissue voxel15. As shown, because the tissue voxel15has a fixed depth d2compared to tissue at other depths (e.g., depth d1), the tissue voxel15may be selectively targeted for detecting by the optical detection system10.
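The coherence-gated depth selection described above reduces to a simple acceptance test: a backscattered photon contributes to the time-varying interference component only if its optical path length matches the reference arm to within the coherence length. A minimal sketch of that test follows (hypothetical values and helper names; it assumes a simple backscatter geometry in which the in-tissue path is roughly twice the target depth, and ignores the refractive index for brevity):

    COHERENCE_LENGTH_UM = 100.0  # e.g., ~100 um, within the range given above

    def contributes(sample_path_um, reference_path_um,
                    coherence_length_um=COHERENCE_LENGTH_UM):
        # A backscattered photon contributes to the time-varying interference
        # component only if its path matches the reference arm within the
        # coherence length of the source.
        return abs(sample_path_um - reference_path_um) < coherence_length_um

    # Hypothetical target voxel at depth d2 = 8 mm: round-trip path ~ 2*d2,
    # with the reference arm adjusted to match (refractive index ignored).
    reference_path_um = 2.0 * 8000.0
    print(contributes(2.0 * 8000.0, reference_path_um))  # True: signal light 44'
    print(contributes(2.0 * 5000.0, reference_path_um))  # False: rejected (depth d1)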
More particularly, the path length adjustment mechanism60is adjusted, such that the optical path length of the first sample light portion40a′ (in contrast to the optical path length of the second sample light portion40a″) matches the optical path length of the reference light42within the optical coherence length of the sample light40, such that only the signal light44′ resulting from the first sample light portion40a′ that is backscattered by the tissue voxel15contributes to the time-varying interference component of the interference light pattern48. Thus, depending on the location of the particular target tissue voxel15, the path length adjustment mechanism60can be adjusted to target that tissue voxel15. For example, if a different target tissue voxel (not shown) at a depth d1is desired to be detected, the path length adjustment mechanism60can be adjusted, such that the optical path length of the second sample light portion40a″ (in contrast to the optical path length of the first sample light portion40a′) matches the optical path length of the reference light42within the optical coherence length of the sample light40, such that only the signal light44″ resulting from the second sample light portion40a″ that is backscattered by the different tissue voxel contributes to the time-varying interference component of the interference light pattern48. Further details describing OCT systems are set forth in U.S. patent application Ser. No. 15/853,538, entitled "Systems and Methods for Quasi-Ballistic Photon Optical Coherence Tomography in Diffusive Scattering Media Using a Lock-In Camera Detection" (now U.S. Pat. No. 10,219,700), which is expressly incorporated herein by reference. Referring back toFIG.7, the mirror arrangement62is configured for confining the optical light paths in the interferometer20into a small form factor. In the illustrated embodiment, the mirror arrangement62includes two tilted, completely reflective mirrors62a,62bconfigured for redirecting the sample light40from the optical beam splitter52towards the scattering medium12; a tilted, completely reflective mirror62cconfigured for redirecting the resulting sample light pattern47exiting the scattering medium12towards one face of the optical beam splitter/combiner58; and three tilted, completely reflective mirrors62d-62fconfigured for redirecting the reference light42from the optical beam splitter/combiner64towards another face of the optical beam splitter/combiner58. In an alternative embodiment, rather than using mirrors in the reference arm, a fiber optical waveguide can be used between the optical beam splitter/combiner64and the optical beam splitter/combiner58, e.g., to more easily satisfy the form factor requirements of a wearable device. Referring back toFIG.6, the acoustic assembly22is configured for emitting ultrasound32into the volume of non-interest16. Preferably, the frequency of the ultrasound32is selected (e.g., in the range of 100 kHz-20 MHz), such that the ultrasound32can pass efficiently through the volume of non-interest16, thereby masking photons propagating through the volume of non-interest16from contributing to the detected optical parameter of the volume of interest14; although as will be described in further detail with respect toFIG.9, it is preferable that the frequency of the ultrasound32be greater than 1 MHz.
In the illustrated embodiment, such masking of photons is accomplished by decorrelating at least a portion of the background light46in the sample light pattern47from the holographic beat component of the interference light pattern48. For the purposes of this specification, background light46is decorrelated from the holographic beat component of the interference light pattern48if it is prevented from contributing to the holographic beat component of the interference light pattern48. In the illustrated embodiment, the background light46is decorrelated from the holographic beat component of the interference light pattern48by scrambling the background light46, such that the background light46will have a continuously randomized phase in comparison to the reference light42, and thus, cannot be holographically amplified by the reference light42to create a coherent signal, as discussed in further detail below. As such, the background light46will not significantly contribute to the holographic beat component of the interference light pattern48, as will be discussed in further detail below. Although the ultrasound32will decorrelate at least a portion of the background light46of the sample light pattern47from the holographic beat component of the interference light pattern48, it is preferred that the ultrasound32be delivered into the volume of non-interest16, such that substantially all of the background light46of the sample light pattern47be decorrelated from the holographic beat component of the interference light pattern48(for the purposes of this specification, defined as at least 90 percent of the background light46in the sample light pattern47being decorrelated, determined at the beginning of the pulse of sample light40), and more preferably, such that at least 99 percent of the background light46of the sample light pattern47be decorrelated from the holographic beat component of the interference light pattern48, determined at the beginning of the pulse of sample light40. At the same time, it is also preferable that the ultrasound32not decorrelate the signal light44of the sample light pattern47from the holographic beat component of the interference light pattern48, determined at the beginning of the pulse of sample light40. Thus, it is important that, during the beginning of the measurement period, the optical masking zone13extend through the entire thickness of the volume of non-interest16without extending into the volume of interest14. Put another way, during the delivery of the sample light40into the scattering medium12, it is preferred that all of the ultrasound32emitted by the ultrasound transducer34(whether CW or PW) into the scattering medium12be substantially confined within the volume of non-interest16(defined as at least 90 percent of all of the ultrasound32emitted by the ultrasound transducer34being confined within the volume of non-interest16) at the beginning of the measurement period, and in this case, during the delivery of the sample light40into the scattering medium12. More preferably, at least 99 percent of all of the ultrasound32emitted by the ultrasound transducer34into the scattering medium12should be confined within the volume of non-interest16at the beginning of the measurement period. It is also preferred that the ultrasound32(and thus the optical masking zone13) span the entire width of the volume of non-interest16at the beginning of the measurement period to ensure that all of the background light46is decorrelated from the holographic beat component of the interference light pattern48.
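While the disclosure specifies only that the ultrasound frequency be low enough to pass through the volume of non-interest16yet high enough to be suppressed near the volume of interest14, the trade-off can be roughed out with a standard first-order tissue attenuation rule of thumb (approximately 0.5-1 dB per cm per MHz in soft tissue). The sketch below uses that textbook model, which is an assumption layered on top of this disclosure rather than part of it:

    ALPHA_DB_PER_CM_MHZ = 0.7  # assumed typical soft-tissue coefficient

    def fraction_remaining(freq_mhz, depth_cm, alpha=ALPHA_DB_PER_CM_MHZ):
        # Intensity fraction remaining after alpha * f * depth dB of loss.
        loss_db = alpha * freq_mhz * depth_cm
        return 10.0 ** (-loss_db / 10.0)

    # Higher frequency decays faster, confining the masking zone more shallowly.
    for f_mhz in (1.0, 5.0, 20.0):
        print(f_mhz, fraction_remaining(f_mhz, depth_cm=1.0))
    # ~0.85 at 1 MHz, ~0.45 at 5 MHz, ~0.04 at 20 MHz (under these assumptions)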
Referring further toFIG.9, one embodiment of the acoustic assembly22includes an ultrasound transducer34and a signal generator36. The ultrasound transducer34may take the form of any device that emits ultrasound32at a defined amplitude, frequency, phase, and/or duration in response to a controlled drive signal. Significantly, although the ultrasound transducer34may be complex, e.g., a piezoelectric phased array capable of emitting ultrasound beams with variable direction, focus, duration, and phase, an array of pressure generating units (e.g., silicon, piezoelectric, polymer or other units), an ultrasound probe, or even an array of laser generated ultrasound (LGU) elements, the ultrasound transducer34can be very simple, e.g., a single acoustic element configured for emitting ultrasound beams, since the ultrasound32need not be focused. Furthermore, in contrast to ultrasound transducers that need to be relatively large in order to focus the ultrasound to a small voxel in the brain, the ultrasound transducer34can be made as small as the "footprint" of the optical masking zone13, and therefore, can be integrated into a small form-factor device. Furthermore, for anatomical applications, and in particular, imaging or the detection of optical properties within the head, because the ultrasound32need not penetrate through the skull into the brain in the case where the volume of non-interest16is confined to the scalp and skull, its frequency can be much higher (e.g., much greater than 1 MHz, e.g., 5-20 MHz). As such, the ultrasound transducer34can be manufactured using much cheaper, modern thin-film transducer fabrication processes, e.g., capacitive micromachined ultrasound transducer (CMUT) technology and piezoelectric micromachined ultrasound transducer (PMUT) technology. The signal generator36is configured for generating alternating current (AC) signals for driving the ultrasound transducer34at a defined amplitude, frequency, phase, and duration. The AC drive signal may be electrical or optical, depending on the nature of the ultrasound transducer arrangement. Because a simple acoustic element can be used for the ultrasound transducer34, the signal generator36may comprise a single-channel transmitter, although in the case where the ultrasound transducer34comprises a phased array or multiple channels, the signal generator36may comprise a multi-channel transmitter, which could provide more uniform masking of the background light46. For example, the use of multiple ultrasound elements allows the acoustic waves to be shaped, for instance, to produce sharper boundaries, to correct for skull aberrations, or generally to create better-defined shapes of ultrasound, whereas a single-element transducer only makes acoustic waves having one basic shape that is subject to distortion by the skull. The signal generator36includes control inputs (not shown) for receiving control signals from the controller26that cause the ultrasound transducer34to emit the ultrasound32at the defined amplitude, frequency, phase, and duration. Thus, as will be described in further detail below with respect toFIGS.16a-16cand17a-17c, the controller26may selectively pulse the ultrasound32in certain embodiments. As briefly discussed above, the ultrasound32emitted by the ultrasound transducer34creates an optical masking zone13in the volume of non-interest16that masks out photons exclusively passing through the volume of non-interest16from contributing to the detected optical parameter of the volume of interest14.
The mechanism of masking out photons operates via ultrasonic tagging of the photons, which shifts their frequency, causing their interference with the reference light to oscillate rapidly (as a fast beat rather than a DC beat in the homodyne case, or much faster than the AC beat in the heterodyne case), and thus to integrate out of the detection process by summing to approximately zero. It thus relies on a similar ultrasonic tagging mechanism as UOT, but instead of leading to collection of the tagged photons, it leads to the masking of the tagged photons out of the measurement, leaving only the untagged photons, which did not pass through, or only minimally passed through, the optical masking zone13defined by the spatial arrangement of the ultrasound waves in the scattering medium12. One embodiment of an ultrasound transducer34′ is disc-shaped, as illustrated inFIGS.10aand10b. The ultrasound transducer34′ is configured for emitting the ultrasound32into the scattering medium12to create a cylindrical or frustoconical-shaped optical masking zone13that suppresses the second sample light portion40bthat passes through the volume of non-interest16as the background light46. An annular-shaped optical port17is created around the optical masking zone13in which the sample light40can be delivered to the scattering medium12for propagation through the volume of interest14as the first sample light portion40aand from which the sample light portion40acan exit the scattering medium12as the signal light44. In the illustrated embodiment, the ultrasound transducer34′ is sized to fit between the optical source50and the optical detector24(both shown inFIG.6) that are clocked 180 degrees from each other, such that the sample light40is delivered through the annular-shaped optical port17adjacent one side of the optical masking zone13, and the resulting signal light44exits from the annular-shaped optical port17adjacent the opposite side of the optical masking zone13, as best shown inFIG.10a. However, the optical source50and optical detector24can be oriented relative to each other in any suitable manner, such that the sample light40is delivered through the annular-shaped optical port17and the resulting signal light44exits from the annular-shaped optical port17at any relative angular positions along the annular-shaped optical port17. It should also be appreciated that the ultrasound transducer34need not be disc-shaped, but can be any shape, for example, rectangular, triangular, octagonal, etc., to create a similarly-shaped optical port17that surrounds the optical masking zone13. Another embodiment of an ultrasound transducer34″ is disc-shaped, but has a central aperture35, as illustrated inFIGS.11aand11b. The ultrasound transducer34″ is configured for emitting the ultrasound32into the scattering medium12to create an annular-shaped optical masking zone13that suppresses the second sample light portion40bthat passes into and then back out of the volume of non-interest16as the background light46. A cylindrical or frustoconical-shaped optical port17is created within the optical masking zone13in which the sample light40can be delivered to the scattering medium12for propagation within the volume of interest14as the first sample light portion40aand back out of the volume of interest14as the signal light44, as best illustrated inFIG.11a.
In the illustrated embodiment, the central aperture35of the ultrasound transducer34″ is sized to accommodate a closely spaced or collocated optical source50and optical detector24(both shown inFIG.6), such that the sample light40is delivered through the optical port17and the resulting signal light44exits from the optical port17. It should also be appreciated that the ultrasound transducer34need not be disc-shaped, but can be any shape, for example, rectangular, triangular, octagonal, etc. Similarly, the central aperture35need not be circular, but can be any shape, for example, rectangular, triangular, octagonal, etc. It should be appreciated that the arrangement of the ultrasound transducer34, optical source50, and detector24illustrated inFIGS.11aand11blends itself well to OCT techniques that rely on the backscattering of the "straight path" photons at a selected depth within the scattering medium12. Furthermore, because the background light46has been suppressed by the optical masking zone13, the optical source and detector of such an OCT system need not be separated by a relatively long distance from each other in order to reduce the fraction of background light46relative to the signal light44, but instead, can be adjacent to each other or even co-located at the central aperture35of the ultrasound transducer34, thereby maximizing the amount of signal light44detected by the optical detector24, and further improving imaging spatial resolution. It should be appreciated that the extent to which the background light46is decorrelated from the holographic beat component of the interference light pattern48is dependent on the frequency of the ultrasound32and the duration of the sample light40within the measurement period. That is, the more cycles of ultrasound32per duration of the sample light40, the more the background light46is decorrelated from the holographic beat component. Although it is desirable to select a frequency of the ultrasound32that is as high as possible and a duration of the sample light40that is as short as possible to maximize decorrelation of the background signal from the holographic beat component, the selection of a higher frequency ultrasound32must be balanced against the requirement that the frequency of the ultrasound32be low enough to allow sufficient penetration of the ultrasound32, as discussed above. Likewise, the duration of the sample light40must be balanced against the requirement that the duration of the sample light40be less than the decorrelation time of the tissue, as discussed below with respect toFIG.14. It can be shown that, assuming the mechanism of masking the background light46that travels through the volume of non-interest16is decorrelation time "scrambling" by the ultrasound32(i.e., the addition of phase information at a high frequency, in this case the frequency of the ultrasound32), there will be several orders of magnitude of suppression of the background light46. For example, referring toFIG.12, if the ultrasound32has an exemplary frequency that defines a decorrelation time TD, which can be assumed to be one-quarter the period of the ultrasound32, and the sample light40has an exemplary duration TL, the suppression of the background light46can be approximated as: Suppression=TD/TL. [2]
For example, if the sample light40has a duration TLset to 10 μs, and the frequency of the ultrasound is 20 MHz, such that the decorrelation time TDis 12.5 ns (i.e., one-quarter of the period (50 ns) of the ultrasound32), then, using equation [2], the suppression of the background light46will be TD/TL=12.5 ns/10 μs=1/800, which is nearly three orders of magnitude of suppression of the background light46. As dictated by equation [2], as the frequency of the ultrasound32increases, the decorrelation time TDwill decrease, thereby increasing the decorrelation of the background light46from the holographic beat component of the interference light pattern48. If the mechanism of masking the background light46is acoustic encoding by the ultrasound32(i.e., the background light46is merely frequency-shifted by the ultrasound32, in contrast to being scrambled), the suppression of the background light46can be approximated as: Suppression=1/(TL*2π*fus). [3] Again, assuming that the sample light40has a duration TLset to 10 μs, and the frequency of the ultrasound is 20 MHz, then, using equation [3], the suppression of the background light46will be 1/(TL*2π*fus)=1/(10 μs*2π*20 MHz)=1/1200, which is three orders of magnitude of suppression of the background light46. As dictated by equation [3], as the frequency fusof the ultrasound32increases, the decorrelation of the background light46from the holographic beat component of the interference light pattern48will likewise increase. Although decorrelation of the background light46from the holographic beat component of the interference light pattern48has been shown to increase with the frequency of the ultrasound32(preferably within the range of 5-20 MHz when used to detect optical properties of the brain through the skull), it should be noted that this advantage must be balanced with the penetration of the ultrasound32into the scattering medium12, which decreases as the frequency of the ultrasound32increases, as described in further detail below. Thus, it is desirable to set the frequency of the ultrasound32as high as possible, while achieving the desired penetration of the ultrasound32into the scattering medium12. Although the ultrasound32may have a uniform frequency, a uniform amplitude, and a uniform phase during the measurement period, as illustrated inFIG.13a, it should be appreciated that the masking of the background light46from the detected optical parameter of the volume of interest14can be further increased by varying at least one of the frequency, the amplitude, and the phase of the ultrasound32during the measurement period. For example, the frequency of the ultrasound32may be varied by sweeping it across a range of frequencies (i.e., the frequency is gradually or incrementally changed), as illustrated inFIG.13b, or the frequency of the ultrasound32may be varied by switching it between different random frequencies (i.e., by jumping from one frequency to another frequency), as illustrated inFIG.13c. Notably, sweeping the frequency of the ultrasound32is easier to implement in the hardware (e.g., the transducer34, driver, amplifier, etc.), although switching the frequency of the ultrasound32between random frequencies may be more effective in masking the background light46from the detected optical parameter of the volume of interest14. Two or more of the frequency, the amplitude, and the phase of the ultrasound32may be varied during the measurement period to further mask the background light46from the detected optical parameter of the volume of interest14, as discussed in the examples that follow the sketch below.
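The suppression estimates of equations [2] and [3] above reduce to a few lines of arithmetic. The sketch below (helper names are hypothetical) reproduces the worked examples from the text, i.e., roughly 1/800 for the scrambling mechanism and roughly 1/1200 for the encoding mechanism at TL=10 μs and fus=20 MHz:

    import math

    def suppression_scrambling(t_l_s, f_us_hz):
        # Equation [2]: suppression ~ TD/TL, with TD taken as one-quarter
        # of the ultrasound period.
        t_d = 0.25 / f_us_hz
        return t_d / t_l_s

    def suppression_encoding(t_l_s, f_us_hz):
        # Equation [3]: suppression ~ 1/(TL * 2*pi*fus).
        return 1.0 / (t_l_s * 2.0 * math.pi * f_us_hz)

    T_L = 10e-6   # 10 us pulse of sample light
    F_US = 20e6   # 20 MHz ultrasound
    print(1.0 / suppression_scrambling(T_L, F_US))  # 800.0   -> 1/800
    print(1.0 / suppression_encoding(T_L, F_US))    # ~1256.6 -> ~1/1200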
Returning to the variation of the ultrasound parameters, as illustrated inFIG.13d, for example, both the amplitude and frequency of the ultrasound32may be varied, and as illustrated inFIG.13e, all three of the amplitude, frequency, and phase of the ultrasound32may be varied. The background light46can also be further masked from the detected optical parameter of the volume of interest14by using an arbitrary waveform selected for maximum masking of the background light46, as illustrated inFIG.13f. Referring now toFIG.14, the relationship between the pulses of sample light40, the measurement period τ, and the active period of the optical detector24(in the case where the optical detector24is a camera, a single camera frame exposure or readout time) will be discussed. During the acquisition of data characterizing the volume of interest14, one or more pulses of the sample light40are delivered into the scattering medium12during each measurement period τ. Although, in the embodiment illustrated inFIG.14, only a single rectangular pulse of the sample light40is delivered into the scattering medium12during each measurement period τ, it should be appreciated that other sample light pulse shapes and numbers of sample light pulses can be used in each measurement period τ, including, e.g., double Gaussian or even arbitrarily-shaped pulses, as illustrated in U.S. patent application Ser. No. 16/299,067, entitled "Non-Invasive Optical Detection Systems and Methods in Highly Scattering Medium," which is expressly incorporated herein by reference. In this example, the respective measurement period τ is equal to the duration of a single pulse of the sample light40to maximize the data acquisition speed, although in alternative embodiments, the measurement period τ may extend over multiple pulses of the sample light40. In the illustrated embodiment, the duty cycle τdutyof the pulsed sample light40is selected to match the frame rate of the optical detector24(in the case where the optical detector24is a camera), such that there is only one measurement period τ for each frame of the optical detector24, although the duty cycle τdutyof the pulsed sample light40may be selected, such that there are multiple measurement periods τ for each frame of the optical detector24. The frame rate of the optical detector24may be much slower than the pulse of sample light40, and the optical detector24can be turned on prior to the pulse of sample light40and turned off after the pulse of sample light40. The measurement period τ, and in this case, the duration of the pulse of sample light40, is preferably selected to be no longer than the speckle decorrelation time of the scattering medium12. The speckle decorrelation time is due to the scatterers' motion inside the scattering medium12, and rapidly decreases with the depth at which the scattering medium12is to be detected, and in particular, scales super-linearly with the depth into the scattering medium at which the volume of interest14is located, falling to microseconds or below as the detected depth extends to the multi-centimeter range. It should also be noted that although the measurement period τ is illustrated as being on the order of a single active period of the optical detector24, as shown inFIG.14, the measurement period τ may be much less than the duration of a single active period of the optical detector24.
In particular, if the optical detector24is a camera, due to its limited frame rate, the duration of each camera frame may be much greater than the speckle decorrelation time of the scattering medium12, thus dictating that the measurement period τ be much less than the duration of each camera frame. Depending on the implementation of the optical detection system10, the ultrasound32may be either continuous wave (CW) or pulsed wave (PW). Assuming that the volume of interest14is deeper in the scattering medium12than the volume of non-interest16, if the ultrasound32is CW, it is preferred that the frequency of the ultrasound32be selected, such that it initially passes through the volume of non-interest16without substantially penetrating into the volume of interest14. In this manner, all of the ultrasound32delivered by the ultrasound transducer34into the scattering medium12will be substantially confined within the volume of non-interest16, as described above. As a result, the background light46passing through the volume of non-interest16will be decorrelated from the holographic beat component of the interference light pattern48, while the signal light44passing through the volume of interest14will not be decorrelated from the holographic beat component of the interference light pattern48. That is, the frequency of the ultrasound32should be low enough such that it passes through the volume of non-interest16, but high enough, such that it is suppressed to almost zero at the interface between the volume of interest14and the volume of non-interest16. For example, as illustrated inFIG.15, pulses of the sample light40(three shown) are delivered into the scattering medium12from the optical source50of the interferometer20during the continuous delivery of the ultrasound32from the ultrasound transducer34of the acoustic assembly22. The ultrasound32emitted by the ultrasound transducer34is optimally shown passing through the volume of non-interest16, but not passing into the volume of interest14, creating an optical masking zone13that is confined within the volume of non-interest16. As such, the first sample light portion40a, which does not pass through the optical masking zone13, and exits the scattering medium12as signal light44, is not affected by the ultrasound32, and thus, will be correlated with the holographic beat component of the interference light pattern48, whereas the second sample light portion40b, which does pass through the optical masking zone13, and exits the scattering medium12as background light46, will be affected by the ultrasound32in the manner described above, and thus, will be decorrelated from the holographic beat component of the interference light pattern48. In this case, it is preferred that the frequency fusof the ultrasound32be uniform during the entire measurement period, such that the extent to which the ultrasound32penetrates into the scattering medium12remains consistent during the delivery of the sample light40into the scattering medium12, thereby providing a stable and predictable optical masking zone13that does not change in size or location over time. If the ultrasound32is PW, the frequency of the ultrasound32may be selected, such that it passes through both the volume of non-interest16and the volume of interest14.
In this case, the controller26will operate both the interferometer20and the acoustic assembly22, such that the pulse of sample light40is only applied when no portion of the ultrasound32is disposed within the volume of interest14, thus confining the optical masking zone13within the volume of non-interest16. In this manner, all of the ultrasound32delivered by the ultrasound transducer34into the scattering medium12will be substantially confined within the volume of non-interest16, as described above. Thus, the orientation of the volume of interest14and volume of non-interest16relative to the ultrasound transducer34may be arbitrary. As illustrated inFIGS.16a-16c, the ultrasound transducer34is closer to the volume of non-interest16than to the volume of interest14(i.e., the volume of interest14is deeper in the scattering medium12than the volume of non-interest16). In this case, the pulse of ultrasound32will first be delivered to the scattering medium12at time t0, and at time t1, the pulse of ultrasound32begins to enter the volume of non-interest16, as illustrated inFIG.16a. At time t2, the pulse of sample light40is subsequently delivered to the scattering medium12just before the pulse of ultrasound32enters the volume of interest14, as illustrated inFIG.16b. Although delivery of the pulse of ultrasound32is illustrated as continuing after time t2, it should be appreciated that delivery of the pulse of ultrasound32can be ceased at time t2. Because the speed of light is many orders of magnitude greater than the speed of ultrasound, the first sample light portion40awill, in effect, pass through the volume of interest14before the pulse of ultrasound32reaches the volume of interest14. Thus, at time t2, the optical masking zone13resulting from the ultrasound32will be confined within the volume of non-interest16. As such, the first sample light portion40a, which does not pass through the optical masking zone13, exits the scattering medium12as the signal light44that is not affected by the ultrasound32, and thus, will be correlated with the holographic beat component of the interference light pattern48, whereas the second sample light portion40b, which passes through the optical masking zone13, exits the scattering medium12as background light46that will be affected by the ultrasound32, and thus, will be decorrelated from the holographic beat component of the interference light pattern48. After the pulse of sample light40is delivered to the scattering medium12(in effect, the optical parameter in the volume of interest14has already been detected), the pulse of ultrasound32passes through the volume of interest14at time t3, as illustrated inFIG.16c. As illustrated inFIGS.17a-17c, the ultrasound transducer34is closer to the volume of interest14than to the volume of non-interest16(i.e., the volume of interest14is shallower than the volume of non-interest16). In this case, the pulse of ultrasound32will first be delivered to the scattering medium12at time t0, and at time t1, the pulse of ultrasound32enters the volume of interest14, as illustrated inFIG.17a. At time t2, delivery of the pulse of ultrasound32into the scattering medium12ceases, as illustrated inFIG.17b. The interval between times t1and t2is selected, such that the ultrasound32completely spans the width of the volume of non-interest16when the pulse of ultrasound32completely exits the volume of interest14at time t3, at which time the pulse of sample light40is delivered to the scattering medium12, as illustrated inFIG.17c.
Because the speed of light is many orders of magnitude greater than the speed of ultrasound, the second sample light portion40bwill, in effect, pass through the volume of non-interest16before the pulse of ultrasound32exits the volume of non-interest16. Thus, at time t2, the optical masking zone13resulting from the ultrasound32will be confined within the volume of non-interest16. As such, the first sample light portion40a, which does not pass through the optical masking zone13, will exit the scattering medium12as signal light44that is not affected by the ultrasound32, and thus, will be correlated with the holographic beat component of the interference light pattern48, whereas the second sample light portion40b, which passes through the optical masking zone13, will exit the scattering medium12as background light46that will be affected by the ultrasound32, and thus, will be decorrelated from the holographic beat component of the interference light pattern48. It should be appreciated that, in the case where the ultrasound32is PW, although the pulse of sample light40may only be applied when no portion of the ultrasound32is disposed within the volume of interest14(thus confining the optical masking zone13within the volume of non-interest16at the beginning of the measurement period), the ultrasound32may slightly transgress into the volume of interest14by the end of the measurement period. For example, if the duration of the measurement period (i.e., in this case the duration of the pulse of sample light40) is on the order of one microsecond, then the ultrasound32will travel about 1.5 millimeters, resulting in the blurring of the detected signal light44within the holographic beat component of the interference light pattern48. Thus, the optical masking zone13may not have a sharp edge during the duration of the measurement period. The optical detection system10may conveniently be configured for detecting different volumes of interest14simply by varying the frequency of the ultrasound32if delivered in the CW mode, or by varying the timing of the pulses of ultrasound32and sample light40if the ultrasound32is delivered in the PW mode. For example, in the context of detecting neural activity within the brain, the shallow neural areas of the brain (which would be a first volume of interest) could be detected during a first measurement period, while masking the light in the scalp and skull (as a first volume of non-interest); then deeper neural areas of the brain (which would be a second volume of interest) could be detected during a second measurement period, while masking the light in the scalp and skull, as well as the light in the shallower neural areas of the brain (as a second volume of non-interest); and so forth. For example, referring first toFIGS.18a-18c, the ultrasound32, when delivered in the CW mode, can be swept from a relatively low frequency fus1at time t1(seeFIG.18a), to a relatively medial frequency fus2at time t2(seeFIG.18b), to a relatively high frequency fus3at time t3(seeFIG.18c). Of course, instead of sweeping, the ultrasound32may alternatively be discretely changed between the low frequency fus1, medial frequency fus2, and high frequency fus3. At the low frequency fus1, the ultrasound32penetrates to deeper depths into the scattering medium12through a first, relatively thick volume of non-interest16awithout substantially passing into a relatively deep first volume of interest14a(seeFIG.18a).
In this manner, the second sample light portion40b, which passes through the first volume of non-interest16a, exits the scattering medium12as background light46that will be decorrelated from the holographic beat component of the interference light pattern48, while the first sample light portion40a, which passes through the first volume of interest14a, exits the scattering medium12as signal light44that will remain correlated with the holographic beat component of the interference light pattern48. At the medial frequency fus2, the ultrasound32penetrates shallower into the scattering medium12through a less thick second volume of non-interest16bwithout substantially passing into a shallower second volume of interest14b(seeFIG.18b). In this manner, the second sample light portion40b, which passes through the second volume of non-interest16b, exits the scattering medium12as background light46that will be decorrelated from the holographic beat component of the interference light pattern48, while the first sample light portion40a, which passes through the second volume of interest14b, exits the scattering medium12as signal light44that will remain correlated with the holographic beat component of the interference light pattern48. At the high frequency fus3, the ultrasound32penetrates even less deeply into the scattering medium12through an even less thick third volume of non-interest16cwithout substantially passing into a third volume of interest14c(seeFIG.18c). In this manner, the second sample light portion40b, which passes through the third volume of non-interest16c, exits the scattering medium12as background light46that will be decorrelated from the holographic beat component of the interference light pattern48, while the first sample light portion40a, which passes through the third volume of interest14c, exits the scattering medium12as signal light44that will remain correlated with the holographic beat component of the interference light pattern48. Thus, it can be appreciated that, in this manner, the scattering medium12can be progressively detected from a greater depth to a shallower depth. Of course, the ultrasound32can instead be swept from a relatively high frequency fus3, to a relatively medial frequency fus2, to a relatively low frequency fus1, such that the scattering medium12can be progressively detected from a shallower depth to a greater depth. Alternatively, the ultrasound32may be discretely changed between the low frequency fus1, medial frequency fus2, and high frequency fus3in any order. As another example, referring first toFIGS.19a-19c, the timing of the pulse of sample light40and the pulse of the ultrasound32, when delivered in the PW mode, can be decreased from a relatively long time interval i1between the beginning of the pulse of ultrasound32and the beginning of the pulse of sample light40, to a relatively medial time interval i2between the beginning of the pulse of ultrasound32and the beginning of the pulse of sample light40, to a relatively short time interval i3between the beginning of the pulse of ultrasound32and the beginning of the pulse of sample light40. When there is a long interval i1, the pulse of ultrasound32penetrates deeper into the scattering medium12through a relatively thick first volume of non-interest16abefore the pulse of sample light40is subsequently delivered to the relatively deep first volume of interest14a(seeFIG.19a).
In this manner, the second sample light portion40b, which passes through the first volume of non-interest16a, exits the scattering medium12as background light46that will be decorrelated from the holographic beat component of the interference light pattern48, while the first sample light portion40a, which passes through the first volume of interest14a, exits the scattering medium12as signal light44that will remain correlated with the holographic beat component of the interference light pattern48. When there is a medial interval i2, the pulse of ultrasound32penetrates shallower into the scattering medium12through a less thick second volume of non-interest16bbefore the pulse of sample light40is subsequently delivered to the shallower second volume of interest14b(seeFIG.19b). In this manner, the second sample light portion40b, which passes through the second volume of non-interest16b, exits the scattering medium12as background light46that will be decorrelated from the holographic beat component of the interference light pattern48, while the first sample light portion40a, which passes through the second volume of interest14b, exits the scattering medium12as signal light44that will remain correlated with the holographic beat component of the interference light pattern48. When there is a short interval i3, the pulse of ultrasound32penetrates even shallower into the scattering medium12through an even less thick third volume of non-interest16cbefore the pulse of sample light40is subsequently delivered to the even shallower third volume of interest14c(seeFIG.19c). In this manner, the second sample light portion40b, which passes through the third volume of non-interest16c, exits the scattering medium12as background light46that will be decorrelated from the holographic beat component of the interference light pattern48, while the first sample light portion40a, which passes through the third volume of interest14c, exits the scattering medium12as signal light44that will remain correlated with the holographic beat component of the interference light pattern48. Thus, it can be appreciated that, in this manner, the scattering medium12can be progressively detected from a greater depth to a shallower depth. Of course, the timing of the pulse of sample light40and the pulse of the ultrasound32can instead be increased from a relatively short time interval i3between the beginning of the pulse of ultrasound32and the beginning of the pulse of sample light40, to a relatively medial time interval i2between the beginning of the pulse of ultrasound32and the beginning of the pulse of sample light40, to a relatively long time interval i1between the beginning of the pulse of ultrasound32and the beginning of the pulse of sample light40, such that the scattering medium12can be progressively detected from a shallower depth to a greater depth. Alternatively, the time intervals i1, i2, and i3can be applied to the timing between the pulses of ultrasound32and the pulses of sample light40in any order to detect the depths of the scattering medium12in any order.
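For the pulsed-wave timing schemes ofFIGS.16a-16cand17a-17c, the firing times follow directly from acoustic transit times through the layers. The sketch below (hypothetical layer thicknesses and helper names; an illustration only) computes, for theFIG.16a-16cgeometry, the window within which the pulse of sample light40should be fired so that the pulse of ultrasound32spans the volume of non-interest16without yet having entered the volume of interest14, using a nominal speed of sound of about 1.5 mm/μs, consistent with the 1.5 millimeters per microsecond figure noted above:

    SPEED_OF_SOUND_MM_PER_US = 1.5  # nominal speed of sound in soft tissue

    def sample_light_window_us(standoff_mm, non_interest_thickness_mm,
                               v_mm_per_us=SPEED_OF_SOUND_MM_PER_US):
        # FIG. 16a-16c geometry (transducer nearest the volume of non-interest):
        # the leading edge of the ultrasound pulse reaches the volume of
        # non-interest at t1 and would reach the volume of interest at t2;
        # the pulse of sample light should be fired just before t2.
        t1 = standoff_mm / v_mm_per_us
        t2 = (standoff_mm + non_interest_thickness_mm) / v_mm_per_us
        return t1, t2

    # Hypothetical example: 5 mm standoff, 10 mm thick volume of non-interest.
    t1, t2 = sample_light_window_us(5.0, 10.0)
    print(t1, t2)  # enters non-interest at ~3.3 us; fire sample light before ~10 us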
Referring back toFIG.6, the optical detector24may comprise a pixel array (as in a camera), a single photodiode, a photodiode array, or other optical detectors, and may, e.g., take the form of a charge-coupled device (CCD) camera, or similar commercial-type image sensors, such as a complementary metal-oxide-semiconductor (CMOS) sensor, photodiode (PD) array, avalanche photodiode (APD) array, single photon avalanche diode (SPAD) detector, time-of-flight (ToF) imaging camera, indium gallium arsenide (InGaAs) sensor, etc. The optical detector24may be a completely integrated device or may be arranged on closely spaced multiple devices or device regions. In the embodiment illustrated inFIG.20, the optical detector24includes an array of pixels68, which are configured for simultaneously detecting the spatial components of the interference light pattern48(shown inFIG.7). In the case where the interference light pattern48is a speckle light pattern, the spatial components are speckle grains (approximately the size of a wavelength of the light) of the speckle light pattern. Each pixel68of the optical detector24stores an intensity value I of a respective spatial component of the interference light pattern48. The optical detector24includes control inputs (not shown) for receiving control signals from the controller26, such that detection of the intensity values can be coordinated with the delivery of the sample light40, as described in further detail below. Although not illustrated, the optical detection system10may include magnification optics and/or apertures to magnify the individual speckle grains, which may have a size on the order of the wavelength of the near-infrared or visible light used to acquire the data, and hence on the order of hundreds of nanometers, to approximately the sizes of the pixels68of the optical detector array24. Thus, in the illustrated embodiment, the pixel sizes and pitches of the optical detector array24are matched to the speckle grain sizes and pitches of the speckle light pattern48via the appropriate magnification, although other embodiments are possible. As briefly discussed above, the interferometer20may generate a single interference light pattern48during each measurement period, in which case, only a single detector array24(e.g., a single camera) is needed to detect the interference light pattern48. For example, as illustrated inFIG.21, an optical beam combiner58′ (which replaces the optical beam splitter/combiner58illustrated inFIG.7) is configured for combining the sample light pattern47and the reference light42to generate a single interference light pattern48. That is, the optical beam combiner58′ transmits the sample light pattern47and reflects the reference light42, whereupon they interfere to generate the interference light pattern48. As illustrated inFIG.22, each kth speckle of the interference light pattern48corresponds to a kth pixel68of the optical detector array24. That is, a spatial component of the sample light pattern47(i.e., the kth speckle grain of the speckle light field) interferes with the reference light42to generate a kth speckle grain of the interference light pattern48that is detected by the kth pixel of the optical detector array24. It should be appreciated that althoughFIG.22illustrates one speckle grain "k," an equivalent process for measuring the speckle grain k takes place for all speckle grains in parallel in the manner of imaging an entire speckle light field.
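At each pixel68, the detected intensity is the squared magnitude of the superposed signal and reference fields, so a weak signal component rides on an interference term boosted by the much stronger reference light42(the holographic amplification referred to above). A minimal per-pixel sketch (hypothetical values and helper names, not the system's actual processing chain):

    import math

    def pixel_intensity(p_signal, p_reference, phase_difference):
        # DC self terms plus the interference (holographic beat) term,
        # 2*sqrt(Ps*Pr)*cos(dphi), at one pixel/speckle grain.
        return (p_signal + p_reference
                + 2.0 * math.sqrt(p_signal * p_reference)
                * math.cos(phase_difference))

    # A weak signal (1e-6, arbitrary units) against a strong reference (1.0):
    # the interference term (~2e-3) is ~2000x the bare signal power.
    print(pixel_intensity(1e-6, 1.0, 0.0))      # ~1.002001
    print(pixel_intensity(1e-6, 1.0, math.pi))  # ~0.998001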
In the case where a single detector array24is used, it may be desirable to incorporate pre-selected phase shifts or offsets between the sample arm and reference arm of the interferometer20(e.g., two phase shifts or offsets 0, π, or four phase shifts or offsets 0, π/2, π, 3π/2), such that multiple phase-modulated interference light patterns48are sequentially generated over multiple measurement periods, which interference light patterns48would then be processed to detect the optical parameter in the volume of interest14. In this case, it is preferable that the interferometer20cycle through the entire set of pre-selected phase shifts or offsets over a time interval that is shorter than the decorrelation time of the desired detected depth in the scattering medium12. Pre-selected phase shifts or offsets between the sample arm and reference arm of the interferometer20can be implemented by, e.g., incorporating a controllable optical phase shifter in the sample arm or reference arm of the interferometer20, as described in U.S. patent application Ser. No. 15/844,370, entitled "Pulsed Ultrasound Modulated Optical Tomography Using Lock-In Camera," which is expressly incorporated herein by reference. The single detector array24may, e.g., comprise a conventional CCD camera or may be an optical lock-in camera arrangement, such as those described in U.S. patent application Ser. No. 15/844,370 and U.S. patent application Ser. No. 15/853,538, entitled "Systems and Methods for Quasi-Ballistic Photon Optical Coherence Tomography in Diffusive Scattering Media Using a Lock-In Camera Detection" (now U.S. Pat. No. 10,219,700), which are both expressly incorporated herein by reference. As briefly discussed above, the interferometer20may concurrently generate multiple phase-modulated interference light patterns48during each measurement period, in which case, multiple detector arrays24(e.g., multiple cameras or dedicated spatial regions of a single camera) are used; in this case, two detector arrays24a,24bare used, as illustrated inFIG.24. The two detector arrays24aand24bare optically registered with each other to concurrently detect the two interference light patterns48aand48bover two phases. In this manner, two separate measurements of the volume of interest14can be made simultaneously or in short succession by measuring the interference between the sample light pattern47and reference light42at two separate phases differing from each other by an angular phase of π. Thus, the required phase-modulated interference light patterns48aand48bmay be more easily generated within the speckle decorrelation time of the scattering medium12. An optical beam splitter/combiner58″ (which replaces the optical beam splitter/combiner58illustrated inFIG.7) is configured for splitting the reference light42into reference light42a,42brespectively having two different phases of 0 and π, splitting the sample light pattern47respectively into sample light patterns47aand47b, and concurrently combining the sample light patterns47aand47bwith the reference light42aand42bto respectively generate two interference light patterns48a("Interference Light Pattern A") and48b("Interference Light Pattern B").
That is, the sample light pattern47enters an input port58aof the optical beam splitter/combiner58″, where it is split into a reflected sample light pattern47aand a transmitted sample light pattern47b, and the reference light42enters another input port58bof the optical beam splitter/combiner58″, where it is split into a transmitted reference light42aand a reflected reference light42b. In a simultaneous manner, the reflected sample light pattern47ainterferes with the transmitted reference light42ato generate the interference light pattern48a, and the transmitted sample light pattern47binterferes with the reflected reference light42bto generate the interference light pattern48b. Due to power conservation, a four-port network, such as the optical beam splitter/combiner58″, requires the total power entering the input ports58a,58bto be equal to the total power exiting the output ports58c,58d, and thus, the transmitted reference light42awill have a nominal phase of 0, and the reflected reference light42bwill have a phase of π. That is, as will be described in further detail below, since the combined power of the DC terms of the interference light patterns48a,48bexiting the respective output ports58c,58dof the optical beam splitter/combiner58″ will be equal to the combined DC power of the sample light pattern47and reference light42respectively entering the input ports58a,58bof the optical beam splitter/combiner58″, the interfering AC beat pattern terms of the respective interference light patterns48a,48bwill need to differ in phase by 180 degrees such that they sum to zero. The optical detector array24aand detector array24bare respectively disposed at the two output ports58c,58dof the optical beam splitter/combiner58″ for concurrently detecting the respective two interference light patterns48a,48b, and generating two pluralities of values representative of intensities of the spatial components ("speckle grains") of the respective two interference light patterns48a,48b. Thus, the sample light pattern47and reference light42combine to project an interference light pattern48aonto the optical detector array24a, and likewise to project an interference light pattern48bonto the optical detector array24b, but with respect to a different phase of the reference light42. In the illustrated embodiment, the planes of the optical detector arrays24a,24bare perpendicular to each other, such that they face the respective output ports58c,58dof the optical beam splitter/combiner58″. The optical detector arrays24a,24bmay be conventional in nature (e.g., readily available conventional charge-coupled device (CCD) cameras), or may take the form of similar commercial-type image sensors, such as complementary metal-oxide-semiconductor (CMOS) sensors, photodiode (PD) arrays, avalanche photodiode (APD) arrays, single photon avalanche diode (SPAD) detectors, time-of-flight (ToF) imaging cameras, indium gallium arsenide (InGaAs) sensors, etc. Although the optical detector arrays24a,24bare separate and distinct, the optical detector arrays24a,24bare optically aligned with each other, such that any given pixels on the optical detector arrays24a,24bhave a known one-to-one correspondence with each other.
That is, as illustrated inFIG.24, a spatial component of the sample light pattern47(i.e., the kth speckle grain of the speckle light field) interferes with the reference light42with no phase shift (i.e., 0) to generate a kth speckle grain of the interference light pattern48athat is detected by the kth pixel of the optical detector array24a, and the same kth speckle grain of the sample light pattern47interferes with the reference light42with a phase shift (i.e., π) to generate a corresponding kth speckle grain of the interference light pattern48bthat is detected by the corresponding kth pixel of the optical detector array24b. Since the kth pixel of the optical detector array24ahas a known correspondence via optical alignment with the kth pixel of the optical detector array24b, the pair of intensity values detected by the kth pixels of the optical detector arrays24a,24bare both representative of the kth speckle grain of the sample light pattern47, but at different phases. It should be appreciated that althoughFIG.24illustrates one speckle grain "k," an equivalent process for measuring the speckle grain k takes place for all speckle grains in parallel in the manner of imaging an entire speckle light field. At each corresponding pair of kth pixels, the optical power received by the respective detector arrays24a,24bis equal to the summation of the power of the reference light42($P_{\mathrm{reference},A}$ and $P_{\mathrm{reference},B}$) input into the optical beam splitter/combiner58″, the sample light pattern47($P_{\mathrm{sample},A}$ and $P_{\mathrm{sample},B}$) input into the optical beam splitter/combiner58″, and an interference term between the reference light42and sample light pattern47($P_{\mathrm{interfere},A}$ and $P_{\mathrm{interfere},B}$). By power conservation, the interference terms $P_{\mathrm{interfere},A}$ and $P_{\mathrm{interfere},B}$ are 180 degrees out of phase for the optical detector arrays24a,24b. Although two distinct detector arrays24a,24bhave been described, two distinct camera regions on a single camera can be used for detecting the two interference light patterns48a,48b. Furthermore, although the optical detector arrangement illustrated inFIGS.23and24only generates two phase-modulated interference light patterns48(0, π), alternative detector arrangements that generate more phase-modulated interference light patterns48, e.g., four phase-modulated interference light patterns48(0, π/2, π, 3π/2), can be used. Further details discussing different systems for simultaneously detecting an M number of interference light patterns are described in U.S. patent application Ser. No. 15/853,209, entitled "System and Method for Simultaneously Detecting Phase Modulated Optical Signals" (now U.S. Pat. No. 10,016,137), which is expressly incorporated herein by reference. Referring back toFIG.6, the controller26is configured for sending control signals to the signal generator36of the acoustic assembly22to control the amplitude, frequency, phase, and duration of the ultrasound32, and further for sending control signals to the drive circuit of the optical source50of the interferometer20to control the amplitude, duration, and if relevant, the pulsing of the sample light40. Preferably, the controller26operates the interferometer20in a pulsed wave (PW) mode, so that more energy can be packed into the pulses of sample light40to improve the signal-to-noise ratio. The controller26may operate the acoustic assembly22in either a continuous wave (CW) or a pulsed wave (PW) mode, as described in further detail below.
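Because the interference terms of the two optically registered arrays described above are 180 degrees out of phase, their per-pixel difference isolates the beat term while the common DC terms cancel. A minimal sketch of that subtraction follows; the function and variable names are illustrative assumptions, not part of this disclosure.

```python
import numpy as np

# Minimal sketch (assumed, not from the patent): balanced detection with two
# registered arrays whose interference terms are 180 degrees out of phase.
# i_a ~ DC + P_interfere and i_b ~ DC - P_interfere, so the half-difference
# recovers the interference term and cancels the DC terms.
def balanced_beat(i_a: np.ndarray, i_b: np.ndarray) -> np.ndarray:
    return 0.5 * (i_a - i_b)
```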
The controller26is further configured for operating the optical detector24, such that it detects the resulting interference light pattern48during the measurement period in coordination with the pulsed sample light40. The controller26may also be configured for sending control signals to the path length adjustment mechanism60to adjust the optical path length of the reference arm (in the case where the optical system is an OCT system, for path length selection of the signal light44), and control signals to the optical detector24to coordinate detection of the interference light pattern48with the delivery of the sample light40into the scattering medium12. The processor28is configured for extracting the holographic beat component from the interference light pattern48detected by the optical detector24, and determining the optical parameter of the volume of interest14based on the extracted holographic beat component of the interference light pattern48. The specific optical parameter determined by the processor28depends on the particular application of the optical detection system10. For example, if the optical detection system10is to be used for detecting neural activity, as briefly discussed above, the optical parameter may be a physiologically-dependent optical parameter. The physiologically-dependent optical parameter detected by the optical detection system10can be, e.g., a level of deoxygenated and/or oxygenated hemoglobin concentration in the brain, or the level of water concentration or relative water concentration in the brain. In other embodiments, the physiologically-dependent optical parameter can be any parameter that varies in accordance with a change in an optical property of the brain (e.g., light absorption), an analyte concentration in the blood, an analyte or metabolite in tissue, a concentration of a substance (e.g., blood, hemoglobin) or a structure within tissue, the presence and concentration of lamellar bodies in amniotic fluid for determining the level of lung maturity of a fetus, the presence and/or concentration of meconium in the amniotic fluid, or optical properties of other extravascular fluids, such as pleural, pericardial, peritoneal, and synovial fluids. In alternative embodiments, the physiologically-dependent optical parameter detected by the optical detection system10may be a fast-optical signal (i.e., perturbations in the optical properties of neural tissue caused by mechanisms related to the depolarization of neural tissue, including, but not limited to, cell swelling, cell volume change, changes in membrane potential, changes in membrane geometry, ion redistribution, birefringence changes, etc.). The processor28may perform post-processing on the determined optical parameter to generate additional information on the volume of interest14. For example, the processor28may determine a level of neural activity within the brain based on the detected physiologically-dependent optical parameter. Although the controller26and processor28are described herein as being separate components, it should be appreciated that portions or all functionality of the controller26and processor28may be performed by a single computing device.
Furthermore, although all of the functionality of the controller26is described herein as being performed by a single device, and likewise all of the functionality of the processor28is described herein as being performed by a single device, such functionality of each of the controller26and the processor28may be distributed amongst several computing devices. Moreover, it should be appreciated that those skilled in the art are familiar with the terms "controller" and "processor," and that they may be implemented in software, firmware, hardware, or any suitable combination thereof. Decorrelation of the background light46from the time-varying temporal interference component of the interference light pattern48will now be described in further detail. Assuming homodyne combination of the signal light44and the reference light42(i.e., the signal light44and reference light42have the same frequency), and further assuming, for purposes of simplicity, that the sample light40is a rectangular pulse having a duration equal to the measurement period, the intensity of the interference light pattern48detected at the optical detector24(or each pixel of the optical detector24) can be expressed as:

$$\text{Intensity} = \int_{0}^{T_{op}} \Big( P_{\text{reference}}(t) + P_{\text{signal}}(t) + P_{\text{background}}(t) + 2\sqrt{P_{\text{signal}}(t)\,P_{\text{reference}}(t)}\,\sin(\phi_{\text{unknown}}) + 2\sqrt{P_{\text{background}}(t)\,P_{\text{reference}}(t)}\,\sin\!\big(\theta_{\text{unknown}} - 2\pi f_{us}\,t\big) \Big)\,dt \tag{4}$$

where $P_{\text{reference}}$ represents the reference light42as a function of time t, $P_{\text{signal}}$ represents the signal light44as a function of time t, $P_{\text{background}}$ represents the background light46as a function of time t, the measurement period extends from t=0 to t=$T_{op}$, $\phi_{\text{unknown}}$ and $\theta_{\text{unknown}}$ are random phases of the respective signal light44and background light46in the interference light pattern48at the time of measurement, which originate via multiple scattering of coherent light inside the tissue, $f_{us}$ is the frequency of the ultrasound32, and $T_{op}$ is the duration of the pulse of sample light40. Over the duration of the measurement period, equation [4] integrates to:

$$\text{Intensity} = T_{op}\Big( P_{\text{reference}} + P_{\text{signal}} + P_{\text{background}} + 2\sqrt{P_{\text{signal}}\,P_{\text{reference}}}\,\cos(\phi_{\text{unknown}}) \Big) + \frac{2\sqrt{P_{\text{background}}\,P_{\text{reference}}}}{2\pi f_{us}}\Big( \cos\!\big(\theta_{\text{unknown}} - 2\pi f_{us}\,T_{op}\big) - \cos(\theta_{\text{unknown}}) \Big) \tag{5}$$

The holographic beat component in equation [5] is represented by:

$$T_{op}\Big( 2\sqrt{P_{\text{signal}}\,P_{\text{reference}}}\,\cos(\phi_{\text{unknown}}) \Big) + \frac{2\sqrt{P_{\text{background}}\,P_{\text{reference}}}}{2\pi f_{us}}\Big( \cos\!\big(\theta_{\text{unknown}} - 2\pi f_{us}\,T_{op}\big) - \cos(\theta_{\text{unknown}}) \Big) \tag{6}$$

Instead of having a constant angle in the cosine function of the background term (the second term of equation [6]), the presence of the ultrasound frequency $f_{us}$ in the cosine function creates a non-zero (indeed, rapid) speed of angular rotation of the cosine function with time, such that the signal term $2\sqrt{P_{\text{signal}}\,P_{\text{reference}}}\,\cos(\phi_{\text{unknown}})$ dominates equation [6], and the background term of equation [6] essentially reduces to zero, thereby decorrelating the background light46from the holographic beat component of equation [6]. That is, such decorrelation includes Raman-Nath and moving scattering center mechanisms that generate random, pseudo-random, or periodic phase shifts relative to the reference light42, such that the background light46is masked from the detected optical parameter of the volume of interest14over time.
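The suppression of the background term in equation [6] can be checked numerically. The following sketch integrates the static signal term and the ultrasound-modulated background term of equation [4] over an assumed measurement period; the frequencies, durations, and phases are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

# Minimal numerical check (illustrative assumptions, not from the patent) that
# the ultrasound-modulated background term of equation [4] integrates toward
# zero while the static signal term grows linearly with the pulse duration.
f_us = 1e6           # assumed ultrasound frequency, 1 MHz
t_op = 100e-6        # assumed sample-light pulse duration, 100 us
t = np.linspace(0.0, t_op, 200_000)
dt = t[1] - t[0]
phi, theta = 0.7, 1.9  # arbitrary "unknown" phases

signal_term = np.sum(np.sin(phi) * np.ones_like(t)) * dt       # ~ t_op * sin(phi)
background_term = np.sum(np.sin(theta - 2 * np.pi * f_us * t)) * dt  # bounded by ~1/(pi*f_us)

print(signal_term, background_term)  # background is several orders of magnitude smaller here
```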
Consistent with the discussion above with respect toFIG.12, as the ultrasound frequency $f_{us}$ increases, decorrelation of the background light46from the holographic beat component of equation [6] likewise increases. Thus, equation [6] essentially reduces to:

$$T_{op}\Big( 2\sqrt{P_{\text{signal}}\,P_{\text{reference}}}\,\cos(\phi_{\text{unknown}}) \Big) \tag{7}$$

which represents the exclusive contribution of the signal light44to the holographic beat component of the interference light pattern48. It should also be appreciated that because the DC components in equation [5] (i.e., $P_{\text{reference}}$, $P_{\text{signal}}$, and $P_{\text{background}}$) are constant across the two detector arrays24a,24b, they can be eliminated by creating multiple phase-modulated interference light patterns48; for example, in the case of a single detector array24, by incorporating the pre-selected phase shifts or offsets between the sample arm and reference arm of the interferometer20, such that multiple phase-modulated interference light patterns48are sequentially generated over multiple measurement periods, as illustrated inFIGS.21and22, or in the case of multiple optically registered detector arrays24, by concurrently combining the reference light42and sample light pattern47into the phase-modulated interference light patterns48, as illustrated inFIGS.23and24. For example, if two phase-modulated interference light patterns48are created, the intensities of these interference light patterns48, which are out of phase relative to each other by 180 degrees, can be subtracted from each other to eliminate the DC components. Alternatively, assuming a heterodyne combination of the signal light44and the reference light42(i.e., the signal light44and reference light42have different frequencies), and further assuming, for purposes of simplicity, that the sample light40is a rectangular pulse that lasts for the duration of the measurement period, the intensity of the interference light pattern48detected at the optical detector24(or each pixel of the optical detector24) can be expressed as:

$$\text{Intensity} = \int_{0}^{T_{op}} \Big( P_{\text{reference}}(t) + P_{\text{signal}}(t) + P_{\text{background}}(t) + 2\sqrt{P_{\text{signal}}(t)\,P_{\text{reference}}(t)}\,\sin\!\big(\phi_{\text{unknown}} - 2\pi f_{shift}\,t\big) + 2\sqrt{P_{\text{background}}(t)\,P_{\text{reference}}(t)}\,\sin\!\big(\theta_{\text{unknown}} - (2\pi f_{shift} + 2\pi f_{us})\,t\big) \Big)\,dt \tag{8}$$

where $P_{\text{reference}}$ represents the reference light42as a function of time t, $P_{\text{signal}}$ represents the signal light44as a function of time t, $P_{\text{background}}$ represents the background light46as a function of time t, the measurement period extends from t=0 to t=$T_{op}$, $f_{shift}$ is the difference in frequency between the sample light40and the reference light42, $\phi_{\text{unknown}}$ and $\theta_{\text{unknown}}$ are random phases of the respective signal light44and background light46in the interference light pattern48at the time of measurement, which originate via multiple scattering of coherent light inside the tissue, $f_{us}$ is the frequency of the ultrasound32, and $T_{op}$ is the duration of the pulse of sample light40. Over the duration of the measurement period, equation [8] integrates to:

$$\text{Intensity} = T_{op}\big( P_{\text{reference}} + P_{\text{signal}} + P_{\text{background}} \big) + \frac{2\sqrt{P_{\text{signal}}\,P_{\text{reference}}}}{2\pi f_{shift}}\Big( \cos\!\big(\phi_{\text{unknown}} + 2\pi f_{shift}\,T_{op}\big) - \cos(\phi_{\text{unknown}}) \Big) + \frac{2\sqrt{P_{\text{background}}\,P_{\text{reference}}}}{2\pi f_{shift} + 2\pi f_{us}}\Big( \cos\!\big(\theta_{\text{unknown}} - (2\pi f_{shift} + 2\pi f_{us})\,T_{op}\big) - \cos(\theta_{\text{unknown}}) \Big) \tag{9}$$

The holographic beat component in equation [9] is represented by:

$$\frac{2\sqrt{P_{\text{signal}}\,P_{\text{reference}}}}{2\pi f_{shift}}\Big( \cos\!\big(\phi_{\text{unknown}} + 2\pi f_{shift}\,T_{op}\big) - \cos(\phi_{\text{unknown}}) \Big) + \frac{2\sqrt{P_{\text{background}}\,P_{\text{reference}}}}{2\pi f_{shift} + 2\pi f_{us}}\Big( \cos\!\big(\theta_{\text{unknown}} - (2\pi f_{shift} + 2\pi f_{us})\,T_{op}\big) - \cos(\theta_{\text{unknown}}) \Big) \tag{10}$$
Instead of having a constant angle in the cosine function of the background term (the second term of equation [10]), the presence of the ultrasound frequency $f_{us}$ in the cosine function creates a non-zero (indeed, rapid) speed of angular rotation of the cosine function with time, such that the signal term (the first term of equation [10]) dominates equation [10], and the background term in equation [10] essentially reduces to zero, thereby decorrelating the background light46from the holographic beat component of equation [10]. That is, such decorrelation includes Raman-Nath and moving scattering center mechanisms that generate random, pseudo-random, or periodic phase shifts relative to the reference light42, such that the background light46is masked from the detected optical parameter of the volume of interest14over time. Again, consistent with the discussion above with respect toFIG.12, as the ultrasound frequency $f_{us}$ increases, decorrelation of the background light46from the holographic beat component of equation [10] likewise increases. Thus, equation [10] essentially reduces to:

$$\frac{2\sqrt{P_{\text{signal}}\,P_{\text{reference}}}}{2\pi f_{shift}}\Big( \cos\!\big(\phi_{\text{unknown}} + 2\pi f_{shift}\,T_{op}\big) - \cos(\phi_{\text{unknown}}) \Big) \tag{11}$$

which represents the exclusive contribution of the signal light44to the holographic beat component of the interference light pattern48. It should be appreciated that, although the ultrasound frequency $f_{us}$ can be uniform (pure tone) over the measurement period (as illustrated inFIG.13a), the background term in equations [6] and [10] can be further minimized by varying the ultrasound frequency $f_{us}$ over the measurement period, e.g., by sweeping or randomizing the ultrasound frequency $f_{us}$ (as illustrated inFIG.13borFIG.13c), and even further minimized by varying, in addition to the frequency $f_{us}$, the amplitude and/or phase of the ultrasound (as illustrated inFIG.13dorFIG.13e), or by using a completely arbitrary waveform (as illustrated inFIG.13f). It should be appreciated that, because the holographic beat component of the interference light pattern48varies over time in the case where the reference light42and signal light44are combined using a heterodyne technique, it is preferred that the detector(s)24take the form of a lock-in camera, such that a detected intensity value can be instantaneously locked in, as described in U.S. patent application Ser. No. 15/844,370, entitled "Pulsed Ultrasound Modulated Optical Tomography Using Lock-In Camera," which is expressly incorporated herein by reference. This should be contrasted with the homodyne case, where an intensity value is detected over an integration time, such that a lock-in camera is not required. Furthermore, it is preferred that, in the heterodyne case, the intensity values be measured in quadrature, such that the generation of four phase-modulated interference light patterns48is required, as described in U.S. patent application Ser. No. 15/844,370, entitled "Pulsed Ultrasound Modulated Optical Tomography Using Lock-In Camera," and U.S. patent application Ser. No. 15/853,209, entitled "System and Method for Simultaneously Detecting Phase Modulated Optical Signals" (now U.S. Pat. No. 10,016,137), which are expressly incorporated herein by reference.
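Returning to the waveform variation ofFIGS.13b-13fdiscussed above, the following sketch generates two of the kinds of ultrasound drive waveforms that could further randomize the phase imparted to the background light: a swept (chirped) tone and a tone with segment-wise randomized phase. The function names and all numeric values are illustrative assumptions, not from this disclosure.

```python
import numpy as np

# Minimal sketch (illustrative assumptions): drive waveforms that vary the
# ultrasound over the measurement period, as alternatives to a pure tone.
def swept_tone(t, f0, f1):
    """Linear chirp from f0 to f1 over the span of t."""
    k = (f1 - f0) / t[-1]
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def random_phase_tone(t, f0, n_segments=16, rng=None):
    """Pure tone whose phase is re-randomized in short segments."""
    rng = rng or np.random.default_rng(0)
    seg_phase = rng.uniform(0, 2 * np.pi, n_segments)
    idx = np.minimum((np.arange(len(t)) * n_segments) // len(t), n_segments - 1)
    return np.sin(2 * np.pi * f0 * t + seg_phase[idx])

t = np.linspace(0, 100e-6, 10_000)
u_swept = swept_tone(t, 1e6, 2e6)
u_random = random_phase_tone(t, 1e6)
```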
Referring now toFIG.25, the physical implementation of a non-invasive optical detection system10for use in the detection of neural activity within the brain (as the volume of interest14) through the scalp and skull (as the volume of non-interest16) of a person18will be described. As shown, the optical detection system10includes a wearable unit90that is configured for being applied to a person18, and in this case, worn on the head (as the scattering medium12) of the person18; an auxiliary unit92that may or may not be head-worn (e.g., worn on the neck, shoulders, chest, or arm) coupled to the wearable unit90via a wired connection93(e.g., electrical wires); and an optional remote processor94in communication with the patient-wearable auxiliary unit92via a wireless connection95(e.g., a radio frequency (RF) link). Alternatively, the optical detection system10may use a non-wired connection (e.g., an RF link) for providing power to or communicating between the respective wearable unit90and the auxiliary unit92, and/or a wired connection between the auxiliary unit92and the remote processor94. In the illustrated embodiment, the wearable unit90includes a support structure97that either contains or carries the interferometer20, the ultrasound transducer34of the acoustic assembly22, and the optical detector(s)24(shown inFIG.6). The wearable unit90may also include an output port98afrom which the sample light40generated by the interferometer20is emitted (from the optical source50), and an input port98binto which the sample light pattern47is input into the interferometer20(received by the optical detector(s)24). It should be appreciated that although the input port98bis illustrated in close proximity to the output port98a, the proximity between the input port98band the output port98amay be any suitable distance. The support structure97may be shaped, e.g., as a bandana, headband, cap, helmet, beanie, or other hat shape, or another shape adjustable and conformable to the user's head (as the scattering medium12), such that the ports98aand98bare in close contact with the outer skin of the body part, and in this case, the scalp (as the volume of non-interest16) of the head of the person18. An index matching fluid may be used to reduce reflection of the light generated by the optical source50of the interferometer20from the outer skin of the scalp (as the volume of non-interest16). An adhesive or belt (not shown) can be used to secure the support structure97to the head (as the scattering medium12) of the person18. Notably, because the ultrasound32emitted by the ultrasound transducer34need not be high-fidelity, and in fact, it is desirable to make the ultrasound32as noisy as possible, acoustic coupling between the ultrasound transducer34and the scalp of the person18can be inefficient, and therefore, a bubble-free liquid ultrasound medium for ensuring that there is sufficient acoustic coupling between the ultrasound transducer34and the scalp (as the volume of non-interest16) is not required. The auxiliary unit92includes a housing99that contains the controller26and the processor28(shown inFIG.6). In some embodiments, portions of the controller26and processor28may be integrated within the wearable unit90. The auxiliary unit92may additionally include a power supply (which, if head-worn, may take the form of a rechargeable or non-rechargeable battery), a control panel with input/output functions, a display, and memory. Alternatively, power may be provided to the auxiliary unit92wirelessly (e.g., by induction).
The auxiliary unit92may further include the signal generator36of the acoustic assembly22, as well as any drive circuitry used to operate the interferometer20. The remote processor94may store detected data from previous sessions, and include a display screen. The interferometer20and detector24are preferably mechanically and electrically isolated from the acoustic assembly22, such that the emission of the ultrasound32by the acoustic assembly22, as well as the generation of RF and other electronic signals by the acoustic assembly22, minimally affects the detection of the optical signals by the interferometer20and generation of data by the optical detector24. The wearable unit90may include shielding (not shown) to prevent electrical interference and appropriate materials that attenuate the propagation of acoustic waves through the support structure97, although such shielding may not be needed due to the fact that high fidelity ultrasound is not required, as described above. It should be appreciated that because the optical detection system10has been described as comprising a single fixed source-detector pair, in other words, a single output port98aand a single input port98b, it can only detect a physiologically-dependent optical parameter in the brain tissue206between the ports98a,98b, as illustrated inFIG.26. The ports98a,98bare placed against the scalp200to detect regions of interest in the skull202, cerebral spinal fluid (CSF)204, and/or cortical brain tissue206. The various optical paths may first pass through the scalp200and skull202along a relatively straight path, briefly enter the brain tissue206, then exit along a relatively straight path. In the context of OCT, the reference arm in the interferometer20may be selected or adjusted (as described above with respect toFIG.7) based on the distance between the ports98a,98band the depth of the target tissue voxel15, and may, e.g., be approximately (or greater than) the sum of the distance between the ports98a,98band twice the depth of the target tissue voxel15. As depicted in the top half ofFIG.26, the extent of the target tissue voxel15may be greater across the X-Y plane than along the Z-direction. In optional embodiments, the optical detection system10may be modified, such that it can sequentially or simultaneously detect physiologically-dependent optical parameters in multiple target tissue voxels15by tiling multiple source-detector pairs across the scalp200. In this case, each target tissue voxel15is defined by a given output port98a(which is associated with the optical source50) at a given location and a given input port98b(which is associated with the optical detectors24) at a given location. Thus, multiple target tissue voxels15can be detected either by making the output port98amovable relative to the input port98band/or by spacing multiple input ports98bfrom each other. For example, with reference toFIG.27, a plurality of input ports98bare located at fixed positions on the scalp200, and a single movable output port98amay be moved around at different locations across the scalp200along a predetermined path208(e.g., from a first location208ato a second location208b) around the input ports98bto distribute light into the target tissue voxel15from various locations on the surface of the scalp200. The input ports98bmay be arranged in any desirable pattern over the scalp200.
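The reference-arm rule of thumb stated above (approximately the port separation plus twice the voxel depth) is restated in the following minimal sketch; the function name and example values are illustrative assumptions.

```python
# Minimal sketch of the reference-arm length rule of thumb stated above:
# roughly the source-detector separation plus twice the depth of the target
# tissue voxel. Values are illustrative assumptions only.
def reference_arm_length(port_separation_mm: float, voxel_depth_mm: float) -> float:
    return port_separation_mm + 2.0 * voxel_depth_mm

print(reference_arm_length(30.0, 10.0))  # e.g., 30 mm separation, 10 mm depth -> 50 mm
```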
For example, they may be arranged or located in a symmetric or asymmetric array and/or may be arranged in a circular or radial pattern or a rectangular-shaped pattern. The fields of view of the input ports98bmay have areas of overlap and/or may have little or no overlap. In some variations, the input ports98bmay be tiled adjacent to each other, such that the individual fields-of-view are adjacent to each other with little or no overlap. The aggregate of the individual fields-of-view may simulate a single camera with a large field-of-view. In any arrangement, the light emitted by the output port98amay be reflected and/or backscattered to the scalp200and enter the plurality of input ports98b. In effect, this creates a multitude of target tissue voxels15through the brain tissue206(shown inFIG.26) under the scalp200that are detected while the output port98amoves along the path208, as illustrated inFIG.27. The multiple "crisscrossed" target tissue voxels15may facilitate the generation of a high-resolution functional map of the upper layer of cortex of the brain206, with spatial resolution given by the X-Y plane confinement of the paths (i.e., along the plane of the scalp200) rather than limited by their looser Z confinement, in the manner of tomographic volume reconstruction, with the lateral cross-section of a bundle of tissue voxels defined as X-Y and the axial direction along Z. Moreover, moving the output port98awith respect to the input ports98bat one or more pre-determined locations may probe a region of interest from multiple angles and directions. That is, the output port98awill create multiple target tissue voxels15extending from the pre-determined location to the multiple input ports98b, allowing optical data from the pre-determined location at the origin of the multiple tissue voxels to be acquired along multiple axes. Optical data taken along multiple axes across a region of interest may facilitate the generation of a 3-D map of the region of interest, such as from the tissue voxel. Optical data received by the input ports98bmay be used to generate detected optical properties with comparable resolution in the Z-direction (i.e., perpendicular to the scalp200) as in the X-Y plane (i.e., along the scalp200), and/or may allow optical probing or interrogation of a larger region of brain tissue206(e.g., across multiple target tissue voxels15over a surface of the scalp). Referring toFIG.29, having described the structure and function of the optical detection system10, one particular method100performed by the optical detection system10to non-invasively detect a target voxel15in the scattering medium12will now be described. In this example, the optical detection system10can be comparable to an OCT system. The controller26first adjusts the interferometer20to select the path length of the sample light40for detection of optical parameters within the target voxel15within the volume of interest14of the scattering medium12, e.g., by sending a control signal to the path length adjustment mechanism60, as shown inFIG.8(step102). The controller26then operates the acoustic assembly22to emit ultrasound32into the scattering medium12during a measurement period, e.g., by sending a control signal to the signal generator36, e.g., using techniques illustrated inFIGS.11a-11band12a-12b(step104).
Next, all of the ultrasound32emitted into the scattering medium12during the measurement period is substantially confined within the volume of non-interest16, e.g., using the techniques illustrated inFIGS.15,16a-16c, and17a-17c, creating an optical masking zone13within the volume of non-interest16(step106). Optionally, the controller26operates the acoustic assembly22to vary the waveform of the ultrasound32during the measurement period to further enhance the optical masking zone13, e.g., using the techniques illustrated inFIGS.13b-13f(step108). Then, the controller26operates the interferometer20to generate source light38, e.g., by sending a control signal to the drive circuit to pulse the light source50on and off (step110). The interferometer20(e.g., via the optical beam splitter52) splits the source light38into sample light40and reference light42(step112). The interferometer20then delivers the sample light40into the scattering medium12, e.g., using techniques illustrated inFIGS.11a-11band12a-12b(step114). As the sample light40scatters diffusively through the scattering medium12, a first portion40awill pass through the target voxel15of the volume of interest14and exit the scattering medium12as signal light44(step116), and a second portion40bwill pass through the optical masking zone13of the volume of non-interest16and exit the scattering medium12as masked background light46(step118), as illustrated inFIGS.6and7. As the signal light44and masked background light46exit the scattering medium12, they combine to create a sample light pattern47(step120). Next, the interferometer20combines (e.g., via the optical beam combiner58) the reference light42with the sample light pattern47to generate one or more interference light patterns48, each having a holographic beat component (step122). The signal light44in the sample light pattern47is correlated to the holographic beat component of each interference light pattern48, while the masked background light46in the sample light pattern47is decorrelated, preventing it from contributing to the holographic beat component of each interference light pattern48(step124), such that it does not contribute to holographic interference, but rather generates a rapidly time-varying signal component that integrates to approximately zero during the detection time. Then, under control of the controller26, the detector(s)24detect the intensities of the holographic beat component(s) of the interference light pattern(s)48, which correspond to the intensity of the signal light44(step126). The combination of the reference light42with the sample light pattern47to generate the interference light pattern(s)48, and subsequent detection of the holographic beat component of each interference light pattern48, can be performed using the techniques illustrated inFIGS.21-24. The processor28then determines the optical parameter of the target voxel15in the volume of interest14based on the detected signal light44in the holographic beat component of each interference light pattern48(step128). The path length of the sample light40can be repeatedly adjusted for detection of optical parameters within other tissue voxels15within the volume of interest14of the scattering medium12in step102, and steps104-128can be repeated to determine optical parameters of the other target voxels15. The ultrasound32can be emitted into different volumes of non-interest16as illustrated inFIGS.18a-18cand19a-19c.
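The step sequence above can be summarized as an orchestration sketch. All helper names below are hypothetical; the patent defines the method steps, not this API.

```python
# Minimal orchestration sketch (hypothetical helper names; the patent does not
# define this API) mirroring steps 102-128 for one target voxel.
def measure_voxel(controller, interferometer, acoustic, detectors, processor, voxel):
    controller.select_path_length(interferometer, voxel)           # step 102
    controller.emit_confined_ultrasound(acoustic)                  # steps 104-108
    source = interferometer.generate_source_light()                # step 110
    sample, reference = interferometer.split(source)               # step 112
    pattern = interferometer.deliver_and_collect(sample)           # steps 114-120
    interference = interferometer.combine(reference, pattern)      # steps 122-124
    intensities = detectors.detect_beat(interference)              # step 126
    return processor.optical_parameter(intensities, voxel)         # step 128
```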
The processor28may then perform post-processing on the determined optical parameters (step130); in the case where the target voxel15is brain matter, such post-processing comprises determining the level of neural activity within the target voxel15based on the determined optical parameter of the target voxel(s)15. Although particular embodiments of the present inventions have been shown and described, it will be understood that it is not intended to limit the present inventions to the preferred embodiments, and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the present inventions. Thus, the present inventions are intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the present inventions as defined by the claims. | 102,467 |
11857317 | DETAILED DESCRIPTION OF THE EMBODIMENTS A hyperspectral imaging device is an imaging device that is capable of resolving wavelengths of received light into signals representing multiple discrete wavelength bands, but which resolves wavelengths into more than the traditional color-camera three overlapping primary color wavelength bands (red, green, and blue) at each pixel, or macropixel, of a received image. Such a hyperspectral imaging device may in some embodiments resolve wavelengths into more and narrower wavelength bands, by separately resolving intermediate colors such as yellow or orange into their own wavelength bands. A hyperspectral imaging device may also cover a broader range of the electromagnetic spectrum than visible light alone, such as by covering both visible and portions of the infrared light spectrum; some hyperspectral imaging devices are capable of resolving received light into signals representing intensity of light received within each of a large number of separate wavelength bands, including multiple bands within the infrared spectrum. Some hyperspectral imaging devices provide a spectrum at each pixel, or macropixel, of each image received; others may provide images having intensity information only within a selection of multiple, predetermined, wavelength bands. A wide-field hyperspectral imaging device is capable of acquiring full field of view images of the region of interest, such as the surgical field of view, similar to broad beam fluorescence imaging devices used for wide field imaging. Our hyperspectral imaging device is capable of selecting wavelengths of interest in the visible and infrared regions of the electromagnetic spectrum, and as such is capable of acquiring multiple images at wavelengths of interest, permitting pixel-by-pixel reconstruction of full spectra from the multiple images acquired at wavelengths of interest. In a "snapshot" embodiment our device resolves light into 32 or 64 predetermined colors or wavelength bands, and in a tunable-filter embodiment, into 100 or more wavelength bands. FIG.1illustrates a system100for supporting surgery, according to some embodiments. The system ofFIG.1includes a microscope body102, which has multiple beam splitters104that permit light to be diverted to several optical ports simultaneously or alternatively in succession, depending on the microscope and operator preferences. Attached to a first optical port of body102is a tube106leading to a surgeon's binocular optical eyepieces108. Attached to a second optical port of body102are a first high definition electronic camera120and a second high definition electronic camera122. Cameras120,122are coupled to provide images to image capture interface124of a digital image processing system126. Attached to a third optical port of body102is a hyperspectral imaging device128that in an embodiment has a tunable filter130adapted to receive light from body102and a high resolution broad-bandwidth electronic camera132. In a particular embodiment, hyperspectral imaging device128couples to body102through a flexible, coherent, fiber-optic image-conveying, optical cable129. The camera132of the hyperspectral imaging device128is also coupled to provide images to image capture interface124of the digital processing system126. In an embodiment, tunable filter130is a liquid crystal tunable filter. In an alternative embodiment, tunable filter130is an acousto-optic tunable filter.
Referring again toFIG.1, a tracker interface140of the image processing system126is coupled to use tracking sensors142attached to a reference location within an operating room to track relative locations of microscope location sensors144and patient location sensors146. In an embodiment, tracking sensors142and an associated processor of tracker interface140are a commercially available Treon® StealthStation® (trademarks of Medtronic, Louisville, CO, USA) optical tracking system. Microscope location sensors144are rigidly attached to the microscope body102, and patient location sensors146are attached to a frame148that may be attached to a patient while the patient is undergoing a surgical procedure. In a particular embodiment, frame148is adapted to be attached to a patient's skull150by screws (not shown) for the duration of a neurosurgical procedure during which the patient's brain152is exposed, and during which the patient's brain152may be operated on with surgical instruments154to remove or destroy one or more lesions156. Microscope body102also has zoom optics160, adapted for operation by a zoom motor/sensor162, and a focus adjustment (not shown) adapted for operation by a focus motor (not shown). The microscope also has multiple illuminators166,168. In an embodiment, these include white-light illuminators166and wavelength-selective fluorescent stimulus illuminators168, operating under control of an illumination interface170of the image processing system126. The microscope body also has a heads-up display (HUD) projector172capable of providing graphical images through a combiner174of body102such that the graphical images are presented for viewing by a surgeon through the surgeon's eyepieces108. The surgeon's field of view through the operating microscope and its associated HUD is co-registered with that of the imaging system, allowing display of tissue classifications, mapped tumor locations, and hyperspectral imaging results superimposed on visible brain tissue, one-to-one comparisons, and intraoperative surgical decision making. At standard working distances between microscope and surgical cavity, surgical instruments154fit between zoom optics160and tissue of brain152. Image processing system126also has a memory178into which image capture interface124saves images received from cameras120,122,132; and at least one processor180. Processor180is adapted for executing processing routines such as surface fluorescence quantification imaging (qFI) routines, fluorescence depth modeling routines186, both depth-resolved fluorescence imaging (dFI) and quantitative depth-resolved fluorescence imaging (qdFI) routines, endogenous biomarker quantification using spatial frequency domain techniques (see below), and hyperspectral image processing routines188stored in memory178and operable on images stored in memory178. Processor180is also adapted for preparing images for display through display interface190onto monitor192, and for communicating through network interface194to server196; server196has database198containing information derived from preoperative MRI and CAT scans. Server196is also interfaced through a network to an MRI scanner143as known in the medical imaging art that provides preoperative images of a patient's brain152, including surface features141, and tumor156, prior to prepping the patient for surgery and opening the patient's skull150(brain152, tumor156, and surface features141are shown with patient prepared for surgery and skull opened).
Server196is also interfaced through a network to a CT scanner145that is capable of imaging a patient's brain prior to prepping the patient for surgery and opening the patient's skull150. While the system ofFIG.1is illustrated in context of an operative microscope, we anticipate that our surgical vision enhancement system may be constructed in any of several other formats useful in the surgical art, including a laparoscope format and an endoscope system. For example, a laparoscopic system280(FIG.1G) may have a coherent optical-fiber bundle282having a light-source-and-camera end286and a scope end287. The light source and camera end286is coupled through a beamsplitter288to a combined stimulus and broad spectrum light source resembling that ofFIG.1F, and beamsplitter288is also coupled to an infrared and visible light hyperspectral camera294resembling that ofFIG.1A. A combined projection and imaging lens290both projects spatially modulated light of desired wavelengths from scope end287of fiber bundle282onto tissue292, and images both fluorescent and backscattered light onto fiber bundle282and thus into camera294. A digital image processing system126, similar to that ofFIG.1, is provided to receive, record, and process images from the hyperspectral camera294and to drive the digital micromirror device of the spatial modulator268with predetermined spatial-light patterns. Operation of the system100has several modes, and each mode may require execution of several phases of processing on processor180, executing one or more of several routines, as mentioned above. Computational efficiency and high performance are important in processor180, since it is desirable to minimize the operative time for which a subject is anesthetized. For example, processor180executes the hyperspectral image processing routine to perform the hyperspectral fluorescence and reflectance imaging of the tissue, as described herein. Processor180executes hyperspectral, reflectance, and in some embodiments spatially modulated light, image processing to determine optical properties of the tissue; processor180then executes qFI (quantified fluorescence imaging) routines to correct fluorescence images for quantification of surface and near-surface fluorophores imaged in fluorescence images. The processor180also uses the hyperspectral camera128to capture a hyperspectral fluorescent image stack and executes dFI (depth-resolved fluorescent imaging) and/or qdFI (quantified depth-resolved fluorescent imaging) routines from memory178to process the hyperspectral fluorescent image stack to map depth and quantity of fluorophore in tissue. The hyperspectral fluorescence and reflectance imaging may also be performed in connection with stereo-optical extraction routines executed on processor180, using images captured by stereo cameras120,122, to perform tissue surface contour and feature extraction for light transport modeling in qFI, dFI, and qdFI and tomographic display of mapped depth and quantity of fluorophore. In an embodiment the hyperspectral fluorescence and reflectance image processing is performed on processor180in connection with fluorescence depth modeling, as described in U.S. patent application Ser. No. 13/145,505, filed in the United States Patent and Trademark Office on Jul. 2, 2011, and U.S. Provisional Patent Application 61/588,708, filed on Jan.
20, 2012 and incorporated herein in its entirety by reference, and as described herein, where fluorescence and reflectance spectral information is derived from hyperspectral imaging device128. In an alternative embodiment the hyperspectral fluorescence and reflectance image processing is performed by processor180executing depth-resolved fluorescent imaging routines as described in the unpublished paper A Non-Model Based Optical Imaging Technique For Wide-Field Estimation Of Fluorescence Depth In Turbid Media Using Spectral Distortion submitted herewith as an attachment, and as described in PCT/US13/22266 filed Jan. 18, 2013, which claims priority to 61/588,708 filed Jan. 20, 2012, both of which are included herein by reference. In some embodiments, an optional ultrasound system197is provided to map deep brain structures using medical ultrasound as known in the art. In some embodiments, information from the ultrasound system197is coregistered with information from the stereo optical system herein described and jointly used for modeling shift of deep brain tumors and structures, particularly where surgical cavities exist and/or surgical instruments, such as retractors, are present in a surgical site. In an alternative embodiment, with reference toFIG.1A, hyperspectral imaging device128, which may optionally couple to microscope body102through optical cable129, has a lens system131adapted for focusing images, a dichroic filter133adapted for separating light into shorter wavelength light and longer wavelength light, and, for imaging the shorter wavelength light, a short wavelength tunable optical filter135and image sensor137. Light received from optic cable129enters imaging device128through a dichroic filter-changer136having a neutral-density filter and notch filters adapted to exclude stimulus-wavelength light. Also provided, and coupled to image the longer wavelength light, are a long wavelength tunable optical filter139and longer wavelength image sensor138. Each tunable optical filter135,139is a bandpass filter, in a particular embodiment with a three nanometer bandpass, and is tunable from 400 to 1000 nanometers wavelength, and image sensors137,138are broadband sensors as known in the optoelectronics art; in a particular embodiment short wavelength image sensor137is a CMOS (complementary metal oxide semiconductor) or CCD (charge coupled device) image sensor, while longer wavelength sensor138is a high-sensitivity electron-multiplying CCD image sensor. The tunable optical filters135,139are coupled to, and controlled by, processor180such that they may be set to a desired wavelength by processor180. In an alternative embodiment, with reference toFIG.1B, hyperspectral imaging device128, which may optionally couple to microscope body102through optical cable129, has a lens system131adapted for focusing images and a photosensor device199having a rectangular array of tiling patterns of photosensors, such as tiling pattern127, where each tiling pattern corresponds to a pixel of a captured image.
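For the tunable-filter embodiment described above (FIG.1A), acquisition of a hyperspectral image stack amounts to stepping the filter through wavelengths of interest and capturing a frame at each setting. The following minimal sketch assumes hypothetical filter and camera driver objects; the patent defines the hardware, not this code.

```python
import numpy as np

# Minimal acquisition sketch (hypothetical driver API): build a hyperspectral
# image stack with a tunable-filter camera by stepping the filter center
# wavelength and grabbing one frame per band.
def acquire_stack(filter_dev, camera, wavelengths_nm):
    frames = []
    for wl in wavelengths_nm:
        filter_dev.set_center_wavelength(wl)  # e.g., ~3 nm bandpass centered at wl
        frames.append(camera.grab_frame())    # 2-D intensity image
    return np.stack(frames, axis=-1)          # shape: (rows, cols, n_bands)

# e.g., 100 bands spanning 400-1000 nm:
# stack = acquire_stack(filter_dev, camera, np.linspace(400, 1000, 100))
```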
Unlike typical Bayer-pattern 3-color cameras, in an embodiment, each tiling pattern127has a rectangular pattern of sixteen, thirty-two, sixty-four, or another number of photosensors, each of which has a color filter over it; while a selected few of the color filters are traditional red, green, and blue color filters adapted for generating a coregistered traditional color image, the remaining color filters are Fabry-Perot interference filters of differing thicknesses such that photosensor device199has a separate photosensor in each tiling pattern sensitive to each of N specific, preselected, wavelengths between 1000 and 400 nanometers; for N=13, 29, or 61, for a total number of photosensors per pattern of 16, 32, or 64. In alternative embodiments, the tiling pattern may have a different number of photosensors. Photosensor arrays with tiling patterns of integrated Fabry-Perot interference filters of multiple wavelengths over photosensors are expected to be available from IMEC vzw, Kapeldreef 75 3001 Leuven, Belgium in late 2013. Using a tiled photosensor array having a well-chosen selection of filters allows video-rate collection of hyperspectral image cubes. In an embodiment, as illustrated inFIG.1C, fluorescent stimulus light source168has an intense, broadband, white light source such as a supercontinuum laser230arranged to project light through a tunable filter232having a bandpass of 3 to 10 nanometers, and tunable for a bandpass center over the range 400 to 1100 nanometers wavelength. Wavelength-selected light passed from laser230through filter232is focused by lens234onto tissue152. Tunable filter232is electrically tunable, and is coupled to, and controlled by, processor180. In a particular embodiment, filter232is a tunable Lyot filter. In an alternative embodiment, illustrated inFIG.1D, fluorescent stimulus light source168has several light-emitting diodes (LEDs)251, each of which has a different emissions spectrum, each LED being coupled through a bandpass filter253; in a particular embodiment light-emitting diode and filter pairs are provided such that the light source168can be configured to provide light of violet 390 nm, blue 438 nm, cyan 475 nm, teal 512 nm, green 542 nm, yellow 586 nm, and red 631 nm wavelengths. The wavelength or wavelengths provided at any one time are determined by driving only selected ones of LEDs251. Light from bandpass filters253is combined into a single beam by combiner255, and a lens257is provided to adjust beam shape. In an embodiment, a combination white and fluorescent-stimulus light source260(FIG.1F) capable of providing both unpatterned and spatially modulated light at either a selected stimulus wavelength or at broadband white wavelengths is coupled to a lighting port of the microscope to illuminate tissue. In a particular embodiment, light source260has paired LEDs251and filters253adapted to provide selected wavelengths of stimulus light similar to those of the embodiment ofFIG.1D, and a controllable white light source262, such as a supercontinuum laser or xenon incandescent lamp. Light from one or more active light sources, such as a selected LED251and filter253or an active white light source262, is combined by combiner264and provided to a spatial modulator268; in a particular embodiment spatial modulator268is a digital micromirror device (DMD) capable of modulating light under control of a display controller270with any of thousands of spatial illumination patterns, including an unmodulated pattern.
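For the tiled Fabry-Perot photosensor device described at the start of this passage, a raw snapshot frame can be unpacked into a hyperspectral cube by gathering each tile position into its own band image. The 4x4 layout below is an illustrative assumption (matching the 16-photosensor case), not IMEC's actual specification.

```python
import numpy as np

# Minimal sketch (assumed layout): unpack a snapshot mosaic frame whose 4x4
# tiling pattern carries 16 spectral bands into a hyperspectral cube, one band
# per tile position.
def mosaic_to_cube(frame: np.ndarray, tile: int = 4) -> np.ndarray:
    rows, cols = frame.shape
    cube = np.empty((rows // tile, cols // tile, tile * tile))
    for r in range(tile):
        for c in range(tile):
            cube[:, :, r * tile + c] = frame[r::tile, c::tile]
    return cube

cube = mosaic_to_cube(np.random.rand(512, 512))  # -> (128, 128, 16) cube
```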
In other embodiments, spatial modulator268may incorporate a digitally-controlled liquid-crystal display and a display controller270, or may incorporate a slide-changer or film-transport device adapted to interpose selected film frames having spatial modulation patterns on them. Modulated light from spatial modulator268then passes through one or more lenses272, which may include a coherent fiber bundle, for transmitting and focusing the modulated light onto tissue. In an embodiment, white light illuminator166has a high-intensity, broadband, white light source such as a supercontinuum laser236or other lamp arranged to project light onto the mirrors of a digital-micromirror projection device (DMD)238, such as those produced by Texas Instruments for use in digital projectors for computer graphical display and for use in digital projection televisions. Light from DMD238is projected by a lens system240onto tissue152. DMD238is equipped with DMD control electronics242as known in the art of digital projectors, and is coupled to an additional graphical display controller (not shown) of digital image processing system126. The arrangement of laser236, DMD238, lens240, control electronics242, display controller, and digital image processing system is capable of projecting either unpatterned light or a predetermined black-and-white pattern or image of light onto tissue152. System Functions Surgical applications of the system are described with reference to brain surgery; however, the system is applicable to surgery on other organs as well. In a brain surgery situation patients are prepared, the system is operated, and surgery is performed according to the flowchart ofFIG.2. The system ofFIG.1is prepared and calibrated202for proper three-dimensional surface extraction, according to the procedure outlined below.FIG.3shows a cross-sectional illustration of the brain152ofFIG.1, showing skull150and meninges.FIGS.1,2, and3are best viewed together with the following description. The patient is subjected to appropriate diagnostic and pre-operative MRI (Magnetic Resonance Imaging) (pMR) and/or CT (Computed Tomography X-ray) (pCT) scans. These scans provide a preoperative three-dimensional model of tissue of the patient; in a particular embodiment the tissue of the patient includes the patient's brain152(FIG.1andFIG.3). A surgeon performs preoperative planning204, which includes identifying lesion tissue, such as tumor tissue156, as targeted tissue for removal in the preoperative model of the tissue. The preoperative planning may also include identifying other important structures252, such as particular blood vessels, nerve tracts, nearby areas critical for particular functions such as Broca's area254, and other nearby structures that the surgeon desires to preserve during operation. The tumor tissue156targeted for removal, and other important structures252,254that are desired to be preserved, are marked in the preoperative model at their locations as provided in the preoperative scans, indicating their respective locations before surgery begins. The preoperative model established from preoperative scans is detailed and visualizes some brain surface structures, such as blood vessels260and sulci262; sulci (plural of sulcus) are creases or folds at the surface of the brain. The surface of the dura is presumed to be at the surface of the brain as shown in the pMR model and scans. A model of the surface of the brain is extracted from the pMR model and scans. The pMR model is in a patient-centered coordinate system.
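Returning to the DMD projector described above, the following sketch generates the kind of spatially modulated illumination pattern such a device could project: a sinusoidal fringe thresholded to the binary on/off mirror states of a DMD. The function, resolution, and spatial frequency are illustrative assumptions, not parameters from this disclosure.

```python
import numpy as np

# Minimal sketch (illustrative assumptions): a binary DMD frame carrying a
# sinusoidal fringe pattern at a given spatial frequency and phase offset.
def dmd_fringe_pattern(rows, cols, fx_cycles_per_image, phase):
    x = np.arange(cols) / cols
    fringe = 0.5 * (1 + np.cos(2 * np.pi * fx_cycles_per_image * x + phase))
    return (np.tile(fringe, (rows, 1)) > 0.5).astype(np.uint8)  # binary mirror states

# Three phase-shifted patterns, as commonly used in structured illumination:
patterns = [dmd_fringe_pattern(768, 1024, 10, p) for p in (0, 2*np.pi/3, 4*np.pi/3)]
```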
Once consent is obtained, the patient is prepared for surgery, and patient tracking sensors146are attached to the patient's skull. In some embodiments, fiducials are used to provide registration marks in preoperative and intraoperative imaging to ease registration of the pMR coordinate system to intraoperative imaging. The patient tracking sensors are registered to the patient-centered coordinate system of the pMR model. Positions of the patient tracking sensors are determined in the patient-centered coordinate system, and the patient's skull150is opened, exposing the dura mater256. The dura is then opened. The microscope zoom optics160and focus are set to a desired runtime optical setting, and the microscope body102position is adjusted such that it is over the surgical wound and a field of view of the microscope includes brain tissue152over the tumor156. The microscope location and orientation are tracked relative to the patient using tracking sensors142, microscope location sensors144, and patient tracking sensors146to register a focal plane of the microscope to the pMR coordinate system and pMR images. These sensors, and/or fiducials, may also be used to register intraoperative imaging of other modalities, such as X-Ray, CT, or MRI, to the pMR coordinate system. A first pair of stereo images is then taken208. Once taken, this first pair of stereo images is then processed using any features visible on the brain surface as follows: a) Stereo visual surface extraction (FIG.4) is performed of the dural surface in the images to create a brain surface map by
1) Warping302the images to equivalent images as if taken at the reference settings;
2) Identifying304corresponding features in both warped images;
3) Tracing rays from the corresponding features to determine306three-dimensional locations of those features, the 3-dimensional locations forming a point cloud;
4) Constructing308an extracted dural surface map from the point cloud of three-dimensional locations; and
5) Transforming the extracted brain surface map to the patient-centered coordinate system of the pMR model by applying any necessary rotations and translations.
After a hyperspectral image stack is obtained and processed as described below under Hyperspectral Reflectance Imaging Mode by illuminating the brain surface with unpatterned or spatially unmodulated light, and/or a sequence of patterns of spatially structured white light from illuminator166, and photographing the surface with hyperspectral camera128, the image stack is processed by processor180to generate a map of absorption and scattering light transport parameters and chromophores of interest, such as oxygenated and deoxygenated hemoglobin, on or in the brain surface. These map images may be displayed. Processor180provides DMD spatial modulator238of white light illuminator166with a sequence of patterns for spatially modulated light, where the spatially modulated light is projected onto tissue152. A series of images of the brain is obtained214with each pattern of illuminating light at wavelengths of interest, including both stimulus and fluorescence wavelengths for a fluorophore that is expected to be present in tissue152. In particular embodiments, the subject has been administered appropriate medications such that tissue152contains one or more of protoporphyrin IX generated in tissue by metabolizing aminolevulinic acid, fluorescein or a fluorescein-labeled molecule such as an antibody, or indocyanine green or an indocyanine green-labeled molecule such as an antibody.
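Where a series of images is captured under phase-shifted structured illumination as described above, the modulation amplitude at each pixel can be recovered by a standard three-phase demodulation, a known spatial frequency domain imaging technique; the patent does not spell out this formula, so the sketch below is illustrative rather than the system's prescribed processing.

```python
import numpy as np

# Minimal sketch (standard three-phase demodulation from the spatial frequency
# domain imaging literature, assumed here): recover the AC modulation amplitude
# from three images captured under fringes at phases 0, 2*pi/3, 4*pi/3.
def ac_amplitude(i1, i2, i3):
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
```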
In alternative embodiments other fluorophores may be used. These images are processed to estimate optical properties of tissue152in each voxel of tissue for improved quantification of fluorophore concentration and depth localization of fluorophores. As described below under Fluorescent Imaging Mode, the brain is illuminated with one or more stimulus wavelengths for the fluorophores, and images are captured216at one or more emissions wavelengths. In two-dimensional embodiments a two-dimensional map of fluorophore distribution is constructed218, and corrected using the estimated optical properties for quantification of fluorophore. In three-dimensional embodiments, a three-dimensional map of fluorophore distribution in tissue is constructed218, as described below with reference to Fluorescent Depth-Resolved Imaging Mode, or in other embodiments as described below with reference to Fluorescent Quantitative Depth-Resolved Imaging Mode, which includes use of the estimated optical properties for quantification of fluorophore concentrations. In an embodiment, the map describes fluorophore concentrations at depths of up to one centimeter in the brain, or deeper in some other types of tissue such as breast tissue. This map is then combined with the extracted three-dimensional surface model, and topographic or tomographic images of fluorophore concentration are displayed. In a particular embodiment, where two fluorophores are used, difference maps are also prepared indicating differences in concentrations between the two fluorophores, and these maps are displayed. A classifier, which in embodiments is one of a k-nearest-neighbors (kNN) classifier, a neural network classifier, and a support vector machine (SVM) classifier, is then used to classify220(FIG.2) tissue at each voxel, and thereby generate a map of tissue classifications up to one centimeter deep below the brain surface. The classifier operates on chromophore concentrations, including oxygenated and deoxygenated hemoglobin and ratios of oxygenated to deoxygenated hemoglobin, fluorophore concentrations, and optical properties as determined for that voxel. Finally, the images and generated maps are displayed.
Calibration for 3-D Surface Extraction
Calibration of the stereo surface mapping and its operation are as described in patent application “Method and Apparatus for Calibration of Stereo-Optical Three-Dimensional Surface-Mapping System,” number PCT/US13/20352, filed 4 Jan. 2013, and its parent documents, the contents of which are incorporated herein by reference.
Stereovision Calibration and Reconstruction
The surface profile extraction system uses a stereo optical system, such as that illustrated inFIG.1or1A. With reference toFIG.6, the optical system is set402to a reference setting S0 of a set of one or more reference settings. A sequence of optical precalibration phantoms having known surface profiles is positioned404in view of the system, and parameters for surface profile extraction routine182are derived that are sufficient for reconstructing a surface profile from a pair of stereo images taken with the optical system set402to the reference setting. Techniques for stereo image calibration and reconstruction based on a pinhole camera model and radial lens distortion correction are outlined here for completeness, and are used in some embodiments.
A 3D point in world space (X, Y, Z) is transformed into the camera image coordinates (x, y) using a perspective projection matrix:

$$\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} \alpha_x & 0 & C_x & 0 \\ 0 & \alpha_y & C_y & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \times T \times \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \tag{1}$$

where αx and αy incorporate the perspective projection from camera to sensor coordinates and the transformation from sensor to image coordinates, (Cx, Cy) is the image center, and T is a rigid body transformation describing the geometrical relationship of the effective optical centers between the views of the two cameras120,122. A precalibration phantom is prepared having reference marks at known positions in 3D space. A stereo pair of images of the precalibration phantom is taken406; since the precalibration phantom has a known surface profile, this provides a plurality of known points in three dimensions. A total of 11 camera parameters (6 extrinsic: 3 rotation and 3 translation; and 5 intrinsic: focal length f, lens distortion parameter k1, scale factor Sx, and image center (Cx, Cy)) are then determined through precalibration using a least squares fitting approach, and saved for later use as herein described. The intrinsic parameters are thus the focal length f, the lens distortion coefficient k1, the non-square pixel scale factor Sx, and the camera center (Cx, Cy); the extrinsic parameters are the rigid-body rotation R(μx, μy, μz) and the rigid-body translation T(tx, ty, tz). Now that we have a camera model that projects a point in the world to its image coordinates, the next step is to determine (i.e., calibrate) the unknown parameters in the equations presented above: the extrinsic rotation and translation matrices (R, T), and the intrinsic focal length (f), lens distortion coefficient (k1), scale factor (Sx), and image center (Cx, Cy). The 3D precalibration phantoms have easily identified correspondence points or reference marks, where the correspondence points have known height relative to a phantom baseline. Each correspondence point should be identifiable in each of the images of the stereo pair. Stereo image rectification, performed in a method similar to that of Hai Sun, pages 38-47, is employed next to establish epipolar constraints that limit the search for correspondence points along “epipolar lines” (defined as the projection of the optical ray of one camera via the center of the other camera following a pinhole model). In addition, images are rotated so that pairs of epipolar lines are collinear and parallel to image raster lines in order to facilitate stereo matching. In an embodiment, an intensity-based correlation metric and a smoothness constraint are used to find the correspondence points in both images of the pair. Each pair of correspondence points is then transformed into its respective 3D camera space using the intrinsic parameters, and transformed into a common 3D space using the extrinsic parameters. Together with their respective camera centers in the common space, two optical rays are constructed, with their intersection defining the 3D location of each correspondence point pair. Since the 3D locations of the correspondence points are known on the precalibration phantoms, the parameters are fit408such that the extraction to a common 3D space gives results where extracted 3D points of an effective surface profile of the precalibration phantom match heights of the known points on the precalibration phantom. These 3D surface profile extraction parameters are then saved410for later use below.
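For illustration only, the projection of Equation (1) may be sketched in Python with NumPy as follows; the function name, the identity pose, and the numeric parameter values are placeholders of this sketch, not calibrated values of the system described above:

    import numpy as np

    def project(point_xyz, alpha_x, alpha_y, Cx, Cy, T):
        # Project a world point (X, Y, Z) to image coordinates (x, y) per Eq. (1).
        K = np.array([[alpha_x, 0.0, Cx, 0.0],
                      [0.0, alpha_y, Cy, 0.0],
                      [0.0, 0.0, 1.0, 0.0]])
        Pw = np.append(np.asarray(point_xyz, dtype=float), 1.0)  # homogeneous world point
        p = K @ T @ Pw                                           # homogeneous image point
        return p[0] / p[2], p[1] / p[2]

    # Example with an identity rigid-body transform (camera frame = world frame):
    T = np.eye(4)
    x, y = project([10.0, 5.0, 100.0], 1200.0, 1200.0, 320.0, 240.0, T)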
Next, and not disclosed in Hai Sun, a secondary calibration phantom is positioned412in view of the optical system, and a stereo image pair of the secondary calibration phantom as viewed in the reference setting is captured and saved as part of calibration information. In an embodiment, the secondary calibration phantom is a two-dimensional, flat, phantom having marks printed thereon. In an embodiment, the marks printed on the secondary calibration phantom are randomly generated squares of random intensities. In an alternative embodiment for use with cameras in aircraft or drones, the secondary calibration phantom is a particular, preselected, field or town. When it is desired to use the system to extract a surface profile of tissue152, the optical system is set to an arbitrary runtime setting, typically having at least some optical system parameters, such as optical magnification, differing from those of the reference setting. The secondary calibration phantom may be used to calibrate warping parameters for the runtime setting, or may be used to calibrate warping parameters for secondary calibration points stored in a library or table as described below, with a calibration for the arbitrary runtime setting determined by interpolation into the table and used for 3D surface extraction. Calibration of settings performed using the secondary calibration phantom, whether used for a runtime setting or for determining secondary calibration points, is described herein as secondary calibration.
Secondary Calibration
With the optical system set452to the arbitrary desired setting, the secondary calibration phantom is positioned in view of the optical system in a position approximating that where tissue152will be present during surgery, and a stereo image pair of the secondary calibration phantom is captured454by cameras120,122through the optical system with the optical system configured at secondary calibration setting S. Next, deformation field parameters DFP for image warping routine183are derived306such that application of image warping routine183to the stereo image pair of the phantom with the optical system at desired setting S provides a deformed stereo image pair that closely matches the stereo image pair of the secondary phantom as taken with the optical system in the reference setting S0. The method for 3D surface extraction herein described warps stereo images captured using a desired setting S, using the deformation field obtained from images of a phantom at desired setting S and reference setting S0, into warped images corresponding to images taken at the reference setting S0. Because the reference setting S0 has been calibrated for surface extraction, the warped stereo images can then be used for surface reconstruction following the same calibration as determined for reference setting S0. The key to the technique is to find, for an image acquired at an arbitrary setting S, the equivalent image at a specific setting S0 that has been pre-calibrated.
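As an illustrative sketch of this warping step, assuming the deformation field parameters take the concrete form of per-pixel row and column displacements (one possible parameterization, not necessarily the one used above), SciPy's map_coordinates can perform the resampling:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_to_reference(image, dfp_rows, dfp_cols):
        # Resample a runtime image at displaced coordinates so the result
        # approximates the same scene as imaged at the reference setting S0.
        rows, cols = np.indices(image.shape)
        coords = np.stack([rows + dfp_rows, cols + dfp_cols])
        return map_coordinates(image, coords, order=1, mode="nearest")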
Image Deformation due to the Change in Image Acquisition Settings and Target Surface Orientation
To determine image deformation due to the change in image acquisition settings (i.e., magnification m and focal length f), in an experimental embodiment a series of phantom images was acquired using a planar secondary calibration phantom with randomly generated squares of random grayscale intensity by successively changing one parameter from its reference value while maintaining the other optical system parameters at their corresponding reference values; in other embodiments other secondary calibration phantoms may be used. In an embodiment, the reference values of image magnification (m0) and focal length (f0) correspond to the lowest magnification and the shortest focal length that the microscope offers, respectively. Because image magnification alters the acquired image independently of the change in focal length (f) or viewing angle (θ) (which was verified with the deformation fields generated by changing m at different f and θ), only one set of images is necessary to determine an image deformation field due to the change in m (acquired with f0). With m0, image deformation due to the change in f was also determined by successively increasing f from f0. For these phantom images, the secondary calibration phantom was perpendicular to an optical axis centered between the effective optical axes of the two cameras. With reference toFIG.8, in order to determine image deformation due to the change in θ, the pinhole camera model was employed. For arbitrary material points q0 and qi initially on the secondary calibration phantom positioned at θ0, their corresponding image pixels, p0 and pi on the imaging plane, are collinear with the pinhole camera lens. For a given material point q0, its new pixel location when the target surface is rotated by θ is given by the pixel location produced by the material point qi on the original target surface (i.e., at θ0) that intersects with the line segment generated by the pinhole lens and q0, as illustrated inFIG.8. Image deformation due to the change in θ is then produced by subtracting the two pixel locations, pi and p0. Based on the above description of generating image deformation fields due to the change in m, f, and θ, the following pseudo procedure outlines the sequence of phantom image acquisitions:
Set f=f0 and θ=θ0, successively increase m from m0, and acquire images for each setting of m;
Set m=m0 and θ=θ0, successively increase f from f0, and acquire images for each setting of f;
Set m=m0 and f=f0, successively increase θ from θ0, and acquire images for each setting of θ; verify that the predicted image deformation field based on the pinhole camera model matches the measurements.
Image deformation due to the change in m and f is measured using the phantom images. By contrast, image deformation due to the change in θ is computed based on the pinhole camera model, and is verified using the phantom images. Once appropriate warping parameters, such as a warping deformation field, are determined, the microscope is positioned460over tissue152instead of the phantom, and stereo images of the tissue are obtained462from the cameras120,122.
Image Warping to Reference Setting
Next, the stereo images of the tissue are warped464by optical warping routine183into equivalent images as if they had been taken at the reference settings.
A pseudo algorithm to warp images obtained at an arbitrary image acquisition setting (m, f) and surface orientation relative to the optical axis (θ) is:
Use the deformation field due to the change in m to generate an image at a setting of (m0, f, θ);
Use the resulting image and the analytical solution of deformation due to the change in θ to produce an image at a setting of (m0, f, θ0);
Use the resulting image and the deformation field due to the change in f to produce a warped image at the reference setting, (m0, f0, θ0).
In an alternative embodiment, a single deformation field, or set of warping parameters, for the entire transformation from the arbitrary setting (m, f, θ) into a warped image corresponding to an image as if it had been taken at the reference setting (m0, f0, θ0) is used in a single warping operation. Next, the stereo precalibration parameters obtained from precalibration phantoms with the optical system at the reference setting (m0, f0, θ0) are used to reconstruct466a surface profile of the tissue in 3D. The reconstructed surface profile may then be used with a computer model of deformation186of the tissue, and a pre-surgery location of a tumor or lesion as determined in three dimensions from pre-surgery images obtained by conventional medical imaging devices such as CT scanners and MRI machines, to locate468the tumor156as displaced during surgery in a manner similar to that described by Hai Sun. Alternatively, or in addition to displaced tumor locations, the computer model of deformation of the tissue may be used to determine intra-surgery locations of other anatomic features of the tissue so that these features may be preserved. Finally, image processor180uses a display system190to display the surface profile and tumor locations, or locations of other anatomic features, so that a surgeon may remove the tumor or lesion while preserving other critical anatomic features of the tissue. In an embodiment, an updated MRI (uMR) image stack is prepared470by warping or annotating the preoperative MRI to show the displaced locations of tumor and other structures. The determined displaced locations of tumor and other structures are displayed472to the surgeon, who may use this displayed information474to locate the tumor or additional tumor material for removal, or to determine whether the tumor has been successfully removed. Similarly, in alternate embodiments fluorescent images, differenced fluorescent images, depth-resolved fluorescent images, and quantitative depth-resolved fluorescent images may be displayed to the surgeon with and without uMR information. If the tumor has not all been removed, more tumor may be removed and the process repeated476, beginning with determining warping parameters for a current optical setting456, in most embodiments by interpolating in table458, and capturing a new stereo image pair462of the tissue.
Library-Based Calibrations
It can be inconvenient to require a surgeon to position a secondary calibration phantom in the field of view of a surgical microscope whenever the surgeon changes focal length, magnification, or other optical parameters of the system.FIG.9illustrates a family of reference settings (including each reference setting S0) or primary calibration points352,354, together with secondary calibration points356,358,360,362,364,366,368, which are stored in a warp deformation field parameter (DFP(n)) and 3D reconstruction parameter multidimensional table or library372(FIG.1). An encoder374is provided for the microscope zoom and focus controls.
Table or library372is indexed by the zoom and focus control settings, which correspond to magnification and focal length. For simplicity, only magnification and focal length are illustrated inFIG.9, in a two-dimensional diagram representative of a two-dimensional table; in an actual system additional optical parameters, such as microscope orientation angles θ, are provided as additional dimensions to the table. Each set of deformation field parameters is a constant representing no deformation for the primary calibration point or points S0, or is derived by adjusting optical parameters of the system, such as the image magnification (m) and focal length (f), to correspond to the predetermined secondary calibration point, positioning the secondary calibration phantom, capturing an image pair at this calibration point, and fitting deformation parameters such that a warped image pair produced from the image pair closely resembles saved stereo images of the phantom captured at a reference setting S0, such as primary calibration point352. In this table-based embodiment, when surface profile extraction is desired at an arbitrary runtime optical setting, such as setting370, during surgery, the runtime optical settings are determined by reading the magnification m and focal length f using the encoder374on the zoom and focus controls. Angles are determined by reading microscope angle information from tracker142. A deformation field parameter set for the runtime optical setting is then determined by interpolation from nearby entries in the table or library372. A runtime image pair of tissue is then captured. The runtime optical warping parameters are then used to warp the runtime image pair to an image pair that corresponds to the specific reference setting S0, 352 that was used for secondary calibration of the nearby entries in the table as heretofore described. 3D reconstruction is then performed using 3D reconstruction parameters determined for that specific reference setting. The use of a reference setting S0 at the extreme low-magnification end of the optical system zoom range, and at the nearest focal length of the optical system focus range, has the advantage that it can be reproducibly set, as there is a mechanical stop at these points. Further, when an image is warped to correspond to a lower magnification setting, 3D reconstruction may be more accurately performed than when it is warped to a higher magnification, where portions of the warped image exceed the boundaries of images used to calibrate the 3D reconstruction parameters. In an alternative embodiment, in order to provide more accurate 3D reconstruction at higher magnification and longer focal length settings, additional reference image acquisition settings at the midrange of optical system settings are used in addition to the extreme settings at the lowest magnification and shortest focal length. In this embodiment, additional reference settings354,355are provided at a midrange of magnification. Further, in a particular embodiment, additional reference settings355,357are provided at a reproducible, but greater than minimum, set-point of focal length. 3D reconstruction parameters are determined by primary calibration, similarly to the process heretofore described for determination of 3D reconstruction parameters for the reference setting S0, for each of these additional reference settings354,355,357.
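A minimal sketch of the interpolation step described above, reducing the library to a regular grid over (m, f) with one deformation-field coefficient per entry; a real library stores full parameter sets and may add orientation angles as further dimensions, and all values here are illustrative:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    m_grid = np.array([1.0, 1.5, 2.0, 3.0])    # magnifications of stored calibration points
    f_grid = np.array([200.0, 300.0, 400.0])   # focal lengths of stored calibration points
    dfp_table = np.zeros((4, 3))               # one warp coefficient per grid point (illustrative)

    interp = RegularGridInterpolator((m_grid, f_grid), dfp_table)
    dfp_runtime = interp([[1.8, 330.0]])[0]    # encoder-read runtime (m, f) -> interpolated DFP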
It is desirable that each reference setting S0, 352, 354, 355, 357 be a setting to which the optical system can be reproducibly returned. Certain microscopes are provided with motorized focus and zoom controls, together with encoders374. These microscopes may be provided with a preset or bookmark memory permitting them to be returned to a predetermined preset of focus and zoom; these microscopes are particularly adaptable for operation with more than one reference setting. Other microscopes may be equipped with a mechanical detent, such as a detent at a midpoint setting of magnification (or zoom). In embodiments using these optical systems, each reference setting S0, 352, 354, 355 is a setting that is bookmarked or at a mechanical detent. In a multiple-reference-setting embodiment, the plane of focal length and magnification, or, in an embodiment having a single angle encoded, a 3-space, or, in an embodiment having two angles encoded, a 4-space, is divided into quadrants, such as quadrants374,376,378, cubes, or hypercubes (hereinafter quadrants), respectively. In a multiple-reference-setting embodiment, secondary calibration points, such as calibration points364,366, and368, are determined at multiple optical system settings in each quadrant, according to the procedure for secondary calibration described above, where each secondary calibration point provides distortion field parameters DFPs for warping an image taken at the calibration point to the primary calibration point of the quadrant within which the secondary calibration point lies. For example, in the illustration ofFIG.9, top right quadrant secondary calibration points366provide DFPs for warping images to correspond to images taken at the top right quadrant primary calibration point or reference setting355, while bottom left quadrant secondary calibration points356,358,360provide DFPs for warping images to correspond to images taken at the bottom left quadrant primary calibration point or reference setting352. In the multiple-reference-setting embodiment, when a surgeon selects a runtime setting, such as setting370,380, the processor124uses the encoders143to determine the runtime setting. The processor180executes a selection routine to determine the quadrant in which the runtime setting occurs by comparing the runtime setting with settings of calibration points in the warp and 3D parameter table or library372. Typically, the quadrant is chosen to be that having a reference setting, such as reference setting352,355, nearest in focal length to that of the runtime setting, and the nearest magnification setting less than the magnification of the runtime setting. A runtime distortion field parameter (DFP(run)) is then determined by interpolation, as heretofore described, between nearby secondary calibration points recorded in library372. As previously described, a runtime stereo image is then captured, and warped to correspond to images captured at the primary calibration point or reference setting of that quadrant, such as setting352for the lower left quadrant374or setting355for runtime settings in the top right quadrant378. 3D extraction is then performed on the warped image, using 3D extraction parameters recorded in library372and associated with the primary calibration point or reference setting352,355associated with that quadrant.
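A sketch of the selection routine's quadrant logic as described above: among reference settings with magnification not exceeding the runtime magnification, pick the one nearest in focal length. The tuple layout and setting names are illustrative assumptions, not the actual library format:

    def select_reference(runtime_m, runtime_f, references):
        # references: list of (magnification, focal_length, reference_id) tuples.
        candidates = [r for r in references if r[0] <= runtime_m]
        return min(candidates, key=lambda r: abs(r[1] - runtime_f))

    refs = [(1.0, 200.0, "setting_352"), (2.0, 200.0, "setting_354"),
            (2.0, 400.0, "setting_355")]
    ref_setting = select_reference(2.4, 360.0, refs)   # -> the "setting_355" quadrant reference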
Determining 3D Deformation Field
In an alternative embodiment, instead of determining specific correspondence points, determining 3D coordinates of those correspondence points, and deriving a 3D surface map from a cloud of such points, a 3D image warping deformation field is determined that maps a first image, such as a left image, of each stereo pair into an image that corresponds to the second image, such as a right image, of the stereo pair. A 3-D surface map is then determined from that 3D image warping deformation field.
Image Reconstruction From Warping Field
Stereovision reconstruction can be expressed by the following equation to determine the 3D spatial coordinate, P, for a given sampling point in the rectified left image, p:

P=G(p,F(p))=G(p,p+u(p)),    (1A)

where F(p) is a functional form describing the image coordinate of the correspondence point of p in the rectified right image, and is obtained when the horizontal disparity, u(p), is available, and G is the geometrical operation (including transformation and triangulation) established from calibration. Therefore, reconstructing the 3D surface in space is reduced to establishing a disparity map between the two rectified images for a given set of calibration parameters. The quality (accuracy and density) and the computational efficiency of the disparity map determine overall performance in stereovision reconstruction. For purposes of this discussion, we refer to an unwarped left image and warp that image to correspond to a right image; however, it is anticipated that left and right may be reversed in alternative embodiments. Establishing the disparity map between the rectified left (“undeformed”) and right (“deformed”) image pair is analogous to determining the motion field between the two images.
Determining a Vertically-Unconstrained 3D Warping Deformation Field
It is known that a particular point P(x, y, z) on a surface should appear along the same horizontal epipolar line e in each image of a stereo pair, although its location along that line will differ with the angle between the images and 3D height. In an embodiment, a 3D warping deformation field (3D-DFP) is determined by imposing a vertical, or epipolar, constraint while fitting deformation field parameters to the images. In a novel unconstrained embodiment, no such vertical constraint is imposed. In the unconstrained embodiment, using a variational model and assuming the image intensity of a material point, (x, y), or its corresponding pixel does not change, a gray value constancy constraint

I(p+w)=I(p),    (2)

is assumed, in which p=(x, y) and the underlying flow field, w(p), is given by w(p)=(u(p), v(p)), where u(p) and v(p) are the horizontal and vertical components of the flow field, respectively. Global deviations from the gray value constancy assumption are measured by an energy term

$$E_{Data}(u,v)=\int \psi\left(\left|I(p+w)-I(p)\right|^2\right)dp, \tag{3}$$

where a robust function, ψ(x)=√(x²+ε²), was used to enable an L1 minimization in a particular study (ε=0.001). The gray value constancy constraint only applies locally and does not consider any interaction between neighboring pixels. Because the flow field in a natural scene is typically smooth, an additional piecewise smoothness constraint can be applied to the spatial domain, leading to the energy term

$$E_{Smooth}(u,v)=\int \phi\left(\left|\nabla u\right|^2+\left|\nabla v\right|^2\right)dp, \tag{4}$$

where ϕ is a robust function chosen to be identical to ψ, and ∇ is the gradient operator, with |∇u|² = u_x² + u_y² (u_x = ∂u/∂x, u_y = ∂u/∂y), and analogously for v.
Combining the gray value constancy and piecewise smoothness constraints leads to an objective function in the continuous spatial domain given by

$$E(u,v)=E_{Data}+\alpha E_{Smooth}, \tag{5}$$

where α (α>0; empirically chosen as 0.02 in a particular feasibility study) is a regularization parameter. Computing the optical flow is then transformed into an optimization problem to determine the spatially continuous flow field (defined by u and v) that minimizes the total energy, E. In this study, an iterative reweighted least squares algorithm, and a multi-scale approach starting with a coarse, smoothed image set, were used to ensure global minimization.
Disparity Estimation Based on Optical Flow
In a particular flow-based stereo surface reconstruction study performed on intraoperative stereo pairs taken during surgical procedures, the rectified images were down-sampled to expedite processing, with sufficient resolution retained to provide adequate 3D modeling. The full-field horizontal displacements from two-frame optical flow on the two (down-sampled) rectified images served as the disparity map, u(p), from which a texture-encoded 3D stereo surface is readily reconstructed from the geometrical operations defined above. Although the flow field is spatially smooth due to the smoothness constraint applied to the optimization, spurious disparities can still occur in regions of insufficient features and/or with occluded pixels, similarly to SSD-based correspondence matching. Instead of correcting for these spurious disparities in the solution field by applying appropriate constraints in optimization, with additional burden in algorithmic implementation and increase in computational cost, we detect regions of spurious disparities using values of the vertical flow field, v(p). This strategy is possible because ground-truth values of zero for v(p) are known a priori as a direct result of the epipolar constraint, where correspondence point pairs are pre-aligned on the same horizontal lines in rectified images. Therefore, pixels with large absolute values of vertical discrepancy v(p) (such as pixels displaced above or below a certain threshold) that violate the epipolar constraint also indicate likely spurious horizontal disparities in the flow field, u(p). In some embodiments these pixels are simply excluded from stereo surface reconstruction. In an alternative embodiment, the sampling pixels are empirically filtered into regions of high, mid, or low confidence levels based on the absolute vertical disparities, abs(v), according to whether they are less than a first threshold, between the first threshold and a second threshold, or above the second threshold in pixels, respectively, where these particular threshold values are empirically chosen. Horizontal disparity values for pixels with a high or low confidence level are either retained or removed, respectively, while those in between are interpolated based on those of a high confidence level. Such a two-tier threshold interpolation/exclusion scheme is effective in maximizing regions of sufficient disparity accuracy while excluding from surface reconstruction those with insufficient features, such as those due to specular artifacts or occluded pixels. An experimental embodiment using 3D reconstruction based upon optical flow, using a vertically unconstrained image deformation fitting process and using vertical disparity for spurious-disparity detection, provided superior surface reconstruction, and may permit more accurate determination of intraoperative tumor locations.
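A sketch evaluating the discrete counterpart of Equations (2)-(5) for a candidate flow field; the minimization itself (e.g., by iteratively reweighted least squares over a multi-scale pyramid) is omitted, and the robust function and α follow the values quoted above:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def psi(x_sq, eps=0.001):
        # Robust penalty psi(x) = sqrt(x^2 + eps^2), applied to squared residuals.
        return np.sqrt(x_sq + eps**2)

    def flow_energy(I_ref, I_other, u, v, alpha=0.02):
        rows, cols = np.indices(I_ref.shape)
        # Gray value constancy (Eq. 2-3): compare I_other sampled at p + w to I_ref at p.
        warped = map_coordinates(I_other, [rows + v, cols + u], order=1, mode="nearest")
        e_data = psi((warped - I_ref) ** 2).sum()
        # Piecewise smoothness of the flow field (Eq. 4).
        uy, ux = np.gradient(u)
        vy, vx = np.gradient(v)
        e_smooth = psi(ux**2 + uy**2 + vx**2 + vy**2).sum()
        return e_data + alpha * e_smooth   # total energy of Eq. 5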
Interpolation, Warp to Reference, Warp to 3D, Model Movement
Putting together the heretofore described procedures, as illustrated inFIG.11, a calibration library or table372is prepared602by performing primary calibration using the 3D calibration phantoms at one or more reference settings, and 3D reconstruction parameters are stored for each setting. Secondary calibration points are then added into the table372by imaging a secondary calibration phantom at each reference setting, setting the optical system to correspond to each secondary calibration point, re-imaging the secondary calibration phantom, and determining warp field parameters that map the re-image of the secondary calibration phantom to match the image taken at a reference setting appropriate for use with that secondary calibration point; these warp field parameters are stored in the table. The optical system is then set to a desired setting604, and warp field parameters suitable for mapping images taken at the desired setting into warped images corresponding to images taken at a reference setting are determined606by reading warp parameters for secondary calibration points near the desired setting and interpolating to give interpolated warp parameters. A stereo image pair is obtained608from the cameras, and the interpolated warp parameters are used to warp610that image pair to a warped image pair that corresponds to an image pair taken at the reference setting used for calibrating those secondary calibration points. A vertically-unconstrained warp-field fitting operation is then performed to determine6123D warp field parameters for warping a first image of the warped stereo image pair into a second image of the warped stereo image pair; where vertical deformation in the warp field exceeds a first limit, the warp field is adjusted, and where vertical deformation exceeds a second limit, associated image pixels are excluded from consideration in the warp-field fitting operation in a further iteration of fitting the 3D warp field parameters to the warped image pair. The fitted 3D warp field parameters are used to reconstruct614a surface profile of the tissue. This surface profile is in turn used to constrain a mechanical model of the tissue, the model is used to determine shift of structures in the tissue, such as a shift of a tumor616, and an intraoperative location of those structures and the tumor. The intraoperative structure locations and tumor location are then displayed618such that a surgeon can remove the tumor. The heretofore described procedure may be used to determine intraoperative positions of a lesion or other structures in tissue of the mammalian brain, including the human brain, or may be adapted to determining intraoperative positions in other soft-tissue organs.
Operation in Hyperspectral Reflectance Imaging Mode
The system herein described may be operated to produce hyperspectral reflectance images as follows. In embodiments having LED-based or incandescent white light illuminators166, the illuminators are turned on. In embodiments having illuminators as described with reference toFIG.1E, laser236is turned on, and processor180puts a blank white display on DMD238. In embodiments having a multiple-filter array hyperspectral camera128as discussed with reference toFIG.1B, a hyperspectral reflectance image stack is then captured directly.
Each pixel of each wavelength image of the stack corresponds to light imaged by a photosensor of the array imager covered by a filter having bandpass at that wavelength, such that for every image stack a number of images are collected, with each image corresponding to a wavelength in the range of the electromagnetic spectrum of interest. In this way a full spectrum can be reconstructed at each pixel of the 3D image stack. In embodiments having a single-filter, single-broad-band-camera132hyperspectral camera as discussed with reference toFIG.1, or a dual-imager hyperspectral camera as discussed with reference toFIG.1A, filters130,135,139are set to each wavelength for which reflectance imaging is desired. Then, using the image sensor130,138,137appropriate for each determined wavelength, an image of the hyperspectral reflectance image stack is captured at each of those wavelengths. In embodiments having a multiple-filter array imager, reflectance images are captured at one or more wavelengths corresponding to illumination wavelengths; if white light is used for illumination, a full hyperspectral image stack is captured in a single snapshot operation. In an embodiment, separate images of the hyperspectral image stack are captured at each of several wavelengths of interest, including wavelengths corresponding to peak absorption wavelengths of oxyhemoglobin and deoxyhemoglobin; these images may be displayed to a user by processor180on monitor192. A ratio image is also determined by ratioing intensity of corresponding pixels of the oxyhemoglobin and deoxyhemoglobin images to produce an image of hemoglobin saturation, and an image of total hemoglobin concentration may also be generated. Similar images at wavelengths suitable for use with other chromophores may also be used. Images may also be generated based on the scattering properties of the tissues derived from the hyperspectral reflectance images. The hyperspectral reflectance imaging, and spatially modulated (SM) hyperspectral reflectance imaging, are therefore performed in image processing routines executing on processor180that retrieve the optical properties separately for each emissions wavelength based on a look-up table derived from Monte Carlo simulations of the radiation transport, or on a diffusion theory approximation either modeled with numerical methods or estimated from analytical forms derived under plane wave assumptions. The recovered optical properties at multiple wavelengths then allow recovery of such medically useful markers as tissue oxygenation and other endogenous properties of the tissue.
Operation in Fluorescent Imaging Mode
The system herein described may be operated to produce fluorescent images as follows. Fluorescent stimulus light source168is set to a preferred stimulus wavelength of a first fluorophore that is expected to be present in tissue152. In embodiments having a multiple-filter array hyperspectral camera128as discussed with reference toFIG.1B, a fluorescence image is then captured directly using photosensors of each tiling pattern having filters with bandpass at an expected fluorescence emission wavelength of the fluorophore. In embodiments having a single-filter, single-broad-band-camera132hyperspectral camera as discussed with reference toFIG.1, or a dual-imager hyperspectral camera as discussed with reference toFIG.1A, filters130,135,139are set to a first fluorescent emissions wavelength appropriate to the fluorophore.
Then, using the image sensor130,138,137, an image of fluorescent emitted light is captured at that wavelength. These images may be displayed to a user by processor180on monitor192. Additional fluorescent images may be captured at a second, third, or fourth emissions wavelength appropriate to the fluorophore or fluorophores, the studies to be performed, and the preferences of the surgeon. In some embodiments, including many embodiments making use of multiple fluorophores, fluorescent stimulus light source168is set to a second stimulus wavelength of a fluorophore that is expected to be present in tissue152; in some embodiments this fluorophore is the first fluorophore, and in other embodiments it is a second fluorophore. A second fluorescence image, or set of fluorescence images, is then captured directly at one or more expected fluorescence emission wavelengths of the fluorophore. These images may also be displayed to a user by processor180on monitor192. In alternative embodiments, more than two stimulus wavelengths, and/or more than two fluorescent emissions wavelengths, may be used for fluorescence imaging. The wavelengths selected for stimulus light and for capturing fluorescent emissions depend on the expected fluorophore; for example, protoporphyrin IX has an absorption peak at 405 nanometers that may be used for stimulus light, and emissions wavelengths of 635 nanometers with a shoulder at 710-720 nanometers that may be used for fluorescent image capture. Similarly, fluorescein may be stimulated with stimulus light near 500 nanometers while emitting near 530 nanometers, a wavelength suitable for fluorescent emissions image capture. Also, indocyanine green (ICG) may be stimulated with light between 680 and 700 nanometers while emitting near 780 nanometers, a wavelength that may be used for fluorescent emissions image capture. The system is adaptable for use with other fluorophores by selecting appropriate stimulus and imaging wavelengths. Further, memory178has deconvolution or unmixing routines that, when executed, determine contributions to fluorescent hyperspectral captured image stacks from two, or in some embodiments more than two, separate fluorophores having different emissions wavelengths by processing a hyperspectral fluorescent emissions stack. A hyperspectral image stack essentially provides a spectrum of emissions as received at each pixel. Our work has shown that deconvolving contributions from two, or in some cases more than two, fluorophores is often possible using a single emission spectrum captured under a single stimulus wavelength of light, given base spectra of each fluorophore present and of tissue base autofluorescence. The present embodiment permits capturing separate hyperspectral image stacks under each of several stimulus light wavelengths, and this additional information is believed useful in simplifying deconvolution of contributions from some fluorophores and in extending the number of fluorophores that may be simultaneously quantified in Fluorescent Imaging Mode (FI), quantified Fluorescent Imaging Mode (qFI), Depth-Resolved Fluorescent Imaging Mode (dFI), and Quantified Depth-Resolved Fluorescent Imaging Mode (qdFI). Execution of the deconvolution or unmixing routines therefore generates independent fluorophore concentration maps for each fluorophore.
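As an illustrative sketch of such unmixing, a per-pixel non-negative least-squares fit of the measured emission spectrum to base spectra of the fluorophores plus autofluorescence; the Gaussian toy spectra here stand in for measured base spectra and are assumptions of this sketch:

    import numpy as np
    from scipy.optimize import nnls

    wavelengths = np.linspace(600.0, 750.0, 31)
    # Toy base spectra (columns): fluorophore A, fluorophore B, autofluorescence.
    basis = np.column_stack([np.exp(-((wavelengths - c) / 20.0) ** 2)
                             for c in (635.0, 700.0, 660.0)])

    def unmix(pixel_spectrum, basis):
        # Non-negative coefficients approximate relative fluorophore contributions.
        coeffs, _residual = nnls(basis, pixel_spectrum)
        return coeffs

    contributions = unmix(0.7 * basis[:, 0] + 0.2 * basis[:, 2], basis)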
Operation in Spatial-Frequency-Modulated Reflectance Mode
Embodiments of the system having illuminators as described with reference toFIG.1Emay also be operated in a spatial-frequency-modulated reflectance-imaging mode to determine optical properties, including absorption and scattering properties, at each pixel of images of tissue152; or, in an alternative embodiment for improved quantification and depth resolution, at each voxel of a three-dimensional model of tissue152. In this mode, laser236(or other broadband lamp) is turned on, and processor180puts a sequence of patterns on DMD238. In embodiments having a multiple-filter array hyperspectral camera128as discussed with reference toFIG.1B, a hyperspectral reflectance image stack is then captured directly, with each pixel of each wavelength image of the stack corresponding to light imaged by a photosensor of the array imager covered by a filter having passband at that wavelength. In embodiments having a single-filter, single-broad-band-camera132hyperspectral camera as discussed with reference toFIG.1, or a dual-imager hyperspectral camera as discussed with reference toFIG.1A, filters130,135,139are set to each wavelength for which reflectance imaging is desired. Then, using the image sensor130,138,137appropriate for each determined wavelength, an image of the hyperspectral reflectance image stack is captured at that wavelength. In an embodiment, separate images of the hyperspectral image stack are captured at wavelengths including peak absorption wavelengths of oxyhemoglobin and deoxyhemoglobin; these images may be displayed to a user by processor180on monitor192. A ratio image is also determined by ratioing intensity of corresponding pixels of the oxyhemoglobin and deoxyhemoglobin images to produce an image of hemoglobin saturation. Similar images at wavelengths suitable for use with other chromophores may also be used. The spatially modulated mode is also used at fluorescent stimulus wavelengths and fluorescent emissions wavelengths to determine reflectance, absorbance, and scattering parameters for use in the modes described below, including qFI, dFI, and qdFI modes. In an embodiment, spatially modulated mode is also used to recover the tissue surface profile in real time using phase-shifting profilometry (2). This involves retrieving the phase shift, for every point in the reference plane, between a projected spatially modulated light pattern and a camera-acquired image of the light pattern deformed by the surface. The phase shift is then used to calculate an absolute height for all points on the surface in the reference plane. The first step is to generate the light patterns. We require three different patterns, each with a different phase. The reference patterns are given by:

$$s_1(x)=a_0+a_1\cos(2\pi f_0 x),\quad s_2(x)=a_0+a_1\cos\!\left(2\pi f_0 x+\tfrac{2\pi}{3}\right),\quad s_3(x)=a_0+a_1\cos\!\left(2\pi f_0 x+\tfrac{4\pi}{3}\right) \tag{Eq. 2}$$

Here f0 is the spatial frequency of the modulation, a0 is the offset, and a1 is the amplitude intensity. Since we illuminate with projected 8-bit grayscale images, we use a0=a1=255/2. We acquire one deformed light pattern for each projected pattern, yielding three deformed light patterns:

$$d_1(x,y)=a_0+a_1\cos(2\pi f_0 x+\phi(x,y)),\quad d_2(x,y)=a_0+a_1\cos\!\left(2\pi f_0 x+\tfrac{2\pi}{3}+\phi(x,y)\right),\quad d_3(x,y)=a_0+a_1\cos\!\left(2\pi f_0 x+\tfrac{4\pi}{3}+\phi(x,y)\right) \tag{Eq. 3}$$

Here ϕ(x,y) is the phase shift for all points (x,y) in the reference plane.
Two intermediary variables are then calculated from the six light patterns:

$$\bar{S}=\frac{-\sqrt{3}\,(s_2-s_3)}{2s_1-s_2-s_3},\qquad \bar{D}=\frac{-\sqrt{3}\,(d_2-d_3)}{2d_1-d_2-d_3} \tag{Eq. 4}$$

The phase shift is then given by:

$$\phi(x,y)=\mathrm{unwrap}(\arctan(\bar{D}(x,y)))-\mathrm{unwrap}(\arctan(\bar{S}(x,y))) \tag{Eq. 5}$$

where unwrap is a 2D phase unwrapper needed to correct the 2π shifts caused by the arctan discontinuity. Finally, the absolute height at each point is calculated by:

$$h(x,y)=\frac{l_0\,\phi(x,y)}{\phi(x,y)-2\pi f_0 d_0} \tag{Eq. 6}$$

(A code sketch of this height recovery is given at the end of this passage.)
Operation in Wide Field Quantitative Fluorescent-Imaging (qFI) Mode
FIG.12illustrates readings of fluorophore intensity, as observed in Fluorescent Imaging Mode, of fluorophores at a constant concentration in phantoms having a variety of optical properties in left column1002. We apply spectral (based on spectrally-resolved detection) and/or spatial (based on spatially modulated illumination) constraints to the raw fluorescence data in order to obtain the required quantitative information by accounting for the effects of light scattering and absorption on the fluorescence images through correction algorithms based on light transport models and/or data normalization schemes. These corrections operate according to the flowchart ofFIG.20A, and begin by determining2332absorbance and reflectance parameters at fluorescence stimulus and emissions wavelengths for each pixel, or for each of many small multipixel regions, of the images. Fluorescence stimulus wavelength light is applied, and fluorescence emission images are then acquired2334. These parameters are then used to correct2336the fluorescence images for stimulus light reflected or absorbed before reaching the fluorophore, and for fluorescent emissions light absorbed after emission and before being released from the tissue. We briefly outline three technical alternatives to realizing wide-field qFI, each of which may be used in one or another embodiment: (a) Technical realization of wide-field qFI: We use a wide-field qFI method which achieves a minimum sensitivity to CPpIX of 50 ng/ml with an error of no more than 20% over a field-of-view (FOV) of at least 4 cm2 in an acquisition time of less than 5 seconds. Wide-field qFI is technically challenging because corrections for light attenuation must consider contributions from surrounding tissues at every point in the FOV. In addition, tissue curvature and ambient lighting can compromise quantitative imaging, and degradation from these effects must be minimized. Hence, we developed three approaches to find the optimal method which meets our specifications, where each presents tradeoffs in performance fidelity versus implementation complexity. The first two are direct extensions of our point-probe technique, in which attenuation correction is achieved through measurement of the tissue's diffuse reflectance ('spectrally-constrained'), and the two methods differ in whether the full spectrum or dual wavelength approximations are used. The third method ('spatial light modulation') illuminates the surgical surface with specific and varying spatial patterns of light which allow separation of the absorption and scattering contributions in tissue, as described in the section "Operation in Spatial-Frequency-Modulated Reflectance Mode" above, and these absorption and scattering parameters are then used to correct the wide-field fluorescence image. Estimates of surface fluorophore concentrations, as corrected with the tissue optical properties, are illustrated in right column1004ofFIG.12.
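Returning to the phase-shifting profilometry of Equations 2-6 above, a compact sketch of the height recovery; the 2D unwrapper from scikit-image and the geometry constants l0 and d0 (system-specific) are assumptions of this sketch:

    import numpy as np
    from skimage.restoration import unwrap_phase

    def height_map(s1, s2, s3, d1, d2, d3, f0, l0, d0):
        # Intermediary ratios of Eq. 4 for reference and deformed patterns.
        S = -np.sqrt(3.0) * (s2 - s3) / (2.0 * s1 - s2 - s3)
        D = -np.sqrt(3.0) * (d2 - d3) / (2.0 * d1 - d2 - d3)
        # Phase shift of Eq. 5, with 2D unwrapping to remove arctan discontinuities.
        phi = unwrap_phase(np.arctan(D)) - unwrap_phase(np.arctan(S))
        # Absolute height of Eq. 6.
        return l0 * phi / (phi - 2.0 * np.pi * f0 * d0)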
(1) Spectrally-constrained qFI with full reflectance spectrum: We used a full-spectrum weighted basis solution to estimate tissue optical properties that is likely to be effective in single organ systems such as the brain, where tissue optical properties are relatively homogeneous. Here, ground truth data (i.e., basis function responses) relating the measured wavelength-dependent diffuse reflectance (Rd(λ)) to the corresponding absorption (μa(λ)) and reduced scattering (μs′(λ)) coefficients were generated using tissue-simulating liquid phantoms with a large range of known optical properties consistent with brain tissue. A 4D set of basis functions, [Rd, λ, μa, μs′], was created from this information; wide-field spectrally-resolved reflectance images acquired during surgery are decomposed into Rd, and a regularized minimization (e.g., Generalized Least Squares, GLS) is used to determine the best fit of Rd values as a weighted sum of basis function responses to estimate μa(λ) and μs′(λ) at every image pixel. A correction image derived from the estimates of μa(λ) and μs′(λ) is calculated using light transport models80and applied to the raw fluorescence image to produce a quantitative spectrally-resolved fluorescence image. In one embodiment, GLS is applied to each (x, y) corrected fluorescence spectrum to unmix the contributions from PpIX and auto-fluorescence, and construct a full-FOV image of PpIX. To evaluate the technical feasibility of the approach, we generated the preliminary data shown inFIG.12, which corrects the raw fluorescence using spectral constraints to calculate optical properties that significantly decrease the spectral distortions. (2) Spectrally-constrained qFI with dual wavelength ratiometry: As an alternative, we will investigate an approximate method that uses measurements of tissue reflectance at two select wavelengths to correct for tissue attenuation. To demonstrate technical feasibility and clinical potential, initial evaluations of this technique have occurred in tissue-simulating phantoms, ex vivo brain tumor tissue from the CNS-1 rat glioma model, and in vivo during human glioma surgery, using the fluorescence/reflectance point-probe as a gold standard. The dual-wavelength approximation yielded a linear relationship between the corrected raw fluorescence and the true PpIX concentration (R-squared=0.6387 for raw vs. R-squared=0.9942 for corrected fluorescence). As a first step towards clinical evaluation, we constructed a prototype spectrally-resolved imaging system and attached it to the surgical microscope. The system collects images of the surgical field continuously across the full visible spectrum (λ=400-720 nm) and generates data in near-real time of both reflectance (under white light), Rd(x, y, λ), and fluorescence (under violet-blue light), F(x, y, λ). A two-wavelength normalization algorithm was applied to the complete data set to derive a quantitative image of absolute PpIX concentration. (3) Spatially-constrained qFI with spatial light modulation: The image processor180, executing routines in memory178that perform this method, estimates tissue absorption and scattering maps using spatial light modulation to correct the raw fluorescence images with the same light transport model as the full-spectrum approach. Here, the detected light pattern is affected by tissue scattering more at high modulation frequency; hence, scattering and absorption properties can be separated by scanning the frequency and relative phase of the illumination patterns.
In preliminary studies of technical feasibility, a liquid-crystal-on-silicon device projected sinusoidal patterns of light intensity of varying phase onto the surface, and reflected light patterns were captured with a CCD camera, in tissue-simulating phantoms and in a rodent glioma model, which showed that quantitative maps of tissue optical properties can be recovered with the technique.
Modeling for qFI
Some of the alternative approaches for qFI require light transport modeling in a wide-field geometry. We include factors such as curvature, variable light penetration, and excitation based on spatially modulated light. Specifically, we will merge an existing finite-element diffusion model with an existing Monte Carlo simulation algorithm: Monte Carlo is applied at small depths, where diffusion theory can break down, while finite elements are used at greater depths, where the diffusion model is accurate but Monte Carlo becomes computationally intractable (the transition depth depends on wavelength, since tissue absorption varies dramatically from violet-blue to red light). The fluorescence light transport model has the optical property maps and a 3D profile of the surgical surface as inputs (curvature is obtained from either a stereovision system we use routinely in the operating room or a 3D profiler based on reflection of a spatially modulated light pattern from the tissue surface). These data represent the actual geometry and relevant attenuation properties of tissue, and allow the model to generate simulated fluorescence signals (i.e., basis solutions) from which the actual pixel-by-pixel PpIX concentrations are retrieved by a least-squares match of the measured response to the simulated basis solutions.
Operation in Fluorescent Depth-Resolved Imaging Mode (dFI)
Data-flow diagrams D1and D2, as illustrated inFIGS.23-24, may prove helpful in understanding dFI mode for fluorophores, including infrared fluorophores, and diagrams D4and D5, as illustrated inFIGS.20-21, may prove helpful in understanding dFI mode for visible fluorophores. Also helpful areFIG.15andFIG.16.FIG.15illustrates spectra of PpIX as emitted from tissue when the PpIX is located at different depths in tissue, andFIG.16illustrates a ratio of intensity of light at two wavelengths emitted by PpIX and detected above tissue at different depths, as curve-fit to an equation. Many other fluorophores exhibit similar spectral shifts with depth in tissue. All embodiments of depth-resolved imaging operate according to the basic flowchart ofFIG.23A. First, reflectance, scattering, and absorption parameters are determined2302for the tissue; in certain embodiments these parameters are determined by lookup in a table of parameters associated with tissue types, and in other embodiments by measurement of the tissue using hyperspectral images taken under white light, in particular using an image plane associated with emissions wavelengths of a fluorophore expected in the tissue. These parameters are used to determine2304a relationship between depth and a shift between observed emissions spectra and standard emissions spectra of the fluorophore; in embodiments using a table of parameters this will be a constant relationship for all pixels, while in embodiments using measurement of tissue this may be a map of relationships of depth to spectral change that differ from pixel to pixel.
While some embodiments use unstructured white light to determine these parameters, others use spatially modulated (also known as structured) light and determine a three-dimensional map of scattering and absorption parameters in the tissue, allowing determination of accurate relationships between depth and spectra at each pixel. Stimulus wavelength light is applied2306to the tissue, such that any of the expected fluorophore present is stimulated to fluoresce. Fluorescent emitted light is measured at at least a first and a second emission wavelength associated with a fluorophore at each of a plurality of pixels; in embodiments this is accomplished by using the hyperspectral camera to record images2308at two or more emissions wavelengths associated with the fluorophore. A depth of the fluorophore at each pixel is then determined2310based upon at least the absorption parameters and differences in intensity of the fluorescent emitted light at the first and the second emissions wavelengths. In some particular embodiments, additional emission wavelengths are used. Depth is not determined for pixels without significant fluorescent emissions. The depth determination at each pixel is based upon the relationship between depth and the ratios, and the measured fluorescent emitted light. In a particular embodiment, using the inclusion depth values at each pixel of the wide-field illumination area, a partial surface can be constructed, representing a partial topography of the tumor beneath the tissue surface. This involves thresholding the depth values at each pixel to eliminate points not in the inclusion, then using the remaining points as seeds to construct a triangular partial surface mesh. We then calculate the entire tumor geometry using a surface recreation algorithm described below:
1. Construct a 3D tetrahedral volume mesh representing the entire tissue domain interrogated. The tissue surface geometry obtained using spatially modulated illumination (see the Spatially Modulated Illumination subsection) is used as the surface of the domain, and a volume mesh is constructed based on this surface.
2. Project the illumination field onto the surface of the mesh based on directional movement orthogonal to the surface.
3. For each node in the tetrahedral mesh, cast a ray from the node to the nearest point in the illumination field.
4. If this ray intersects the partial surface of the tumor geometry, determine whether the point is in the tumor based on: (i) surface coverage and (ii) distance from the surface (see the sketch following this passage). Surface coverage is determined by creating a sphere around the point of intersection between the ray and the partial surface, then calculating the surface node density in that sphere relative to the node density outside the sphere. This represents the degree to which the surface is 'in front of' the point of interest. Distance from the surface is a direct distance calculation between the point of intersection and the node position in the mesh. The importance of these two factors, surface coverage and distance from the surface, is determined based on user-defined weighting factors. If a point has sufficient surface coverage and a small distance from the surface, it is included in the tumor geometry.
3D spectroscopic fluorescence tomographic reconstruction is then performed using the tetrahedral mesh created, with tumor and background spatial information encoded in the mesh. Initial optical property values are used, determined as described in the Spatially Modulated Illumination section.
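A sketch of the inclusion test of step 4 above, combining the surface-coverage and distance criteria with user-defined weights; the sphere radius, weights, threshold, and the specific scoring formula are illustrative placeholders, not the exact criteria of the algorithm described:

    import numpy as np

    def in_tumor(node, hit_point, surface_points, radius=2.0,
                 w_coverage=0.7, w_distance=0.3, threshold=0.5):
        # Surface coverage: fraction of partial-surface nodes within a sphere
        # around the ray/surface intersection point.
        d = np.linalg.norm(surface_points - hit_point, axis=1)
        coverage = np.count_nonzero(d < radius) / max(len(surface_points), 1)
        # Distance criterion: mesh nodes closer to the surface score higher.
        distance = np.linalg.norm(node - hit_point)
        score = w_coverage * coverage + w_distance / (1.0 + distance)
        return score > threshold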
Laplacian regularization is used for reconstruction, with nodes in the mesh weighted by their proximity to the recreated tumor geometry (4). This allows the spatial prior information to guide the reconstruction process without assuming that the tumor geometry is perfect. The multispectral fluorescence tomography reconstruction recovers the optical properties at each node in the mesh, in particular fluorophore concentration. The partial depth information obtained using spectroscopic measurements of fluorescence and diffuse reflectance allows us to disentangle the effects of tumor depth and fluorescence concentration, which previously inhibited quantitative fluorescence reconstruction. The light modeling package NIRFAST is used for mesh creation and FEM-based modeling (5). However, a technique is being developed at Polytechnique based on Monte Carlo light transport simulations. We develop and test wide-field methods to map sub-surface fluorescence, first for (a) detection and depth determination (dFI) and second for (b) PpIX (or other fluorophore) quantification at depth (qdFI). Here, ‘depth’ denotes the distance below the surgical surface of the closest region of significant positive PpIX fluorescence (“sub-surface fluorescence topography”87). Our approach, both conceptually and in practice, is based on a combination of spectral and spatial constraints—although, here, the latter is critical to the separation of depth and PpIX concentration for qdFI, i.e., to distinguish accurately between weak fluorescence just below the surgical surface and stronger fluorescence at greater depth. The resulting dFI topographic maps inform the surgeon whether PpIX-containing tissues (or other expected fluorophores) exist beyond the superficial layers of the exposed surgical surface where quantitative assessment is made with qFI. The qdFI enhancement generates a topographic map of the actual CPpIX at depth, which could impact the decision to continue tumor resection in areas where, e.g., motor and/or cognitive functions can be compromised by excessive tissue removal. Absolute CPpIX can also inform the surgeon on biological properties such as proliferation and degree of invasiveness that add to the decision-making process. A model-based dFI method, using a per-pixel map of absorbance and scattering parameters with per-pixel relationships of depth to emissions spectral shift, is illustrated inFIG.24. In a non-modeled quantified fluorescence imaging alternative (FIG.23), the reflectance hyperspectral imaging is used to retrieve optical property maps (absorption and scattering) using, for example, a look-up table approach. For this case, the full spectrum is required, so hyperspectral imaging of white-light excitation is thought to be a requirement because these properties differ with wavelength and affect propagation of fluorescent emitted light to the tissue surface. The optical properties thus obtained are required for model-based qFI & dFI as well as qdFI. Operation in Quantitative Depth-Resolved Imaging (qdFI) Mode Data-flow diagram D3, as illustrated inFIG.14, may prove helpful in understanding qdFI mode. qdFI processing follows depth determination performed1402according to the methods previously discussed with reference to dFI mode andFIGS.15,23, and24. Reflectance parameters are determined for the tissue at stimulus wavelength, and scattering and absorbance parameters are determined for both stimulus and emissions wavelengths1404; this is a superset of the tissue optical parameters used for the dFI mode processing.
These parameters are then used to correct the fluorescent emissions intensities by either fitting fluorophore intensity in a light transport model1406of tissue to observed fluorescent emissions images, or by determining1408an attenuation correction as an inverse of a total of reflectance at stimulus wavelength and total attenuation at both stimulus and emissions wavelengths between the tissue surface and the determined fluorophore depth, and applying1410the correction factors. We have demonstrated that (a) multi-wavelength excitation (400-600 nm) with integration of the fluorescence signal over all emission wavelengths and (b) spectrally-resolved detection following excitation at ˜633 nm85 allow subsurface PpIX topography—the former is very accurate (±0.5 mm) to a depth up to 3 mm in brain, while the latter is less accurate (>±1 mm) but can reach depths of 10 mm or more in brain, and potentially deeper in other tissues. Thus, we optimize the wavelength combinations for excitation and fluorescence detection to meet our performance targets. For the former, we illuminate at five predetermined wavelengths between 400 and 635 nm to match the PpIX absorption peaks, and obtain corresponding hyperspectral images of fluorescence as described in the section entitled Operation in Fluorescent Imaging Mode above. Preliminary data were collected with a prototype hyperspectral imaging system in which PpIX capsules immersed in a tissue-simulating phantom were imaged and their spectra detected at depths up to 9 mm. Because light is attenuated by 2 orders of magnitude for each cm of depth, the fluorescence signals in dFI are smaller than their qFI surface counterparts. Following wavelength optimization, a new dFI multispectral-detection module based on a cooled CCD camera is provided in some embodiments for improved noise and sensitivity during surgery. Spectra at each pixel are determined from the hyperspectral images, and a depth estimated from a phantom-fitted equation as illustrated inFIG.16. Once a depth is estimated, this depth is displayable to a user as an image having brightness representing a quantified fluorescence with a color representing depth. Since optical properties of tissue are determined at multiple depths and at multiple wavelengths from the hyperspectral image stacks captured under spatially-modulated white light, these properties are used to correct received fluorescence spectra for tissue properties. The corrected received light at two or more wavelengths is then used to determine fluorescent ratios to estimate depth, to retrieve the topographic maps, and to quantify fluorophore concentrations. There is also sufficient information available to deconvolve contributions to the captured spectra from two or more distinct fluorophores, permitting quantitative depth-resolved fluorescent imaging of two or more fluorophores simultaneously, providing more information about tissue type than is available when using only a single fluorophore. A strategy analogous to qFI is pursued based on two approaches: (1) techniques using normalization with spectrally-resolved wide-field reflectance images and (2) methods based on accurate light transport modeling in tissue. The dFI algorithms are developed for spectrally-resolved data (both excitation and emission fluorescence), while the qdFI algorithms combine spectral and spatially-modulated data to allow both depth and CPpIX at depth to be retrieved.
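A minimal sketch of the non-model correction path (steps1408and1410) follows. The Beer-Lambert-style exponential used for the round-trip attenuation and the effective attenuation coefficients are assumptions for illustration; the text above specifies only that the correction is the inverse of the stimulus-wavelength reflectance and the total attenuation between the surface and the estimated depth.

```python
import numpy as np

def corrected_concentration(f_meas, refl_x, mu_eff_x, mu_eff_m,
                            depth_mm, k=1.0):
    """Correct measured fluorescence for reflectance and attenuation.

    refl_x: reflectance at the stimulus wavelength (per pixel);
    mu_eff_x, mu_eff_m: assumed effective attenuation coefficients (1/mm)
    at the stimulus and emission wavelengths; depth_mm: dFI depth estimate;
    k: hypothetical yield constant mapping corrected signal to concentration.
    """
    # Round-trip attenuation between the surface and the fluorophore depth;
    # mu_eff_x + mu_eff_m near 0.46/mm reproduces the ~2 orders of magnitude
    # of attenuation per cm quoted above.
    attenuation = refl_x * np.exp(-(mu_eff_x + mu_eff_m) * depth_mm)
    return k * f_meas / np.maximum(attenuation, 1e-12)
```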
Normalization techniques: Since distortion of the measured fluorescence is due to absorption features in the reflectance spectra, quantifying these elements in the wide-field spectrally-resolved reflectance images allows PpIX depth to be deconvolved from the measured fluorescence images. This is validated in phantoms from which empirical correction algorithms are derived. The technique is likely less robust than a full model-based approach (below), but reduces complexity. Model-based methods: For qdFI (and likely maximally-accurate dFI) the light transport Diffusion Theory/Monte Carlo hybrid model is used. Solutions providing the best fit to the surgical data will be processed into a 2D topographic depth image (dFI) and a CPpIX image at depth (qdFI). Two critical inputs are required for these simulations: (a) tissue optical properties as determined using spatially modulated light as described above, and (b) a 3D profile of the surgical bed as determined by stereovision techniques described above. For dFI and qdFI, absorption and scattering properties averaged over the volume of tissue between the surface and the tumor are more appropriate, although the requirement is mitigated by the relative homogeneity of brain tissue on the length scales considered here (1-10 mm). If necessary, depth-resolved maps of tissue optical properties are generated by varying the spatial frequencies and phases in the spatially-modulated excitation light method. In order to validate the method, we fabricated tissue phantoms with different geometries (including simulated resected cavities of different curvature) and used them to evaluate conditions in which the depth accuracy falls below the threshold of ±0.5 mm for depths up to 3 mm and ±1 mm for larger depths. In vivo studies will also proceed, including experiments where (i) the depth of tumor implantation will be varied between cohorts of animals and (ii) immediately after in vivo measurements and sacrifice, whole brains will be removed, and either sectioned for PpIX confocal fluorescence microscopy to map the subsurface tumor depth (with adjacent-section histopathologic confirmation) or dissected to remove tissue fragments for quantitative fluorometry. Preliminary data already exist that strongly support the technical feasibility of qFI, dFI and qdFI. Operation in Depth-Resolved Fluorescent Imaging Mode with Tomographic Display In an embodiment, surface profiles as determined from stereoscopic imaging as described above are entered into a three-dimensional model of the tissue by three-dimensional modeling routines in memory178. Depth information as determined in the section entitled Operation in Quantitative Depth-Resolved Imaging (qdFI) Mode above is then entered into the three-dimensional model of the tissue by marking voxels corresponding to the estimated depth of fluorophore. The three-dimensional model is then sliced and displayed to the surgeon as a sequence of tomographic images.
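The voxel-marking and slicing just described can be sketched as below. All grid sizes, the voxel pitch, and the marked region are illustrative stand-ins; only the flow (mark fluorophore voxels at the estimated depth beneath the reconstructed surface, then slice the volume) follows the text.

```python
import numpy as np

# Illustrative (z, y, x) voxel model of the tissue, 0.5 mm per voxel.
vol = np.zeros((100, 120, 120))
surface_z = np.zeros((120, 120), dtype=int)   # surface height map (stereo)
depth_map = np.full((120, 120), np.nan)       # dFI depth estimates (mm)
depth_map[40:60, 50:70] = 3.0                 # hypothetical fluorescent region

# Mark voxels at the estimated fluorophore depth below the surface.
valid = ~np.isnan(depth_map)
zz = surface_z[valid] + np.round(depth_map[valid] / 0.5).astype(int)
yy, xx = np.nonzero(valid)
vol[zz, yy, xx] = 1.0

# Slice the model into the tomographic image sequence shown to the surgeon.
slices = [vol[z] for z in range(vol.shape[0])]
```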
Operation with Automatic Tissue Classification The optical properties of tissue at each pixel as determined in Operation in Spatial-Frequency-Modulated Reflectance Mode, the hemoglobin, oxyhemoglobin, and deoxyhemoglobin concentrations as determined above under Operation in Hyperspectral Reflectance Imaging Mode, the surface fluorophore concentrations as determined by qFI as described above, and the depth and quantity-at-depth information as determined in the section entitled Operation in Quantitative Depth-Resolved Imaging (qdFI) Mode above for each pixel are all provided to a trainable classifier such as a neural network classifier, a kNN classifier, or in an alternative embodiment an SVM classifier; the classifier is implemented as classification routines in memory178and executed on the processor. The classifier is trained to provide a classification indicative of a probability that tumor exists at a location in tissue corresponding to that pixel. Classification results for each pixel are entered into a tissue classification map that is then displayed to the surgeon. Alternative Optical Systems An endoscopic system embodying many of the features and operating modes herein described is illustrated inFIG.17. In the endoscope ofFIG.17, a supercontinuum laser1102, or similar white light source, is coupled to pass light through a filter changer1104that is equipped with a clear filter and a tunable optical bandpass filter, the filter tunable from 400 to over 700 nanometer wavelengths with a bandpass as above described. Light from filter-changer1104, which may be filtered light, passes through a spatial modulator1106, such as a modulator based on a digital micromirror device, where it is patterned. Light from the spatial modulator passes through a projection lens1108and a beam-splitter1110onto a proximal end of a coherent fiber bundle1112that runs through the endoscope. In an endoscopic embodiment, images of tissue as illuminated by spatially modulated light are processed to determine a surface profile, since endoscopes typically do not have stereo cameras120,122. At a second, or tissue, end of the coherent fiber bundle the patterned light passes through a tissue-viewing lens1114onto tissue1116. Light from tissue1116is imaged through lens1114and passes onto the tissue end of bundle1112, and is emitted from the proximal end of bundle1112onto beam-splitter1110, where it is diverted through viewing lens1118and an optional filter1120into hyperspectral camera1122that corresponds to hyperspectral camera128and may have any of the forms previously described with reference to hyperspectral camera128. Signals derived by hyperspectral camera1122from the light from tissue1116pass through camera interface1134, and captured images are processed by processor1130configured with routines in memory1132, the routines having machine readable instructions for performing hyperspectral reflectance imaging, spatially modulated reflectance imaging, fluorescent imaging (including multiple-fluorophore fluorescent imaging), quantitative fluorescent imaging, and depth-resolved fluorescent imaging. Processed images are presented to the surgeon by processor1130through display adapter1136on monitor1138. The spatial modulator1106operates under control of a display adapter1140under control of processor1130.
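A minimal sketch of the per-pixel classifier described under Operation with Automatic Tissue Classification above is given below, using a kNN classifier. The feature layout, training data, and labels are random stand-ins; in practice, training would use histologically confirmed examples of the per-pixel features listed above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each pixel's feature vector stacks optical properties, hemoglobin
# measures, surface fluorophore concentration, and depth/quantity-at-depth
# values. Sizes and values here are illustrative stand-ins.
n_train, n_features = 5000, 8
X_train = np.random.rand(n_train, n_features)
y_train = np.random.randint(0, 2, n_train)    # 1 = tumor, 0 = normal

clf = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)

# Classify every pixel of a 480x640 field and form the probability map
# displayed to the surgeon as the tissue classification map.
pixels = np.random.rand(480 * 640, n_features)
tumor_prob = clf.predict_proba(pixels)[:, 1].reshape(480, 640)
```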
Since only one image is available, 3-dimensional surface extraction is not possible, and non-modeled versions of fluorescent depth resolution are used in the routines in memory1132, such as that described in the draft article attached hereto and entitled A Non-Model Based Optical Imaging Technique For Wide-Field Estimation Of Fluorescence Depth In Turbid Media Using Spectral Distortion. In use, surgical tools inserted through a lumen of the endoscope of the embodiment ofFIG.17may be used by a surgeon to perform surgery, such as excision of polyps in a large intestine of a subject, under observation through images on monitor1138. In an alternative endoscopic embodiment, illustrated inFIG.18, a light source1102, filter-changer1104, spatial modulator1106, and lens1108are provided to deliver light to a proximal end of coherent fiber bundle1112in an endoscope1150. At the tissue end of bundle1112, as part of endoscope1150, are an illumination projection lens1152that projects light from bundle1112onto tissue1160, and an imaging lens1154that focuses light from tissue1160on a hyperspectral imager1156. Hyperspectral imager1156is an integrated multiple-filter imaging device similar to that previously discussed with reference to imaging device199. Alternatives In a simplified dFI embodiment lacking spatial modulation capability, a library of typical light scattering and absorption parameters for tissues of different types at fluorescent imaging wavelengths is included in memory178,1132. In this embodiment, an operator selects a predominant surface tissue type from entries in the library; the associated scattering and absorption parameters from the library are then used instead of parameters determined by measuring tissue to determine relationships of depth to spectral shift with fluorophore depth. It is expected that the system and methods disclosed herein are applicable to three-dimensional quantitative mapping of autofluorescence of nicotinamide adenine dinucleotide (NAD) in tissue, and to three-dimensional quantitative mapping of autofluorescence of activated calcium channels in tissue in real time. Since calcium channels are physiologically important in both cardiac muscle tissue and central nervous tissue, real-time quantitative maps of calcium channels are potentially useful in both cardiac surgery and neurosurgery. It is also expected that the present system can image concentrations of two or more fluorophores, using spectral information in the hyperspectral image stack to deconvolve contributions of each of several fluorophores and thereby provide images representing each fluorophore separately. This permits displaying concentrations of intrinsic fluorophores PpIX and NAD, or concentrations of two targeted agents having different fluorescent emissions spectra, separately to a surgeon and thereby permits better discrimination of healthy and diseased tissues. In an alternative implementation intended for use in open surgery, a camera system2502, including fluorescent stimulus illuminators and structured light illuminators as discussed with reference toFIG.1C,1D, or1F, at least one hyperspectral camera as discussed with reference toFIG.1,1A, or1B, and at least one additional camera to support stereovision, which in a particular embodiment is a second hyperspectral camera, are located in view of a surgical site2506of a subject2508.
Camera system2502provides stereo images to an image processor2504like that previously discussed with reference to image processing system126, and which performs stereo surface extraction, as well as hyperspectral image processing for heme oxygenation and surface and subsurface mapping of fluorescent agents, as heretofore described. Camera system2502may be ceiling mounted, or mounted on a movable stand permitting relocation within an operating room; if camera system2502is mounted on a movable stand it is equipped with tracking transponders2512. In addition to being coupled to display images on monitor2510, image processor2504is coupled to display images on a head-mounted display2513that is equipped with tracking transponders2514sufficient to determine both viewing angle and position of head-mounted display2513. Head-mounted display2513is adapted to be worn by, and in front of eyes of, a surgeon, not shown; head-mounted display2513is configured with a beamsplitting mirror2515that permits superposition of displayed images into a visual field of the surgeon. A tracking subsystem2516, similar to the tracker142previously discussed, is provided to determine positions and angles of head-mounted display2513and camera system2502. In this embodiment, image processor2504is configured to construct a three-dimensional computer model of a surface of the surgical site2506, and to annotate this model with information determined through hyperspectral imaging, such as maps of heme oxygenation and ischemia, maps of inflammation biomarkers, maps of fluorescent emissions from autofluorescent biomarkers such as PpIX or NAD, and quantified and depth-resolved maps of fluorophore concentrations as determined by qFI, dFI, and qdFI imaging as described above. The image processor then renders the annotated model into an image representing the surgical site2506(with annotations) as viewed from a tracked location of the head-mounted display2513, so that images displayed through head-mounted display2513portray the information derived from hyperspectral imaging superimposed on the surgeon's direct view of the surgical site; in doing so, the image processor also renders and displays the partial surface model of depth-resolved fluorophore concentrations determined as described in the Depth-Resolved Fluorescent Imaging (dFI) section above. It is believed that the embodiment ofFIG.25, when displaying information derived by hyperspectral imaging such as ischemia and heme oxygenation, will be helpful to surgeons performing open-heart procedures intended to relieve ischemia. The embodiment is also expected to be useful when imaging fluorophore concentrations for a wide variety of cancer surgeries, and for assessing tissue viability during tissue debridement and diabetic amputation surgeries. Combinations It is anticipated that any one of the fluorescent stimulus light sources herein discussed with reference toFIG.1C,1D, or1F may be combined with any one of the hyperspectral cameras discussed with reference toFIG.1A or1B, or with a camera having a single wideband photodetector and tunable filter, into a hyperspectral imaging system and coupled to the digital image processing system described herein to form a system adapted for use in quantitative and depth-resolved fluorescent imaging. Various combinations will, however, have differing resolution and accuracy of depth determination and quantification.
Further, it is anticipated that the following specific combinations of features will prove functional: An optical and image processing system designated A including a fluorescent stimulus light source adapted to provide light at a fluorescent stimulus wavelength; a spatial modulator coupled to modulate light forming spatially modulated light; projection apparatus configured to project onto tissue light selected from the group consisting of fluorescent stimulus light and spatially modulated light; a hyperspectral camera configured to receive light from tissue; and an image processing system coupled to receive images from the hyperspectral camera and configured with a memory containing machine readable instructions for performing at least one function selected from the group consisting of quantitative fluorescent imaging and depth resolved fluorescent imaging, and for displaying resultant processed fluorescent images. An optical and image processing system designated AA incorporating the system designated A wherein the function comprises depth resolved fluorescent imaging, and wherein the machine readable instructions include instructions for determining a relationship between depth and ratios of intensity at a first and a second emissions wavelength for a fluorophore in tissue; applying stimulus wavelength light; measuring fluorescent emitted light at at least the first and the second emission wavelengths associated with the fluorophore at each of a plurality of pixels; and determining a depth of the fluorophore at each pixel based upon the relationship between depth and the ratios, and the measured fluorescent emitted light. An optical and image processing system designated AB incorporating the system designated A or AA wherein the relationship between depth and ratios of intensity at the first and the second emissions wavelength is determined from images of the tissue. An optical and image processing system designated AC incorporating the system designated A, AA, or AB wherein the relationship between depth and ratios of intensity at the first and the second emissions wavelength is determined on a per-pixel basis from the images of the tissue. An optical and image processing system designated AD incorporating the system designated A, AA, or AB wherein the relationship between depth and ratios of intensity is determined from values in a library of tissue types. An optical and image processing system designated AE incorporating the system designated A, AA, AB, AC, or AD wherein the function includes quantitative fluorescent imaging, and wherein the machine readable instructions include instructions for: determining reflectance and absorbance parameters at each pixel of an image at a stimulus wavelength; and using the reflectance and absorbance parameters to correct fluorescence emission images. An optical and image processing system designated AE incorporating the system designated A, AA, AB, AC, or AD wherein the machine readable instructions include instructions for providing spatially modulated light when obtaining images from which the reflectance and absorbance parameters are determined. An optical and image processing system designated AF incorporating the system designated A, AA, AB, AC, AD, or AE, wherein there are at least two cameras adapted to capture digital stereo images and the machine readable instructions further comprise instructions for extracting a surface profile from the stereo images.
An optical and image processing system designated AG including the system designated AF, wherein the machine readable instructions further comprise instructions for determining an intraoperative location of structures located in preoperative medical images, and for displaying the determined intraoperative location. An optical and image processing system designated AH including the system designated AG and wherein the machine readable instructions further comprise instructions for displaying the determined intraoperative location with the processed fluorescent images. An optical and image processing system designated AI including the system designated AG or AH and wherein the machine readable instructions further comprise instructions for extracting a surface profile from depth-resolved fluorescent images. An optical and image processing system designated AJ including the system designated AG, AH, or AI further comprising a tracking subsystem adapted to determine a location and viewing angle of a display and wherein the machine readable instructions further comprise instructions for displaying rendered information selected from the group consisting of depth-resolved fluorescent images and intraoperative locations of structures as viewed from the determined location and viewing angle. Conclusion Changes may be made in the above methods and systems without departing from the scope hereof. It should thus be noted that the matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween. | 107,572 |
11857318 | DETAILED DESCRIPTION The present invention provides devices for characterizing regions of tissue and methods for using the devices. The devices are capable of locating, identifying, and characterizing tissue regions of interest in vivo. In one embodiment, the devices are ultrasound-guided. In one embodiment, the devices characterize regions of tissue using electrical impedance spectroscopy (EIS) sensors. In one aspect, the devices are useful in predicting plaque rupture, such as by determining the level of oxidized low density lipoprotein (oxLDL) and macrophage/foam cells present in an atheroma. In one aspect, the devices are useful in identifying metabolically active atherosclerotic lesions that are angiographically invisible. Definitions It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for the purpose of clarity, many other elements typically found in the art. Those of ordinary skill in the art may recognize that other elements and/or steps are desirable and/or required in implementing the present invention. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements and steps is not provided herein. The disclosure herein is directed to all such variations and modifications to such elements and methods known to those skilled in the art. Unless defined elsewhere, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although any methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, the preferred methods and materials are described. As used herein, each of the following terms has the meaning associated with it in this section. The articles “a” and “an” are used herein to refer to one or to more than one (i.e., to at least one) of the grammatical object of the article. By way of example, “an element” means one element or more than one element. “About” as used herein when referring to a measurable value such as an amount, a temporal duration, and the like, is meant to encompass variations of ±20%, ±10%, ±5%, ±1%, and ±0.1% from the specified value, as such variations are appropriate. As used herein, “imaging” may include ultrasonic imaging, be it one dimensional, two dimensional, three dimensional, or real-time three dimensional imaging (4D). Two dimensional images may be generated by one dimensional transducer arrays (e.g., linear arrays or arrays having a single row of elements). Three dimensional images may be produced by two dimensional arrays (e.g., those arrays with elements arranged in an n by n planar configuration) or by mechanically reciprocated, one dimensional transducer arrays. The terms “patient,” “subject,” “individual,” and the like are used interchangeably herein, and refer to any animal, or cells thereof whether in vitro or in situ, amenable to the methods described herein. In certain non-limiting embodiments, the patient, subject or individual is a human. 
As used herein, “sonolucent” is defined as a property wherein a material is capable of transmitting ultrasound pulses without introducing significant interference, such that an acceptable acoustic response can be obtained from the body structure(s) of interest. Throughout this disclosure, various aspects of the invention can be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6, etc., as well as individual numbers within that range, for example, 1, 2, 2.7, 3, 4, 5, 5.3, 6, and any whole and partial increments therebetween. This applies regardless of the breadth of the range. Ultrasound-Guided Electrochemical Impedance Spectroscopy Device In one aspect, the present invention relates to devices for characterizing regions of tissue. Referring now toFIG.1A,FIG.1B,FIG.2A, andFIG.2B, a device10is depicted. Device10comprises a first catheter12and a second catheter14. First catheter12comprises a lumen having an open proximal end and a closed distal end. First catheter12can have any suitable length, such as a length between about 10 and 200 cm. In some embodiments, first catheter12comprises an inner diameter between about 0.5 and 2 mm. The inner diameter of first catheter12is preferably sized to fit ultrasound transducer16. Ultrasound transducer16can be any suitable transducer in the art, such as a piezoelectric ultrasound transducer or a capacitive ultrasound transducer. In various embodiments, device10can comprise one, two, three, four, or more ultrasound transducers16. Ultrasound transducer16is preferably positioned at the closed distal end of first catheter12. In some embodiments, ultrasound transducer16may be freely rotated within the lumen of first catheter12, such as by attachment to a torque wire. In some embodiments, ultrasound transducer16is immersed in a fluid within first catheter12. For example, first catheter12may be filled with water or phosphate buffered saline to maintain at least partial consistency in ultrasound signal propagation. Second catheter14comprises a lumen having an open proximal end and a distal end. In some embodiments, second catheter14is parallel and adjacent to first catheter12, wherein the distal end of second catheter14is closed. In some embodiments, second catheter14at least partially envelops first catheter12, wherein second catheter14and first catheter12are co-axial. In one embodiment, the distal end of second catheter14is closed around the outer surface of a co-axial first catheter12, such that the distal end of outer catheter14forms a seal around first catheter12. In some embodiments, the seal fuses second catheter14to first catheter12. In other embodiments, the seal permits independent rotation between second catheter14and first catheter12. Second catheter14can have any suitable length, such as between about 10 and 200 cm. In some embodiments, second catheter14comprises an inner diameter between about 1 and 3 mm. Positioned near the distal end of the outer catheter is sensor22for characterizing a region of tissue.
In one embodiment, sensor22is an EIS sensor. An EIS sensor can have any suitable sensor design, such as those recited in U.S. patent application Ser. No. 14/981,089, the contents of which are incorporated herein in their entirety. In some embodiments, the EIS sensor has a two-point electrode design, wherein two strips of conductive material are arranged in parallel with a gap in between. Exemplary dimensions include electrode lengths between about 0.2 and 5 mm, electrode widths between about 0.1 and 5 mm, and gap distances between about 0.1 and 5 mm. In some embodiments, the electrodes of an EIS sensor are set in a flexible substrate. For the purposes of bringing sensor22into physical contact with a region of tissue for characterization, the distal end of second catheter14has an expandable element for attaching sensor22. For example, in some embodiments, the expandable element can be an extendable arm or membrane. In other embodiments, the expandable element can be a balloon18, which can be inflatable and deflatable via at least one aperture24on second catheter14. By attaching to the expandable element, sensor22can be brought closer to a region of interest. For example, to examine a region of the inner surface of a blood vessel, balloon18may be inflated such that the exterior of balloon18presses EIS sensor22against the region. Balloon18preferably comprises an elastic material, such that balloon18expands when inflated and shrinks when deflated. Balloon18can have any suitable diameter when inflated, such as a diameter between about 1 and 15 mm. Preferably, the expandable element lies close to or is flush with the outer surface of second catheter14when not expanded. In certain embodiments, device10can include additional sensors, such as pressure sensors, flow sensors, temperature sensors, and the like. For example, in one embodiment, device10can comprise at least two sensors, wherein a distal sensor can provide distal measurements, and a proximal sensor can provide proximal measurements. The at least two sensors may be positioned upstream and downstream from a region of interest, such as in a fractional flow reserve technique. In some embodiments, the devices of the present invention may operate in conjunction with a computer platform system, such as a local or remote executable software platform, or as a hosted internet or network program or portal. In certain embodiments, portions of the system may be computer operated, or in other embodiments, the entire system may be computer operated. As contemplated herein, any computing device as would be understood by those skilled in the art may be used with the system, including desktop or mobile devices, laptops, desktops, tablets, smartphones or other wireless digital/cellular phones, televisions or other thin client devices as would be understood by those skilled in the art. The computer platform is fully capable of sending and interpreting device emissions signals as described herein throughout. For example, the computer platform can be configured to control ultrasound and EIS sensor parameters such as frequency, intensity, amplitude, period, wavelength, pulsing, and the like. The computer platform can also be configured to monitor and record insertion depth and location. The computer platform can be configured to record received signals, and subsequently interpret the signals. For example, the computer platform may be configured to interpret ultrasound signals as images and subsequently transmit the images to a digital display.
The computer platform may also be configured to interpret changes in impedance and subsequently transmit the recorded changes to a digital display. The computer platform may further perform automated calculations based on the received signals to output data such as density, distance, composition, imaging, and the like, depending on the type of signals received. The computer platform may further provide a means to communicate the received signals and data outputs, such as by projecting one or more static and moving images on a screen, emitting one or more auditory signals, presenting one or more digital readouts, providing one or more light indicators, providing one or more tactile responses (such as vibrations), and the like. In some embodiments, the computer platform communicates received signals and data outputs in real time, such that an operator may adjust the use of the device in response to the real time communication. For example, in response to a stronger received signal, the computer platform may output a more intense light indicator, a louder auditory signal, or a more vigorous tactile response to an operator. The devices of the present invention can be made using any suitable method known in the art. The method of making may vary depending on the materials used. For example, devices substantially comprising a plastic or polymer may be milled from a larger block or injection molded. Likewise, devices substantially comprising a metal may be milled, cast, etched, or deposited by techniques such as chemical vapor deposition, spraying, sputtering, and ion plating. In some embodiments, the devices may be made using 3D printing techniques commonly used in the art. In various embodiments, the components of the present invention, including first catheter12, second catheter14, expandable element such as balloon18, and sensor22, are constructed from a biocompatible material. Preferably, the material is flexible, such that device10is at least partially pliable for improved range and reach. In certain embodiments, the components of the present invention are at least partially sonolucent. In another aspect, the present invention provides a method for using the ultrasound-guided electrochemical impedance spectroscopy devices of the present invention, such as in tissue imaging and characterization of regions of interest. Referring now toFIG.3, an exemplary method100of accurately characterizing a tissue region of interest is presented. Method100begins with step110, wherein an exemplary device of the present invention is positioned near a tissue. In step120, the tissue is imaged with an ultrasound transducer on the device to locate a region of interest. In step130, the ultrasound imaging data is used to position the device such that a sensor on the device faces the region of interest. In step140, the sensor of the device is touched to the region of interest by expanding an expandable element on the device. In step150, the region of interest is characterized using the sensor. As described elsewhere herein, the devices of the present invention include embodiments using ultrasound guidance and EIS sensors to locate, identify, and characterize tissue in vivo. The devices and methods are useful in many applications, such as diagnosing tissue abnormalities or lesions in difficult-to-reach areas. For example, the devices and methods may be used to locate plaques in an artery and determine the likelihood of rupture based on plaque content.
With reference toFIG.3, characterizing a plaque may begin with placing an ultrasound-guided EIS device of the present invention in a patient's artery, wherein ultrasound imaging can determine the location, orientation, and size of any plaques present. Based on the ultrasound imaging data, an operator may position the device such that the EIS sensor faces a particular region of a plaque to characterize. The balloon is then inflated to touch the EIS sensor to the region of interest. The balloon can be inflated with air or fluid, and pressure can vary depending on the size of the balloon and the size of the artery. Once the EIS sensor has been brought into physical contact with the region of interest, impedance is determined by driving an AC current through the plaque using the EIS sensor. Any suitable voltage may be used, such as voltages between about 10 and 100 mV. To obtain an impedance spectrum, current is measured while sweeping the frequency, such as in a range anywhere between about 100 Hz and 5 MHz. A determination of plaque stability may be made based on the impedance values, wherein higher impedance spectra indicate higher oxLDL and macrophage/foam cell content and a greater chance of rupture. 3D Electrochemical Impedance Spectroscopy Device The present invention also relates to devices for characterizing 3D regions of tissue. The devices include a 6-point electrode configuration. This configuration enables 15 alternating EIS permutations of 2-point electrode arrays. This configuration allows for comprehensive impedance mapping and detection of lipid-rich atherosclerotic lesions. Referring now toFIG.4A,FIG.4B, andFIG.5AthroughFIG.5D, an exemplary device210is depicted. Device210comprises catheter212and inflatable balloon214. Catheter212comprises a lumen having an open proximal end and a closed distal end. In various embodiments, the distal end of catheter212comprises one or more features for locating device210, such as one or more radio-opaque markers216shown inFIG.5A. Catheter212can have any suitable length as determined by one skilled in the art. Catheter212can have any suitable diameter, such as a diameter between about 0.5 and 1.5 mm. Balloon214is positioned near the distal end of catheter212. In various embodiments, catheter212comprises at least one aperture213such that the lumen of catheter212is fluidly connected to the interior of balloon214, such as for inflation purposes. Balloon214can be inflated using any appropriate substance, including air, an inert gas, or an aqueous solution. Balloon214can have any suitable dimension as determined by one skilled in the art. For example, in the deflated conformation depicted inFIG.5D, balloon214lies flush against the exterior of catheter212and can have a diameter slightly larger than the diameter of catheter212, accounting for the thickness of the balloon214material. Typical deflated balloon214diameters can be between about 1 and 2 mm. In the inflated conformation shown inFIG.5D, balloon214can have a diameter between about 2 and 20 mm. Balloon214comprises a plurality of electrodes220. Electrodes220can have any suitable design. For example, electrodes220can each comprise a single strip of conductive material set on a flexible substrate. Exemplary dimensions include electrode lengths between about 0.1 and 1 mm, and widths between about 0.1 and 1 mm. In certain embodiments, electrodes220are arranged in two circumferential rows.
In one embodiment, such as inFIG.5BandFIG.5C, balloon214comprises six electrodes220, wherein a first group of three electrodes220a,220b, and220care arranged equidistantly about the circumference of balloon214closer to the proximal end of catheter212, and a second group of three electrodes220d,220e, and220fare arranged equidistantly about the circumference of balloon214closer to the distal end of catheter212. In various embodiments, the first group and the second group of electrodes can be separated by a distance between about 1 and 5 mm. As will be understood by those having skill in the art, the equidistant arrangement of the three electrodes in the first and second groups provides a 120° separation between adjacent electrodes, as depicted inFIG.5C. In various embodiments, device210may further comprise one or more features for enhancing the performance of the device. For example, in some embodiments device210may further comprise a covering to protect or insulate the components of device210, such as one or more heat-shrink tubing222and224depicted inFIG.6E. In certain embodiments, device210can include additional sensors, such as pressure sensors, flow sensors, temperature sensors, and the like. In some embodiments, device210may operate in conjunction with a computer platform system, such as a local or remote executable software platform, or as a hosted internet or network program or portal. In certain embodiments, portions of the system may be computer operated, or in other embodiments, the entire system may be computer operated. As contemplated herein, any computing device as would be understood by those skilled in the art may be used with the system, including desktop or mobile devices, laptops, desktops, tablets, smartphones or other wireless digital/cellular phones, televisions or other thin client devices as would be understood by those skilled in the art. The computer platform is fully capable of sending and interpreting device emissions signals as described herein throughout. For example, the computer platform can be configured to control EIS sensor parameters such as frequency, intensity, amplitude, period, wavelength, pulsing, and the like. The computer platform can also be configured to monitor and record insertion depth and location. The computer platform can be configured to record received signals, and subsequently interpret the signals. For example, the computer platform may be configured to interpret EIS signals between selected electrodes. The computer platform may also be configured to interpret changes in impedance and subsequently transmit the recorded changes to a digital display. The computer platform may further perform automated calculations based on the received signals to output data such as density, distance, composition, imaging, and the like, depending on the type of signals received. The computer platform may further provide a means to communicate the received signals and data outputs, such as by projecting one or more static and moving images on a screen, emitting one or more auditory signals, presenting one or more digital readouts, providing one or more light indicators, providing one or more tactile responses (such as vibrations), and the like. In some embodiments, the computer platform communicates received signals and data outputs in real time, such that an operator may adjust the use of the device in response to the real time communication. 
For example, in response to a stronger received signal, the computer platform may output a more intense light indicator, a louder auditory signal, or a more vigorous tactile response to an operator. The devices of the present invention can be made using any suitable method known in the art. The method of making may vary depending on the materials used. For example, devices substantially comprising a plastic or polymer may be milled from a larger block or injection molded. Likewise, devices substantially comprising a metal may be milled, cast, etched, or deposited by techniques such as chemical vapor deposition, spraying, sputtering, and ion plating. In some embodiments, the devices may be made using 3D printing techniques commonly used in the art. In various embodiments, the components of the present invention, including catheter212, balloon214, radiopaque marker216, electrodes220, heat-shrink tubing222, and heat-shrink tubing224, are constructed from a biocompatible material. Preferably, the material is flexible, such that device210is at least partially pliable for improved range and reach. In another aspect, the present invention provides a method for using the 3D electrochemical impedance spectroscopy devices of the present invention, such as in tissue imaging and characterization of regions of interest. Referring now toFIG.7, an exemplary method300of accurately characterizing a tissue region of interest is presented. Method300begins with step310, wherein an exemplary device of the present invention having a balloon with a plurality of sensors positioned on the surface of the balloon is positioned near a tissue. In step320, the balloon is inflated to contact at least two of the sensors to the tissue. In step330, the impedance between a pair of adjacent sensors contacting the tissue is measured, for every permutation of nonrepeating pairs of adjacent sensors contacting the tissue. In step340, the tissue is characterized by generating a 3D impedimetric map of the tissue from the impedance measurements. As described elsewhere herein, the devices of the present invention use EIS sensors to characterize tissue in vivo. The devices and methods are useful in many applications, such as diagnosing tissue abnormalities in difficult-to-reach areas or identifying lesions not visible through conventional imaging. For example, the devices and methods may be used to detect angiographically invisible atherosclerotic lesions that are metabolically active. With reference toFIG.7, characterizing a plaque may begin with placing an EIS device of the present invention in a patient's artery, wherein the balloon is inflated to touch the EIS sensors to the inner surface of a section of the artery. The balloon can be inflated with air or fluid, and pressure can vary depending on the size of the balloon and the size of the artery. Once the EIS sensors have been brought into physical contact with the inner surface of the section of the artery, impedance is measured between every pair of adjacent EIS sensors by driving an AC current through the tissue using the EIS sensors. Any suitable voltage may be used, such as voltages between about 1 and 100 mV. To obtain an impedance spectrum, current is measured while sweeping the frequency, such as in a range anywhere between about 1 Hz and 10 MHz.
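A minimal sketch of the pair-wise sweep of steps330and340follows: the six electrodes220a-220fyield C(6,2) = 15 nonrepeating pairs, matching the 15 EIS permutations noted above. The measure_eis() routine is a hypothetical stand-in for the actual acquisition hardware.

```python
from itertools import combinations
import numpy as np

# The six balloon electrodes of the 6-point configuration.
electrodes = ["220a", "220b", "220c", "220d", "220e", "220f"]

def measure_eis(e1, e2, freqs):
    """Hypothetical acquisition: returns |Z| at each frequency for a pair."""
    return np.random.rand(len(freqs))

freqs = np.logspace(0, 7, 50)                # 1 Hz to 10 MHz sweep
spectra = {pair: measure_eis(*pair, freqs)
           for pair in combinations(electrodes, 2)}

assert len(spectra) == 15                    # every nonrepeating pair
# Each pair interrogates a different tissue path between the two rings;
# the 15 spectra are then assembled into the 3D impedimetric map.
```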
Combining the EIS measurements from every permutation of nonrepeating adjacent EIS sensor pairs generates a 3D impedimetric map of the section of artery, wherein higher impedance spectra indicate the presence of metabolically active lipids. EXPERIMENTAL EXAMPLES The invention is further described in detail by reference to the following experimental examples. These examples are provided for purposes of illustration only, and are not intended to be limiting unless otherwise specified. Thus, the invention should in no way be construed as being limited to the following examples, but rather, should be construed to encompass any and all variations which become evident as a result of the teaching provided herein. Without further description, it is believed that one of ordinary skill in the art may, using the preceding description and the following illustrative examples, utilize the present invention and practice the claimed methods. The following working examples, therefore, specifically point out the preferred embodiments of the present invention, and are not to be construed as limiting in any way the remainder of the disclosure. Example 1: Plaque Characterization Using Integrated Electrochemical Impedance Spectroscopy and Intravascular Ultrasound Sensors Vulnerable plaque rupture is the leading cause of death in the developed world. Growing evidence suggests that thin-cap fibroatheromas rich in macrophage/foam cells and oxidized low density lipoprotein (oxLDL) are prone to destabilization. However, it is challenging to characterize the vulnerable plaques with individual detection methods. Presented herein is an integrated sensor composed of an electrical impedance spectroscopy (EIS) sensor to measure plaque laden with oxLDL and a broadband intravascular ultrasound (IVUS) transducer to acquire plaque morphology. Correlation analysis of EIS and IVUS results leads to improved characterization of the vulnerable plaques in vivo. The integrated sensor was fabricated on an acoustic-transparent ethylene-vinyl acetate (EVA) tube. The IVUS transducer with a center frequency of 15 MHz was mounted at the tip of a torque coil and fitted inside the tube. The EIS sensor was attached on a balloon mounted on the outer surface of the EVA tube (seeFIG.8A). A chirp signal was used to excite the IVUS transducer, and a pulse compression algorithm was used to improve the imaging quality. Then the EIS sensor was positioned to areas of plaques and the EIS was acquired with the balloon inflated. The integrated sensors were deployed in 7 rabbits: 4 control rabbits fed on normal chow and 3 rabbits fed on a high-fat diet to develop atherosclerotic plaques. The EIS results (FIG.8B) indicate >15% differences in the impedance magnitude at >100 kHz and >11% differences in the phase at 15-100 kHz between the high-fat diet and the control group. The IVUS imaging result (FIG.8C) reveals the plaques inside the lumen, which were validated by histology. In the imaging position, the plaques cover about 40% of the lumen. The prototype iteration of the sensor does not account for the relative orientation of the EIS and IVUS sensors. As a result, the EIS sensor may not be positioned directly on the plaques of interest, leading to a relatively large standard deviation (up to 17%).
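A minimal sketch of the chirp excitation and pulse-compression step mentioned above is given below. The sampling rate, sweep band, and toy echo are illustrative assumptions, not the actual transducer drive parameters; only the matched-filtering idea follows the text.

```python
import numpy as np
from scipy.signal import chirp

fs = 200e6                                   # sampling rate (Hz), illustrative
t = np.arange(0, 2e-6, 1 / fs)               # 2 us excitation window
tx = chirp(t, f0=5e6, t1=t[-1], f1=25e6)     # linear FM chirp around 15 MHz

echo = np.zeros(4000)
echo[1500:1500 + tx.size] += 0.3 * tx        # toy echo from one reflector

# Pulse compression: matched filtering of the echo with the transmitted
# chirp concentrates the dispersed energy into a sharp peak.
compressed = np.correlate(echo, tx, mode="same")
peak_index = int(np.argmax(np.abs(compressed)))   # reflector location
```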
Example 2: Ultrasonic Transducer-Guided Electrochemical Impedance Spectroscopy to Assess Lipid-Laden Plaques Previous studies have demonstrated that endoluminal EIS distinguishes pre-atherogenic lesions associated with oxidative stress in fat-fed New Zealand White (NZW) rabbits (Ai L et al., American Journal of Physiology-Cell Physiology 294.6 (2008): C1576-C1585; Hwang J et al., Free Radical Biology and Medicine 41.4 (2006): 568-578; Rouhanizadeh M et al., Micro Electro Mechanical Systems, 2004. 17th IEEE International Conference on. (MEMS). IEEE, 2004; Yu F et al., Biosensors and Bioelectronics 30.1 (2011): 165-173; Yu F et al., Annals of biomedical engineering 39.1 (2011): 287-296). Specifically, vessel walls harboring oxidized low density lipoprotein (oxLDL) exhibit distinct EIS signals (Yu F et al., Annals of biomedical engineering 39.1 (2011): 287-296). oxLDL and foam cell infiltrates in the subendothelial layer engendered an elevated frequency-dependent EIS by using concentric bipolar microelectrodes (Yu F et al., Annals of biomedical engineering 39.1 (2011): 287-296). Specific electric elements were evaluated to simulate working and counter electrodes at the electrode-endoluminal tissue interface (Yu F et al., Biosensors and Bioelectronics 30.1 (2011): 165-173). The application of the EIS strategy was established to detect oxLDL-rich fibroatheroma using explants of human coronary, carotid, and femoral arteries (Yu F et al., Biosensors and Bioelectronics 30.1 (2011): 165-173). The regions of elevated EIS correlated with intimal thickening detected via high-frequency (60 MHz) IVUS imaging and by prominent oxLDL staining (Cao H et al., Biosensors and Bioelectronics 54 (2014): 610-616). In this context, the following study integrates both IVUS imaging and EIS measurements to characterize the metabolically active, albeit non-obstructive, lesions when patients are undergoing diagnostic angiogram or primary coronary intervention. Rupture-prone plaques consist of oxLDL and a necrotic core with low conductivity. When alternating current (AC) is applied to a plaque, the oxLDL-rich lesion is analogous to a capacitance component, exhibiting both elevated electrical impedance magnitude and negative phase. The divergence of electrical impedance between the oxLDL-laden plaque and healthy vessel provides a sensitive and specific assessment of atherosclerotic lesions. A catheter-based 2-point micro-electrode configuration was developed for intravascular deployment in NZW rabbits (Packard R R S et al., Annals of biomedical engineering 44.9 (2016): 2695-2706). An array of 2 flexible rectangular electrodes, 3 mm in length by 300 μm in width, and separated by 300 μm, was microfabricated and mounted on an inflatable balloon catheter for EIS measurement of oxLDL-rich lesions. Upon balloon inflation by air pressure, the 2-point electrode array conformed to the arterial wall to increase sensitivity by deep intraplaque penetration via alternating current (AC). The frequency sweep from 100 Hz-300 kHz generated distinct changes in both impedance (Ω) and phase (ϕ) in relation to varying degrees of intraplaque oxLDL burden in the aorta (Packard R R S et al., Annals of biomedical engineering 44.9 (2016): 2695-2706). IVUS imaging visualizes the endoluminal surface, eccentricity of the plaque, intraplaque echogenicity and arterial wall thickness (Ma J et al., Applied physics letters 106.11 (2015): 111903).
The mechanically scanning IVUS transducer (20˜45 MHz) or the radial array transducer (10˜20 MHz), transmitting and receiving the high frequency ultrasonic waves, is capable of delineating the cross-sectional anatomy of the coronary artery wall in real time with 70 to 200 μm axial resolution, 200 to 400 μm lateral resolution, and 5 to 10 mm imaging depth (Brezinski M E et al., Heart 77.5 (1997): 397-403; Elliott M R et al., Physiological measurement 17.4 (1996): 259). For these advantages, simultaneous IVUS-guided EIS measurement enabled precise alignment of the visualized plaques with the balloon-inflatable EIS sensor, thereby providing both topological and biochemical information of the plaque (FIG.2A). Ex vivo assessment of NZW rabbit aortas was performed after 8 weeks of high-fat diet, and the results demonstrated significant reproducible measurements in both impedance and phase (p-value<0.05) via IVUS-guided EIS assessment. Thus, the integrated sensor design combined IVUS-visualized plaque morphology and EIS-detected oxLDL to assess metabolically unstable plaques, with clinical implications in reducing procedure time and X-ray exposure. The methods and materials are now described. Integrated Sensor Design Built on prior intravascular techniques (Cao H et al., Biosensors and Bioelectronics 54 (2014): 610-616; Yu F et al., Biosensors and Bioelectronics 43 (2013): 237-244), the catheter-based dual sensors cannulate through aortas to reach the lesion sites for detection (FIG.2A). While advancing, the balloon is deflated (inset ofFIG.2A) and the whole diameter of the exemplary sensor embodiment is 2.3 mm. When the sensor reaches the detecting sites, the IVUS transducer scans the section of aorta through the imaging window by rotating and pulling back. If lesion sites are detected, the whole sensor is further advanced and rotated to align the EIS sensor at the lesion sites. Air is then pumped through the outer catheter to inflate the balloon, allowing the 2-point electrodes to make contact with the lesions. EIS measurement is performed and the impedance characteristics indicate the presence or absence of intraplaque lipid (oxLDL). Performance of the integrated sensor was established by the IVUS-visualized endoluminal plaque and EIS-detected intraplaque oxLDL (FIG.2B). The two sensors were intravascularly deployed by two layers of catheters bonded together at the end of the outer layer. The 45 MHz IVUS transducer (Li X et al., IEEE transactions on ultrasonics, ferroelectrics, and frequency control 61.7 (2014): 1171-1178) was enclosed in the inner catheter at the imaging window while the EIS sensor was affixed to a balloon that was anchored on the outer catheter. The inner catheter was designed to be longer than the outer catheter by ˜2-10 cm for the IVUS imaging window. The IVUS imaging process required the acoustic wave to reach the aorta walls and echo back to the IVUS transducer. For this reason, the inner catheter was acoustically transparent with matched impedance and low attenuation, thereby allowing for acoustic wave penetration. The acoustic impedance match was established by two strategies: 1) water or phosphate-buffered saline (PBS) was injected into the inner catheter, and 2) the IVUS catheter was longer than the outer catheter by 2-10 cm (preset length) to prevent the balloon or the outer catheter from obstructing the acoustic path. The rotational and pullback scanning were made possible by positioning the IVUS transducer in the inner catheter (outer diameter=1.3 mm).
The transducer was navigated by a torque wire. The flexibility of the IVUS transducer torque wire allowed for deployment into the inner catheter. The optimal EIS signal was demonstrated by inflating the balloon, allowing the 2-point electrode array to be in transient contact with the lumen. The balloon was mounted on the outer catheter (outer diameter=2 mm and inner diameter=1.7 mm). The IVUS first scanned across the clear imaging window to visualize the lumen and plaques. Next, the EIS sensor was advanced to align with the IVUS window. Air was pumped through the gap between the inner and outer catheters to inflate the balloon for EIS measurement. These sequential steps effectively minimized the interference between the EIS sensor and the IVUS acoustic pathway. Principles of EIS EIS is the macroscopic representation of the electric field and current density distribution within the specimen being tested (FIG.9AthroughFIG.9D). Applying quasi-electrostatic limits to Maxwell's equations, the field distribution can be described as follows (Larsson J et al., American Journal of Physics 75.3 (2007): 230-239):

\nabla \cdot (\sigma^* \nabla \varphi) = 0   (Eq. 1)

where σ* = σ_T + jωε_T; σ_T and ε_T denote the conductivity and permittivity of the sample, respectively, ω the angular frequency, j = √(−1), and φ the voltage distribution. Current density, J = σ*E (in vector form), is calculated from the distribution of the electric field, E. Finally, the electrical impedance of the sample, Z, according to Maxwell's equations, is expressed as follows:

Z = \frac{\Delta\varphi}{\int_S \vec{J} \cdot d\vec{S}}   (Eq. 2)

where S denotes the electrode-tissue interface area, and Δφ the voltage difference across the two electrodes of the EIS sensor. The resistance and reactance values of the impedance are represented as a resistor, R, and a capacitor, C (FIG.9A). Contact impedance, Z_c, at the interface between the electrode and tissue, is not negligible in most cases, and is taken into account in the measuring system as previously reported (Cao H et al., Biosensors and Bioelectronics 54 (2014): 610-616; Yu F et al., Biosensors and Bioelectronics 43 (2013): 237-244). The electrochemical impedance signal consists of both magnitude and phase information (FIG.9D). The low conductivity of oxLDL is the basis for an elevated magnitude in impedance in the oxLDL-laden plaques. In contrast, the high conductivity of healthy aorta walls exhibits lower impedance magnitude in response to the alternating current (AC). The complex impedance of the tissue is expressed as:

Z = \frac{1}{j\omega C} \parallel R = \frac{R - j\omega C R^2}{1 + \omega^2 C^2 R^2}   (Eq. 3)

|Z| = \frac{R}{\sqrt{1 + \omega^2 C^2 R^2}}   (Eq. 4)

\phi = -\arctan(\omega C R)   (Eq. 5)

where ω is the angular frequency and ϕ the phase. Procedure A two-point electrode was designed as the EIS sensor for this deep tissue penetration (Packard R R S et al., Annals of biomedical engineering 44.9 (2016): 2695-2706). The two electrodes of the EIS sensor were identical at 3 mm in length and 0.3 mm in width, and were aligned in parallel at 0.3 mm kerf (gap) apart (inset ofFIG.2B,FIG.9A). The 2-point electrodes made electrical contact with the plaques upon balloon inflation. During EIS assessment, AC current was driven through the plaque while maintaining a constant peak voltage. The current was recorded to calculate the electrical impedance of the plaque in terms of impedance magnitude and phase (example inFIG.9D). By sweeping the frequency, an impedance spectrum was acquired.
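For illustration, the spectra predicted by Eqs. 3 through 5 can be evaluated numerically; in the minimal Python sketch below, the R and C values are arbitrary assumptions chosen only to contrast a higher-resistance, lipid-like case against a lower-resistance, healthy-wall case:

import numpy as np

def parallel_rc_impedance(freq_hz, R, C):
    # Eq. 3: impedance of a capacitor C in parallel with a resistor R
    w = 2 * np.pi * freq_hz
    return R / (1 + 1j * w * R * C)

freqs = np.logspace(2, np.log10(3e5), 60)   # 100 Hz to 300 kHz sweep

# Illustrative values only; the lipid-like case is modeled with a higher
# resistance than the healthy-wall case (both R and C are assumptions).
for label, R, C in [("healthy (assumed)", 5e3, 10e-9),
                    ("oxLDL-laden (assumed)", 50e3, 10e-9)]:
    Z = parallel_rc_impedance(freqs, R, C)
    mag = np.abs(Z)                          # Eq. 4: |Z|
    phase_deg = np.degrees(np.angle(Z))      # Eq. 5: -arctan(wCR)
    i = np.argmin(np.abs(freqs - 1e3))
    print(f"{label}: |Z| at 1 kHz = {mag[i]:.0f} ohm, phase = {phase_deg[i]:.1f} deg")

Consistent with Eq. 5, the printed phase is negative, approaching zero at low frequency and -90 degrees at high frequency.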
The relatively long electrodes (3 mm) and narrow kerf (0.3 mm) allowed for high current penetration through the plaques. The flexible EIS sensor was fabricated on a polyimide substrate. Copper (12 μm) was deposited on the polyimide (12 μm) via plated-through-hole (PTH) methodology. Subsequently, the copper was selectively removed by chemical etching based on a lithographically defined pattern using dry film photoresist. A subsequent lamination was performed to cover the majority of the copper area with a second layer of polyimide (12 μm), while leaving the sensor area exposed. Finally, Au/Ni (200 nm/20 nm) was immersion-coated on the exposed electrodes. The polyimide substrate is not stretchable, which ensures that the EIS sensor is free from cracking or discontinuities. The leading wires (30 cm long) were copper layers fabricated together with the sensor and covered by the second polyimide layer. The proximal end of the leading wires was connected to a Series G 300 Potentiostat (Gamry Instruments Inc., PA, USA) for EIS measurement. EIS measurements were performed on ex vivo aortas from NZW rabbits in the presence or absence of IVUS guidance. Five control rabbits fed on a normal chow diet (n=5) and 3 age-matched high-fat fed NZW male rabbits (n=3) were analyzed (Anichkov 1955; Anichkov and Volkova 1954; Anitschkow et al. 1983). High-fat animals were placed on a 1.5% cholesterol and 6.0% peanut oil diet (Harlan Laboratory). After 9 weeks, thoracic aorta sections were dissected for the IVUS-guided EIS measurements. The ultrasound transducer rotated in the inner catheter to acquire cross-sectional imaging around the catheter. The ultrasonic A-lines were acquired every 0.65 degrees, and 550 A-lines were acquired in each frame. After digitization, the echo signal was filtered with a pass band between 10 MHz and 100 MHz. After localizing the plaques, the balloon catheter (FIG.2A) was advanced to align with the lesion sites. The balloon was inflated at ˜2 atm (˜200 kPa), bringing the EIS sensors into contact with the lumen or lesions for assessing the electrical impedance. Alternating voltage (50 mV amplitude) was applied to the 2-point electrodes, and the current was measured to determine the electrical impedance over a frequency sweep from 100 Hz to 300 kHz. A similar approach was performed without IVUS guidance. The individual measurements were repeated 5 times. The IVUS-guided images and EIS measurements were validated by histology. The aortic segments were fixed in 4% paraformaldehyde, embedded in paraffin and serially sectioned at 5 μm for histological analyses. Lipids were identified by Hematoxylin and Eosin (H&E) staining and oxLDL-laden macrophages by F4/80 staining (monoclonal rat anti-mouse antibody, Invitrogen). Statistical analysis assessed the significance of the EIS results. Averages and standard deviations characterized the impedance characteristics and the measurement variability. A distinct differentiation between oxLDL-laden and lesion-free aortas indicated a preferable impedance characterization. Student's t-test and analysis of variance with multiple comparisons adjustment were performed. A p-value<0.05 was considered statistically significant. The results are now described. Integrated Sensor A prototype of the integrated sensor consisted of an EIS sensor and an IVUS transducer (FIG.10A). The two sensors, 2-point electrodes and ultrasonic transducer, were fabricated individually, followed by integration for catheter-based deployment to assess oxLDL-laden plaques.
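As a rough illustration of the Student's t-test step described above, the sketch below compares fabricated impedance magnitudes for control and oxLDL-laden groups; the numbers are invented for the example and do not reproduce the study's data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical impedance magnitudes (kOhm) at a single frequency; these
# values are fabricated for illustration only.
control = rng.normal(loc=14.0, scale=2.0, size=5)   # chow-diet aortas
lesion = rng.normal(loc=55.0, scale=8.0, size=5)    # oxLDL-laden aortas

t_stat, p_value = stats.ttest_ind(control, lesion)  # Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")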
The two-point electrodes for the EIS sensor (Packard R R S et al., Annals of biomedical engineering 44.9 (2016): 2695-2706) were fabricated on polyimide by depositing Au/Ni electrodes, and the leading wires were embedded in a second layer of polyimide, with an opening at the distal end allowing for EIS sensing (FIG.10B). The IVUS transducer (Li X et al., IEEE transactions on ultrasonics, ferroelectrics, and frequency control 61.7 (2014): 1171-1178) was mounted on a rotational shaft to generate radial cross-sectional images of the aortas. Interference between the two elements was minimized by separating them spatially. The IVUS transducer was positioned in the acoustic image window distal to the balloon and EIS sensor. Intravascular ultrasound imaging visualized the topography of the aorta and identified the endoluminal atherosclerotic lesions. The plaques were identified by their distinct scattering characteristics (inset ofFIG.11AandFIG.11B). In the IVUS-guided measurement, the EIS sensor was steered to the endoluminal sites to assess the eccentric plaques present in the thoracic aorta. In contrast, random EIS measurements were performed without IVUS guidance to compare variability and reproducibility. Electrochemical Impedance Spectroscopy In both the IVUS-guided and non-guided EIS measurements, the mean values of the impedance magnitude (kΩ) in oxLDL-laden plaque were elevated as compared to the control (FIG.11C,FIG.11D). The non-IVUS-guided EIS harbored a wide range of standard deviations, with the lower limits overlapping with those of the control (FIG.11C), likely from misalignment with the plaque. In the case of random measurement, EIS at Sites 2 and 3 aligned with the lesion, resulting in distinct impedance magnitudes, whereas the EIS measurement at Site 1 (lesion free) was indistinct from the control. In the case of IVUS-guided measurement, the EIS measurements were aligned with the lesions, resulting in reduced standard deviations and increased frequency-dependent separation from those of the control across the entire frequency range (100 Hz-300 kHz) (FIG.11F). In addition to the impedance magnitude, the phase (ϕ) spectra provided an alternative means of detecting the oxLDL-laden lesions (FIG.11E,FIG.11F). As supported by Eq. (5) and the corresponding analysis, the phases of all the measurements overlapped at high frequencies (>20 kHz). The optimal phase separation between lesion sites and control occurred at <15 kHz. In the random measurements, the phases of lesion sites overlapped with the control (FIG.11E), while in the guided measurement the lesion sites were distinct at <15 kHz (FIG.11F). Statistical analysis compared the EIS measurements with and without IVUS guidance (FIG.11AthroughFIG.11F). In the case of IVUS guidance, the impedance magnitude (kΩ) at Sites 2 & 3 was distinct from the control, whereas the measurement at Site 1 was insignificant. Without IVUS guidance, EIS measurements were statistically insignificant when all results were considered (FIG.12A). IVUS-guided EIS measurements demonstrated statistically significant differences with the added advantage of smaller data spread in a given condition, leading to smaller standard deviations (p<0.0001) (FIG.12A). Phase delay, an alternative measure derived from EIS, demonstrated similar trends (FIG.12B). Statistically significant differences were observed at <20 kHz with IVUS guidance, whereas insignificance was exhibited throughout the frequency range without IVUS guidance.
The novelty of the current work resides in the integrated sensor design to enable IVUS-guided EIS assessment of metabolically unstable plaque. The double-layer catheter allowed the flexible 2-point electrodes to be affixed to the balloon anchored to the outer catheter while the rotating ultrasonic transducer was deployed in the inner catheter. The imaging window distal to the balloon provided matched acoustic impedance, enabling the high-frequency transducer (45 MHz) to visualize the vessel lumen and the 2-point electrodes to align with the plaques. Upon balloon inflation, oxLDL-laden plaques exhibited statistically distinct EIS measurements. Thus, the present study introduces the first IVUS-guided EIS sensor to detect intraplaque oxLDL with reduced standard deviation and increased statistical significance in both impedance and phase delay. The integrated sensor strategy paves the way to diagnose vulnerable plaques to predict acute coronary events or stroke. The non-guided EIS measurements require repeated trials at multiple sites, with deflation and re-inflation of the balloon, prolonging procedure time and fluoroscopic X-ray exposure, whereas IVUS imaging prior to EIS measurement visualizes the anatomy to enable precise alignment with lesions for EIS measurement. Statistically significant results were obtained by the IVUS-guided EIS measurement (p<0.0001 for magnitude and p<0.005 for phase within 15 kHz), whereas measurements without the guidance reduced the significance (p<0.07 for magnitude and p<0.4 within 15 kHz). As a result, reliable detection of intraplaque oxLDL was obtained from a single measurement, reducing patient exposure to radiation and operation time. The advent of near-infrared fluorescence (NIRF) provides cysteine protease activity as an indicator of inflammation (Weissleder R et al., Nature biotechnology 17.4 (1999): 375-378), and the use of the glucose analogue [18F]-fluorodeoxyglucose (18FDG) reveals metabolic activity by Positron Emission Tomography (PET) (Rudd J H F et al., Circulation 105.23 (2002): 2708-2711). However, injection of contrast agents is required for NIRF imaging and of radioactive isotopes for PET imaging. Using concentric bipolar microelectrodes for EIS measurements, significant frequency-dependent increases were previously demonstrated in EIS magnitude among fatty streaks (Stary Type II lesions), fibrous cap oxLDL-rich (Type III or IV), oxLDL-free (Type V), and calcified lesions (Type VII) (Yu F et al., Biosensors and Bioelectronics 30.1 (2011): 165-173). To enhance the specificity, the present study established dual sensing modalities, integrating ultrasound (IVUS) and electrochemical impedance (EIS) for early detection of the mechanically and metabolically unstable lesions (FIG.13). The integrated sensing modalities allow initial identification and visualization by IVUS, then electrochemical characterization by EIS. In addition to the electrochemical (EIS) strategy, alternative techniques have been developed to assess the thin-cap fibroatheroma (Kolodgie F D et al., Current opinion in cardiology 16.5 (2001): 285-292; Virmani R et al., Progress in cardiovascular diseases 44.5 (2002): 349-356; Virmani R et al., Journal of the American College of Cardiology 47.8 Supplement (2006): C13-C18) and intraplaque angiogenesis (Doyle B et al., Journal of the American College of Cardiology 49.21 (2007): 2073-2080; Khurana R et al., Circulation 112.12 (2005): 1813-1824; Puri R et al., Nature Reviews Cardiology 8.3 (2011): 131-139) for plaque vulnerability.
Integrated IVUS and optical coherence tomography (OCT) catheters were developed to simultaneously acquire high-resolution images of the thin fibrous cap and the underlying necrotic core (Li X et al., IEEE Journal of Selected Topics in Quantum Electronics 20.2 (2014): 196-203). Whereas the incremental imaging data helps determine the characteristics of plaque, the OCT technique is limited by the need to avoid light scattering by red blood cells. For this reason, saline solution flushing is essential for image acquisition (Li X et al., IEEE Journal of Selected Topics in Quantum Electronics 20.2 (2014): 196-203). In the setting of acute coronary events, transient lack of blood perfusion is clinically prohibitive. Photoacoustics is an emerging approach based on the high photo-absorption and thermal expansion of blood, and has been applied to image angiogenesis (Wang L V et al., Nature photonics 3.9 (2009): 503-509; Wang L V et al., Science 335.6075 (2012): 1458-1462; Xu M et al., Review of scientific instruments 77.4 (2006): 041101). Intravascular photoacoustics enables visualization of the vasa vasorum and intraplaque micro-vessels (Jansen K et al., Optics letters 36.5 (2011): 597-599; Wang B et al., Optics express 18.5 (2010): 4889-4897; Wang B et al., IEEE Journal of selected topics in Quantum Electronics 16.3 (2010): 588-599). However, the heat generated from thermal expansion poses an adverse effect on the vulnerable plaque (Stefanadis C et al., Circulation 99.15 (1999): 1965-1971). For intravascular photoacoustic imaging, the blood in the arteries absorbs light energy to the same or a greater extent than that of the vasa vasorum, resulting in obstruction of the intravascular photoacoustic imaging. Similar to OCT, saline flushing to remove red blood cells is essential (Wang B et al., IEEE Journal of selected topics in Quantum Electronics 16.3 (2010): 588-599). Acoustic angiography (Gessner R et al., IEEE transactions on ultrasonics, ferroelectrics, and frequency control 57.8 (2010): 1772-1781; Gessner R C et al., Journal of Biomedical Imaging 2013 (2013): 14) took advantage of the high nonlinearity of microbubble contrast agents (Lindner J R et al., Nature Reviews Drug Discovery 3.6 (2004): 527-533; Lindner J R et al., Journal of the American Society of Echocardiography 15.5 (2002): 396-403) that were carried to micro-vessels by the circulation, exciting them at a fundamental frequency and detecting their high-frequency superharmonics. Dual frequency intravascular ultrasound transducers (Ma J et al., IEEE transactions on ultrasonics, ferroelectrics, and frequency control 61.5 (2014): 870-880; Ma J et al., Physics in medicine and biology 60.9 (2015): 3441) were designed (Ma J et al., Applied physics letters 106.11 (2015): 111903) to visualize the vasa vasorum and intraplaque vasculature. The acoustic angiography techniques benefited from larger penetration depth and freedom from heating. The primary limitation of dual frequency harmonic imaging is its dependence on microbubble injection. The presence of microbubbles also engenders high scattering of the harmonic signals in the vasa vasorum. Occlusion of blood flow followed by saline flushing is also indicated. Besides super-harmonic imaging, the dual frequency IVUS enabled imaging of the thin-cap fibroatheroma with the high-frequency transducer (30 MHz) and of the intraplaque oxidized lipid (oxLDL) with the low-frequency transducer (6.5 MHz) (Ma J et al., IEEE transactions on ultrasonics, ferroelectrics, and frequency control 61.5 (2014): 870-880).
Unlike the aforementioned techniques, the IVUS-guided EIS assesses the biochemical properties of plaques without the need to perform occlusion flushing. Furthermore, the aforementioned techniques focus on topological information (fibrous cap or vasculature), while the IVUS-guided EIS combines both anatomy and metabolic properties (oxLDL). Example 3: 3-D Electrochemical Impedance Spectroscopy Mapping of Arteries to Detect Metabolically Active but Angiographically Invisible Atherosclerotic Lesions Electrochemical Impedance Spectroscopy (EIS) is the macroscopic representation of the electric field and current density distribution within the specimen being tested. EIS characterizes the dielectric properties of blood vessels and lipid-rich plaques (Yu F, et al., Biosensors and Bioelectronics 30, 165-173 (2011); Yu F, et al., Annals of Biomedical Engineering 39, 287-296 (2011); Yu F, et al., Biosensors and Bioelectronics 43, 237-244 (2013)). Applying quasi-electrostatic limits to Maxwell's equations, the field distribution is described as follows (Larsson J, American Journal of Physics 75, 230-239 (2007)):

\nabla \cdot (\sigma^* \nabla \varphi) = 0   (Eq. 6)

where σ* = σ_T + jωε_T; σ_T and ε_T denote the conductivity and permittivity of the sample, respectively, ω the angular frequency, j = √(−1), and φ the voltage distribution. Current density, J = σ*E (in vector form), is calculated from the distribution of the electric field, E. Thus, the electrical impedance of the sample, Z, according to Maxwell's equations, is expressed as follows:

Z = \frac{\Delta\varphi}{\int_S \vec{J} \cdot d\vec{S}}   (Eq. 7)

where S denotes the electrode-tissue interface area, and Δφ the voltage difference across the two electrodes of the EIS sensor. Intravascular ultrasound (IVUS)-guided EIS sensors detect atherosclerotic lesions associated with oxidative stress in fat-fed New Zealand White (NZW) rabbits (Ma J, et al., Sensors and Actuators B: Chemical 235, 154-161 (2016)). Specifically, vessel walls harboring oxidized low density lipoprotein cholesterol (oxLDL) exhibit distinct EIS values (Yu F, et al., Annals of Biomedical Engineering 39, 287-296 (2011)). However, atherosclerotic lesions are often eccentric and multiple. To detect these lesions, a novel 6-point electrode configuration for comprehensive 3-D endoluminal interrogation of angiographically invisible atherosclerosis was developed. OxLDL (Sevanian A, et al., Arteriosclerosis, Thrombosis, and Vascular Biology 16, 784-793 (1996)) in atherosclerotic lesions displays distinct frequency-dependent electrical and dielectric properties (Suselbeck T, et al., Basic Res Cardiol 100, 446-452 (2005); Streitner I, et al., Atherosclerosis 206, 464-468 (2009)). Concentric bipolar electrodes were developed to assess elevated EIS signals in oxLDL-rich lesions from human coronary, carotid, and femoral arteries (Yu F, et al., Biosensors and Bioelectronics 30, 165-173 (2011)). By deploying the flexible and stretchable bipolar electrodes to the aorta of NZW rabbits, a significant increase in impedance magnitude was demonstrated in oxLDL-rich plaques (Yu F, et al., Annals of Biomedical Engineering 39, 287-296 (2011)). A 2-point micro-electrode configuration was further established to allow for deep intraplaque penetration via alternating current (AC) (Packard R R S, et al., Annals of Biomedical Engineering 44, 2695-2706 (2016)).
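For intuition about the field equation (Eq. 6), the toy sketch below relaxes its purely conductive form, div(σ∇φ) = 0, on a small 2-D grid containing an assumed low-conductivity inclusion; the grid size, conductivity values, electrode placement, and periodic boundary treatment are all simplifying assumptions and are not the study's method:

import numpy as np

n = 40
sigma = np.full((n, n), 0.5)          # background tissue conductivity (S/m), assumed
sigma[15:25, 15:25] = 0.05            # low-conductivity lipid-like inclusion, assumed

phi = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
phi[0, 5:15], fixed[0, 5:15] = 1.0, True     # "working" electrode held at +1 V
phi[0, 25:35], fixed[0, 25:35] = -1.0, True  # "counter" electrode held at -1 V

for _ in range(5000):                 # Jacobi iterations, conductivity-weighted
    # edge weights between each cell and its four neighbors (periodic via roll)
    w_up = sigma + np.roll(sigma, 1, axis=0)
    w_dn = sigma + np.roll(sigma, -1, axis=0)
    w_lf = sigma + np.roll(sigma, 1, axis=1)
    w_rt = sigma + np.roll(sigma, -1, axis=1)
    num = (w_up * np.roll(phi, 1, axis=0) + w_dn * np.roll(phi, -1, axis=0)
           + w_lf * np.roll(phi, 1, axis=1) + w_rt * np.roll(phi, -1, axis=1))
    phi_new = num / (w_up + w_dn + w_lf + w_rt)
    phi_new[fixed] = phi[fixed]       # hold the electrode potentials fixed
    phi = phi_new

print(f"potential range: {phi.min():.3f} V to {phi.max():.3f} V")

The resulting potential field could then be differentiated to obtain E and J for Eq. 7; that step is omitted here for brevity.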
The frequency sweep from 10 to 300 kHz generated an increase in capacitance, providing distinct changes in impedance (Ω) in relation to varying degrees of aortic intraplaque lipid burden (Packard R R S, et al., Annals of Biomedical Engineering 44, 2695-2706 (2016)). To advance intravascular EIS interrogation for 3-D endoluminal plaque detection, the following study investigates a 6-point electrode configuration that enables 15 alternating EIS permutations of 2-point electrode arrays for comprehensive impedance mapping and detection of lipid-rich atherosclerotic lesions. The individual electrode configurations are identical in each row, in which three electrodes are circumferentially and equidistantly positioned. In addition to optimal contact with the endoluminal surface, this new 6-point configuration enables 15 different pairs of electrodes to provide 3-D interrogation of the endoluminal area. The capability of 3-D EIS sensors to detect angiographically invisible early atherosclerotic lesions in different aortic segments from NZW rabbits with high-fat diet-induced hypercholesterolemia is demonstrated. The 3-D EIS measurements are in close agreement with the equivalent circuit model for aortas consisting of vessel tissue, atherosclerotic lesion, blood, and perivascular fat. Statistical analysis corroborates the 3-D EIS permutations for early atherosclerosis detection, with clinical implications to prevent acute coronary syndromes or strokes. The materials and methods are now described. Sensor Design and Fabrication The newly designed 6-point EIS sensor featured six individual electrodes that were circumferentially mounted on an inflatable balloon (FIG.5A). The individual electrodes were identical in dimensions (600 μm×300 μm) and connected to an impedance analyzer (Gamry Series G 300 potentiostat, PA) that was installed in a desktop computer. Specifically, there were three electrodes embedded in each row, and the distance between the two rows was 2.4 mm (FIG.5B). Within each layer, the 3 electrodes were equidistantly placed around the circumference of the balloon at 120° separation from each other (FIG.5C). This 6-point configuration optimized the contact with the endoluminal surface for EIS measurements. Furthermore, the 6 electrodes allowed for 15 different combinations of 2-point electrodes for 3-dimensional endoluminal interrogation. To micro-fabricate the EIS sensors, flexible polyimide electrodes were first acquired (FPCexpress.com, Ontario, CA) with a nominal length of up to 1 meter to bypass the need for interconnects that are otherwise required between electrode pads and wiring (Packard R R S, et al., Annals of Biomedical Engineering 44, 2695-2706 (2016)). These flexible electrodes were pre-constructed according to the pattern shown inFIG.5AthroughFIG.5Dusing the following process: a copper layer (12 μm) was deposited onto the polyimide substrate (12 μm) through electroplating, followed by selective chemical etching of the lithographically-defined patterns via the dry film photoresist (FIG.6A,FIG.6B). A second layer of polyimide (12 μm) was added to cover the majority of the copper area using a lamination process. The copper area that eventually became the electrode/contact pad was left uncovered. Finally, a layer of Au/Ni (50 nm/2000 nm) was added through an electroless-nickel-immersion-gold (ENIG) process.
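The count of 15 permutations follows from choosing 2 of the 6 electrodes; the short sketch below, with an assumed (row, position) indexing, verifies both the count and the vertical/circumferential/diagonal grouping used for the 3-D mapping described later:

from itertools import combinations

# Electrodes indexed by (row, circumferential position): two rows of three.
electrodes = [(row, pos) for row in range(2) for pos in range(3)]

pairs = list(combinations(electrodes, 2))       # C(6, 2) = 15 unique pairs
groups = {"vertical": 0, "circumferential": 0, "diagonal": 0}
for (r1, p1), (r2, p2) in pairs:
    if r1 != r2 and p1 == p2:
        groups["vertical"] += 1          # same position, different row
    elif r1 == r2:
        groups["circumferential"] += 1   # same row
    else:
        groups["diagonal"] += 1          # different row and position
print(len(pairs), groups)   # 15 {'vertical': 3, 'circumferential': 6, 'diagonal': 6}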
To develop the catheter-based device for intravascular deployment, the inflatable balloon (9 mm in length, 1 mm diameter under deflation and ˜3 mm diameter under inflation, Vention Medical, NH) was mounted on the terminal end of the catheter (40 cm in length) (Vention Medical, NH). Miniature holes were designed to enable balloon inflation. A pair of tantalum foils (1×1 mm, Advanced Research Materials, Oxford, UK) was incorporated onto both ends of the balloon as radiopaque markers, and was secured by a short segment of heat-shrink tube (in green) (Vention Medical, NH). The front end of the flexible electrodes was mounted onto the balloon with silicone adhesive (Henkel, CT), and the rest of the electrodes and the catheter were encapsulated with insulating heat-shrink tubing (40 cm long, amber) (Vention Medical, NH). Electrical connection to the impedance analyzer (Gamry Series G 300 potentiostat) was made via wires soldered to the exposed contact pads at the terminal end of the flexible electrodes. The prototype of the 6-point sensor comprised the radio-opaque markers, the inflatable balloon, and the electrodes packaged around the catheter (FIG.5AthroughFIG.5D). A mechanical pump (Atrion Medical Products Inc., Arab, AL) was connected to the end of the catheter to induce balloon inflation. Equivalent Electrical Circuits Equivalent circuit models were developed to analyze the electrochemical impedance of atherosclerotic lesions at 3 distinct segments of the aorta (FIG.14AthroughFIG.14D) in the setting of balloon inflation and deflation (FIG.14E,FIG.14F). A cross-sectional perspective of the circuit configuration provides the operational principle underlying the electrode-tissue interface for the endoluminal EIS interrogation. Four main types of tissue contribute to the aggregated impedance values; namely, blood, aortic wall, atherosclerotic plaque and perivascular fat circumscribing the vessel. In this context, a simple circuit block was first applied to generalize the impedimetric behavior of the individual tissues, consisting of a parallel circuit with two paths: 1) a resistive element (R1) in series with a capacitive element (C) to model the cells; 2) a pure resistive element (R2) to model the extracellular materials (red frame inFIG.14G) (Aroom K R, et al., Journal of Surgical Research 153, 23-30 (2009)). The electrode-tissue interface was modeled using the constant phase element (CPE) to take into account the non-linear double layer capacitance behavior. The impedance of the interface can be expressed as:

Z_{CPE} = \frac{1}{Y (j\omega)^a}   (Eq. 8)

where Y denotes the nominal capacitance value, and a is a constant between 0 and 1, representing the non-ideal interface effects. When the balloon was deflated, blood was included as the primary component (Z_blood) in the circuit model, as the other tissues were shielded by the presence of blood in contact with the electrodes (FIG.14G). When the balloon was inflated, the endoluminal surface was in contact with the electrodes. As a result, all of the tissue types contributed to the path of the current flow, accounting for the parallel circuit configuration for the blood, plaque, vessel wall (aorta), and perivascular fat (Z_blood // Z_plaque // Z_aorta // Z_peri) (FIG.14H). Animal Model Analyses were conducted in n=4 control rabbits fed a chow diet and n=5 age-matched high-fat fed NZW male rabbits (Cao H, et al., Biosensors and Bioelectronics 54, 610-616 (2014)).
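A minimal numerical sketch of Eq. 8 together with one tissue block of the equivalent circuit is given below; the series combination of the interface CPE with the tissue block is an assumed topology for the example, and all component values are illustrative assumptions rather than fitted parameters:

import numpy as np

def z_cpe(freq_hz, Y, a):
    # Eq. 8: constant phase element, Z_CPE = 1 / (Y * (j*w)**a)
    w = 2 * np.pi * freq_hz
    return 1.0 / (Y * (1j * w) ** a)

def z_tissue_block(freq_hz, R1, R2, C):
    # One tissue block: (R1 in series with C) in parallel with R2
    w = 2 * np.pi * freq_hz
    z_cells = R1 + 1.0 / (1j * w * C)        # intracellular path
    return (z_cells * R2) / (z_cells + R2)   # parallel with extracellular R2

f = np.logspace(0, np.log10(3e5), 40)        # 1 Hz to 300 kHz
# All component values below are illustrative assumptions.
Z = z_cpe(f, Y=320e-9, a=0.69) + z_tissue_block(f, R1=2e3, R2=20e3, C=50e-9)
i = np.argmin(np.abs(f - 1e3))
print(f"|Z| at 1 kHz ~ {abs(Z[i]):.0f} ohm, phase = {np.degrees(np.angle(Z[i])):.1f} deg")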
High-fat animals were placed on a 1.5% cholesterol and 6.0% peanut oil diet (Harlan Laboratory) for 8 weeks prior to harvesting. Animals were anesthetized with isoflurane gas, endotracheally intubated and placed on a mechanical ventilator. A femoral cut-down was performed and a 4-French arterial sheath placed in the common femoral artery. Under fluoroscopic guidance (Siemens Artis Zeego with robotic arm) and iodinated contrast dye injection, the EIS sensor was advanced for in vivo interrogation of the distal abdominal aorta (site no. 1), followed by the aorta at the level of the renal artery bifurcation (site no. 2), and finally at the level of the thoracic aorta (site no. 3) (FIG.14AthroughFIG.14D). Following animal harvesting, aortic samples from the 3 sites were sent for histological analyses. In the adopted scheme of high-fat feeding, these 3 anatomic sites corresponded to areas with trace atherosclerosis or fatty streaks (abdominal aorta), and mild plaque (thoracic aorta) by histology. EIS Measurement EIS measurement was conducted along the aorta, namely, the abdominal aorta proximal to the aortic bifurcation, the abdominal aorta at the level of the renal artery bifurcation, and the thoracic aorta. The device with radiopaque markers was placed at these pre-selected segments of the aorta via invasive angiography that was performed during fluoroscopy to mark the exact position of the endoluminal EIS measurements. During balloon inflation, a constant pressure at ˜10 psi (pounds per square inch) was applied through a mechanical pump to establish contact with the endoluminal surface. EIS measurements were conducted using the Gamry system (Gamry Series G 300 potentiostat, PA, USA). At each interrogation site, two replicates of each of the fifteen permutations were performed. AC signals with peak-to-peak voltages of 50 mV and frequencies ranging from 1-300 kHz were delivered at each site. The impedance magnitudes were acquired at 10 data points per frequency decade. Histology After euthanasia, rabbits were perfused through the left ventricle with normal saline followed by 4% paraformaldehyde. Following fixation, aortic segments, which had been identified in vivo by the radiopaque markers and interrogated during invasive angiography, were marked based on anatomic landmarks, excised, and sent to the CVPath Institute (Gaithersburg, MD) for further processing and staining. Following cryosectioning, samples were stained with hematoxylin and eosin and oil-Red-O for neutral lipids. Atherosclerotic areas identified by oil-Red-O were quantified using ImageJ software (National Institutes of Health, Bethesda, MD). Statistical Analyses To test for differences in impedance values, the Brown-Forsythe test was used to determine significance across groups, and Dunnett's test was used for multiple comparisons and correction of multiple testing. Impedance values were further compared using the Mann-Whitney test for differences in medians, and the Kolmogorov-Smirnov test for differences in global value distributions. GraphPad version 6 was used to perform the statistical analyses. A P-value<0.05 was considered significant. The results are now described. Intravascular Deployment for EIS Permutations Impedimetric interrogation was demonstrated in the thoracic aorta, the abdominal aorta at the level of the renal artery bifurcation, and the abdominal aorta proximal to the aortic bifurcation (FIG.14AthroughFIG.14D).
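As a concrete illustration of the sweep grid implied by the protocol above (10 data points per frequency decade, assuming "1-300 kHz" denotes 1 kHz to 300 kHz), the measurement frequencies can be generated as follows:

import numpy as np

f_lo, f_hi, per_decade = 1e3, 300e3, 10
n_points = int(np.floor(np.log10(f_hi / f_lo) * per_decade)) + 1   # ~2.48 decades -> 25 points
freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_points)
print(n_points, f"{freqs[0]:.0f} Hz ... {freqs[-1]:.0f} Hz")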
Prior to the EIS measurements, angiographic images were obtained to localize the EIS sensors as demarcated by the radiopaque tantalum pairs. Invasive angiography was unable to detect early atherosclerotic lesions (Kashyap V S et al., Journal of Endovascular Therapy 15.1 (2008): 117-125) in high-fat diet fed rabbits (FIG.14BthroughFIG.14D). A representative schematic of the intravascular sensor with deflated (FIG.14E) and inflated (FIG.14F) balloon is shown. The equivalent circuit includes the blood as the primary circuit component upon balloon deflation (FIG.14G,FIG.21A). The equivalent circuit further includes the aorta, plaque, blood, and perivascular fat all as circuit components upon balloon inflation (FIG.14H,FIG.21B). 3-D EIS Mapping The 6-point electrode configuration enabled the demonstration of 15 EIS permutations (3+6+6) consisting of three 2-point electrode pairs vertically linked between the two rows (FIG.15A), six pairs linked circumferentially within rows (FIG.15B), and six pairs cross-linked diagonally between the two rows (FIG.15C). This novel combination of 15 permutations paved the way for flexible 3-dimensional interrogation and impedimetric mapping of the arterial segment over 3 rings, or sub-segments, as illustrated (FIG.15DthroughFIG.15F). For the 3-D mapping, each color represents impedance values using a distinct electrode permutation (FIG.15G), with lighter colors indicating lower impedances and darker colors higher impedances, as illustrated using a logarithmic scale. This user-friendly readout permits rapid clinical impedance interpretation of lesion types, detection of clinically silent atherosclerosis, and physician adoption based on usability. EIS Measurements Representative real-time EIS measurements of impedance and phase are compared among the three segments of the aorta and correlated with histological findings (FIG.16AthroughFIG.16L). Histology staining with oil-Red-O for atherosclerotic lesions supported the changes in EIS in response to the eccentric plaques from the thoracic aorta to the renal bifurcation and distal abdominal aorta (FIG.16AthroughFIG.16C). Early lesions, also known as fatty streaks, stained positive for intra-lesion neutral lipids by oil-Red-O (FIG.16E). The sweep frequency ranged from 1 to 300 kHz, within which the impedance decreased monotonically across conditions. Differences in impedance were most significant at 1 kHz (FIG.16DthroughFIG.16F). In the 3-D impedance map, lighter colors (yellow) were observed in control aortas (FIG.16C), intermediate colors (yellow, light brown) in fatty streaks (FIG.16G), and darker colors (dark brown, black) were present in the atherosclerotic plaques (FIG.16K). Furthermore, increasing delays in phase were identified with lesion progression from control (FIG.16J) to fatty streak (FIG.16K) to mild plaque (FIG.16L). The behavior of the EIS measurements was reflective of the heterogeneous composition of the atherosclerotic lesions. Equivalent Circuit Modeling Equivalent circuits were modeled to predict changes in EIS in response to balloon deflation and inflation (FIG.14AthroughFIG.14H). The model parameters (FIG.14G,FIG.14H) were numerically calculated by using a simplex algorithm available in the Gamry Echem Analyst software. Curve fitting was performed by incorporating the model parameters, namely blood, vessel tissue, plaque, and perivascular fat.
The theoretical curve fittings were in agreement with the experimental EIS measurements of impedance (Ω) (FIG.17A) and phase (°) (FIG.17B) in response to balloon deflation and inflation. Under the deflation state, the electrodes were in contact with the highly conductive blood, and the model parameters for the vessel tissue, lipid-rich plaques, and perivascular fat were electrically shielded in the equivalent circuit model (FIG.14E,FIG.14G,FIG.17A,FIG.17B). Under the inflation state, the electrodes were in contact with the endoluminal surface and/or plaque, plus the additional model parameters from the vessel wall (FIG.14F,FIG.14H,FIG.17A,FIG.17B). The constant phase element can be described by the two variables Y and a. As shown above in Equation 8, Y denotes the empirical admittance value, a is a constant between 0 and 1, ω is the angular frequency, and j = √(−1). Fitting from the two circuit models (FIG.14GandFIG.14H) yields two sets of Y and a: 321 nS·s^a/0.691 (where n = nano, S = Siemens, and s = seconds) and 248 nS·s^a/0.659, for balloon deflation and inflation, respectively, which indicates that there is no significant distinction between the two scenarios in terms of contact impedance. A Bode diagram comparison of the two different constant phase elements is presented inFIG.18to show the behavior of Z_CPE, which is similar to that of a typical capacitance, i.e. being linear in magnitude and constant in phase. The value of a not being equal to 1 verifies the non-ideal capacitive behavior of the electrode-tissue interface. The fitting results of the other resistance and capacitance values are shown inFIG.21A, as well as a physical model to compare the circuit model fitting with reported conductivity and permittivity of different tissues. Data Analysis of 3-D EIS Measurements From each aortic interrogation point, the 15 permutations obtained at 1 kHz are displayed as the medians and the 2 extreme values (minimum-maximum) of the impedance range. These demonstrated a tight spread of values in control aortas, with a median impedance of 13.79 kΩ and a narrow range of global values ranging from a minimum of 5.22 kΩ to a maximum of 19.13 kΩ. Fatty streaks identified in the abdominal aorta at the level of the renal artery bifurcation area or proximal to the aortic bifurcation demonstrated a median (17.75 kΩ) and range of values (minimum 5.79-maximum 77.05 kΩ) intermediate to values in control segments and to values in mild plaque segments from the thoracic aorta (median 58.32 kΩ, minimum 30.86 kΩ, maximum 16.17 kΩ) (FIG.17C). There was a significant difference across groups (P<0.001), which was maintained when doing pair-wise comparisons with correction for multiple testing. Comparing control to fatty streak, there was a significant difference in impedance medians (P=0.016) and value distributions (P=0.024), which was further accentuated when comparing control to mild atherosclerotic plaque (P<0.001 for differences in both medians and value distributions). The impedance differences between fatty streaks and mild atherosclerotic plaques were also highly significant (P<0.001 for differences in both medians and value distributions).
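The equivalent-circuit fitting step described above can be sketched as follows, using scipy's Nelder-Mead simplex, which is analogous to, but not the same implementation as, the simplex routine in the Gamry Echem Analyst software; the "measured" spectrum below is synthesized from assumed parameters, and the CPE-plus-parallel-RC topology and starting values are likewise assumptions:

import numpy as np
from scipy.optimize import minimize

def model_z(f, R, C, Y, a):
    # Assumed topology: interface CPE in series with a parallel RC tissue block
    w = 2 * np.pi * f
    return 1.0 / (Y * (1j * w) ** a) + R / (1 + 1j * w * R * C)

f = np.logspace(3, np.log10(3e5), 25)
# Synthetic "measured" spectrum from known parameters plus 1% noise; this
# stands in for real data, which is not reproduced here.
rng = np.random.default_rng(1)
z_meas = model_z(f, 10e3, 40e-9, 300e-9, 0.7) * (1 + 0.01 * rng.standard_normal(len(f)))

def cost(p):
    R, C, Y = np.exp(p[:3])              # log-parameters keep R, C, Y positive
    a = p[3]
    if not 0.0 < a < 1.0:
        return 1e12                      # crude bound for the unconstrained simplex
    return np.sum(np.abs(model_z(f, R, C, Y, a) - z_meas) ** 2)

x0 = [np.log(5e3), np.log(10e-9), np.log(100e-9), 0.5]
fit = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 20000})
R, C, Y = np.exp(fit.x[:3])
print(f"fitted R = {R:.3g} ohm, C = {C:.3g} F, Y = {Y:.3g} S*s^a, a = {fit.x[3]:.3f}")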
The presented novel 6-point configuration advances disease detection to flexible 3-D interrogation of early atherosclerotic lesions that harbor distinct electrochemical properties otherwise invisible by current imaging modalities such as invasive angiography. Atherosclerosis is a chronic inflammatory disease of the arterial wall resulting from a complex interplay between heritability (Kathiresan S, et al., Cell 148, 1242-1257 (2012)), environmental factors (Jackson S P, Nature Medicine 17, 1423-1436 (2011)), intestinal microbiota (Koeth R A, et al., Nature Medicine 19, 576-585 (2013)), biomechanical forces (Brown A J, et al., Nature Reviews Cardiology 13, 210-220 (2016)), and other causes. Atherosclerosis develops over decades (Libby P, New England Journal of Medicine 368.21 (2013): 2004-2013) with evidence of early lesions, or fatty streaks, present in autopsy series of young adults who died in their early 20s (Enos W F et al., Journal of the American Medical Association 152.12 (1953): 1090-1093). The end result of the advanced stages of the pathobiology of atherosclerosis remains a leading cause of mortality and morbidity worldwide through clinical manifestations such as acute coronary syndromes or strokes (Libby P, et al., Nature 473, 317-325 (2011)). Invasive coronary angiography is considered the gold standard of coronary artery disease determination. Whereas this technique permits visualization of established atherosclerotic plaques, it does not have the necessary spatial resolution to detect early stages of the disease (Dweck M R, et al., Nature Reviews Cardiology 13, 533-548 (2016)). Thus, 3-D EIS mapping provides the dielectric property at the electrode-tissue interface to detect metabolically active atherosclerotic lesions, albeit angiographically invisible, for possible early medical intervention and prevention of acute coronary syndromes or stroke. Percutaneous transluminal coronary angioplasty (PTCA)—or balloon dilation of the coronary arteries—has been routinely performed for 40 years, initially as a standalone procedure (Grüntzig A, Percutaneous Vascular Recanalization. Springer Berlin Heidelberg, 1978. 57-65), and subsequently has been combined with coronary stent deployment (Sigwart U et al., New England Journal of Medicine 316.12 (1987): 701-706). The safety of balloon dilation of arteries to treat atherosclerotic lesions is well documented in humans (Indolfi C et al., Nature Reviews Cardiology (2016)) and experimental models (Iqbal J et al., Annals of biomedical engineering 44.2 (2016): 453-465), with success determined by a post-procedure angiogram (Levine G N et al., Circulation 124.23 (2011): e574-e651). Clinical application in humans of the 6-point EIS sensor would similarly require intravascular advancement of the catheter and verification of balloon inflation under angiographic guidance, with close monitoring of impedance characteristics to differentiate the distinct patterns obtained when the sensor is only in contact with blood (deflated, or not fully deployed) as opposed to the endoluminal vessel wall (inflated, or fully deployed) (FIG.17A). The novel 6-point configuration of the present study advances disease detection to 3-D interrogation of early atherosclerotic lesions that harbor distinct electrochemical properties otherwise invisible by current imaging modalities such as invasive angiography. 
The unique configuration allowed for three stretchable electrodes to be circumferentially and equidistantly positioned in individual rows, thereby generating 15 combinations of 2-point permutations for arterial wall EIS measurements. The elongated flexible polyimide electrodes eliminated the packaging challenge of connecting the individual miniaturized contact pads with electrical wires for the 2-point sensors (Packard R R S, et al., Annals of Biomedical Engineering 44, 2695-2706 (2016)). The addition of active electrodes (from 2 to 6) engendered 15 different permutations to extend the EIS measurements from a focal region to an entire circumferential ring of the aorta. Data fitting results further demonstrate close agreement between the experimental EIS measurements and theoretical equivalent circuit modeling (FIG.17AthroughFIG.17C). Irrespective of the atherosclerotic lesion size and 3-D architecture, the 6-point configuration detected eccentric and small lesions, also known as fatty streaks, as validated by histology staining of atherosclerotic lesions (FIG.16AthroughFIG.16L,FIG.17AthroughFIG.17C). Thus, increased EIS measurements were detected in terms of impedance (Ω) in the non-obstructive lesions that occupied less than 5% of the luminal diameter, but harbored metabolically active lipids (FIG.16AthroughFIG.16L,FIG.17AthroughFIG.17C). It is recognized that tissues store charges, and frequency-dependent electrical impedance (Z) develops in response to applied alternating current (AC). Previously proposed applications of EIS include assessment of cellular viability of human cancer cells (Hondroulis E et al., Theranostics (2014): 919-930) and amyloid β-sheet misfolding (Li H et al., Theranostics 4.7 (2014): 701). When an AC current is applied to the plaque in a vessel, a complex electric impedance (Z) is generated as a function of frequency. Z is defined as the summation of a real number (r) and the reactance (Xc) multiplied by the imaginary unit (i) (Z = r + Xc·i) (Yu F, et al., Biosensors and Bioelectronics 30, 165-173 (2011); Aroom K R, et al., Journal of Surgical Research 153, 23-30 (2009)). Fat-free tissue is known as a good electrical conductor for its high water (approximately 73%) and electrolyte (ions and proteins) content, whereas fatty tissue is anhydrous and a poor conductor. Thus, these electrical properties synergistically render a significantly lower electrical conductivity in lipid-rich plaques (σ* = σ + iωε; σ, ε being the intrinsic conductivity and permittivity of the tissue) than in the rest of the blood and vascular components. Early stage lesions, though small in size and thus difficult to detect with conventional angiography, still exhibit a distinctive impedimetric behavior as opposed to normal arteries. These EIS properties have formed the electrochemical basis to detect early stage lipid-rich plaques. Several distinct electrode configurations have been reported for intravascular impedimetric interrogation. Süselbeck et al. introduced a catheter-based 4-point electrode configuration to address the electrode-tissue contact impedance issue (Suselbeck T, et al., Basic Res Cardiol 100, 446-452 (2005)). However, its relatively large device dimension, required to accommodate 4 electrodes (2 cm in total length), posed a clinical challenge for intravascular deployment (Suselbeck T, et al., Basic Res Cardiol 100, 446-452 (2005)).
The 2-point concentric electrode configuration was previously demonstrated to provide a ˜2000-fold reduction in device dimension (300 μm in diameter) to enable integration with different sensing modalities, including ultrasonic transducers and flow sensors (Yu F, et al., Biosensors and Bioelectronics 43, 237-244 (2013); Ma J, et al., Sensors and Actuators B: Chemical 235, 154-161 (2016)). Furthermore, the concentric configuration addressed the heterogeneous tissue composition, uneven surface topography and non-uniform current distribution of the atherosclerotic lesions. Recently, the 2-point electrode concept was introduced by implementing two identical flexible electrodes (240 μm in diameter) with a large separation (400 μm), thus providing deep current penetration for intraplaque burden detection (Packard R R S, et al., Annals of Biomedical Engineering 44, 2695-2706 (2016)). Although 4-electrode systems can be miniaturized to become more suitable in clinical applications, they still occupy twice the space of 2-electrode systems. This issue further manifests itself when multiple measuring sites are required, as in the case of the 6-point configuration presented in the current study. Twelve electrodes would be needed to implement the 4-electrode equivalent, thereby greatly complicating the possible electrode layout design as well as the electrical connections to the measurement instruments. Regarding the electrode-tissue interface impedance, as shown inFIG.17Athere is a clear shift of the impedance value throughout the frequency spectrum between balloon deflation and inflation. These impedance values are composed of the interface impedance as well as that of the tissues under interrogation; the clear shift indicates that the measured impedance is dominated by the tissue impedance rather than the interface impedance. Hence, the impedance measurement reflects varying responses from the underlying tissues (atherosclerotic plaques, aorta, etc.) and can be utilized in evaluating different tissue compositions. It is worth noting that if the electrodes were to be further miniaturized, thereby increasing the electrode-tissue interface impedance, the tissue impedance might not be the dominating component in the 2-electrode system. Further treatment, e.g. electroplating platinum black onto the electrode to reduce the interface impedance, may be necessary to achieve high measurement specificity. The aforementioned EIS sensing devices focus only on local current detection, which merely interrogates a small region of the entire endovascular segment, whereas the atherosclerotic lesions are often eccentric and multiple. The present study advances EIS sensing by implementing the 6-point configuration to optimize 3-D detection of small, angiographically invisible, atherosclerotic lesions. This unique configuration allowed for six stretchable electrodes to be circumferentially and equidistantly positioned in individual rows around a dilatable balloon (FIG.5AthroughFIG.5D,FIG.6AthroughFIG.6E) and to be deployed in vivo in the NZW rabbit model of atherosclerosis. Upon balloon inflation, all electrodes were brought into contact with the endoluminal surface. The elongated flexible polyimide electrodes eliminated the packaging challenge of connecting the individual miniaturized contact pads with electrical wires for the 2-point sensors (Packard R R S, et al., Annals of Biomedical Engineering 44, 2695-2706 (2016)).
The addition of active electrodes (from 2 to 6) engendered 15 different permutations to extend the EIS measurements from a focal region to an entire circumferential ring of the aorta. Data fitting results using an equivalent circuit model further demonstrate close agreement between the experimental EIS measurements and the theoretical model (FIG.17AthroughFIG.17C). The fitting results are presented inFIG.21A, and a detailed physical modeling was further performed to demonstrate that the findings are in reasonable agreement with reported electrical properties of multiple tissues, thereby validating the simplified circuit model (FIG.19A,FIG.19B,FIG.20A,FIG.20B,FIG.21B,FIG.21C). The local EIS measurements were then reconstructed into 3-D impedimetric mapping (FIG.15AthroughFIG.15G) to significantly enhance the visualization quality and translational applicability of the impedance data. The detailed physical modeling is described as follows. For each tissue type (blood, aorta, plaque, perivascular fat), the total impedance can be written based on the circuit model inFIG.14Gas:

Z = A - B \cdot j   (Eq. 9)

A = \frac{\omega^2 C^2 R_1 R_2 (R_1 + R_2) + R_2}{1 + \omega^2 C^2 (R_1 + R_2)^2}   (Eq. 10)

B = \frac{\omega C R_2^2}{1 + \omega^2 C^2 (R_1 + R_2)^2}   (Eq. 11)

where ω denotes the angular frequency, j = √(−1), and R_1, R_2, and C represent the two resistances and the capacitance value from the circuit model. To demonstrate that the fitting results of all the resistance and capacitance values (shown inFIG.21A) from the different tissues are reasonable, a physical model is presented that uses the intrinsic electrical properties and geometric factors of each tissue to obtain their impedance values, which are then compared with the Z shown above. First, the cross-sectional schematics showing the relative position of electrodes and different tissues under either balloon inflation or deflation are depicted inFIG.19AandFIG.19B. The condition shown inFIG.19Acorresponds to the circuit model presented inFIG.14Gand will be utilized for validating the parameters for blood. The scenario inFIG.19B(modeled asFIG.14H) is considered for the aorta wall, plaque, and perivascular fat, as the dimension of blood is difficult to estimate when the balloon is inflated. For plaque and perivascular fat, a simple impedance model can be considered, as shown inFIG.20A. The impedance of the tissue can be written as (Sun T et al., Langmuir 26.6 (2009): 3821-3828):

Z' = \frac{1}{j\omega \varepsilon^* G}   (Eq. 12)

where ε* represents the complex permittivity:

\varepsilon^* = \varepsilon \varepsilon_0 - j \frac{\sigma}{\omega}   (Eq. 13)

where σ and ε denote the conductivity and relative permittivity of the tissue, ε_0 = 8.85×10^{-12} F/m is the vacuum permittivity, and G = ad/l is the geometric factor. For plaque and perivascular fat, the scenario inFIG.20Ais considered, and based on the histology image inFIG.14EandFIG.14F, the estimate of l = 2πr/3 can be made. Note that the electrode pair shown inFIG.20AandFIG.20Bis separated by ⅓ of the circumference due to the design (seeFIG.5AthroughFIG.5D). For blood and the aorta wall (as a complete circular object), the scenario shown inFIG.20Bis considered, where ⅓ of the tissue is in parallel with the remaining ⅔, therefore yielding an effective l = 4πr/9. From Eq. 12 and Eq. 13, the impedance value of each tissue can be obtained merely based on its intrinsic electrical properties and geometrical variables:

Z' = A' - B' \cdot j   (Eq. 14)

A' = \frac{\sigma l}{a d (\sigma^2 + \omega^2 (\varepsilon \varepsilon_0)^2)}   (Eq. 15)

B' = \frac{\omega \varepsilon \varepsilon_0 l}{a d (\sigma^2 + \omega^2 (\varepsilon \varepsilon_0)^2)}   (Eq. 16)

Z is calculated using the fitting results, and Z' is obtained through the electrical properties of the different tissues and geometric variables estimated from the histology image (as shown inFIG.14AthroughFIG.14H).
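Eqs. 12 through 16 can be evaluated directly from tissue properties; in the hedged sketch below, the electrode footprint reuses the 600 μm × 300 μm dimensions given earlier, while the vessel radius, conductivity, and relative permittivity values are placeholder assumptions rather than the study's fitted or tabulated values:

import numpy as np

EPS0 = 8.85e-12  # vacuum permittivity (F/m), as in Eq. 13

def tissue_impedance(f, sigma, eps_r, a, d, l):
    # Eqs. 12-13: Z' = 1/(j*w*eps_star*G), eps_star = eps_r*eps0 - j*sigma/w,
    # with geometric factor G = a*d/l
    w = 2 * np.pi * f
    eps_star = eps_r * EPS0 - 1j * sigma / w
    return 1.0 / (1j * w * eps_star * (a * d / l))

r = 1.5e-3   # vessel radius (assumed, m); path length l = 2*pi*r/3 per the text
z = tissue_impedance(f=10e3, sigma=0.025, eps_r=1.5e4,   # assumed fat-like values
                     a=600e-6, d=300e-6, l=2 * np.pi * r / 3)
print(f"|Z'| ~ {abs(z):.3g} ohm, phase = {np.degrees(np.angle(z)):.1f} deg")

The real and imaginary parts of the returned value correspond to A' and -B' of Eqs. 15 and 16, respectively.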
The value ω=10 kHz was chosen for all of the calculations.FIG.21Ashows the achieved resistance and capacitance values for each of the tissues obtained from fitting.FIG.21Blists all of the parameters used in the calculation.FIG.21Cshows the comparison between the results obtained from the circuit model (A, B) and the results from the physical model (A', B'). As is evident, the results from the two sets of calculations are all within an order of magnitude of each other. The source of discrepancy could arise from: a) the irregularity of the actual tissue geometry as compared to the simple geometry used in the calculation; and b) the relatively wide range of reported electrical property values of individual tissues; e.g., the conductivity of fat varies around 3-5 fold in the published literature (Awada K A et al., IEEE Transactions on Biomedical Engineering 45.9 (1998): 1135-1145; Gabriel C et al., Physics in medicine and biology 41.11 (1996): 2231; Hasgall P A et al., "IT'IS Database for thermal and electromagnetic parameters of biological tissues. Version 2.6, Jan. 13, 2015." (2015)). In conclusion, the presented circuit model can reasonably describe the actual electrical behavior of the multiple existing tissues. The present study demonstrates that the impedance value distribution obtained from different combinations of the 6-point electrodes at a frequency of 1 kHz exhibits a significantly wider range in aortas of high-fat diet fed rabbits compared to controls on a normal diet (FIG.17C). This finding signifies a major characteristic shift from healthy arteries to ones with subclinical atherosclerosis and therefore can serve as a detection criterion. The wider range of impedance values arises from the fact that the existence of eccentric and multiple atherosclerotic lesions around the endoluminal surface increases the overall impedance variation compared to a homogeneous healthy artery. Previously designed EIS devices (Yu F, et al., Biosensors and Bioelectronics 30, 165-173 (2011); Streitner I, et al., Atherosclerosis 206, 464-468 (2009); Suselbeck T, et al., Basic Res Cardiol 100, 446-452 (2005); Packard R R S, et al., Annals of Biomedical Engineering 44, 2695-2706 (2016)) interrogated only limited segments of the vessel, potentially missing lesions that are not in close proximity to the electrodes. Therefore, the present new 6-point configuration permits comprehensive 3-D mapping and successful detection of eccentric and small atherosclerotic lesions that harbor metabolically active lipids (FIG.16AthroughFIG.16L,FIG.17AthroughFIG.17C), yet remain invisible with conventional angiography (FIG.14BthroughFIG.14D). In summary, a novel 6-point electrode design is introduced for early detection of subclinical atherosclerotic lesions. The unique electrode configuration allows for 3 stretchable electrodes to be circumferentially and equidistantly positioned in individual layers. The 15 EIS permutations provide a paradigm shift, allowing the reconstruction of a 3-D map of impedance spectroscopy. In this context, metabolically active plaques, also known as fatty streaks, have been identified that harbor lipid-laden macrophage foam cells (Yu F, et al., Annals of Biomedical Engineering 39, 287-296 (2011); Yu F, et al., Biosensors and Bioelectronics 43, 237-244 (2013); Packard R R S, et al., Annals of Biomedical Engineering 44, 2695-2706 (2016); Cao H, et al., Biosensors and Bioelectronics 54, 610-616 (2014)) that are otherwise non-detectable by current angiography.
Thus, 3-D EIS mapping holds translational promise for early detection and prevention of acute coronary syndromes or strokes. The disclosures of each and every patent, patent application, and publication cited herein are hereby incorporated herein by reference in their entirety. While this invention has been disclosed with reference to specific embodiments, it is apparent that other embodiments and variations of this invention may be devised by others skilled in the art without departing from the true spirit and scope of the invention. The appended claims are intended to be construed to include all such embodiments and equivalent variations. | 90,375 |
11857319 | DETAILED DESCRIPTION FIG.1illustrates an embodiment of a physiological measurement system100having a monitor101and a sensor assembly101. The physiological measurement system100allows the monitoring of a person, including a patient. In particular, the multiple wavelength sensor assembly101allows the measurement of blood constituents and related parameters, including oxygen saturation, HbCO, HbMet and pulse rate. In an embodiment, the sensor assembly101is configured to plug into a monitor sensor port103. Monitor keys105provide control over operating modes and alarms, to name a few. A display107provides readouts of measured parameters, such as oxygen saturation, pulse rate, HbCO and HbMet, to name a few. FIG.2Aillustrates a multiple wavelength sensor assembly201having a sensor203adapted to attach to a tissue site, a sensor cable205and a monitor connector201. In an embodiment, the sensor203is incorporated into a reusable finger clip adapted to removably attach to, and transmit light through, a fingertip. The sensor cable205and monitor connector201are integral to the sensor203, as shown. In alternative embodiments, the sensor203can be configured separately from the cable205and connector201, in which case such communication can advantageously be wireless, over public or private networks or computing systems or devices, through intermediate medical or other devices, combinations of the same, or the like. FIGS.2B-Cillustrate alternative sensor embodiments, including a sensor211(FIG.2B) that is partially disposable and partially reusable (resposable) and utilizes an adhesive attachment mechanism. Also shown is a sensor213that is disposable and utilizes an adhesive attachment mechanism. In other embodiments, a sensor can be configured to attach to various tissue sites other than a finger, such as a foot or an ear. Also, a sensor can be configured as a reflectance or transflectance device that attaches to a forehead or other tissue surface. The artisan will recognize from the disclosure herein that the sensor can include mechanical structures, adhesive or other tape structures, Velcro wraps or combination structures specialized for the type of patient, type of monitoring, type of monitor, or the like. FIG.3illustrates a block diagram of an exemplary embodiment of a monitoring system300. As shown inFIG.3, the monitoring system300includes a monitor301and a noninvasive sensor302communicating through a cable303. In an embodiment, the sensor302includes a plurality of emitters304irradiating the body tissue306with light, and one or more detectors308capable of detecting the light after attenuation by the tissue306. As shown inFIG.3, the sensor302also includes a temperature sensor307, such as, for example, a thermistor or the like. The sensor302also includes a memory device308such as, for example, an EEPROM, EPROM or the like. The sensor302also includes a plurality of conductors communicating signals to and from its components, including detector composite signal conductors310, temperature sensor conductors312, memory device conductors314, and emitter drive signal conductors316. According to an embodiment, the sensor conductors310,312,314,316communicate their signals to the monitor301through the cable303.
Although disclosed with reference to the cable303, a skilled artisan will recognize from the disclosure herein that the communication to and from the sensor302can advantageously include a wide variety of cables, cable designs, public or private communication networks or computing systems, wired or wireless communications (such as Bluetooth or WiFi, including IEEE 802.11a, b, or g), mobile communications, combinations of the same, or the like. In addition, communication can occur over a single wire or channel or multiple wires or channels. In an embodiment, the temperature sensor307monitors the temperature of the sensor302and its components, such as, for example, the emitters304. For example, in an embodiment, the temperature sensor307includes or communicates with a thermal bulk mass having sufficient thermal conduction to generally approximate a real-time temperature of a substrate of the light emission devices304. The foregoing approximation can advantageously account for the changes in surface temperature of components of the sensor302, which can change by as much as or more than ten degrees Celsius (10° C.) when the sensor302is applied to the body tissue306. In an embodiment, the monitor301can advantageously use the temperature sensor307output to, among other things, ensure patient safety, especially in applications with sensitive tissue. In an embodiment, the monitor301can advantageously use the temperature sensor307output and monitored operating current or voltages to correct for operating conditions of the sensor302as described in U.S. patent application Ser. No. 11/366,209, filed Mar. 1, 2006, entitled "Multiple Wavelength Sensor Substrate," which is hereby incorporated by reference in its entirety. The memory308can include any one or more of a wide variety of memory devices known to an artisan from the disclosure herein, including an EPROM, an EEPROM, a flash memory, a combination of the same, or the like. The memory308can include a read-only device such as a ROM, a read and write device such as a RAM, combinations of the same, or the like. The remainder of the present disclosure will refer to such a combination as simply EPROM for ease of disclosure; however, an artisan will recognize from the disclosure herein that the memory308can include the ROM, the RAM, single wire memories, combinations, or the like. The memory device308can advantageously store some or all of a wide variety of data and information, including, for example, information on the type or operation of the sensor302, type of patient or body tissue306, buyer or manufacturer information, sensor characteristics including the number of wavelengths capable of being emitted, emitter specifications, emitter drive requirements, demodulation data, calculation mode data, calibration data, software such as scripts, executable code, or the like, sensor electronic elements, sensor life data indicating whether some or all sensor components have expired and should be replaced, encryption information, monitor or algorithm upgrade instructions or data, or the like. In an embodiment, the memory device308can also include emitter wavelength correction data.
In an advantageous embodiment, the monitor reads the memory device on the sensor to determine one, some or all of a wide variety of data and information, including, for example, information on the type or operation of the sensor, a type of patient, type or identification of sensor buyer, sensor manufacturer information, sensor characteristics including the number of emitting devices, the number of emission wavelengths, data relating to emission centroids, data relating to a change in emission characteristics based on varying temperature, history of the sensor temperature, current, or voltage, emitter specifications, emitter drive requirements, demodulation data, calculation mode data, the parameters it is intended to measure (e.g., HbCO, HbMet, etc.), calibration data, software such as scripts, executable code, or the like, sensor electronic elements, whether it is a disposable, reusable, or multi-site partially reusable, partially disposable sensor, whether it is an adhesive or non-adhesive sensor, whether it is a reflectance or transmittance sensor, whether it is a finger, hand, foot, forehead, or ear sensor, whether it is a stereo sensor or a two-headed sensor, sensor life data indicating whether some or all sensor components have expired and should be replaced, encryption information, keys, indexes to keys or hash functions, or the like, monitor or algorithm upgrade instructions or data, some or all of parameter equations, information about the patient, age, sex, medications, and other information that can be useful for the accuracy or alarm settings and sensitivities, trend history, alarm history, sensor life, or the like. FIG.3also shows the monitor301comprising one or more processing boards318communicating with one or more host instruments320. According to an embodiment, the board318includes processing circuitry arranged on one or more printed circuit boards capable of installation into the handheld or other monitor301, or capable of being distributed as an OEM component for a wide variety of host instruments320monitoring a wide variety of patient information, or on a separate unit wirelessly communicating with it. As shown inFIG.3, the board318includes a front end signal conditioner322including an input receiving the analog detector composite signal from the detector308, and an input receiving a gain control signal324. The signal conditioner322includes one or more outputs communicating with an analog-to-digital converter326("A/D converter326"). The A/D converter326includes inputs communicating with the output of the front end signal conditioner322and the output of the temperature sensor307. The converter326also includes outputs communicating with a digital signal processor and signal extractor328. The processor328generally communicates with the A/D converter326and outputs the gain control signal324and an emitter driver current control signal330. The processor328also communicates with the memory device308. As shown in phantom, the processor328can use a memory reader, memory writer, or the like to communicate with the memory device308. Moreover,FIG.3also shows that the processor328communicates with the host instrument320to, for example, display the measured and calculated parameters or other data. FIG.3also shows the board318including a digital-to-analog converter332("D/A converter332") receiving the current control signal330from the processor328and supplying control information to emitter driving circuitry334, which in turn drives the plurality of emitters304on the sensor302over conductors316.
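The closed loop of FIG.3, in which the processor328both consumes the digitized detector signal and drives the gain control signal324and the emitter current control signal330, can be illustrated with a small sketch. The automatic gain policy below is an assumption used for illustration only; the patent does not specify how the gain control signal is computed, and all class names and thresholds here are hypothetical.

```python
class SignalChain:
    """Toy model of the FIG. 3 loop: front end conditioner 322, A/D conversion
    326 (elided), DSP 328, and the gain control output 324. All numeric
    thresholds are hypothetical."""

    def __init__(self) -> None:
        self.gain = 1.0              # gain control signal 324
        self.emitter_current = 0.05  # emitter driver current control signal 330

    def tick(self, detector_signal: float, temperature_c: float) -> float:
        # Front end conditioning: apply the current gain (amplification and
        # filtering are collapsed into a single multiply for illustration).
        conditioned = detector_signal * self.gain

        # Assumed automatic gain policy: keep the conditioned sample inside a
        # normalized A/D range of 0.0 to 1.0 with some headroom.
        if conditioned > 0.9:
            self.gain *= 0.95
        elif conditioned < 0.1:
            self.gain *= 1.05

        # The temperature sensor 307 reading is digitized alongside the
        # detector signal and is available for wavelength correction.
        _ = temperature_c
        return conditioned
```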
In an embodiment, the emitter driving circuitry334drives sixteen (16) emitters capable of emitting light at sixteen (16) predefined wavelengths, although the circuitry334can drive any number of emitters. For example, the circuitry334can drive two (2) or more emitters capable of emitting light at two (2) or more wavelengths, or it can drive a matrix of eight (8) or more emitters capable of emitting light at eight (8) or more wavelengths. In addition, one or more emitters could emit light at the same or substantially the same wavelength to provide redundancy. In an embodiment, the host instrument320communicates with the processor328to receive signals indicative of the physiological parameter information calculated by the processor328. The host instrument320preferably includes one or more display devices336capable of providing indicia representative of the calculated physiological parameters of the tissue306at the measurement site. In an embodiment, the host instrument320can advantageously include virtually any housing, including a handheld or otherwise portable monitor capable of displaying one or more of the foregoing measured or calculated parameters. In still additional embodiments, the host instrument320is capable of displaying trending data for one or more of the measured or determined parameters. Moreover, an artisan will recognize from the disclosure herein many display options for the data available from the processor328. In an embodiment, the host instrument320includes audio or visual alarms that alert caregivers that one or more physiological parameters are falling below or above predetermined safe thresholds or are trending in a predetermined direction (good or bad), and can include indications of the confidence a caregiver should have in the displayed data. In a further embodiment, the host instrument320can advantageously include circuitry capable of determining the expiration or overuse of components of the sensor302, including, for example, reusable elements, disposable elements, or combinations of the same. Moreover, a detector could advantageously determine a degree of clarity, cloudiness, transparency, or translucence over an optical component, such as the detector308, to provide an indication of an amount of use of the sensor components and/or an indication of the quality of the photodiode. An artisan will recognize from the disclosure herein that the emitters304and/or the detector308can advantageously be located inside of the monitor, or inside a sensor housing. In such embodiments, fiber optics can transmit emitted light to and from the tissue site. An interface of the fiber optic, as opposed to the detector, can be positioned proximate the tissue. In an embodiment, the physiological monitor accurately monitors HbCO in clinically useful ranges. This monitoring can be achieved with non-fiber optic sensors. In another embodiment, the physiological monitor utilizes a plurality of non-coherent light sources, for example at least four, to measure one or more of the foregoing physiological parameters. Similarly, non-fiber optic sensors can be used. In some cases the monitor receives optical signals from a fiber optic detector. Fiber optic detectors are useful when, for example, monitoring patients receiving MRI or cobalt radiation treatments, or the like. Similarly, light emitters can provide light from the monitor to a tissue site with a fiber optic conduit. Fiber optics are particularly useful when monitoring HbCO and HbMet.
In another embodiment, the emitter is a laser diode placed proximate the tissue. In such cases, fiber optics are not used. Such laser diodes can be utilized with or without temperature compensation to affect wavelength. FIG.4shows one embodiment of the memory device308on the sensor. Memory device308has a read only section401and a read write section403. One of ordinary skill in the art will understand that the read only and read write sections can be on the same physical memory or on separate physical memories. One of ordinary skill in the art will also understand that the read only block401and the read write block403can consist of multiple separate physical memory devices or a single memory device. The read only section401contains read only information, such as, for example, sensor life monitoring functions (SLM)405, near expiration percentage407, update period409, expiration limit411, index of functions413, sensor type, or the like. The read write section403contains numerous read write parameters, such as the number of times the sensor is connected to a monitoring system415, the number of times the sensor has been successfully calibrated417, the total elapsed time connected to a monitoring system419, the total time used to process patient vital parameters421, the cumulative current applied to the LEDs423, the cumulative temperature of the sensor on the patient425, the expiration status427, and the number of times the clip is depressed429. Although described in relation to certain parameters and information, a person of ordinary skill in the art will understand from the disclosure herein that more or fewer read only and read/write parameters can be stored on the memory as is advantageous in determining the useful life of a sensor. FIG.5illustrates a flow chart of one embodiment of the read/write process between the monitor and the sensor. In block501, the monitor obtains sensor parameters from the sensor. For example, in block501, the monitor can access the read only section401of the memory device in order to obtain functions such as SLM functions405, near expiration percentage407, update period409, expiration limit411, and/or the index of functions413. The monitor then uses these functions in block503to track sensor use information. In block503, the monitor tracks sensor use information, such as, for example, the amount of time the sensor is in use, the amount of time the sensor is connected to a finger, the number of times the sensor opens and closes, the average temperature, the average current provided to the sensor, as well as any other stress that can be experienced by the sensor. The monitor then writes this use information on a periodic basis to the sensor at block505. At decision block507, the monitor decides whether or not the sensor life is expired based on the obtained parameters from the sensor and the use information. If the sensor's life has not expired at block507, then the system returns to block503where the monitor continues to track sensor use information. If, however, at decision block507the monitor decides that the sensor life has expired, the monitor will display a sensor life expired indication at block509. Sensor use information can be determined in any number of ways. For example, in an embodiment, in order to determine the life of the emitters, the number of emitter pulses can be counted and an indication stored in memory. In an embodiment, the time period in which power is provided to the sensor is determined and an indication stored in memory.
In an embodiment, the amount of current supplied to the sensor and/or LEDs is monitored and an indication is stored in memory. In an embodiment, the number of times the sensor is powered up or powered down is monitored and an indication is stored in memory. In an embodiment, the number of times the sensor is connected to a monitor is tracked and an indication is stored in memory. In an embodiment, the number of times the sensor is placed on or removed from a patient is monitored and an indication is stored in the memory. The number of times the sensor is placed on or removed from a patient can be monitored by monitoring the number of probe off conditions sensed, or it can be monitored by placing a separate monitoring device on the sensor to determine when the clip is depressed, opened, removed, replaced, attached, etc. In an embodiment, the average operating temperature of the sensor is monitored and an indication stored. This can be done, for example, through the use of the thermal bulk mass described above, or through directly monitoring the temperature of each emitter, or the temperature of other parts of the sensor. In an embodiment, the number of different monitors connected to the sensor is tracked and an indication is stored in memory. In an embodiment, the number of times the sensor is calibrated is monitored, and an indication is stored in the memory. In an embodiment, the number of patients that use a sensor is monitored and an indication is stored. This can be done, for example, by storing sensed or manually entered information about the patient and comparing the information to new information obtained when the sensor is powered up, disconnected and/or reconnected, or at other significant events or periodically to determine if the sensor is connected to the same patient or a new patient. In an embodiment, a user is requested to enter information about the patient that is then stored in memory and used to determine the useful sensor life. In an embodiment, a user is requested to enter information about cleaning and sterilization of the sensor, and an indication is stored in the memory. Although described with respect to measuring certain parameters in certain ways, a person of ordinary skill in the art will understand from the disclosure herein that various electrical or mechanical measurements can be used to determine any useful parameter in measuring the useful life of a sensor. The monitor and/or the sensor determines the sensor life based on sensor use information. In an embodiment, the monitor and/or sensor uses a formula supplied by the sensor memory to measure the sensor life using the above described variables. In an embodiment, the formula is stored as a function or series of functions, such as the SLM functions405. In an embodiment, experimental or empirical data is used to determine the formula used to determine the sensor's life. In an embodiment, damaged and/or used sensors are examined and use information is obtained in order to develop formulas useful in predicting the useful sensor life. In an embodiment, a formula or a set of formulas is stored in the monitor's memory. An indication of the correct formula or set of formulas to be used by the monitor is stored in the sensor. The indication stored on the sensor is read by the monitor so that the monitor knows which formula or series of formulas are to be used in order to determine the useful life of the sensor.
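The split between the read only parameters of block401and the read write use counters of block403, together with the tracking loop of FIG.5, can be summarized in a short sketch. The following Python is illustrative only: the field names, units, and update logic are assumptions chosen to mirror blocks401-429and blocks501-509, not an actual on-sensor data format.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ReadOnlySection:
    """Factory-programmed parameters (FIG. 4, block 401); names are hypothetical."""
    slm_function_index: int       # which sensor life function(s) to apply (405, 413)
    near_expiration_pct: float    # e.g. 0.90 -> warn at 90% of useful life (407)
    update_period_s: float        # how often use data is written back (409)
    expiration_limit: float       # life score at which the sensor is expired (411)

@dataclass
class ReadWriteSection:
    """Use counters the monitor updates over the sensor's life (FIG. 4, block 403)."""
    connect_count: int = 0               # times connected to a monitoring system (415)
    calibration_count: int = 0           # successful calibrations (417)
    elapsed_connected_s: float = 0.0     # total time connected to a monitor (419)
    patient_monitoring_s: float = 0.0    # time processing patient vital parameters (421)
    cumulative_led_current: float = 0.0  # integrated LED drive current (423)
    cumulative_temperature: float = 0.0  # integrated on-patient temperature (425)
    expired: bool = False                # expiration status (427)
    clip_depress_count: int = 0          # times the finger clip was depressed (429)

@dataclass
class SensorMemory:
    ro: ReadOnlySection
    rw: ReadWriteSection = field(default_factory=ReadWriteSection)

def track_use(mem: SensorMemory, dt_s: float, led_current: float,
              temp_c: float, on_patient: bool) -> None:
    """One tick of the FIG. 5 tracking loop (block 503): accumulate use indications.
    A real monitor would write the read-write section back to the sensor every
    `update_period_s` seconds (block 505)."""
    mem.rw.elapsed_connected_s += dt_s
    if on_patient:
        mem.rw.patient_monitoring_s += dt_s
    mem.rw.cumulative_led_current += led_current * dt_s
    mem.rw.cumulative_temperature += temp_c * dt_s
```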
In this way, memory space is saved by storing the function or set of functions in the monitor's memory and only storing an indication of the correct function or functions to be used on the sensor memory. In an embodiment, a weighted function or average of functions is determined based on the sensor/monitor configuration. For example, in an embodiment, the sensor life function is the sum of weighted indications of use; for example, in an embodiment, the following sensor life function is used:

$\sum_{j=1}^{n} f_{i,j} c_j$   (1)

where fi,j refers to a function determined based on operating conditions and cj refers to an indication of sensor use. For example, the correct fi,j can be determined from a table such as:

        Time1    Time2    Temp.    Current    Calibrations    Age      Model    . . .
F1      f1,1     f2,1     f3,1     f4,1       f5,1            f6,1     f7,1     . . .
F2      f1,2     f2,2     f3,2     f4,2       f5,2            f6,2     f7,2     . . .
F3      f1,3     f2,3     f3,3     f4,3       f5,3            f6,3     f7,3     . . .
. . .

where Fi refers to the type of sensor and/or the type and number of parameters being monitored. For each different sensor and for each different parameter, a separate function is used in determining the useful life of a sensor. In an embodiment, the correct Fi for a given sensor can be stored on the sensor memory. In an embodiment, all of the functions fi,j for a sensor are stored in the sensor memory. In an embodiment, the entire table is stored in the sensor memory. cj can be determined from the monitored sensor parameters. For example, a cj can be determined by counting the total time in use, averaging use time under certain operating conditions, squaring use time, etc. Thus a cj can be an indication of use. In an embodiment, the correct cj for the number of times the sensor has been turned on or off can be determined by the following formula:

$e^{c/100}$   (2)

where c is the number of times the sensor has been turned on or off. A computational sketch of this weighted sum is provided after the flowchart descriptions below. In an embodiment, when the useful life of a sensor has been reached, the monitor or sensor sounds an alarm or gives a visual indication that the sensor is at the end of its life. In an embodiment, the monitor will give an indication that the sensor is bad. In an embodiment, the monitor will not output data. In an embodiment, an indication of the end of the sensor life is not given while the sensor is actively measuring vital signs. In an embodiment, the percent of life left in a sensor is indicated. In an embodiment, an estimated remaining use time is indicated. In an embodiment, an indication that the end of the sensor life is approaching is given without specifying a percentage or time period. FIGS.6A and6Billustrate flowcharts of embodiments of sensor life monitoring systems. Referring toFIG.6A, in an embodiment of a sensor life monitoring system, a sensor including a memory device is connected to a monitor. The sensor transmits sensor information to the monitor at block601. The information can include one or more of a function, use parameters, expiration parameters, or any other sensor specific information useful in determining the life expiration of a sensor. At block603, the sensor information is used to determine the correct function to use in determining the sensor expiration date. Any previous use information transmitted is also used during the monitoring process. At block605, the patient monitor monitors the sensor use. Optionally, the monitor periodically writes updated use information to the sensor at block607, or, in an embodiment, the use information is written once at the end of a monitoring cycle.
At block609, the monitor computes sensor life parameters and sensor life expiration. The system then moves onto decision block611where it is determined whether the sensor life has expired. If the sensor life has expired, then the system moves to block613where an indication of the sensor life expiration is given. If the sensor life has not expired at decision block611, then the system returns to block605, where sensor use is monitored. FIG.6Billustrates a flowchart where the sensor life is calculated on the sensor instead of the monitor. At block671, the patient monitor monitors sensor use. The use information is supplied to the sensor at block673, where the use information is recorded. At block675, the sensor calculates the sensor life expiration. The system then moves onto decision block677. At decision block677, if the sensor has expired, the system moves onto block679, where the sensor sends an expiration indication to the monitor and the monitor indicates the sensor expiration at block681. If, however, at decision block677the sensor has not expired, the system returns to block671where the sensor use is monitored. FIG.7illustrates a flowchart of an embodiment of a system for measuring the life of a sensor. In the course of monitoring a patient, information is written to the EPROM. Because the EPROM is finite in the amount of information it can hold, at some point, the EPROM becomes full. When the EPROM becomes full, the sensor will need to be replaced. Thus, an EPROM full signal indicates that the life span of the sensor has expired. The EPROM's memory capacity can be chosen so as to estimate the life of the sensor. In addition, the monitor can be programmed to write to the sensor at set intervals so that after a predictable period of time, the EPROM's memory will be full. Once the EPROM is full, the monitor gives an audio and/or visual indication that the sensor needs to be replaced. Referring toFIG.7, the patient monitoring system determines whether to write to the sensor EPROM at block700. If information is not to be written to the EPROM at block700, then the system continues at block700. If information is to be written to the EPROM at block700, then the system continues to block701, where the system determines if the EPROM is full. If the EPROM is not full, then the system moves to block703, where the system writes information to the EPROM. Once the information has been written, the system returns to block700where it waits until information is to be written to the EPROM. If, at block701, the system determines that the EPROM is full, then the system instead gives an indication to the user that the sensor needs to be replaced. In an embodiment, the sensor can be refurbished and used again. For example, if the memory used is an erasable memory module, then the sensor's memory can be erased during the refurbishment process and the entire sensor can be used again. In an embodiment, each time part or all of the memory is erased, an indicator of the number of times the memory has been erased is stored on the memory device. In this way, an indication of the number of refurbishments of a particular sensor can be kept. If a write-once memory is used, then parts of the sensor can be salvaged for reuse, but a new memory module will replace the used memory module. In an embodiment, once the sensor memory is full, the sensor is discarded. In an embodiment, various parts of used sensors can be salvaged and reused.
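The weighted life computation of equations (1) and (2) and the expiration decisions of blocks611and677can be sketched as follows, reusing the memory layout from the earlier sketch. The weighting functions and threshold below are hypothetical placeholders; equation (2) is read here as e^(c/100), and the patent leaves the actual coefficients to experimental or empirical data.

```python
import math

# Hypothetical weighting functions f_i,j for one sensor model (one row F_i of
# the table above); in the patent, the applicable row, or an index to it, can
# be stored on the sensor memory.
F1_WEIGHTS = {
    "time_connected": lambda cond: 1.0,
    "temperature":    lambda cond: 2.0 if cond.get("high_temp") else 1.0,
    "led_current":    lambda cond: 0.5,
    "on_off_count":   lambda cond: 1.0,
}

def use_indications(rw) -> dict:
    """Map raw counters to indications of use c_j. The exponential form for
    the on/off count follows equation (2), read as e^(c/100)."""
    return {
        "time_connected": rw.elapsed_connected_s / 3600.0,  # hours in use
        "temperature": rw.cumulative_temperature / max(rw.elapsed_connected_s, 1.0),
        "led_current": rw.cumulative_led_current,
        "on_off_count": math.exp(rw.connect_count / 100.0),
    }

def sensor_life_score(weights, rw, cond) -> float:
    """Equation (1): the life score is the sum over j of f_i,j(conditions) * c_j."""
    c = use_indications(rw)
    return sum(weights[j](cond) * c[j] for j in weights)

def sensor_expired(mem, cond) -> bool:
    """Decision block 611 (monitor-side) or 677 (sensor-side): compare the
    life score against the expiration limit from the read-only section."""
    return sensor_life_score(F1_WEIGHTS, mem.rw, cond) >= mem.ro.expiration_limit
```

The FIG.7variant needs none of this arithmetic: the monitor writes to the sensor at a fixed interval and simply treats a full EPROM as the end of the sensor's life.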
In an embodiment, the sensor keeps track of various use information as described above. The sensor memory can then be reviewed to see which parts of the used sensor can be salvaged based on the use information stored in the memory. For example, in an embodiment, an indication of the number of times the clip is depressed is stored in memory. A refurbisher can look at that use information and determine whether the mechanical clip can be salvaged and used on a refurbished sensor. Of course, the same principles apply to other aspects of the sensor, such as, for example, the LEDs, the cables, the detector, the memory, or any other part of the sensor. Although the foregoing invention has been described in terms of certain preferred embodiments, other embodiments will be apparent to those of ordinary skill in the art from the disclosure herein. For example, although disclosed with respect to a pulse oximetry sensor, the ideas disclosed herein can be applied to other sensors, such as an ECG/EKG sensor, a blood pressure sensor, or any other physiological sensor. Additionally, the disclosure is equally applicable to physiological monitor attachments other than a sensor, such as, for example, a cable connecting the sensor to the physiological monitor. Additionally, other combinations, omissions, substitutions and modifications will be apparent to the skilled artisan in view of the disclosure herein. It is contemplated that various aspects and features of the invention described can be practiced separately, combined together, or substituted for one another, and that a variety of combinations and subcombinations of the features and aspects can be made and still fall within the scope of the invention. Furthermore, the systems described above need not include all of the modules and functions described in the preferred embodiments. Accordingly, the present invention is not intended to be limited by the recitation of the preferred embodiments, but is to be defined by reference to the appended claims. | 29,416
11857320 | While embodiments of the disclosure are amenable to various modifications and alternative forms, specifics thereof shown by way of example in the drawings will be described in detail. It should be understood, however, that the intention is not to limit the disclosure to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the subject matter as defined by the claims. DETAILED DESCRIPTION Various blood sequestration devices are described herein for use in accessing the vein of a subject. It is to be appreciated, however, that the example embodiments described herein can alternatively be used to access the vasculature of a subject at locations other than a vein, including but not limited to the artery of a subject. It is additionally to be appreciated that the term "caregiver," "clinician," or "user" refers to any individual that can collect a blood sample for blood culture analysis with any of the example embodiments described herein or alternative combinations thereof. Similarly, the term "patient" or "subject," as used herein, is to be understood to refer to an individual or object in which the blood sequestration device is utilized, whether human, animal, or inanimate. Various descriptions are made herein, for the sake of convenience, with respect to the procedures being performed by a clinician to access the vein of the subject, though the disclosure is not limited in this respect. It is also to be appreciated that the term "distal," as used herein, refers to the direction along a longitudinal axis of the blood sequestration device that is closest to the subject during the collection of a blood sample. Conversely, the term "proximal," as used herein, refers to the direction lying along the longitudinal axis of the blood sequestration device that is further away from the subject during the collection of a blood sample, opposite to the distal direction. Referring toFIG.1A, a plan view of a blood sequestration device100is depicted in accordance with a first embodiment of the disclosure. In one embodiment, the blood sequestration device100can include a body member102having an interior wall104defining a generally "Y" shaped fluid conduit106. The fluid conduit106can include a distal portion108, a first proximal portion110, and a second proximal portion112. The distal portion108can include an inlet port114configured to be fluidly coupled to vasculature of a patient. For example, in one embodiment, the inlet port114can be in fluid communication with a catheter assembly116. The catheter assembly116can include a catheter hub118and a catheter tube120. In one embodiment, the catheter tube120can extend from a tapered distal end to a proximal end, where the catheter tube120can be operably coupled to the catheter hub118. The catheter tube120can define a lumen configured to provide a fluid pathway between a vein of the subject and the catheter hub118. In one embodiment, the catheter tube120can include a barium radiopaque line to ease in the identification of the catheter tube120during radiology procedures. In an alternative embodiment, the catheter tube120can include a metallic radiopaque line, or any other suitable radiopaque material. The catheter hub118can include a catheter hub body having a distal end, a proximal end and an internal wall defining an interior cavity therebetween.
The interior cavity can include a proximal portion extending from an open proximal end, and a distal portion in proximity to the distal end. In one embodiment, the distal end of the catheter hub body is operably coupled to the proximal end of the catheter tube120, such that the lumen of the catheter tube is in fluid communication with the proximal portion of the interior cavity. In some embodiments, the catheter assembly116can further include an extension tube121operably coupling the catheter assembly116to the blood sequestration device100. In other embodiments, the blood sequestration device100can be directly coupled to the catheter assembly116and/or the blood sequestration device100and the catheter assembly116can be formed as a unitary member. Some embodiments of the catheter assembly116can further include a wing assembly122configured to aid a clinician in gripping, maneuvering and/or securing the catheter assembly116to the patient during the collection of a blood sample. The first proximal portion110can define a sequestration chamber124configured to isolate an initial quantity of blood during the collection of a blood sample for blood culture analysis. For example, in one embodiment, blood from the vasculature of the patient under normal pressure can flow into and fill the sequestration chamber124, thereby displacing a quantity of gas initially trapped within the sequestration chamber124. The first proximal portion110can include a vent path126configured to enable the escape of the gas initially trapped within the sequestration chamber124, while inhibiting the escape of blood. For example, in one embodiment, the vent path126can be sealed at one end by a plug128. The plug128can be made out of an air permeable, hydrophilic material that enables the passage of air, but inhibits the passage of liquid. For example, in one embodiment, the plug128can include a plurality of pores shaped and sized to enable the passage of low-pressure gas, but inhibit the passage of low-pressure fluid, such that the pores of the plug128become effectively sealed upon contact with the low-pressure fluid. Air that resides within the sequestration chamber124is therefore pushed through the plug128by the incoming blood, until the blood reaches the plug128or is otherwise stopped. In one embodiment, the plug128can be inserted into the vent path126(as depicted inFIG.1A). For example, in one embodiment, the vent path126can define a Luer connector configured to accept a portion of the plug128. In another embodiment, the vent plug128can be adhered to the body member102, so as to occlude the vent path126(as depicted inFIG.1B). Alternatively, the vent plug128can be shaped and sized to fit within the first proximal portion110of the fluid conduit106at a proximal end of the sequestration chamber124. In yet another embodiment, the plug128can be operably coupled to an extension tube130, which can be operably coupled to the distal end of the first proximal portion (as depicted inFIG.1C), such that an interior volume of the extension tubing defines at least a portion of the sequestration chamber, thereby enabling an increase of the internal capacity of the sequestration chamber124. In one embodiment, the sequestration chamber124has a volume of at least 0.15 mL, although other volumes of the sequestration chamber124are also contemplated. In some embodiments, a longitudinal axis of the first proximal portion110of the fluid conduit106can be axially aligned with a longitudinal axis of the distal portion108of the fluid conduit106.
In this manner, the axial alignment of the first proximal portion110with the distal portion108can promote an initial flow of blood into the sequestration chamber124. In some embodiments, the body member102of the blood sequestration device100can be constructed of a clear or translucent material configured to enable a clinician to view the presence of blood within the sequestration chamber124. In this respect, the clinician can monitor the proper isolation of an initial portion of blood during the collection of a blood sample for blood culture analysis. The second proximal portion112can define a fluid path and an outlet port132configured to be fluidly coupled to a blood collection device134. For example, in one embodiment, the outlet port132can define a Luer connector configured to accept a portion of the blood collection device134. In other embodiments, the outlet port132can define a threaded portion configured to be threadably coupled to a portion of the blood collection device134. In some embodiments, the blood collection device134can be a vial or syringe136fluidly coupled to the outlet port132by an extension tube138. In some embodiments, the flow of blood into the second proximal portion112can be inhibited by the blood collection device134. For example, in one embodiment, the blood collection device134can include a clamp140configured to occlude the extension tube138and/or inhibit the venting of an initial quantity of gas present in the second proximal portion112and portions of the blood collection device134, such that a natural pressure of the trapped gas within the second proximal portion112inhibits a flow of blood into the second proximal portion112. In some embodiments, a longitudinal axis of the second proximal portion112of the fluid conduit106can be at an oblique angle to a longitudinal axis of the distal portion108of the fluid conduit106. In this manner, the oblique angle of the second proximal portion112can enable a smooth flow of blood past an opening into the sequestration chamber124and into the second proximal portion112, once the sequestration chamber124has been filled with the initial quantity of blood for isolation. Referring toFIGS.2A-B, a blood sequestration device200is depicted in accordance with a second embodiment of the disclosure. The blood sequestration device200can include a body member202having an interior wall204defining a fluid conduit206. The fluid conduit206can define an inlet port214, a vented sequestration chamber224, and an outlet port232. The inlet port214can be configured to be fluidly coupled to vasculature of a patient. For example, in one embodiment, the inlet port214can be in fluid communication with a catheter assembly216. In some embodiments, the blood sequestration device200can be operably coupled to the catheter assembly216by an extension tube221. In other embodiments, the blood sequestration device200can be directly coupled to the catheter assembly216and/or the blood sequestration device200and the catheter assembly216can be formed as a unitary member. Some embodiments of the catheter assembly216can further include a wing assembly configured to aid a clinician in gripping, maneuvering, and/or securing the catheter assembly to the patient during the collection of a blood sample. The vented sequestration chamber224can be configured to isolate an initial quantity of blood during the collection of a blood sample.
For example, in one embodiment, blood from the vasculature of the patient under normal pressure can flow into and fill the vented sequestration chamber224, thereby displacing a quantity of gas initially trapped within the sequestration chamber224. The vented sequestration chamber224can include a vent path226sealed by an air permeable, hydrophilic material plug228configured to enable the passage of air, but inhibit the passage of liquid. Accordingly, air that resides within the vented sequestration chamber224can be pushed through the plug228by the incoming blood, until the blood reaches the plug228or is otherwise stopped. The outlet port232can be positioned between the inlet port214and the vented sequestration chamber224. In one embodiment, the outlet port232can be positioned on a side wall of the body member202, substantially orthogonal to a longitudinal axis of the inlet port214and/or the sequestration chamber224. In some embodiments, the interior wall204of the fluid conduit206can define a restricted flow path portion242configured to aid in the isolation of an initial quantity of blood within the vented sequestration chamber224. In some embodiments, the restricted flow path portion242is defined by contours of the interior wall204of the fluid conduit. In other embodiments, the restricted flow path portion242is defined by a separate flow restrictor element244positioned within the fluid conduit206(as depicted inFIGS.2A-3B). In some embodiments, the outlet port232can be initially sealed during the collection of a sample of blood, such that a flow of blood entering the inlet port214naturally follows a path of least resistance into the vented sequestration chamber224, where an initial quantity of blood can be isolated. Accordingly, in one embodiment, sealing of the outlet port232causes a natural pressure of gas trapped in proximity to the outlet port232to inhibit a flow of blood into the outlet port232. In one embodiment, the outlet port232can define a Luer connector configured to accept a portion of a blood collection device234. The blood collection device234can be configured to occlude the outlet port232, so as to inhibit the flow of blood into the outlet port232and encourage the natural flow of an initial quantity of blood into the vented sequestration chamber224. In one embodiment, the outlet port232can include a needle free connector246shiftable from a naturally biased closed position to an open position upon the insertion of a Luer taper (as depicted inFIGS.3A-B). Referring toFIGS.4A-B, a blood sequestration device300is depicted in accordance with a third embodiment of the disclosure. The blood sequestration device300can include a body member302and an elastomeric blood control valve342. The body member302can include an interior wall304defining a fluid conduit306having an inlet port314, a vented sequestration chamber324, and an outlet port332. The inlet port314can be configured to be fluidly coupled to a vein of a patient, so as to enable blood from the vasculature of the patient to flow into the fluid conduit306of the blood sequestration device300. For example, in one embodiment, the inlet port314can be fluidly coupled to a catheter assembly316. The vented sequestration chamber324can be configured to isolate an initial quantity of blood during the collection of a blood sample.
For example, in one embodiment, blood from the vasculature of the patient under normal pressure can flow into and fill the vented sequestration chamber324, thereby displacing a quantity of gas initially trapped within the sequestration chamber324. The vented sequestration chamber324can include a vent path326sealed by an air permeable, hydrophilic material plug328configured to enable the passage of air, but inhibit the passage of liquid. In one embodiment, the vented sequestration chamber324can be positioned between the inlet port314and the outlet port332. In one embodiment, the vented sequestration chamber324can extend from a side wall of the body member302at an oblique angle relative to a longitudinal axis of the inlet port314and/or the outlet port332. In another embodiment, the vented sequestration chamber324can extend from the side wall of the body member302substantially orthogonal to a longitudinal axis of the inlet port314and/or outlet port332. In some embodiments, a portion of the vented sequestration chamber324can be defined by a length of flexible hollow tubing330. In some embodiments, the vented sequestration chamber has a volume of at least 0.15 mL, although other volumes of the vented sequestration chamber324are also contemplated. The elastomeric blood control valve342can be positioned between the inlet port314and the outlet port332. The elastomeric blood control valve342can be configured to move from an initial, closed position (as depicted inFIG.4A), in which it inhibits a flow of blood from the inlet port314to the outlet port332, to an open position (as depicted inFIG.4B), in which the elastomeric blood control valve342permits the flow of blood from the inlet port314to the outlet port332. In the initial, closed position, a natural pressure of gas trapped in proximity to the outlet port332inhibits a flow of blood into the outlet port, such that blood naturally flows into the vented sequestration chamber324. Upon shifting the blood control valve342to the open position, the blood flow will follow the path of least resistance to exit the blood sequestration device300at the outlet port332, to which a blood collection device can be operably coupled. Further, when the blood control valve342is in the open position, the blood control valve342is arranged such that the vented sequestration chamber324is sealed from fluid communication with the fluid conduit306. In one embodiment, the elastomeric blood control valve342can include an actuator344secured to the interior wall304of the body member302, so as to extend axially within the fluid conduit306. The actuator344can be a rigid, hollow member configured to enable fluid to pass therethrough. The elastomeric blood control valve342can further include a seal member346secured within the fluid conduit306of the body member302with the aid of the actuator344, such that the seal member346is axially shiftable relative to the actuator344between the closed position, in which flow of fluid through the blood control valve342is inhibited or restricted, and the open position, in which the seal member346is shifted relative to the actuator344, thereby enabling the flow of fluid from the inlet port314, through the elastomeric blood control valve342, and out through the outlet port332. One example of such a blood control valve is disclosed in U.S. Pat. No. 9,545,495, the contents of which are incorporated by reference herein. Referring toFIGS.5A-C, a blood sequestration device400is depicted in accordance with a fourth embodiment of the disclosure.
The blood sequestration device400can be configured to automatically retract and safely house a sharpened distal tip of the needle following the isolation of an initial quantity of blood during the collection of a blood sample. The blood sequestration device400can include a needle402, a sequestration body404, a needle housing406, and a biasing mechanism408. The needle402can include an elongate cylindrically shaped metal structure defining a lumen that extends between a sharpened distal tip410and a proximal end412. The sharpened distal tip410can be constructed and arranged to pierce the skin of the subject during needle insertion. For example, in one embodiment, the sharpened distal tip410can include a V-point designed to reduce the penetration force required to pass the needle402and a portion of the sequestration body404through the skin, tissue, and vein wall of a subject. In one embodiment, the length of the needle402can be extended to aid in accessing vasculature of obese patients. The proximal end412of the needle402can be operably coupled to a needle hub414. In some embodiments, the needle402and needle hub414can be collectively referred to as a needle assembly. In one embodiment, the needle hub414can be constructed to provide a visual indication of a flashback when the sharpened distal tip410of the needle402enters the vein of the subject. For example, in one embodiment, the needle hub414can define a flash chamber in communication with the lumen of the needle402. The sequestration body404can coaxially ride over at least a portion of the needle402. In one embodiment, the sequestration body404can include a catheter portion416and a sequestration chamber418. The catheter portion416can include a catheter hub420and a catheter tube422. The catheter tube can extend from a tapered distal end to a proximal end, where the catheter tube422can be operably coupled to the catheter hub420. The catheter tube422can define a lumen configured to provide a fluid pathway between a vein of the subject and the catheter hub420. In one embodiment, the catheter tube422can include a barium radiopaque line to ease in the identification of the catheter tube422during radiology procedures. The catheter hub420can include a catheter hub body having a distal end, a proximal end, and an internal wall defining an interior cavity therebetween. The interior cavity can include a proximal portion extending from an open proximal end, and a distal portion in proximity to the distal end. In one embodiment, the distal end of the catheter hub body can be operably coupled to the proximal end of the catheter tube422, such that the lumen of the catheter tube is in fluid communication with the proximal portion of the interior cavity. In some embodiments, the catheter portion416can further comprise a closed catheter assembly, including an extension tube424, an extension tube clamp426, and a needleless connector428. Alternatively, the interior wall defining the interior cavity of the catheter hub420can further define a side port (not depicted) configured to enable an alternative fluid communication path with the interior cavity of the catheter hub420. In one embodiment, the side port can be positioned substantially orthogonal to a longitudinal axis of the catheter hub420. The side port can be selectively sealed by a flexible sealing member positioned within the interior cavity of the catheter hub420.
Some embodiments can further include a wing assembly430configured to aid a clinician in gripping, maneuvering and/or securing the sequestration body404to the subject during the collection of a blood sample. The sequestration chamber418can be configured to isolate an initial quantity of blood during the collection of a blood sample. In one embodiment, the sequestration chamber418can have a distal end, a proximal end, and an internal wall defining an interior cavity therebetween. The distal end of the sequestration chamber418can be operably coupled to the proximal end of the catheter hub420, such that the interior cavities of the catheter hub420and sequestration chamber418are in fluid communication. The proximal end of the sequestration chamber418can define a vent path432configured to enable the escape of gas initially trapped within the sequestration chamber418, while inhibiting the escape of blood. For example, in one embodiment, the vent path432can be sealed at one end by a valve or septum434. In one embodiment, the septum434can be configured to enable at least a portion of the needle402to pass therethrough during insertion of the catheter tube422into the vein of the subject. The septum434can be configured to seal upon withdrawal of the needle402through the septum434, thereby inhibiting the leakage of blood after the needle402has been withdrawn. In one embodiment, the septum can further be made out of an air permeable, hydrophilic material configured to enable the passage of air, but inhibit the passage of liquid, thereby enabling air that resides within the sequestration chamber418to be evacuated through the septum434by the incoming initial quantity of blood to be sequestered. The needle housing406can have a distal end, a proximal end, and a housing wall436defining a needle housing cavity therebetween. The needle housing cavity can be shaped and sized to accommodate at least a portion of the needle hub414therein. The needle hub414can be slidably coupled to the needle housing406between an initial, blood collection position (as depicted inFIG.5B), in which at least a portion of the needle402extends beyond the needle housing406, and a safe position, in which the sharpened distal tip410of the needle402is housed within the needle housing406. The biasing mechanism408can be operably coupled between the needle hub414and the distal end of the needle housing406, and can be configured to naturally bias the needle hub414to the safe position. In one embodiment, the biasing mechanism408can be a coil spring, although other biasing mechanisms are also contemplated. The needle housing wall436can further define a channel438including a blood collection position notch439, into which a guide lock440of the needle hub414can extend. In some embodiments, rotation of the needle hub414relative to the needle housing406about its longitudinal axis can cause the guide lock440to rotate out of the blood collection position notch439, such that the natural bias of the biasing mechanism408can shift the needle hub414to the safe position, wherein the needle hub414is guided by the guide lock440of the needle hub414to traverse along a length of the channel438. Referring toFIGS.6A-D, a blood sequestration device500is depicted in accordance with a fifth embodiment of the disclosure. The blood sequestration device500can be configured to automatically retract and safely house a sharpened distal tip of the needle following the isolation of an initial quantity of blood during the collection of a blood sample.
The blood sequestration device500can include a housing502, needle504, needle biasing mechanism506, and movable element508. The housing502can have a distal end510, proximal end512and housing wall514defining a cavity516. As depicted inFIG.6A, in one embodiment, the housing502can generally be formed in the shape of a truncated sector, wherein the interior surface of the housing wall514along the distal end510forms an arc in which points along the interior surface of the housing wall514along the distal end510are generally equidistant from a point518located in proximity to the proximal end512of the housing502. The movable element508can reside at least partially within the cavity516of the housing502, and can be pivotably coupled to the housing502about point518, such that the movable element508is configured to rotate or shift relative to the housing502between an initial blood sequestration position (as depicted inFIG.6A), a blood collection position (as depicted inFIG.6B), and a needle retraction position (as depicted inFIGS.6C-D). In one embodiment, the movable element508can define one or more chambers and/or fluid pathways. For example, in one embodiment, the movable element508can define a sequestration chamber520, a blood collection pathway522, and a chamber524configured to house at least a portion of the needle504upon retraction. In one embodiment, the movable element508can further define one or more push tabs526configured to protrude from the housing502to enable a clinician to manipulate the movable element508relative to the housing502between the initial blood sequestration position, blood collection position, and needle retraction position. The needle504can include an elongate cylindrically shaped metal structure defining a lumen that extends between a sharpened distal tip528and a proximal end530. The sharpened distal tip528can be constructed and arranged to pierce the skin of the subject during needle insertion. The proximal end530of the needle504can be operably coupled to a needle hub532. In some embodiments, the needle504and the needle hub532can be collectively referred to as a needle assembly. The needle hub532can be slidably coupled to the housing502between an initial position (as depicted inFIG.6A), in which at least a portion of the needle504extends beyond the housing502, and a safe position (as depicted inFIG.6D), in which the sharpened distal tip528of the needle504is housed within the housing502. The biasing mechanism506can be operably coupled between the needle hub532and the distal end of the housing502, and can be configured to naturally bias the needle hub532to the safe position. In one embodiment, the blood sequestration device500can be provided in the initial blood sequestration position, with the needle504extending outwardly from the distal end510of the housing502. Upon insertion of the needle504into the vein of a subject, blood flows through the lumen of the needle504, and into the sequestration chamber520defined in the movable element508. The sequestration chamber520can include a vent path534configured to enable the escape of gas initially trapped within the sequestration chamber520, while inhibiting the escape of blood. For example, in one embodiment, the vent path534can be sealed by a plug536, constructed of an air permeable, hydrophilic material that enables the passage of air, but inhibits the passage of liquid.
Air that resides within the sequestration chamber520is therefore pushed through the plug536by the incoming blood, until the blood reaches the plug536or is otherwise stopped. In one embodiment, the sequestration chamber520has a volume of at least 0.15 mL, although other volumes of the sequestration chamber520are also contemplated. Once an initial quantity of blood has been sequestered within the sequestration chamber520, a clinician can manipulate the one or more push tabs526to cause the movable element508to shift from the initial blood sequestration position to the blood collection position. In the blood collection position, blood can flow from the vein of the subject through the lumen of the needle504, through the blood collection pathway522defined within the movable element508, and out of the housing502through an outlet port538, which can be operably coupled to a blood collection device via an extension tube540. Once a satisfactory quantity of blood has been collected, a clinician can manipulate the one or more push tabs526to cause the movable element508to shift from the blood collection position to the needle retraction position. Prior to movement of the movable element508to the needle retraction position, a distal surface of the movable element508can inhibit retraction of the needle504into the cavity516of the housing502. By contrast, the chamber524configured to house at least a portion of the needle504upon retraction can include structure defining an opening542shaped and sized to enable the needle hub532to pass therethrough, thereby enabling the needle504to be retracted within the chamber524under the natural bias of the needle biasing mechanism506to the safe position. In the safe position, the sharpened distal tip528of the needle504is housed within the chamber524to reduce the risk of unintended needle sticks. In some embodiments, movement of the movable element508to the needle retraction position can cause the one or more push tabs to be shifted into the cavity516of the housing502, thereby inhibiting a clinician from further movement of the movable element508. In one embodiment, movement of the movable element to the needle retraction position can cause a portion of the movable element508and/or housing502to crimp the extension tube540, thereby inhibiting leakage of fluid from an attached blood collection device. In embodiments, the shift between the initial sequestration position (as depicted inFIG.6A) and the blood collection position (as depicted inFIG.6B), and the shift from the blood collection position (as depicted inFIG.6B) to the needle retraction position (as depicted inFIGS.6C-D) can occur as one fluid motion. In alternative embodiments, an interference protrusion may be introduced within the distal end510, and within the rotation path of the movable element508, such that the clinician is aware, via tactile feedback, that the movable element508is in the blood collection position and a pause is warranted. In yet other embodiments, a ratchet mechanism can be introduced into the pivoting point518such that the movable element508ceases movement in the blood collection position and the clinician must manipulate the one or more push tabs526again to move the movable element508from the blood collection position to the needle retraction position.
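The one-way position sequence of FIGS.6A-D, in which the movable element advances from sequestration to collection to retraction and a ratchet or the recessed push tabs prevent reversal, can be expressed as a small state machine. The sketch below is a conceptual model of the allowed motion only, not a control program for the physical device; the names are illustrative.

```python
from enum import Enum, auto

class Position(Enum):
    SEQUESTRATION = auto()  # FIG. 6A: initial blood vents into sequestration chamber 520
    COLLECTION = auto()     # FIG. 6B: blood routed through pathway 522 to outlet port 538
    RETRACTION = auto()     # FIGS. 6C-D: needle withdrawn into chamber 524

_ORDER = [Position.SEQUESTRATION, Position.COLLECTION, Position.RETRACTION]

class MovableElement:
    """Models the movable element's one-way travel between positions."""

    def __init__(self) -> None:
        self.position = Position.SEQUESTRATION
        self.needle_retracted = False

    def push_tab(self) -> Position:
        """Advance one position, as a clinician manipulating the push tabs would;
        backward motion is never offered, mirroring the ratchet."""
        idx = _ORDER.index(self.position)
        if idx + 1 < len(_ORDER):
            self.position = _ORDER[idx + 1]
        if self.position is Position.RETRACTION and not self.needle_retracted:
            # Opening 542 now admits the needle hub, so the spring bias retracts
            # the needle to the safe position inside chamber 524.
            self.needle_retracted = True
        return self.position
```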
Referring toFIGS.7A-D, in some embodiments, the blood sequestration device500′ can include a first push tab526A and a second push tab526B configured to protrude from the housing502to enable a clinician to manipulate the movable element508relative to the housing502between the initial blood sequestration position, blood collection position, and needle retraction position. In one embodiment, manipulation of the first push tab526A in a first direction causes the movable element508to shift from the initial blood sequestration position to the blood collection position. Manipulation of the second push tab526B in a second direction causes the movable element508to shift from the blood collection position to the needle retraction position. Other configurations of push tabs526defined by the movable element508are also contemplated. In embodiments, the shift between the initial sequestration position (as depicted inFIG.7A) and the blood collection position (as depicted inFIG.7B), and the shift from the blood collection position (as depicted inFIG.7B) to the needle retraction position (as depicted inFIGS.7C-D) can occur separately as fluid motions. In alternative embodiments, an interference protrusion may be introduced within the distal end, and within the rotation path of the movable element508, such that the clinician is aware, via tactile feedback, that the movable element508is in the blood collection position and a pause is warranted. In yet other embodiments, a ratchet mechanism can be introduced into the pivoting point such that the movable element508ceases movement in the blood collection position and the clinician must manipulate the second push tab526B to move the movable element508from the blood collection position to the needle retraction position, without returning to the initial sequestration position. In some embodiments, the blood sequestration device can further include a catheter assembly to aid in the collection of a blood sample. Referring toFIGS.8A-D, a blood sequestration device600is depicted in accordance with a sixth embodiment of the disclosure. The blood sequestration device600can be configured to automatically retract and safely house a sharpened distal tip of a needle following the insertion of a catheter assembly for the collection of a blood sample. The blood sequestration device600can include a housing602, needle604, needle biasing mechanism606, movable element608, and catheter assembly650. The housing602can have a distal end610, a proximal end612and a housing wall614defining a cavity616. As depicted inFIG.8A, in one embodiment, the housing602can generally be formed in the shape of a truncated sector, wherein the interior surface of the housing wall614along the distal end610forms an arc in which points along the interior surface of the housing wall614along the distal end610are generally equidistant from a point618located in proximity to the proximal end612of the housing602. The movable element608can reside at least partially within the cavity616of the housing602, and can be pivotably coupled to the housing602about a point618, such that the movable element608is configured to rotate or shift relative to the housing602between an initial blood sequestration position (as depicted inFIG.8A), a needle retraction position (as depicted inFIGS.8B-C), and a blood collection position (as depicted inFIG.8D). In one embodiment, the movable element608can define one or more chambers and/or fluid pathways.
For example, in one embodiment, the movable element608can define a sequestration chamber620, a blood collection pathway622, and a chamber624configured to house at least a portion of the needle604upon retraction. In one embodiment, the movable element608can further define one or more push tabs626configured to protrude from the housing602to enable a clinician to manipulate the movable element608relative to the housing602between the initial blood sequestration position, needle retraction position, and blood collection position. The needle604can include an elongate cylindrical shaped metal structure defining a lumen that extends between a sharpened distal tip628and a proximal end630. The sharpened distal tip628can be constructed and arranged to pierce the skin of the subject during needle insertion. The proximal end630of the needle604can be operably coupled to a needle hub632. In some embodiments, the needle604and needle hub632can be collectively referred to as a needle assembly. The needle hub632can be slidably coupled to the housing602between an initial position (as depicted inFIG.8A), in which at least a portion of the needle604extends beyond the housing602, and a safe position (as depicted inFIG.8C), in which the sharpened distal tip628of the needle604is housed within the housing602. The biasing mechanism606can be operably coupled between the needle hub632and the distal end of the housing602, and can be configured to naturally bias the needle hub632to the safe position. The catheter assembly650can include a catheter tube652and a catheter hub654. The catheter assembly650can be configured to coaxially ride over at least a portion of the needle604and/or needle assembly. In one embodiment, the catheter hub654can be operably coupled to the distal end610of the housing602. In one embodiment, the blood sequestration device600can be provided in the initial blood sequestration position, with the needle604and catheter assembly650extending outwardly from the distal end610of the housing602. Upon insertion of the needle604and catheter tube652into the vein of the subject, blood flows through the lumen of the needle604, and into the sequestration chamber620defined within the movable element608. The sequestration chamber620can include a vent path634configured to enable the escape of gas initially trapped within the sequestration chamber620, while inhibiting the escape of blood. For example, in one embodiment, the vent path634can be sealed by a plug636, constructed of an air permeable, hydrophilic material that enables the passage of air, but inhibits the passage of liquid. Air that resides within the sequestration chamber620is therefore pushed through the plug636by the incoming blood, until the blood reaches the plug636or is otherwise stopped. In one embodiment, the sequestration chamber620has a volume of at least 0.15 mL, although other volumes of the sequestration chamber620are also contemplated. Once an initial quantity of blood has been sequestered within the sequestration chamber620, a clinician can manipulate the one or more push tabs626to cause the movable element608to shift from the initial blood sequestration position to the needle retraction position. Prior to movement of the movable element608to the needle retraction position, a distal surface of the movable element608can inhibit retraction of the needle604into the cavity616of the housing602.
By contrast, the chamber624, configured to house at least a portion of the needle604upon retraction, can include structure defining an opening642shaped and sized to enable the needle hub632to pass therethrough, thereby enabling the needle604to be retracted within the chamber624under the natural bias of the needle biasing mechanism606to the safe position. In the safe position, the sharpened distal tip628of the needle604is housed within the chamber624to reduce the risk of unintended needle sticks, while leaving the catheter assembly650in place within the subject's vein. Once the needle604has been safely retracted, a clinician can manipulate the one or more push tabs626to cause the movable element608to shift from the needle retraction position to the blood collection position. In the blood collection position, blood can flow from the vein of the subject through the catheter assembly650, through the blood collection pathway622defined within the movable element608, and out of the housing602through an outlet port638, which can be operably coupled to a blood collection device via an extension tube640. Once a satisfactory quantity of blood has been collected, a clinician can remove the catheter assembly650from the patient's vein. In embodiments, the shift between the initial sequestration position (as depicted inFIG.8A) and the needle retraction position (as depicted inFIGS.8B-C), and the shift from the needle retraction position (as depicted inFIGS.8B-C) to the blood collection position (as depicted inFIG.8D) can occur as one fluid motion. In other words, after the initial blood flow is sequestered in the initial sequestration position, the clinician can manipulate the one or more push tabs626such that the movable element608rotates to the blood collection position, thereby rotating through the needle retraction position. In alternative embodiments, an interference protrusion may be introduced within the distal end, and within the rotation path of the movable element608, such that the clinician is aware, via tactile feedback, that the movable element608is in the needle retraction position and a pause is warranted. In yet other embodiments, a ratchet mechanism can be introduced into the pivoting point618such that the movable element608ceases movement in the needle retraction position and the clinician must manipulate the one or more push tabs626again to move the movable element608from the needle retraction position to the blood collection position. Referring toFIGS.9A-C, in some embodiments, the blood collection pathway622and chamber624defined within the movable element608can be combined. In this embodiment, once an initial quantity of blood has been sequestered within the sequestration chamber620, a clinician can manipulate the one or more push tabs626to cause the movable element608to shift from the initial blood sequestration position to the blood collection position, which enables both retraction of the needle604within the chamber624under the natural bias of the needle biasing mechanism606to the safe position, as well as a flow of blood from the vein of the subject through the catheter assembly650, through the blood collection pathway622defined within the movable element608, and out of the housing602through an outlet port638, which can be operably coupled to a blood collection device. Other configurations of chambers and/or fluid pathways within the movable element608are also contemplated.
In embodiments, the shift between the initial sequestration position (as depicted inFIG.9A) and the blood collection and needle retraction position (as depicted inFIGS.9B-C) can occur freely. In alternative embodiments, an interference protrusion may be introduced within the distal end, and within the rotation path of the movable element608, such that the movable element608does not easily rotate into the blood collection and needle retraction position, unless the clinician purposefully manipulates the movable element608past the tactile feedback of the interference protrusion. In yet other embodiments, a ratchet mechanism can be introduced into the pivoting point such that the movable element608remains in the initial sequestration position until the clinician manipulates the one or more push tabs626through the ratchet mechanism to move the movable element608from the initial sequestration position to the combined blood collection and needle retraction position. It should be understood that the individual steps used in the methods of the present teachings may be performed in any order and/or simultaneously, as long as the teaching remains operable. Furthermore, it should be understood that the apparatus and methods of the present teachings can include any number, or all, of the described embodiments, as long as the teaching remains operable. Various embodiments of systems, devices, and methods have been described herein. These embodiments are given only by way of example and are not intended to limit the scope of the claimed inventions. It should be appreciated, moreover, that the various features of the embodiments that have been described may be combined in various ways to produce numerous additional embodiments. Moreover, while various materials, dimensions, shapes, configurations and locations, etc. have been described for use with disclosed embodiments, others besides those disclosed may be utilized without exceeding the scope of the claimed inventions. Persons of ordinary skill in the relevant arts will recognize that the subject matter hereof may comprise fewer features than illustrated in any individual embodiment described above. Reference in the specification to "one embodiment," "an embodiment," or "some embodiments" means that a particular feature, structure, or characteristic, described in connection with the embodiment, is included in at least one embodiment of the teaching. Appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Moreover, the embodiments described herein are not meant to be an exhaustive presentation of the ways in which the various features of the subject matter hereof may be combined, nor are the embodiments mutually exclusive combinations of features. Rather, the various embodiments can comprise a combination of different individual features selected from different individual embodiments, such that, as understood by persons of ordinary skill in the art, elements described with respect to one embodiment can be implemented in other embodiments even when not described in such embodiments unless otherwise noted. Although a dependent claim may refer in the claims to a specific combination with one or more other claims, other embodiments can also include a combination of the dependent claim with the subject matter of each other dependent claim or a combination of one or more features with other dependent or independent claims.
Such combinations are proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended also to include features of a claim in any other independent claim even if this claim is not directly made dependent to the independent claim. Any incorporation by reference of documents above is limited such that no subject matter is incorporated that is contrary to the explicit disclosure herein. Any incorporation by reference of documents above is further limited such that no claims included in the documents are incorporated by reference herein. Any incorporation by reference of documents above is yet further limited such that any definitions provided in the documents are not incorporated by reference herein unless expressly included herein. For purposes of interpreting the claims, it is expressly intended that the provisions of Section 112, sixth paragraph of 35 U.S.C. are not to be invoked unless the specific terms “means for” or “step for” are recited in a claim. | 45,834 |
11857321 | DETAILED DESCRIPTION Devices and methods for collecting, diverting, sequestering, isolating, etc. an initial volume of bodily fluid to reduce contamination in subsequently procured bodily fluid samples are described herein. Any of the fluid control devices described herein can be configured to receive, procure, and/or transfer a flow, bolus, volume, etc., of bodily fluid. A first reservoir, channel, flow path, or portion of the device can receive an initial amount of the bodily fluid flow, which then can be substantially or fully sequestered therein (e.g., contained or retained, circumvented, isolated, segregated, vapor-locked, separated, and/or the like). In some instances, contaminants such as dermally residing microbes or the like can be included and/or entrained in the initial amount of the bodily fluid and likewise are sequestered in or by the first reservoir or first portion of the device. Once the initial amount is sequestered, any subsequent amount of the bodily fluid flow can be diverted, channeled, directed, and/or otherwise allowed to flow to or through a second portion of the device, and/or any additional flow path(s). Based at least in part on the initial amount being sequestered, the subsequent amount(s) of bodily fluid can be substantially free from contaminants that may otherwise produce inaccurate, distorted, adulterated, and/or false results in some diagnostics and/or testing. In some instances, the initial amount of bodily fluid also can be used, for example, in other testing such as those less affected by the presence of contaminants, can be discarded as a waste volume, can be infused back into the patient, and/or can be used for any other suitable clinical application. In some embodiments, a feature of the fluid control devices and/or methods described herein is the use of an external negative pressure source (e.g., provided by a fluid collection device or any other suitable means) that can (1) overcome physical patient challenges which can limit and/or prevent a sufficient pressure differential to fully engage the sequestration chamber and/or to transition fluid flow to the fluid collection device (e.g., a differential in blood pressure to ambient air pressure); (2) result in proper filling of the sequestration chamber with a clinically validated and/or desirable volume of bodily fluid; (3) result in efficient, timely, and/or user-accepted consistency with the bodily fluid collection process; and/or (4) provide a means of transitioning fluid flow (e.g., automatically or by manipulation to move any number of physical components of the system or by changing, switching, engaging, and/or otherwise providing desired fluid flow dynamics) to enable sequestration and/or isolation of the initial amount (e.g., a pre-sample) and collection of a subsequent sample. In some embodiments, for example, an apparatus for procuring bodily fluid samples with reduced contamination includes a housing, an actuator, and a flow controller. The housing forms at least a portion of a sequestration chamber, and has an inlet configured to be fluidically coupled to a bodily fluid source, and an outlet configured to be fluidically coupled to a fluid collection device. The fluid collection device exerts a suction force in at least a portion of the housing when fluidically coupled to the outlet. 
The actuator is coupled to the housing and has a first configuration in which the inlet is in fluid communication with the sequestration chamber, and a second configuration in which the inlet is in fluid communication with the outlet and is fluidically isolated from the sequestration chamber. The flow controller is disposed in the housing and defines a portion of the sequestration chamber. The flow controller has a first state in which the portion of the sequestration chamber has a first volume, and a second state in which the portion of the sequestration chamber has a second volume greater than the first volume. When the actuator is in the first configuration, the flow controller transitions from the first state to the second state in response to the suction force to draw an initial volume of bodily fluid into the portion of the sequestration chamber. The actuator is configured to be transitioned to the second configuration after the initial volume of bodily fluid is drawn into the sequestration chamber to (1) sequester the sequestration chamber from the inlet, and (2) allow a subsequent volume of bodily fluid to flow from the inlet to the outlet in response to the suction force. In some embodiments, an apparatus for procuring bodily fluid samples with reduced contamination includes a housing, an actuator, and a flow controller. The housing forms at least a portion of a sequestration chamber, and has an inlet configured to be fluidically coupled to a bodily fluid source, and an outlet configured to be fluidically coupled to a fluid collection device. The fluid collection device exerts a suction force in at least a portion of the housing when fluidically coupled to the outlet. The actuator is coupled to the housing and has a first configuration in which the inlet is in fluid communication with the sequestration chamber, and a second configuration in which the inlet is in fluid communication with the outlet and is fluidically isolated from the sequestration chamber. The flow controller is disposed in the housing and defines a portion of the sequestration chamber. The flow controller has a first state in which a first side of the flow controller is in contact with at least a portion of a first surface of the sequestration chamber, and a second state in which a second side of the flow controller is in contact with at least a portion of a second surface of the sequestration chamber, opposite the first surface. The flow controller transitions from the first state to the second state when the actuator is in the first configuration, as a result of the suction force being exerted on the second side of the flow controller to draw an initial volume of bodily fluid into a portion of the sequestration chamber defined between the first surface and the first side of the flow controller. The actuator is configured to be transitioned to the second configuration after the initial volume of bodily fluid is drawn into the sequestration chamber to (1) sequester the sequestration chamber from the inlet, and (2) allow a subsequent volume of bodily fluid to flow from the inlet to the outlet in response to the suction force. In some embodiments, a fluid control device can include a housing, a flow controller, and an actuator. The housing has an inlet and an outlet, and forms a sequestration chamber. The inlet is configured to be placed in fluid communication with a bodily fluid source.
The outlet is configured to be placed in fluid communication with a fluid collection device configured to exert a suction force within at least a portion of the housing. The actuator is coupled to the housing and is configured to establish fluid communication between the inlet and the sequestration chamber when in a first state and to establish fluid communication between the inlet and the outlet when placed in a second state. The flow controller is disposed in the sequestration chamber and is configured to transition from a first state to a second state in response to the suction force when the actuator is in its first state to allow an initial volume of bodily fluid to flow into a portion of the sequestration chamber. The portion of the sequestration chamber has a first volume when the flow controller is in the first state and a second volume greater than the first volume when the flow controller is in the second state. The actuator is configured to be transitioned to its second state after the initial volume of bodily fluid is received in the portion of the sequestration chamber to (1) sequester the sequestration chamber, and (2) allow a subsequent volume of bodily fluid to flow from the inlet to the outlet in response to the suction force. In some embodiments, a method for procuring bodily fluid samples with reduced contamination using a fluid control device having a housing, an actuator, and a flow controller includes establishing fluid communication between a bodily fluid source and an inlet of the housing. A fluid collection device is coupled to an outlet of the housing and exerts a suction force within at least a portion of the housing when coupled to the outlet. The flow controller is transitioned from a first state to a second state in response to the suction force, increasing a volume of a sequestration chamber collectively defined by the flow controller and a portion of the housing. In response to the increase in volume, a first portion of the sequestration chamber receives a volume of air contained in a flow path defined between the bodily fluid source and the sequestration chamber, and a second portion of the sequestration chamber receives an initial volume of bodily fluid. The actuator is transitioned from a first configuration to a second configuration after receiving the initial volume of bodily fluid in the second portion of the sequestration chamber to (1) sequester the sequestration chamber and (2) allow a subsequent volume of bodily fluid to flow from the inlet to the outlet in response to the suction force. In some embodiments, a method for procuring a bodily fluid sample with reduced contamination using a fluid control device having a housing, a flow controller, and an actuator can include, for example, establishing fluid communication between a bodily fluid source and an inlet of the housing. A fluid collection device is fluidically coupled to an outlet of the housing. The flow controller is transitioned from a first state to a second state in response to a suction force exerted by the fluid collection device to increase a volume of a first portion of the sequestration chamber and a second portion of the sequestration chamber. The first portion of the sequestration chamber receives a volume of air contained in a flow path defined between the bodily fluid source and the sequestration chamber in response to the increase in the volume of the first portion of the sequestration chamber and the second portion of the sequestration chamber.
The second portion of the sequestration chamber receives an initial volume of bodily fluid in response to the increase in the volume of the first portion of the sequestration chamber and the second portion of the sequestration chamber. After receiving the initial volume of bodily fluid in the second portion of the sequestration chamber, the actuator is transitioned from a first state to a second state to (1) sequester the sequestration chamber and (2) allow a subsequent volume of bodily fluid (e.g., the bodily fluid sample) to flow from the inlet to the outlet in response to the suction force. Any of the embodiments and/or methods described herein can be used in the procurement of clean or substantially unadulterated bodily fluid samples such as, for example, blood samples. In some instances, bodily fluid samples (e.g., blood samples) can be tested for the presence of one or more potentially undesirable microbes, such as bacteria (e.g., Gram-Positive bacteria and/or Gram-Negative bacteria), fungi, yeast (e.g.,Candida), and/or the like. Various technologies can be employed to assist in the detection of the presence of microbes as well as other types of biological matter, specific types of cells, biomarkers, proteins, antigens, enzymes, blood components, and/or the like during diagnostic testing. Examples include but are not limited to molecular polymerase chain reaction (PCR), magnetic resonance and other magnetic analytical platforms, automated microscopy, spatial clone isolation, flow cytometry, whole blood ("culture free") specimen analysis (e.g., NGS) and associated technologies, morphokinetic cellular analysis, and/or other common or evolving and advanced technologies to characterize patient specimens and/or to detect, identify, type, categorize, and/or characterize specific organisms, antibiotic susceptibilities, and/or the like. For example, in some instances, microbial testing can include incubating patient samples in one or more vessels that may contain culture media (e.g., a nutrient rich and/or environmentally controlled medium to promote growth, and/or other suitable medium(s)), common additives, and/or other types of solutions conducive to microbial growth. Any microbes and/or organisms present in the patient sample flourish and/or grow over time in the culture medium (e.g., a variable amount of time from less than an hour to more than several days—which can be longer or shorter depending on the diagnostic technology employed). The presence of the microbes and/or organisms can be detected (e.g., by observing carbon dioxide levels and/or other detection methods) using automated, continuous monitoring, and/or other methods specific to the analytical platform or technology used for detection, identification, and/or the like. The presence of microbes and/or organisms in the culture medium suggests the presence of the same microbes and/or organisms in the patient sample, which in turn, suggests the presence of the same microbes and/or organisms in the bodily fluid of the patient from whom the sample was obtained. In other instances, a bodily fluid sample may be analyzed directly (i.e., not incubated) for the presence of microbes and/or organisms. When the presence of microbes is identified in the sample used for testing, the patient may be diagnosed and prescribed one or more antibiotics or other treatments specifically designed to treat or otherwise remove the undesired microbes and/or organisms from the patient.
Patient samples, however, can become contaminated during procurement and/or otherwise can be susceptible to false results. For example, microbes from a bodily surface (e.g., dermally residing microbes) that are dislodged during the specimen procurement process (e.g., either directly or indirectly via tissue fragments, hair follicles, sweat glands, and other skin adnexal structures) can be subsequently transferred to a culture medium, test vial, or other suitable specimen collection or transfer vessel with the patient sample and/or otherwise included in the specimen that is to be analyzed. Another possible source of contamination is from the person drawing the patient sample. For example, equipment, supplies, and/or devices used during a patient sample procurement process often include multiple fluidic interfaces (e.g., patient to needle, needle to transfer adapter, transfer adapter to sample vessel, catheter hub to syringe, syringe to transfer adapter, needle/tubing to sample vessels, and/or any other fluidic interface or any combination(s) thereof), each of which can introduce points of potential contamination. In some instances, such contaminants may thrive in a culture medium and/or may be otherwise identified, thereby increasing a risk or likelihood of a false positive microbial test result, which may inaccurately reflect the presence or lack of such microbes within the patient (i.e., in vivo). Such inaccurate results because of contamination and/or adulteration are a concern when attempting to diagnose or treat a wide range of suspected illnesses, diseases, infections, patient conditions, and/or other maladies. For example, false results from microbial tests may lead to a patient being unnecessarily subjected to one or more anti-microbial therapies, and/or may lead to misdiagnosis and/or delayed treatment of a patient illness, any of which may cause serious side effects or consequences for the patient including, for example, death. As such, false results can produce an unnecessary burden and expense on the health care system due to extended length of patient stay and/or other complications associated with erroneous treatments. The use of diagnostic imaging equipment to arrive at these false results is also a concern from both a cost perspective and a patient safety perspective as unnecessary exposure to concentrated radiation associated with a variety of imaging procedures (e.g., CT scans) has many known adverse effects on long-term patient health. As used in this specification and/or any claims included herein, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, the term “a member” is intended to mean a single member or a combination of members, “a material” is intended to mean one or more materials, and/or the like. As used herein, “bodily fluid” can include any fluid obtained directly or indirectly from a body of a patient. For example, “bodily fluid” includes, but is not limited to, blood, cerebrospinal fluid, urine, bile, lymph, saliva, synovial fluid, serous fluid, pleural fluid, amniotic fluid, mucus, sputum, vitreous, air, and/or the like, or any combination thereof. As used herein, the words “proximal” and “distal” refer to the direction closer to and away from, respectively, a user who would place a device into contact with a patient. 
Thus, for example, the end of a device first touching the body of a patient would be a distal end of the device, while the opposite end of the device (e.g., the end of the device being manipulated by the user) would be a proximal end of the device. As used herein, the terms "about," "approximately," and/or "substantially" when used in connection with stated value(s) and/or geometric structure(s) or relationship(s) are intended to convey that the value or characteristic so defined is nominally the value stated or characteristic described. In some instances, the terms "about," "approximately," and/or "substantially" can generally mean and/or can generally contemplate a value or characteristic stated within a desirable tolerance (e.g., plus or minus 10% of the value or characteristic stated). For example, a value of about 0.01 can include 0.009 and 0.011, a value of about 0.5 can include 0.45 and 0.55, a value of about 10 can include 9 to 11, and a value of about 1000 can include 900 to 1100. Similarly, a first surface may be described as being substantially parallel to a second surface when the surfaces are nominally parallel. While a value, structure, and/or relationship stated may be desirable, it should be understood that some variance may occur as a result of, for example, manufacturing tolerances or other practical considerations (such as, for example, the pressure or force applied through a portion of a device, conduit, lumen, etc.). Accordingly, the terms "about," "approximately," and/or "substantially" can be used herein to account for such tolerances and/or considerations. As used herein, the terms "pre-sample," "first," and/or "initial" can be used interchangeably to describe an amount, portion, or volume of bodily fluid that is collected and/or sequestered prior to procuring a "sample" volume. A "pre-sample," "first," and/or "initial" volume can be a predetermined, defined, desired, and/or given amount of bodily fluid. For example, a predetermined and/or desired pre-sample volume of bodily fluid can be a drop of bodily fluid, a few drops of bodily fluid, a volume of about 0.1 milliliter (mL), about 0.2 mL, about 0.3 mL, about 0.4 mL, about 0.5 mL, about 1.0 mL, about 2.0 mL, about 3.0 mL, about 4.0 mL, about 5.0 mL, about 10.0 mL, about 20.0 mL, about 50.0 mL, and/or any volume or fraction of a volume therebetween. In other embodiments, a pre-sample volume can be greater than 50 mL or less than 0.1 mL. In some specific embodiments, a predetermined and/or desired pre-sample volume can be between about 0.1 mL and about 5.0 mL. In other embodiments, a pre-sample volume can be, for example, a combined volume of any number of lumen (e.g., lumen that form at least a portion of a flow path from the bodily fluid source to an initial collection chamber, portion, reservoir, etc.). As used herein, the terms "sample," "second," and/or "subsequent" can be used interchangeably to describe an amount, portion, or volume of bodily fluid that is used, for example, in one or more sample or diagnostic tests. A "sample" volume can be either a random volume or a predetermined or desired volume of bodily fluid collected after collecting, sequestering, and/or isolating a pre-sample volume of bodily fluid. In some embodiments, a desired sample volume of bodily fluid can be about 10 mL to about 60 mL. In other embodiments, a desired sample volume of bodily fluid can be less than 10 mL or greater than 60 mL.
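The plus-or-minus 10% reading of "about" discussed earlier in this section reduces to simple bounds arithmetic. The short Python sketch below is illustrative only; the function name and the 10% default are assumptions drawn from the example tolerance stated in the text, not a definition used by the disclosure.

    import math

    def about(value: float, tol: float = 0.10) -> tuple:
        """Nominal bounds for 'about <value>' under a plus-or-minus `tol` reading."""
        return value * (1.0 - tol), value * (1.0 + tol)

    # Matches the worked examples in the text:
    lo, hi = about(0.01)
    assert math.isclose(lo, 0.009) and math.isclose(hi, 0.011)
    lo, hi = about(1000)
    assert math.isclose(lo, 900.0) and math.isclose(hi, 1100.0)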
In some embodiments, for example, a sample volume can be at least partially based on one or more tests, assays, analyses, and/or processes to be performed on the sample volume. The embodiments described herein can be configured to transfer bodily fluid substantially free of contaminants to one or more fluid collection device(s). In some embodiments, a fluid collection device can include, but is not limited to, any suitable vessel, container, reservoir, bottle, adapter, dish, vial, syringe, device, diagnostic and/or testing machine, and/or the like. In some embodiments, a fluid collection device can be substantially similar to or the same as known sample containers such as, for example, a Vacutainer® (manufactured by Becton Dickinson and Company (BD)), a BacT/ALERT® SN or BacT/ALERT® FA (manufactured by Biomerieux, Inc.), and/or any suitable reservoir, vial, microvial, microliter vial, nanoliter vial, container, microcontainer, nanocontainer, and/or the like. In some embodiments, a fluid collection device can be substantially similar to or the same as any of the sample reservoirs described in U.S. Pat. No. 8,197,420 entitled, "Systems and Methods for Parenterally Procuring Bodily-Fluid Samples with Reduced Contamination," filed Dec. 13, 2007 ("the '420 Patent"), the disclosure of which is incorporated herein by reference in its entirety. In some embodiments, a fluid collection device can be devoid of contents prior to receiving a sample volume of bodily fluid. For example, in some embodiments, a fluid collection device or reservoir can define and/or can be configured to define or produce a vacuum or suction such as, for example, a vacuum-based collection tube (e.g., a Vacutainer®), a syringe, and/or the like. In other embodiments, a fluid collection device can include any suitable additives, culture media, substances, enzymes, oils, fluids, and/or the like. For example, a fluid collection device can be a sample or culture bottle including, for example, an aerobic or anaerobic culture medium. The sample or culture bottle can be configured to receive a bodily fluid sample, which can then be tested (e.g., after incubation via in vitro diagnostic (IVD) tests, and/or any other suitable test) for the presence of, for example, Gram-Positive bacteria, Gram-Negative bacteria, yeast, fungi, and/or any other organism. In some instances, if such a test of the culture medium yields a positive result, the culture medium can be subsequently tested using a PCR-based system to identify a specific organism. In some embodiments, a sample reservoir can include, for example, any suitable additive or the like in addition to or instead of a culture medium. Such additives can include, for example, heparin, citrate, ethylenediaminetetraacetic acid (EDTA), oxalate, sodium polyanethol sulfonate (SPS), and/or the like. In some embodiments, a fluid collection device can include any suitable additive or culture media and can be evacuated and/or otherwise devoid of air. While the term "culture medium" can be used to describe a substance configured to react with organisms in a bodily fluid (e.g., microorganisms such as bacteria) and the term "additive" can be used to describe a substance configured to react with portions of the bodily fluid (e.g., constituent cells of blood, serum, synovial fluid, etc.), it should be understood that a sample reservoir can include any suitable substance, liquid, solid, powder, lyophilized compound, gas, etc.
Moreover, when referring to an “additive” within a sample reservoir, it should be understood that the additive could be a culture medium, such as an aerobic culture medium and/or an anaerobic culture medium contained in a culture bottle, an additive and/or any other suitable substance or combination of substances contained in a culture bottle and/or any other suitable reservoir such as those described above. That is to say, the embodiments described herein can be used with any suitable fluid reservoir or the like containing any suitable substance or combination of substances. The embodiments described herein and/or portions thereof can be formed or constructed of one or more biocompatible materials. In some embodiments, the biocompatible materials can be selected based on one or more properties of the constituent material such as, for example, stiffness, toughness, durometer, bioreactivity, etc. Examples of suitable biocompatible materials include metals, glasses, ceramics, or polymers. Examples of suitable metals include pharmaceutical grade stainless steel, gold, titanium, nickel, iron, platinum, tin, chromium, copper, and/or alloys thereof. A polymer material may be biodegradable or non-biodegradable. Examples of suitable biodegradable polymers include polylactides, polyglycolides, polylactide-co-glycolides (PLGA), polyanhydrides, polyorthoesters, polyetheresters, polycaprolactones, polyesteramides, poly(butyric acid), poly(valeric acid), polyurethanes, and/or blends and copolymers thereof. Examples of non-biodegradable polymers include nylons, polyesters, polycarbonates, polyacrylates, polysiloxanes (silicones), polymers of ethylene-vinyl acetates and other acyl substituted cellulose acetates, non-degradable polyurethanes, polystyrenes, polyvinyl chloride, polyvinyl fluoride, poly(vinyl imidazole), chlorosulphonate polyolefins, polyethylene oxide, and/or blends and copolymers thereof. The embodiments described herein and/or portions thereof can include components formed of one or more parts, features, structures, etc. When referring to such components it should be understood that the components can be formed by a singular part having any number of sections, regions, portions, and/or characteristics, or can be formed by multiple parts or features. For example, when referring to a structure such as a wall or chamber, the structure can be considered as a single structure with multiple portions, or as multiple, distinct substructures or the like coupled to form the structure. Thus, a monolithically constructed structure can include, for example, a set of substructures. Such a set of substructures may include multiple portions that are either continuous or discontinuous from each other. A set of substructures can also be fabricated from multiple items or components that are produced separately and are later joined together (e.g., via a weld, an adhesive, or any suitable method). While some of the embodiments are described herein as being used for procuring bodily fluid for one or more culture sample testing, it should be understood that the embodiments are not limited to such a use. Any of the embodiments and/or methods described herein can be used to transfer a flow of bodily fluid to any suitable device that is placed in fluid communication therewith. Thus, while specific examples are described herein, the devices, methods, and/or concepts are not intended to be limited to such specific examples. 
Referring now to the drawings,FIG.1is a schematic illustration of a fluid control device100according to an embodiment. Generally, the fluid control device100(also referred to herein as “control device” or “device”) is configured to withdraw bodily fluid from a patient. A first portion or amount (e.g., an initial amount) of the withdrawn bodily fluid is sequestered from a second portion or amount (e.g., a subsequent amount) of the withdrawn bodily fluid. In some instances, contaminants or the like can be sequestered within the first portion or amount, leaving the second portion or amount substantially free of contaminants. The second portion or amount of bodily fluid can then be used as a biological sample in one or more tests (e.g., a blood culture test or the like), as described in more detail herein. The first portion or amount of bodily fluid can be discarded as waste, reinfused into the patient, or used in any suitable test that is less likely to produce false, inaccurate, distorted, inconsistent, and unreliable results as a result of potential contaminants contained therein. The control device100includes a housing110, a flow controller140, and an actuator150. The housing110of the device100can be any suitable shape, size, and/or configuration. For example, in some embodiments, the housing110can have a size that is at least partially based on an initial amount or volume of bodily fluid configured to be transferred into and/or sequestered within a portion of the housing110. In some embodiments, the housing110can have a size and/or shape configured to increase the ergonomics and/or ease of use associated with the device100. Moreover, in some embodiments, one or more portions of the housing110can be formed of a relatively transparent material configured to allow a user to visually inspect and/or verify a flow of bodily fluid through at least a portion of the housing110. The housing110has and/or forms an inlet113, an outlet114, and a sequestration chamber130. The inlet113is configured to fluidically couple to a lumen-containing device, which in turn, can place the housing110in fluid communication with a bodily fluid source. For example, the housing110can be coupled to and/or can include a lumen-containing device that is in fluid communication with the inlet113and that is configured to be percutaneously disposed in a patient (e.g., a butterfly needle, intravenous (IV) catheter, peripherally inserted central catheter (PICC), intermediary lumen-containing device, and/or the like). Thus, bodily fluid can be transferred from the patient and/or other bodily fluid source to the housing110via the inlet113, as described in further detail herein. The outlet114can be placed in fluid communication with a fluid collection device180(e.g., a fluid or sample reservoir, syringe, evacuated container, culture bottle, etc.). As described in further detail herein, the control device100can be used and/or manipulated to selectively transfer a volume of bodily fluid from a bodily fluid source, through the inlet113, the housing110, and the outlet114to the fluid collection device180. The housing110can define at least a portion of any number of fluid flow paths. For example, as shown inFIG.1, the housing110defines one or more fluid flow paths115between the inlet113and the sequestration chamber130and/or one or more fluid flow paths116between the inlet113and the outlet114. 
As described in further detail herein, the control device100and/or the housing110can be configured to transition between any number of states, operating modes, and/or configurations to selectively control bodily fluid flow through at least one of the fluid flow paths115and/or116. Moreover, the control device100and/or the housing110can be configured to transition automatically (e.g., based on pressure differential, time, electronically, saturation of a membrane, an absorbent and/or barrier material, etc.) or via intervention (e.g., user intervention, mechanical intervention, or the like). The sequestration chamber130is at least temporarily placed in fluid communication with the inlet113via the fluid flow path(s)115. As described in further detail herein, the sequestration chamber130is configured to (1) receive a flow and/or volume of bodily fluid from the inlet113and (2) sequester (e.g., separate, segregate, contain, retain, isolate, etc.) the flow and/or volume of bodily fluid therein. The sequestration chamber130can have any suitable arrangement such as, for example, those described herein with respect to specific embodiments. It should be understood, however, that the control device100and/or the housing110can have a sequestration chamber130arranged in any suitable manner and therefore, the sequestration chamber130is not intended to be limited to those shown and described herein. For example, in some embodiments, the sequestration chamber130can be at least partially formed by the housing110. In other embodiments, the sequestration chamber130can be a reservoir placed and/or disposed within a portion of the housing110. In other embodiments, the sequestration chamber130can be formed and/or defined by a portion of the fluid flow path115. That is to say, the housing110can define one or more lumens and/or can include one or more lumen defining device(s) configured to receive an initial flow or volume of bodily fluid from the inlet113, thereby forming and/or functioning as the sequestration chamber130. The sequestration chamber130can have any suitable volume and/or fluid capacity. For example, in some embodiments, the sequestration chamber130can have a volume and/or fluid capacity between about 0.1 mL and about 5.0 mL. In some embodiments, the sequestration chamber130can have a volume measured in terms of an amount of bodily fluid (e.g., the initial or first amount of bodily fluid) configured to be transferred into the sequestration chamber130. For example, in some embodiments, the sequestration chamber130can have a volume sufficient to receive an initial volume of bodily fluid as small as a microliter or less of bodily fluid (e.g., a volume as small as 20 drops of bodily fluid, 10 drops of bodily fluid, 5 drops of bodily fluid, a single drop of bodily fluid, or any suitable volume therebetween). In other embodiments, the sequestration chamber130can have a volume sufficient to receive an initial volume of bodily fluid up to, for example, about 5.0 mL, 10.0 mL, 15.0 mL, 20.0 mL, 30.0 mL, 40.0 mL, 50.0 mL, or more. In some embodiments, the sequestration chamber130can have a volume that is equal to at least some of the volumes of one or more lumen(s) placing the sequestration chamber130in fluid communication with the bodily fluid source (e.g., a combined volume of a lumen of a needle, the inlet113, and at least a portion of the fluid flow path115).
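Because the sequestration chamber130can be sized to the combined volume of the upstream lumens, that sizing reduces to cylinder-volume arithmetic (V = pi * r^2 * L). The Python sketch below is a minimal illustration; the segment dimensions are assumed for the example and are not specified anywhere in the disclosure.

    import math

    def lumen_volume_ml(inner_diameter_mm: float, length_mm: float) -> float:
        """Volume of a cylindrical lumen, V = pi * r^2 * L, returned in mL."""
        radius_cm = inner_diameter_mm / 2.0 / 10.0   # mm -> cm
        length_cm = length_mm / 10.0                 # mm -> cm
        return math.pi * radius_cm ** 2 * length_cm  # 1 cm^3 == 1 mL

    # Hypothetical segments between the bodily fluid source and chamber 130:
    segments = [
        (0.6, 19.0),   # needle lumen (assumed ~0.6 mm ID, 19 mm long)
        (1.0, 30.0),   # inlet 113 passage (assumed dimensions)
        (1.5, 80.0),   # portion of fluid flow path 115 (assumed dimensions)
    ]
    dead_space_ml = sum(lumen_volume_ml(d, l) for d, l in segments)
    print(f"combined upstream volume ~= {dead_space_ml:.2f} mL")  # ~0.17 mL here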
The outlet114of the housing110is in fluid communication with and/or is configured to be placed in fluid communication with the fluid flow paths115and/or116. The outlet114can be any suitable outlet, opening, port, stopcock, lock (e.g., a luer lock), seal, coupler, valve (e.g. one-way, check valve, duckbill valve, umbrella valve, and/or the like), etc. and is configured to be physically and/or fluidically coupled to the fluid collection device180. In some embodiments, the outlet114can be monolithically formed with the fluid collection device180. In other embodiments, the outlet114can be at least temporarily coupled to the fluid collection device180via an adhesive, a resistance fit, a mechanical fastener, a threaded coupling, a piercing or puncturing arrangement, a number of mating recesses, and/or any other suitable coupling or combination thereof. In still other embodiments, the outlet114can be operably coupled to the fluid collection device180via an intervening structure (not shown inFIG.1), such as sterile tubing and/or the like. In some embodiments, the arrangement of the outlet114can be such that the outlet114is physically and/or fluidically sealed prior to coupling to the fluid collection device180. In some embodiments, the outlet114can be transitioned from a sealed configuration to an unsealed configuration in response to being coupled to the fluid collection device180and/or in response to a negative pressure differential between an environment within the outlet114and/or housing110and an environment within the fluid collection device180. Although the outlet114of the control device100and/or the housing110is described above as being fluidically coupled to and/or otherwise placed in fluid communication with the fluid collection device180, in other embodiments, the device100can be used in conjunction with any suitable bodily fluid collection device, system, adapter, and/or the like. For example, in some embodiments, the device100can be used in or with any suitable fluid transfer device and/or adapter such as those described in U.S. Pat. No. 10,123,783 entitled, “Apparatus and Methods for Disinfection of a Specimen Container,” filed Mar. 3, 2015 (referred to herein as “the '783 patent”) and/or U.S. Patent Publication No. 2015/0342510 entitled, “Sterile Bodily-Fluid Collection Device and Methods,” filed Jun. 2, 2015 (referred to herein as “the '510 publication”), the disclosure of each of which is incorporated herein by reference in its entirety. The fluid collection device180can be any suitable device for at least temporarily containing a bodily fluid, such as, for example, any of those described in detail above (e.g., an evacuated container, a sample reservoir, a syringe, a culture bottle, etc.). In some embodiments, the fluid collection device180can be a sample reservoir that includes a vacuum seal that maintains negative pressure conditions (vacuum conditions) inside the sample reservoir, which in turn, can facilitate withdrawal of bodily fluid from the patient, through the control device100, and into the sample reservoir, via a vacuum or suction force. 
In embodiments in which the fluid collection device180is an evacuated container or the like, the user can couple the fluid collection device180to the outlet114to initiate a flow of bodily fluid from the patient and into the device100such that a first or initial portion of the flow of bodily fluid is transferred into and sequestered by the sequestration chamber130, and a second or subsequent portion of the flow of bodily fluid bypasses and/or is otherwise diverted away from the sequestration chamber130and into the fluid collection device180(e.g., via the outlet114), as described in further detail herein. The flow controller140of the device100is at least partially disposed within the housing110and is configured to control, direct, and/or otherwise facilitate a selective flow of fluid through at least a portion of the housing110. More particularly, in some embodiments, the flow controller140can be disposed within and/or can at least partially define a portion of the sequestration chamber130and/or an inner volume of the sequestration chamber130that receives the initial flow or amount of bodily fluid. In some embodiments, the flow controller140can be disposed within the housing110such that one or more surfaces of the flow controller140and one or more inner surfaces of the housing110collectively define the sequestration chamber130. Said another way, the flow controller140can be disposed within the sequestration chamber130such that an inner surface of the housing110at least partially defining the sequestration chamber130and one or more surfaces of the flow controller140collectively define a portion of the sequestration chamber130and/or a volume within the sequestration chamber130. In some embodiments, the flow controller140can form a barrier and/or otherwise can fluidically isolate at least a portion of the fluid flow path115from at least a portion of the fluid flow path116. For example, the flow controller140can be disposed in the housing110such that a first side and/or surface of the flow controller140is selectively in fluid communication with at least a portion of the fluid flow path115and/or the inlet113, and a second side and/or surface of the flow controller140is selectively in fluid communication with at least a portion of the fluid flow path116and/or the outlet114. The flow controller140can be any suitable shape, size, and/or configuration. For example, the flow controller140can be, for example, a membrane, a diaphragm, a bladder, a plunger, a piston, a bag, a pouch, and/or any other suitable member having a desired stiffness, flexibility, and/or durometer. In some embodiments, the flow controller140can be configured to transition from a first state to a second state in response to a negative pressure differential and/or suction force exerted on at least a portion of the flow controller140. For example, in some embodiments, the flow controller140can be a bladder configured to transition or "flip" from a first state to a second state in response to a negative pressure differential and/or suction force exerted on a surface of the bladder, as described in further detail herein with reference to specific embodiments. The flow controller140can be in a first state prior to using the device100(e.g., a storage or non-use state) and, in response to the outlet114being fluidically coupled to the fluid collection device180(e.g., a collection device defining or configured to define a negative pressure and/or suction force), the flow controller140can be transitioned to a second state.
In some embodiments, the flow controller140can define at least a portion of the sequestration chamber130when the flow controller140is in the second state. In some embodiments, the arrangement of the flow controller140is such that the sequestration chamber130defines and/or has a first volume when the flow controller140is in the first state and a second volume, greater than the first volume, when the flow controller140is placed in the second state. As described in further detail herein, the increase in the volume of the sequestration chamber130can result in a suction force operable to draw the initial volume of bodily fluid into the sequestration chamber130. Moreover, in some embodiments, the flow controller140can have a size, shape, and/or configuration that allows the sequestration chamber130to receive a volume of air or gas (e.g., a volume of air disposed in the flow path between the bodily fluid source and the sequestration chamber130) and the initial amount or volume of bodily fluid. In such embodiments, the flow controller140can be configured to define any number of portions, volumes, channels, etc., that can receive and/or contain at least one of a volume of air or the initial volume of bodily fluid. In some embodiments, a size, shape, arrangement, and/or constituent material of the flow controller140can be configured and/or otherwise selected such that the flow controller140transitions from the first state to the second state in a predetermined manner and/or at a predetermined or desired rate. In some instances, controlling a rate at which the flow controller140transitions from the first state to the second state can, in turn, control and/or modulate a rate of bodily fluid flow into the sequestration chamber130and/or a magnitude of a suction force generated in the sequestration chamber130that is operable in drawing the initial volume of bodily fluid into the sequestration chamber130. Although not shown inFIG.1, in some embodiments, the housing110can include a valve, a membrane, a porous material, a restrictor, an orifice, and/or any other suitable member, device, and/or feature configured to modulate a suction force exerted on a surface of the flow controller140, which, in turn, can modulate the rate at which the flow controller140transitions from the first state to the second state. In some instances, controlling a rate at which the flow controller140transitions and/or a magnitude of a pressure differential and/or suction force generated within the sequestration chamber130can reduce, for example, hemolysis of a blood sample and/or a likelihood of collapsing a vein (e.g., which is particularly important when procuring bodily fluid samples from fragile patients). In some instances, modulating the transitioning of the flow controller140and/or the pressure differential generated in the sequestration chamber130can at least partially control an amount or volume of bodily fluid transferred into the sequestration chamber130(i.e., can control a volume of the initial amount of bodily fluid). The actuator150of the device100is at least partially disposed within the housing110and is configured to control, direct, and/or otherwise facilitate a selective flow of fluid through at least a portion of the housing110. The actuator150can be any suitable shape, size, and/or configuration. For example, in some embodiments, the actuator150can be any suitable member or device configured to transition between a first state and a second state.
In some embodiments, for example, the actuator150can be a valve, plunger, seal, membrane, bladder, flap, plate, rod, switch, and/or the like. In some embodiments, the actuator150can include one or more seals configured to selectively establish fluid communication between the fluid flow paths115and116when the actuator150is transitioned from a first state to a second state. The actuator150can be actuated and/or transitioned between the first state and the second state in any suitable manner. For example, in some embodiments, transitioning the actuator150can include activating, pressing, moving, translating, rotating, switching, sliding, opening, closing, and/or otherwise reconfiguring the actuator150. In some instances, the actuator150can transition between the first and the second state in response to a manual actuation by the user (e.g., manually exerting a force on a button, slider, plunger, switch, valve, rotational member, conduit, etc.). In other embodiments, the actuator150can be configured to automatically transition between the first state and the second state in response to a pressure differential (or lack thereof), a change in potential or kinetic energy, a change in composition or configuration (e.g., a portion of an actuator could at least partially dissolve or transform), and/or the like. In still other embodiments, the actuator150can be mechanically and/or electrically actuated or transitioned (e.g., via a motor and/or the like) based on a predetermined time, volume of bodily fluid received, volumetric flow rate of a flow of bodily fluid, flow velocity of a flow of bodily fluid, etc. While examples of actuators and/or ways in which an actuator can transition are provided, it should be understood that they have been presented by way of example only and not limitation. In some embodiments, the actuator150can be configured to isolate, sequester, separate, and/or otherwise prevent fluid communication between at least a portion of the fluid flow path115and at least a portion of the fluid flow path116when in the first state and can be configured to place the fluid flow path115(or at least a portion thereof) in fluid communication with the fluid flow path116(or at least a portion thereof) when in the second state. In addition, the actuator150can be configured to sequester, separate, isolate, and/or otherwise prevent fluid communication between the sequestration chamber130and the inlet113, the outlet114, and/or at least a portion of the fluid flow paths115and116. Accordingly, when the actuator150is placed in its second state, the sequestration chamber130can be sequestered and/or fluidically isolated from other flow paths or portions of the housing110and the inlet113can be placed in fluid communication with the outlet114. As such, the actuator150can allow a subsequent volume of bodily fluid (e.g., a volume of bodily fluid after the initial volume of bodily fluid) to be transferred to the fluid collection device180fluidically coupled to the outlet114, as described in further detail herein. As described above, the device100can be used to procure a bodily fluid sample having reduced contamination from microbes such as, for example, dermally residing microbes, and/or the like. For example, in some instances, a user such as a doctor, physician, nurse, phlebotomist, technician, etc.
can manipulate the device100to establish fluid communication between the inlet113and the bodily fluid source (e.g., a vein of a patient, cerebral spinal fluid (CSF) from the spinal cavity, urine collection, and/or the like). As a specific example, in some instances, the inlet113can be coupled to and/or can include a needle or the like that can be manipulated to puncture the skin of the patient and to insert at least a portion of the needle in the vein of the patient, thereby placing the inlet113in fluid communication with the bodily fluid source (e.g., the vein, an IV catheter, a PICC, etc.). In some embodiments, once the inlet113is placed in fluid communication with the bodily fluid source (e.g., the portion of the patient), the outlet114can be fluidically coupled to the fluid collection device180. As described above, in some embodiments, the fluid collection device180can be any suitable reservoir, container, and/or device configured to receive a volume of bodily fluid. For example, the fluid collection device180can be an evacuated reservoir or container that defines a negative pressure and/or can be a syringe that can be manipulated to produce a negative pressure. In some instances, coupling the outlet114to the fluid collection device180selectively exposes at least a portion of the fluid flow path116to the negative pressure and/or suction force within the fluid collection device180. As described above, a portion and/or surface of the flow controller140can be in fluid communication with the fluid flow path116and, as such, the negative pressure and/or suction force can be exerted on the portion and/or surface of the flow controller140. The negative pressure and/or suction force, in turn, can be operable to transition the flow controller140from its first state, in which the sequestration chamber130has the first volume, to its second state, in which the sequestration chamber130has the second volume, greater than the first volume. As such, an initial volume of bodily fluid can be drawn into the sequestration chamber130in response to the transitioning of the flow controller140(e.g., the increase in volume of the sequestration chamber130as a result of the flow controller140transitioning from the first state to the second state). In some embodiments, for example, the flow controller140can be a bladder or the like configured to transition or “flip” in response to the negative pressure. The flow controller140can be configured to transition in a predetermined manner and/or with a predetermined rate, which in turn, can control, modulate, and/or otherwise determine one or more characteristics associated with a flow of an initial volume of bodily fluid into the sequestration chamber130. In some embodiments, the flow controller140and, for example, one or more inner surfaces of the housing110can collectively define a number of different portions of the sequestration chamber130. In such embodiments, at least one of the portions of the sequestration chamber130can be configured to contain a volume of air that was drawn into the sequestration chamber130immediately before the initial volume of bodily fluid, as described in detail above. Thus, the transitioning of the flow controller140from the first state to the second state can result in the initial portion of the volume of bodily fluid (also referred to herein as an “initial volume” or a “first volume”) flowing from the inlet113, through at least a portion of the fluid flow path115, and into the sequestration chamber130.
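One common sizing consideration, consistent with the discussion of the initial volume herein, is that the sequestered volume should at least cover the dead volume of the flow path upstream of the device. The sketch below assumes a simple cylindrical lumen with hypothetical dimensions; it is illustrative only, not a dimension of this disclosure.

```python
import math

# Dead volume of a cylindrical lumen: V = pi * r^2 * L (1 cm^3 == 1 mL).
# Dimensions below are assumptions chosen for illustration.

def lumen_dead_volume_ml(diameter_mm: float, length_mm: float) -> float:
    radius_cm = diameter_mm / 20.0  # mm -> cm, then halve to get the radius
    length_cm = length_mm / 10.0
    return math.pi * radius_cm ** 2 * length_cm

# e.g., a 1.0 mm bore needle/tubing path 150 mm long holds ~0.12 mL, so an
# initial (sequestered) volume of a few tenths of a mL would clear it.
print(round(lumen_dead_volume_ml(1.0, 150.0), 3))
```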
In some embodiments, transitioning the flow controller140from the first state to the second state can transition the control device100from a first or initial state or configuration to a second state or configuration in which the initial portion or volume of bodily fluid can flow in or through at least a portion of the fluid flow path115and into the sequestration chamber130. The initial volume of bodily fluid can be any suitable volume of bodily fluid, as described above. For example, in some instances, the control device100can remain in the second state or configuration until a predetermined and/or desired volume (e.g., the initial volume) of bodily fluid is transferred to the sequestration chamber130. In some embodiments, the initial volume can be associated with and/or at least partially based on a volume of the sequestration chamber130or a portion thereof (e.g., a volume sufficient to fill the sequestration chamber130or a desired portion of the sequestration chamber130). In other embodiments, the initial volume of bodily fluid can be associated with and/or at least partially based on an amount or volume of bodily fluid that is equal to or greater than a volume associated with the fluid flow path defined between the bodily fluid source and the sequestration chamber130. In still other embodiments, the control device100can be configured to transfer a flow of bodily fluid (e.g., the initial volume) into the sequestration chamber130until a pressure differential between the sequestration chamber130and the fluid flow path115and/or the bodily fluid source is brought into substantial equilibrium and/or is otherwise reduced below a desired threshold. After the initial volume of bodily fluid is transferred and/or diverted into the sequestration chamber130, the control device100can be transitioned from the second state or configuration to a third state or configuration. For example, in some embodiments, the actuator150can be transitioned from its first state to its second state when the initial volume of bodily fluid is transferred into the sequestration chamber130, which in turn, places the control device100in its third state. More particularly, in some embodiments, the arrangement of the control device100and/or the sequestration chamber130can be such that a flow of bodily fluid into the sequestration chamber130substantially stops or slows in response to receiving the initial volume. In some embodiments, for example, the sequestration chamber130can receive the flow of bodily fluid (e.g., the initial volume of bodily fluid) until a pressure differential equalizes within the sequestration chamber130and/or between the sequestration chamber130and the fluid flow path115and/or the bodily fluid source. In some instances, the user can visually inspect a portion of the device100and/or housing110to determine that the initial volume of bodily fluid is disposed in the sequestration chamber130and/or that the flow of bodily fluid into the sequestration chamber130has slowed or substantially stopped. In some embodiments, the user can exert a force on the actuator150and/or can otherwise actuate the actuator150to transition the actuator150from its first state to its second state. In other embodiments, the actuator150can be transitioned automatically (e.g., without user intervention).
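The first, second, and third states or configurations of the control device100described above can be summarized as a simple forward-only state model. The sketch below is illustrative only; the state names and the interlock condition are assumptions drawn from the description, not an implementation of the device.

```python
from enum import Enum, auto

class DeviceState(Enum):
    FIRST = auto()   # storage: flow controller140 and actuator150 in first states
    SECOND = auto()  # flow controller140 flipped: inlet113 -> sequestration chamber130
    THIRD = auto()   # actuator150 actuated: chamber sequestered, inlet113 -> outlet114

# Which fluid connection each configuration leaves open (labels illustrative).
OPEN_PATH = {
    DeviceState.FIRST: None,
    DeviceState.SECOND: ("inlet113", "sequestration_chamber130"),
    DeviceState.THIRD: ("inlet113", "outlet114"),
}

def advance(state: DeviceState, initial_volume_sequestered: bool) -> DeviceState:
    """Forward-only transitions; the third state is unreachable until the
    initial volume has been sequestered, mirroring the interlock described."""
    if state is DeviceState.FIRST:
        return DeviceState.SECOND
    if state is DeviceState.SECOND and initial_volume_sequestered:
        return DeviceState.THIRD
    return state
```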
The transitioning of the actuator150from its first state to its second state (e.g., placing the control device100in its third state or configuration) can sequester, isolate, separate, and/or retain the initial volume of the bodily fluid in the sequestration chamber130. As described in further detail herein, in some instances, contaminants such as, for example, dermally residing microbes or the like dislodged during the venipuncture event, other external sources of contamination, colonization of catheters and PICC lines that are used to collect samples, and/or the like can be entrained and/or included in the initial volume of the bodily fluid. Thus, such contaminants are sequestered in the sequestration chamber130when the initial volume is sequestered therein. In addition to sequestering the initial volume of bodily fluid in the sequestration chamber130, placing the actuator150in its second state can also establish fluid communication between at least a portion of the fluid flow paths115and116such that a subsequent volume(s) of bodily fluid can flow through at least a portion of the fluid flow paths115and/or116from the inlet113to the outlet114. For example, in some embodiments, transitioning the actuator150from its first state to its second state can, for example, open or close a port or valve, move one or more seals, move or remove one or more obstructions, define one or more portions of a flow path, and/or the like. With the fluid collection device180fluidically coupled to the outlet114and with the control device100being in the third state or configuration, the negative pressure differential and/or the suction force otherwise exerted on the flow controller140can be exerted on or through at least a portion of the fluid flow paths115and116. Thus, any subsequent volume(s) of the bodily fluid can flow from the inlet113, through at least a portion of the fluid flow paths115and116, through the outlet114, and into the fluid collection device180. As described above, sequestering the initial volume of bodily fluid in the sequestration chamber130prior to collecting or procuring one or more sample volumes of bodily fluid (e.g., in the fluid collection device180) reduces and/or substantially eliminates an amount of contaminants in the one or more sample volumes. Moreover, in some embodiments, the arrangement of the control device100can be such that the control device100cannot transition to the third state prior to collecting and sequestering the initial volume in the sequestration chamber130. FIGS.2-11illustrate a fluid control device200according to another embodiment. The fluid control device200(also referred to herein as “control device” or “device”) can be similar in at least form and/or function to the device100described above with reference toFIG.1. For example, as described above with reference to the device100, in response to being placed in fluid communication with a negative pressure source (e.g., a suction or vacuum source), the device200can be configured to (1) withdraw bodily fluid from a bodily fluid source into the device200, (2) divert and sequester a first portion or amount (e.g., an initial volume) of the bodily fluid in a portion of the device200, and (3) allow a second portion or amount (e.g., a subsequent volume) of the bodily fluid to flow through the device200—bypassing the sequestered initial volume—and into a fluid collection device fluidically coupled to the device200.
As such, contaminants or the like can be sequestered in or with the initial volume of bodily fluid, leaving the subsequent volume of bodily fluid substantially free of contaminants. The fluid control device200includes a housing210, a flow controller240, and an actuator250. In some embodiments, the control device200or at least a portion of the control device200can be arranged in a modular configuration in which one or more portions of the housing210and/or the actuator250can be physically and fluidically coupled (e.g., by an end user) to collectively form the control device200. Similarly, in some embodiments, the control device200can be packaged, shipped, and/or stored independent of a fluid collection device (e.g., a sample reservoir, syringe, etc.) and/or an inlet device (e.g., a needle, catheter, peripheral intravenous line (PIV), peripherally inserted central catheter (PICC), etc.), which a user can couple to the control device200before or during use. In other embodiments, the control device200need not be modular. For example, in some embodiments, the control device200can be assembled during manufacturing and delivered to a supplier and/or end user as an assembled device. In some embodiments, the control device200can include and/or can be pre-coupled (e.g., during manufacturing and/or prior to being delivered to an end user) to a fluid collection device such as any of those described above. Similarly, in some embodiments, the control device200can include and/or can be pre-coupled to an inlet device such as any of those described herein. The housing210of the control device200can be any suitable shape, size, and/or configuration. The housing210includes an actuator portion212and a sequestration portion220. The actuator portion212of the housing210receives at least a portion of the actuator250. The sequestration portion220of the housing210is coupled to a cover235and includes, receives, houses, and/or at least partially defines a sequestration chamber230. As described in further detail herein, the housing210can include and/or can define a first port217and a second port218, each of which establishes fluid communication between the actuator portion212and the sequestration portion220of the housing210to selectively control and/or allow a flow of fluid through one or more portions of the housing210. As shown inFIGS.2-6, the actuator portion212of the housing210includes an inlet213and an outlet214. The inlet213is configured to be placed in fluid communication with a bodily fluid source to receive a flow of bodily fluid therefrom, as described in detail above. For example, the inlet213can be coupled directly or indirectly to a lumen-containing device such as a needle, IV catheter, PICC line, and/or the like, which in turn, is in fluid communication with the bodily fluid source (e.g., inserted into a patient). The outlet214is configured to be fluidically coupled to a fluid collection device such as any of those described above. For example, the fluid collection device can be a sample reservoir, a syringe, an intermediary bodily fluid transfer device, adapter, or vessel (e.g., a transfer adapter similar to those described in the '783 patent), and/or the like.
Moreover, the fluid collection device can define and/or can be manipulated to define a vacuum within the fluid collection device such that coupling the fluid collection device to the outlet214generates a negative pressure differential between one or more portions of the housing210, as described in further detail herein. As shown, for example, inFIGS.7-11, the actuator portion212defines a fluid flow path215in fluid communication with the inlet213and a fluid flow path216in fluid communication with the outlet214. More particularly, the fluid flow path215(e.g., a first fluid flow path) is configured to selectively place the inlet213in fluid communication with the first port217and the fluid flow path216(e.g., a second fluid flow path) is configured to selectively place the outlet214in fluid communication with the second port218. In addition, after an initial volume of bodily fluid has been transferred into the sequestration chamber230, fluid communication can be established between the fluid flow paths215and216, thereby allowing a subsequent volume of bodily fluid to flow from the inlet213, through at least a portion of the fluid flow paths215and216, and to the outlet214(and/or to a fluid collection device coupled to the outlet214), as described in further detail herein. The sequestration portion220of the housing210can be any suitable shape, size, and/or configuration. As shown, for example, inFIGS.6-8, the sequestration portion220includes and/or forms an inner surface, a portion of which is arranged and/or configured to form a first contoured surface221. At least a portion of the first contoured surface221can form and/or define a portion of the sequestration chamber230, as described in further detail herein. Furthermore, the first port217and the second port218are configured to form and/or extend through a portion of the first contoured surface221to selectively place the sequestration chamber230in fluid communication with the fluid flow paths215and216, as described in further detail herein. The sequestration portion220is configured to include, form, and/or house a contour member225and the flow controller240. More particularly, as shown inFIGS.6-8, the sequestration portion220receives and/or is coupled to the contour member225such that the flow controller240is disposed therebetween. In some embodiments, the contour member225can be fixedly coupled to the sequestration portion220via an adhesive, ultrasonic welding, and/or any other suitable coupling method. In some embodiments, the contour member225, the sequestration portion220, and the flow controller240can collectively form a substantially fluid tight and/or hermetic seal that isolates the sequestration portion220from a volume outside of the sequestration portion220. As shown, a cover235is configured to be disposed about the contour member225such that the cover235and the sequestration portion220of the housing210enclose and/or house the contour member225and the flow controller240. In some embodiments, the cover235can be coupled to the contour member225and/or the sequestration portion220via an adhesive, ultrasonic welding, one or more mechanical fasteners, a friction fit, a snap fit, a threaded coupling, and/or any other suitable manner of coupling. In some embodiments, the cover235can define an opening, window, slot, etc. configured to allow visualization of at least a portion of the sequestration chamber230.
While the contour member225and the cover235are described above as being separate pieces and/or components, in other embodiments, the contour member225can be integrated and/or monolithically formed with the cover235. The contour member225includes and/or forms a second contoured surface226. The arrangement of the contour member225and the sequestration portion220of the housing210can be such that at least a portion of the first contoured surface221is aligned with and/or opposite a corresponding portion of the second contoured surface226of the contour member225(see e.g.,FIG.8). As such, a space, volume, opening, void, chamber, and/or the like defined between the first contoured surface221and the second contoured surface226forms and/or defines the sequestration chamber230. Moreover, the flow controller240is disposed between the first contoured surface221and the second contoured surface226and can be configured to transition between a first state and a second state in response to a negative pressure differential and/or suction force applied to at least a portion of the sequestration chamber230, as described in further detail herein. The ports217and218of the housing210can be any suitable shape, size, and/or configuration. As described above, the first port217is in fluid communication with the sequestration chamber230and can selectively establish fluid communication between the sequestration chamber230and the fluid flow path215and/or the inlet213. More specifically, the first port217is in fluid communication with a first portion of the sequestration chamber230defined between the second contoured surface226and a first side of the flow controller240. As described in further detail herein, the first port217can be configured to provide and/or transfer a flow of bodily fluid from the inlet213and the fluid flow path215and into the first portion of the sequestration chamber230defined between the second contoured surface226and the first side of the flow controller240in response to the flow controller240transitioning from a first state to a second state. The second port218is in fluid communication with the sequestration chamber230and can selectively establish fluid communication between the sequestration chamber230and the fluid flow path216and/or the outlet214. More specifically, the second port218is in fluid communication with a second portion of the sequestration chamber230defined between the first contoured surface221and a second side of the flow controller240(e.g., opposite the first side). As described in further detail herein, the second port218can be configured to expose the second portion of the sequestration chamber230defined between the first contoured surface221and the second side of the flow controller240to a negative pressure differential and/or suction force resulting from the fluid collection device (e.g., an evacuated container, a culture bottle, a syringe, and/or the like) being fluidically coupled to the outlet214. In turn, the negative pressure differential and/or suction force can be operable to transition the flow controller240from its first state to its second state. In some instances, it may be desirable to modulate and/or control a magnitude of the negative pressure differential. As such, the second port218can include and/or can be coupled to a restrictor219. 
The restrictor219can be configured to limit and/or restrict a flow of fluid (e.g., air or gas) between the second portion of the sequestration chamber230and the fluid flow path216, thereby modulating and/or controlling a magnitude of a pressure differential and/or suction force applied on or experienced by the flow controller240, as described in further detail herein. The flow controller240is disposed within the housing210between the sequestration portion220and the contour member225(e.g., within the sequestration chamber230). The flow controller240can be any suitable shape, size, and/or configuration. Similarly, the flow controller240can be formed of any suitable material (e.g., any suitable biocompatible material such as those described herein and/or any other suitable material). For example, the flow controller240can be a fluid impermeable bladder, membrane, diaphragm, and/or the like configured to be transitioned from a first state and/or configuration to a second state and/or configuration. In some embodiments, the flow controller240(e.g., bladder) can include any number of relatively thin and flexible portions configured to deform in response to a pressure differential across the flow controller240. For example, in some embodiments, the flow controller240can be formed of or from any suitable medical-grade elastomer and/or any of the biocompatible materials described above. In some embodiments, the flow controller240can have a durometer between about 5 Shore A and about 70 Shore A, between about 10 Shore A and about 60 Shore A, between about 20 Shore A and about 50 Shore A, between about 30 Shore A and about 40 Shore A, and/or any other suitable durometer. In some embodiments, the flow controller240can be formed of or from silicone having a durometer between about 20 Shore A and about 50 Shore A. More particularly, in some such embodiments, the flow controller240can be formed of or from silicone having a durometer of about 30 Shore A. In some embodiments, the flow controller240can include relatively thin and flexible portions having a thickness between about 0.001″ and about 0.1″. In other embodiments, the relatively thin and flexible portions can have a thickness that is less than 0.001″ or greater than 0.1″. In some embodiments, the flow controller240can have a size and/or shape configured to facilitate, encourage, and/or otherwise result in fluid flow with a desired set of flow characteristics. Similarly, the flow controller240can be formed of or from a material having one or more material properties and/or one or more surface finishes configured to facilitate, encourage, and/or otherwise result in fluid flow with the desired set of flow characteristics. As described in further detail herein, the set of flow characteristics can be and/or can include a relatively even or smooth fluid flow, a substantially laminar fluid flow and/or a fluid flow with relatively low turbulence, a fluid flow with a substantially uniform front, a fluid flow that does not readily mix with other fluids (e.g., a flow of bodily fluid that does not mix with a flow or volume of air), and/or the like. In the embodiment shown inFIGS.2-11, the flow controller240is a bladder (or diaphragm) formed of or from silicone having a durometer of about 30 Shore A. The flow controller240(e.g., bladder) includes a first deformable portion241, a second deformable portion242, and a third deformable portion243. In addition, the flow controller240defines an opening244.
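For context on the durometer values quoted above, Shore A hardness can be related to an approximate Young's modulus via the empirical Gent relation. The sketch below applies it to the 30 Shore A example; the relation is a published rule-of-thumb approximation, not part of this disclosure, and actual elastomer moduli vary.

```python
# Empirical Gent approximation relating Shore A hardness S to an approximate
# Young's modulus E in MPa (a heuristic, not a specification):
#     E = 0.0981 * (56 + 7.62336*S) / (0.137505 * (254 - 2.54*S))

def shore_a_to_youngs_modulus_mpa(shore_a: float) -> float:
    return 0.0981 * (56.0 + 7.62336 * shore_a) / (0.137505 * (254.0 - 2.54 * shore_a))

print(round(shore_a_to_youngs_modulus_mpa(30.0), 2))  # ~1.14 MPa for 30 Shore A
print(round(shore_a_to_youngs_modulus_mpa(50.0), 2))  # ~2.68 MPa for 50 Shore A
```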
As shown, for example, inFIG.8, the flow controller240can be positioned within the sequestration portion220of the housing210such that the first port217extends through the opening244. In some embodiments, the arrangement of the flow controller240is such that a surface of the flow controller240defining the opening244forms a substantially fluid tight seal with a portion of the inner surface of the sequestration portion220of the housing210(e.g., the portion defining and/or forming the first port217). Moreover, the flow controller240can include one or more portions configured to form one or more seals with and/or between the flow controller240and each of the contoured surfaces221and226, as described in further detail herein. The deformable portions241,242, and243of the flow controller240can be relatively thin and flexible portions configured to deform in response to a pressure differential between the first side of the flow controller240and the second side of the flow controller240. More particularly, the deformable portions241,242, and243can each have a thickness of about 0.005″. As shown, for example, inFIGS.8and10, the deformable portions241,242, and243of the flow controller240correspond to and/or have substantially the same general shape as at least a portion of the contoured surfaces221and/or226. As such, the deformable portions241,242, and243and the corresponding portion of the contoured surfaces221and/or226can collectively form and/or define one or more channels or the like, which in turn, can receive the initial volume of bodily fluid, as described in further detail herein. As described above, the flow controller240is configured to transition between a first state and a second state. For example, when the flow controller240is in its first state, the deformable portions241,242, and243are disposed adjacent to and/or substantially in contact with the second contoured surface226, as shown inFIG.8. More specifically, the first deformable portion241can be disposed adjacent to and/or substantially in contact with a first recess227formed by the second contoured surface226, the second deformable portion242can be disposed adjacent to and/or substantially in contact with a second recess228formed by the second contoured surface226, and the third deformable portion243can be disposed adjacent to and/or substantially in contact with a third recess229formed by the second contoured surface226. As such, the first portion of the sequestration chamber230(e.g., the portion defined between the second contoured surface226and the first surface of the flow controller240) can have a relatively small and/or relatively negligible volume. In contrast, when the flow controller240is transitioned from its first state to its second state (e.g., in response to a negative pressure applied and/or transmitted via the second port218), at least the deformable portions241,242, and243are disposed adjacent to and/or substantially in contact with the first contoured surface221. More specifically, the first deformable portion241can be disposed adjacent to and/or substantially in contact with a first recess222formed by the first contoured surface221, the second deformable portion242can be disposed adjacent to and/or substantially in contact with a second recess223formed by the first contoured surface221, and the third deformable portion243can be disposed adjacent to and/or substantially in contact with, for example, a non-recessed portion of the first contoured surface221. 
Accordingly, a volume of the first portion of the sequestration chamber230is larger when the flow controller240is in its second state than when the flow controller is in its first state. In other words, the deformable portions241,242, and243and the second contoured surface226can define one or more channels (e.g., the sequestration chamber230) configured to receive the initial volume of bodily fluid. In some instances, the increase in the volume of the first portion of the sequestration chamber230can result in a negative pressure or vacuum therein that can be operable to draw the initial volume of bodily fluid into the sequestration chamber230, as described in further detail herein. Moreover, in some embodiments, the arrangement of deformable portions241,242, and/or243can be such that a volume of air drawn into the sequestration chamber230immediately before the flow of bodily fluid can flow into and/or be disposed in a portion of the sequestration chamber230corresponding to the first deformable portion241and/or the second deformable portion242. While the flow controller240is particularly described above with reference toFIGS.6-11, in other embodiments, the flow controller240and/or the sequestration chamber230can have any suitable configuration and/or arrangement. For example, in some embodiments, the contoured surfaces221and/or226can include more or fewer recesses (e.g., the recesses222and223and the recesses227,228, and229, respectively). In other embodiments, a depth of one or more recesses can be modified. Similarly, the flow controller240can be modified in any suitable manner to substantially correspond to a shape and/or configuration of the contoured surfaces221and/or226. In some embodiments, such modifications can, for example, modify one or more characteristics associated with a flow of a gas (e.g., air) and/or fluid (e.g., bodily fluid), one or more characteristics associated with the manner or rate at which the flow controller240transitions, and/or the like, as described in further detail herein. While the flow controller240is described as being a bladder or the like including a number of deformable portions, in other embodiments, a flow controller can be arranged and/or configured as, for example, a bellows, a flexible pouch, an expandable bag, an expandable chamber, a plunger (e.g., similar to a syringe), and/or any other suitable reconfigurable container or the like. In addition, the sequestration chamber230at least partially formed by the flow controller240can have any suitable shape, size, and/or configuration. The actuator250of the control device200can be any suitable shape, size, and/or configuration. At least a portion of the actuator250is disposed within the actuator portion212of the housing210and is configured to be transitioned between a first state, configuration, and/or position and a second state, configuration, and/or position. In the embodiment shown inFIGS.2-11, the actuator250is an actuator rod or plunger configured to be moved relative to the actuator portion212of the housing210. The actuator250includes an end portion251disposed outside of the housing210and configured to be engaged by a user to transition the actuator250between its first state and its second state. As shown inFIGS.6-11, a portion of the actuator250includes and/or is coupled to a set of seals255. The seals255can be, for example, o-rings, elastomeric over-molds, proud or raised dimensions or fittings, and/or the like.
The arrangement of the actuator250and the actuator portion212of the housing210can be such that an inner portion of the seals255forms a fluid tight seal with a surface of the actuator250and an outer portion of the seals255forms a fluid tight seal with an inner surface of the actuator portion212of the housing210. In other words, the seals255form one or more fluid tight seals between the actuator250and the inner surface of the actuator portion212. As shown inFIGS.7-11, the actuator250includes and/or is coupled to four seals255, which can be distributed along the actuator250to selectively form and/or define one or more flow paths therebetween. Moreover, the actuator250defines a flow channel252between a pair of seals255, which can aid and/or facilitate the fluid communication between the fluid flow paths215and216when the actuator250is transitioned to its second state, as described in further detail herein. While the actuator250is described above as including four seals255, in other embodiments, the actuator250can include fewer than four seals255or more than four seals255. In some embodiments, the actuator portion212of the housing210and the actuator250collectively include and/or collectively form a lock. For example, as shown inFIGS.6and8, the actuator portion212of the housing210can define an opening238and the actuator250can include a locking member, latch, protrusion, tab, and/or the like (referred to herein as “lock253”) configured to be disposed, at least partially, within the opening238. In some embodiments, the lock253can be arranged and/or disposed in the opening238and can limit and/or substantially prevent the actuator250from being removed from the housing210. In some embodiments, the lock253can be transitioned between a locked state, in which the lock253limits and/or substantially prevents the actuator250from being moved relative to the housing210, and an unlocked state, in which the actuator250can be moved, for example, between its first state and/or position and its second state and/or position. In some instances, such an arrangement may limit and/or substantially prevent the actuator250from being actuated, for example, prior to transferring the initial volume of bodily fluid in the sequestration chamber230. In other embodiments, the lock253can transition from the unlocked state to a locked state, for example, after transferring the initial volume of bodily fluid into the sequestration chamber230. As shown inFIGS.7and8, when the actuator250is disposed in its first state and/or position (e.g., prior to using the device200), the fluid flow path215can establish fluid communication between the inlet213and the first port217. More particularly, the actuator250can be in a position relative to the housing210such that each of the seals255is disposed on a side of the inlet213opposite to a side of the inlet213associated with the first port217. In other words, the actuator250and/or the seals255do not obstruct and/or occlude the fluid flow path215when the actuator250is in the first state and/or position, as shown inFIGS.7and8. As such, when the actuator250is in the first state and/or position, a volume of bodily fluid (e.g., an initial volume) can flow from the inlet213, through the fluid flow path215and the first port217, and into the sequestration chamber230, as described in further detail herein. As shown inFIGS.9-11, a force can be exerted on the end portion251of the actuator250to place the actuator250in its second state and/or position.
When in the second state and/or position, the inlet213and the outlet214are placed in fluid communication via at least a portion of the fluid flow paths215and216and/or the flow channel252. As shown inFIGS.9and11, the actuator250can be positioned such that the inlet213and the outlet214are each disposed between the same pair of seals255, thereby allowing a flow of bodily fluid therethrough. In addition, the flow channel252defined by the actuator250assists and/or facilitates the flow of bodily fluid (see e.g.,FIG.11). For example, in some embodiments, the flow channel252can establish fluid communication between a portion of the fluid flow path215defined by the inlet213and a portion of the fluid flow path216defined by the outlet214. Moreover, the arrangement of the seals255is such that the first port217and the second port218are each sequestered and/or isolated from each of the inlet213and the outlet214. As such, placing the actuator250in the second state and/or position can (1) sequester and/or isolate the sequestration chamber230and any volume of bodily fluid disposed therein and (2) establish fluid communication between the inlet213and the outlet214, thereby allowing a volume of bodily fluid to flow through the device200and into a fluid collection device (not shown) fluidically coupled to the outlet214. In some embodiments, the set of seals255can be configured to sequester, isolate, and/or seal one or more portions of the device200prior to establishing fluid communication between other portions of the device200. For example, in some embodiments, the actuator250can be in a first position relative to the actuator portion212of the housing210when in the first state, as described above. In such instances, actuating the actuator250(e.g., exerting a force on the end portion251of the actuator250) can include moving the actuator250from the first position relative to the actuator portion212to a second position relative to the actuator portion212, in which (1) a first seal255is disposed between the first port217and the inlet213and/or a lumen thereof, (2) the inlet213and/or the lumen thereof is disposed between the first seal255and a second seal255, (3) the outlet214and/or a lumen thereof is disposed between the second seal255and a third seal255, and (4) the second port218is disposed between the third seal255and a fourth seal255. In this manner, the inlet213is sequestered from the first port217, the outlet214is sequestered from the second port218, and fluid communication has not yet been established between the inlet213and the outlet214(e.g., the inlet213is sequestered from the outlet214). In some instances, actuating the actuator250can further include moving the actuator250from the second position relative to the actuator portion212to a third position relative to the actuator portion212, in which the actuator250is in the second state. As such, the second seal255is disposed between the first port217and the inlet213and/or the lumen thereof, each of the inlet213and the outlet214(and/or the lumens thereof) is disposed between the second seal255and the third seal255, and the second port218is disposed between the third seal255and the fourth seal255. Thus, each of the first port217and the second port218are sequestered from the inlet213and the outlet214(and/or the lumens thereof), and fluid communication is established (e.g., via the flow channel252) between the inlet213and the outlet214(and/or the lumens thereof).
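The seal sequencing described above can be summarized as a lookup of which lumens communicate at each actuator position. The sketch below is illustrative only; the position names and path labels are assumptions based on the description, not an implementation of the actuator.

```python
# Position -> fluid connections left open by the four seals255 (labels
# illustrative). The intermediate ("second") position isolates everything
# before the "third" position opens the inlet-to-outlet path.
OPEN_PATHS = {
    "first":  {("inlet213", "first_port217"), ("outlet214", "second_port218")},
    "second": set(),  # every lumen sequestered from every other lumen
    "third":  {("inlet213", "outlet214")},  # via the flow channel252
}

def is_open(position: str, a: str, b: str) -> bool:
    """True if lumens a and b communicate at the given actuator position."""
    return (a, b) in OPEN_PATHS[position] or (b, a) in OPEN_PATHS[position]

assert is_open("first", "inlet213", "first_port217")
assert not is_open("second", "inlet213", "outlet214")
assert is_open("third", "outlet214", "inlet213")
```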
While the actuator250, in this example, is described as being moved between the first, second, and third positions relative to the actuator portion212, it should be understood that transitioning the actuator250from the first state to the second state can include moving the actuator250in a substantially continuous manner from the first position relative to the actuator portion212, through the second position relative to the actuator portion212, and to the third position relative to the actuator portion212. In other embodiments, the actuator250can be actuated, moved, and/or transitioned in any number of discrete steps. For example, in some instances, the actuator250can be transitioned a first predetermined amount to move the actuator250from the first position relative to the actuator portion212to the second position relative to the actuator portion212and can then be transitioned (e.g., in a second and/or discrete step) a second predetermined amount to move the actuator250from the second position relative to the actuator portion212to the third position relative to the actuator portion212. While the actuator250is described above as including four seals255, in other embodiments, an actuator can be functionally similar to the actuator250and can include fewer than four seals (e.g., one seal, two seals, or three seals) or more than four seals (e.g., five seals, six seals, seven seals, or more). As described above, the device200can be used to procure a bodily fluid sample having reduced contamination (e.g., contamination from microbes such as, for example, dermally residing microbes, microbes external to the bodily fluid source, and/or the like). For example, prior to use, the device200can be in its first, initial, and/or storage state or operating mode, in which each of the flow controller240and the actuator250is in its respective first or initial state. With the device200in the first state, a user such as a doctor, physician, nurse, phlebotomist, technician, etc. can manipulate the device200to establish fluid communication between the inlet213and the bodily fluid source (e.g., a vein of a patient). Once the inlet213is placed in fluid communication with the bodily fluid source, the outlet214can be fluidically coupled to a fluid collection device (not shown inFIGS.2-11). In the embodiment shown inFIGS.2-11, for example, the fluid collection device can be an evacuated container, a culture bottle, a sample reservoir, a syringe, and/or any other suitable container or device configured to define or produce a negative pressure, suction force, vacuum, and/or energy potential. When the actuator250is in the first position and/or configuration, the inlet213of the housing210is in fluid communication with, for example, the fluid flow path215, which in turn, is in fluid communication with the first port217. The outlet214of the housing210is in fluid communication with the fluid flow path216, which in turn, is in fluid communication with the second port218. More particularly, one or more of the seals255of the actuator250can be in a position relative to the actuator portion212of the housing210that (1) allows and/or establishes fluid communication between the inlet213, the fluid flow path215, and the first port217and (2) fluidically isolates the inlet213, the fluid flow path215, and the first port217from the outlet214, the fluid flow path216, and the second port218, as shown inFIGS.7and8.
Thus, when the control device200is in the first state or operating mode (e.g., when the actuator250and the flow controller240are each in their first state), fluidically coupling the fluid collection device to the outlet214generates and/or otherwise results in a negative pressure differential and/or suction force within at least a portion of the fluid flow path216and, in turn, within the portion of the sequestration chamber230defined between a surface of the flow controller240(e.g., a first surface) and the first contoured surface221of the housing210. The flow controller240is in the first state and/or configuration prior to the fluid collection device being coupled to the outlet214. In the embodiment shown inFIGS.2-11, the flow controller240is a fluid impermeable bladder, diaphragm, membrane, and/or the like that can have a flipped, inverted, collapsed, and/or empty configuration (e.g., the first state and/or configuration) prior to coupling the fluid collection device to the outlet214. For example, as shown inFIG.8, the flow controller240can be disposed adjacent to and/or in contact with the second contoured surface226when the flow controller240is in its first state and/or configuration. Said another way, the first side of the flow controller240(opposite the second side) can be disposed adjacent to and/or can be in contact with the second contoured surface226. As described above, the flow controller240is configured to transition from its first state and/or configuration to its second state and/or configuration in response to the negative pressure differential and/or suction force generated within the portion of the sequestration chamber230defined between the flow controller240and the first contoured surface221. For example, the flow controller240can be configured to transition, move, “flip”, and/or otherwise reconfigure to its second state and/or configuration in which the flow controller240and/or the second side of the flow controller240(opposite the first side) is disposed adjacent to and/or in contact with the first contoured surface221, as shown inFIG.10. Said another way, the negative pressure differential and/or suction force draws, pulls, and/or otherwise moves at least a portion of the flow controller240toward the first contoured surface221and away from the second contoured surface226. Moreover, the control device200is placed in its second state and/or configuration when the actuator250is in its first state and the flow controller240is in its second state. The transitioning of the flow controller240results in an increase in an inner volume of the portion of the sequestration chamber230defined between a surface of the flow controller240(e.g., the first side of the flow controller240) and the second contoured surface226. The increase in the inner volume can, in turn, result in a negative pressure differential between the portion of the sequestration chamber230(defined at least in part by the flow controller240) and, for example, the inlet213that is operable in drawing at least a portion of an initial flow, amount, or volume of bodily fluid from the inlet213, through the fluid flow path215and the first port217, and into the portion of the sequestration chamber230. 
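A rough isothermal (Boyle's law) estimate illustrates how the volume increase described above produces the suction that draws the initial volume through the first port217. The gas volumes below are assumptions chosen for illustration; they are not dimensions of this disclosure.

```python
# Isothermal estimate: a trapped gas at pressure P1 occupying V1 that is
# allowed to expand to V2 falls to P2 = P1 * V1 / V2 (Boyle's law).
# All values below are illustrative assumptions.

def pressure_after_expansion_psi(p1_psi: float, v1_ml: float, v2_ml: float) -> float:
    return p1_psi * v1_ml / v2_ml

# e.g., ~14.7 PSI of gas occupying 0.2 mL that expands to 1.0 mL drops to
# ~2.9 PSI absolute, i.e., roughly an 11.8 PSI differential vs. atmosphere.
print(round(pressure_after_expansion_psi(14.7, 0.2, 1.0), 1))
```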
In some instances, the initial volume and/or flow of bodily fluid can be transferred into the sequestration chamber230until, for example, the flow controller240is fully expanded, flipped, and/or transitioned, until the negative pressure differential is reduced and/or equalized, and/or until a desired volume of bodily fluid is disposed within the portion of the sequestration chamber230. In some instances, it may be desirable to modulate and/or control a manner in which the flow controller240is transitioned and/or a magnitude of the negative pressure differential and/or suction force generated within the sequestration chamber230on one or both sides of the flow controller240. In the embodiment shown inFIGS.2-11, for example, the second port218defines, includes, receives, and/or is otherwise coupled to the restrictor219that establishes fluid communication between the fluid flow path216and the portion of the sequestration chamber230defined between the flow controller240and the first contoured surface221. In some embodiments, the restrictor219can define a lumen or flow path having a relatively small diameter (e.g., relative to a diameter of at least a portion of the fluid flow path216). For example, in some embodiments, the restrictor219can have a diameter of about 0.0005″, about 0.001″, about 0.003″, about 0.005″, about 0.01″, about 0.1″, about 0.5″, or more. In other embodiments, the restrictor219can have a diameter less than 0.0005″ or greater than 0.5″. In some embodiments, the restrictor219can have a predetermined and/or desired length of about 0.01″, about 0.05″, about 0.1″, about 0.15″, about 0.2″, about 0.5″, or more. In other embodiments, the restrictor219can have a predetermined and/or desired length that is less than 0.01″ or more than about 0.5″. Moreover, in some embodiments, the restrictor219can have any suitable combination of diameter and length to allow for and/or to provide a desired fluid (e.g., air) flow characteristic through at least a portion of the control device200. While the restrictor219is described above as defining a relatively small lumen and/or flow path, in other embodiments, a restrictor can have any suitable shape, size, and/or configuration. For example, in some embodiments, a restrictor can be a porous material, a semi-permeable member or membrane, a mechanical valve, float, and/or limiter, and/or any other suitable member or device configured to modulate a pressure differential across at least a portion thereof. In the embodiment shown inFIGS.2-11, the relatively small diameter of the restrictor219results in a lower magnitude of negative pressure being applied through and/or within the portion of the sequestration chamber230than would otherwise be applied with a restrictor having a larger diameter or if the second port218did not include or receive a restrictor219. For example, in some embodiments, a fluid collection device and/or other suitable negative pressure source may define and/or produce a negative pressure differential having a magnitude (e.g., a negative magnitude) of about 0.5 pounds per square inch (PSI), about 1.0 PSI, about 2.0 PSI, about 3.0 PSI, about 4.0 PSI, about 5.0 PSI, about 10.0 PSI, about 12.5 PSI, or about 14.7 PSI (at or substantially at atmospheric pressure at about sea level). In some embodiments, a fluid collection device such as an evacuated container or the like can have a predetermined negative pressure of about 12.0 PSI.
Accordingly, by controlling the diameter and/or length of the restrictor219, the amount of negative pressure to which the portion of the sequestration chamber230is exposed and/or the rate at which the negative pressure is applied can be controlled, reduced, and/or otherwise modulated. In some instances, the use of the restrictor219can result in a delay or ramp up of the negative pressure exerted on or in the portion of the sequestration chamber230. Although the pressure modulation is described above as being based on a diameter of the restrictor219(i.e., a single restricted flow path), it should be understood that this is presented by way of example only and not limitation. Other means of modulating the magnitude of negative pressure to which the portion of the sequestration chamber230is exposed can include, for example, a porous material, a valve, a membrane, a diaphragm, a specific restriction, a vent, a deformable member or flow path, and/or any other suitable means. In other embodiments, a control device can include any suitable number of restricted flow paths, each of which can have substantially the same diameter or can have varied diameters. For example, in some embodiments, a control device can include up to 100 restricted flow paths or more. In such embodiments, each of the restricted flow paths can have a diameter of between about 0.0005″ and about 0.1″, between about 0.0005″ and about 0.05″, or between about 0.0005″ and about 0.01″. In some embodiments, multiple restricted flow paths can be configured to selectively provide a flow path between the outlet214and the portion of the sequestration chamber230that exposes the portion of the sequestration chamber230to the negative pressure differential. In some embodiments, modulating and/or controlling a magnitude of the pressure to which the portion of the sequestration chamber230is exposed can, in turn, modulate a rate at which one or more volumes of the sequestration chamber230are increased. In some instances, modulating the rate of volume increase (and thus, suction force) can modulate and/or limit a magnitude of pressure exerted on the bodily fluid and/or within a vein of a patient. In some instances, such pressure modulation can reduce, for example, hemolysis of a blood sample and/or a likelihood of collapsing a vein. In some instances, the ability to modulate and/or control an amount or magnitude of negative pressure or suction can allow the control device200to be used across a large spectrum of patients that may have physiological challenges whereby negative pressure is often needed to facilitate collection of bodily fluid such as, for example, blood (i.e., when the pressure differential between atmospheric pressure and a patient's vascular pressure is not sufficient to facilitate a consistent and sufficiently forceful flow), but without so much pressure that a rapid force flattens, collapses, caves in, and/or otherwise compromises vein patency and the ability to collect blood. In some embodiments, the shape, size, and/or arrangement of the sequestration chamber230and/or the flow controller240, the magnitude of the negative pressure differential or suction force, and/or the way in which the negative pressure differential or suction force is exerted can dictate and/or control a rate and/or manner in which the flow controller240is transitioned from the first state to the second state.
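Treating the restrictor219as a short cylindrical lumen with laminar flow, the Hagen-Poiseuille relation shows why small changes in bore diameter strongly modulate how quickly suction builds, and why N restricted flow paths in parallel scale the conductance roughly linearly in N. All values in the sketch below are assumptions for illustration, not dimensions of this disclosure.

```python
import math

# Laminar flow through a round lumen (Hagen-Poiseuille):
#     Q = pi * dP * d^4 / (128 * mu * L)
# so halving the bore cuts the air flow (and the rate at which suction
# builds behind the flow controller240) by ~16x.

def poiseuille_flow_m3_per_s(dp_pa: float, d_m: float, l_m: float,
                             mu_pa_s: float = 1.8e-5) -> float:
    """Volumetric flow of air (default viscosity ~1.8e-5 Pa*s) through a lumen."""
    return math.pi * dp_pa * d_m ** 4 / (128.0 * mu_pa_s * l_m)

q_small = poiseuille_flow_m3_per_s(80_000, 25e-6, 2.5e-3)   # ~0.001" bore, ~0.1" long
q_large = poiseuille_flow_m3_per_s(80_000, 125e-6, 2.5e-3)  # ~0.005" bore
print(q_large / q_small)  # 625.0 -- the d^4 scaling (5^4)

n_paths = 10
print(n_paths * q_small)  # N identical parallel restrictors: ~N times the conductance
```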
In some instances, controlling the rate, order, and/or manner in which the flow controller240is transitioned can result in one or more desired flow characteristics associated with a flow of air, gas, and/or bodily fluid into and/or through at least a portion of the sequestration chamber230. For example, the arrangement included in this embodiment can be such that a transitioning and/or flipping of the third deformable portion243of the flow controller240is completed prior to completion of the transitioning and/or flipping of the first and second deformable portions241and242. In some instances, this arrangement can be such that a portion of the sequestration chamber230collectively defined by the first deformable portion241and the first recess227of the second contoured surface226(e.g., a first volume of the sequestration chamber230) receives at least a portion of a volume of air that was within the fluid flow path between the bodily fluid source and the sequestration chamber230prior to the fluid flow path receiving and/or being filled with bodily fluid. Similarly, a portion of the sequestration chamber230collectively defined by the second deformable portion242and the second recess228of the second contoured surface226(e.g., a second volume of the sequestration chamber230) can receive at least a portion of the volume of air that was within the fluid flow path. In other words, the transitioning of the flow controller240can vent, evacuate, and/or purge air or gas from the fluid flow path between the bodily fluid source and the sequestration chamber230, which can then be collected, stored, and/or contained within the first and second volumes of the sequestration chamber230. On the other hand, a portion of the sequestration chamber230collectively defined by the third deformable portion243and the third recess229of the second contoured surface226(e.g., a third volume of the sequestration chamber230) can receive the initial volume of bodily fluid that flows through the fluid flow path between the bodily fluid source and the sequestration chamber230after the air or gas is collected in the first and/or second volumes of the sequestration chamber230. In some instances, such an arrangement and/or order of the deformable portions241,242, and/or243transitioning can result in an even flow of the initial volume of bodily fluid into, for example, the third volume of the sequestration chamber230. More particularly, the third deformable portion243is configured to complete or substantially complete the transition and/or flip from its first state and/or position prior to a complete or a substantially complete transition and/or flip of the first and/or second deformable portions241and/or242, respectively, which in turn, can allow the bodily fluid to flow into and/or through at least a portion of the third deformable portion243with a substantially uniform front. In this manner, the third deformable portion243can be in the second state, configuration, and/or position prior to the flow of bodily fluid entering the sequestration chamber230. Thus, the third volume of the sequestration chamber230can have and/or can define a relatively consistent and/or uniform cross-sectional shape and/or area as the flow of bodily fluid enters the sequestration chamber230, which in turn, can limit wicking of a portion of the bodily fluid flow, inconsistent local flow rates of the bodily fluid flow, and/or an otherwise uneven filling of the third volume of the sequestration chamber230. 
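One heuristic for reasoning about the flip order described above: a thicker (and therefore stiffer) deformable portion requires a larger pressure differential to transition, because the flexural rigidity of a thin panel grows with the cube of its thickness. The sketch below uses a modulus consistent with the Shore A estimate sketched earlier and a hypothetical thicker portion; it is a heuristic, not a validated membrane model of the flow controller240.

```python
# Flexural rigidity of a thin plate: D = E * t^3 / (12 * (1 - nu^2)).
# Material values are assumptions for a soft (~30 Shore A) silicone.

def bending_stiffness(e_pa: float, t_m: float, nu: float = 0.48) -> float:
    return e_pa * t_m ** 3 / (12.0 * (1.0 - nu ** 2))

E = 1.1e6  # ~1.1 MPa, per the Shore A 30 estimate above (assumption)
d_thin = bending_stiffness(E, 0.005 * 0.0254)   # a 0.005" portion
d_thick = bending_stiffness(E, 0.010 * 0.0254)  # a hypothetical 0.010" portion
print(d_thick / d_thin)  # 8.0 -- doubling thickness gives ~8x the rigidity
```

This cubic scaling is consistent with the observation that thinner portions transition and/or flip before a thicker, stiffer portion.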
As shown inFIGS.8and10, the first contoured surface221includes the recesses222and223that are each deeper than a portion of the first contoured surface221aligned and/or otherwise associated with the third deformable portion243of the flow controller240. Said another way, a distance between the first recess222and the second recess223of the first contoured surface221and the first recess227and the second recess228, respectively, of the second contoured surface226is greater than a distance between the portion of the first contoured surface221and the third recess229of the second contoured surface226. Accordingly, a distance traveled when the first and second deformable portions241and242transition and/or flip is greater than a distance traveled when the third deformable portion243transitions and/or flips. Furthermore, a width of the first and second deformable portions241and242can be similar to or less than a width of the third deformable portion243. In some instances, such an arrangement can allow the third deformable portion243to complete or substantially complete its transition and/or flip prior to each of the first and second deformable portions241and242, respectively, completing or substantially completing its transition and/or flip. In other embodiments, a distance traveled and/or a width of one or more of the deformable portions241,242, and/or243can be modified (increased or decreased) to modify and/or change a rate, order, and/or sequence associated with the deformable portions241,242, and/or243transitioning and/or flipping from the first state to the second state. In some embodiments, including fewer deformable portions or including more deformable portions can, for example, modify a relative stiffness of or associated with each deformable portion and/or can otherwise control a rate and/or manner in which each of the deformable portions transitions or flips, which in turn, can control a rate and/or manner in which fluid (e.g., air and/or bodily fluid) flows into the sequestration chamber230. For example, in some embodiments, increasing a number of deformable portions can result in a decrease in surface area on which the negative pressure is exerted, which in turn, can increase the pressure differential needed to transition and/or flip the deformable portions. While the deformable portions241,242, and243are shown inFIGS.8and10as having substantially the same thickness, in other embodiments, at least one deformable portion can have a thickness that is different from a thickness of the other deformable portions (e.g., the deformable portion241can have a different thickness than the thicknesses of the deformable portion242and/or the deformable portion243, or vice versa, or in other combinations). In some instances, increasing a thickness of a deformable portion relative to a thickness of the other deformable portions can increase a stiffness of that deformable portion relative to a stiffness of the other deformable portions. In some such instances, the increase in the stiffness of the thicker deformable portion can, in turn, result in the other deformable portions (e.g., the thinner deformable portions) transitioning and/or flipping prior to the thicker/stiffer deformable portion transitioning and/or flipping. In some embodiments, a deformable portion can have a varied thickness along at least a portion of the deformable portion. In some embodiments, a size, shape, material property, surface finish, etc.
of the flow controller240and/or the deformable portions241,242, and/or243can also facilitate, encourage, and/or otherwise result in fluid flow with the substantially uniform front. For example, the third volume of the sequestration chamber230(collectively defined by the third deformable portion243and the third recess229of the second contoured surface226) can have a size, shape, diameter, perimeter, and/or cross-sectional area that can limit and/or substantially prevent mixing of air with the bodily fluid flow (e.g., the front of the flow) due, at least in part, to a surface tension between the flow of bodily fluid and each of the third deformable portion243and the third recess229of the second contoured surface226. In some embodiments, for example, the third volume of the sequestration chamber230can have a cross-sectional area between about 0.0001 square inch (in2) and about 0.16 in2, between about 0.001 in2and about 0.08 in2, between about 0.006 in2and about 0.06 in2, or between about 0.025 in2and about 0.04 in2. In other embodiments, the third volume of the sequestration chamber230can have a cross-sectional area that is less than 0.0001 in2or greater than 0.16 in2. In some embodiments, the flow controller240and/or the contoured member225(or at least the second contoured surface226thereof) can be formed of or from a material having one or more material properties and/or one or more surface finishes configured to facilitate, encourage, and/or otherwise result in fluid flow with the desired set of flow characteristics. In other embodiments, the flow controller240and/or the second contoured surface226can have a coating configured to result in the desired set of flow characteristics. For example, in some embodiments, the flow controller240and/or the second contoured surface226can be formed of and/or can otherwise include a coating of a hydrophobic material or a hydrophilic material. Moreover, the flow controller240and at least a portion of the contoured member225(or at least the second contoured surface226thereof) can be formed of or from the same material and/or can include the same coating or can be formed of or from different materials and/or can include different coatings. Similarly, the flow controller240and/or the second contoured surface226can include any suitable surface finish, which can be substantially the same or different. In some instances, a non-exhaustive list of a desired set of flow characteristics can include one or more of a relatively even or smooth fluid flow, a substantially laminar fluid flow and/or a fluid flow with relatively low turbulence, a fluid flow with a substantially uniform front, a fluid flow that does not readily mix with other fluids (e.g., a flow of bodily fluid that does not mix with a flow or volume of air), a flow with a relatively uniform velocity, and/or the like. While certain aspects and/or features of the embodiment shown inFIGS.2-11are described above, along with ways in which to modify and/or “tune” the aspects and/or features, it should be understood that a flow controller and/or a sequestration chamber (or any structure forming a sequestration chamber) can have any suitable arrangement to result in a desired rate, manner, and/or order of conveying the initial volume of bodily fluid into one or more portions or volumes of the sequestration chamber230. In some embodiments, a flow controller and/or a sequestration chamber can include and/or can incorporate any suitable combination of the aspects and/or features described above.
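The geometric tuning described above (travel distance, width, and thickness jointly setting which deformable portion completes its flip first) can be summarized in a simple ranking heuristic. The sketch below is not part of the disclosed embodiments; it uses the thin-plate scaling that deflection under pressure grows roughly with w^4/(E*t^3), so flip resistance is proxied by t^3/w^4, and all dimensions are hypothetical placeholders.

# Illustrative sketch only; not part of the patented embodiments.
# Crude ranking of flip-completion order from the qualitative rules above:
# resistance to deflection ~ t^3 / w^4 (thin-plate scaling), and completion
# time also grows with the travel distance between the contoured surfaces.

def flip_score(thickness, width, travel):
    """Lower score suggests earlier flip completion. Heuristic only."""
    return (thickness**3 / width**4) * travel

portions = {  # hypothetical dimensions, inches
    "first (241)":  dict(thickness=0.005, width=0.20, travel=0.10),
    "second (242)": dict(thickness=0.005, width=0.20, travel=0.10),
    "third (243)":  dict(thickness=0.005, width=0.25, travel=0.05),
}
for name, dims in sorted(portions.items(), key=lambda kv: flip_score(**kv[1])):
    print(f"{name}: score {flip_score(**dims):.3g}")

With the third portion both wider and traveling a shorter distance, it scores lowest and completes first, matching the venting-then-filling order described above; increasing a portion's thickness raises its score cubically, mirroring the thicker-flips-later behavior noted in this passage.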
Any number of the aspects and/or features described above can be included in a device and can act in concert or can act cooperatively to result in the desired fluid flow and/or desired fluid flow characteristics through at least a portion of the sequestration chamber. Moreover, it should be understood that the aspects and/or features described above are provided by way of example only and not limitation. Having transferred the initial volume of bodily fluid into the sequestration chamber230, a force can be exerted on the end portion251of the actuator250to transition and/or place the actuator250in its second position, state, operating mode, and/or configuration, as described above. In some instances, prior to exerting the force on the end portion251of the actuator250, the actuator250may be transitioned from a locked configuration or state to an unlocked configuration or state. In the embodiment shown inFIGS.2-11, the transition of the actuator250can be achieved by and/or can otherwise result from user interaction and/or manipulation of the actuator250. In other embodiments, however, the transition of the actuator250can occur automatically in response to negative pressure and/or associated flow dynamics within the device200, and/or can be enacted by or in response to an external energy source that generates one or more dynamics or states that result in the transitioning of the actuator250. As shown inFIGS.9-11, the control device200is placed in its third state when each of the flow controller240and the actuator250is in its second state. When the actuator250is transitioned to its second state, position, and/or configuration, the inlet213and the outlet214are placed in fluid communication (e.g., via a portion of the fluid flow paths215and216and/or the flow channel252) while the first port217and the second port218are sequestered, isolated, and/or otherwise not in fluid communication with the inlet213and/or the outlet214. As such, the initial volume of bodily fluid is sequestered in the portion of the sequestration chamber230(e.g., the third volume of the sequestration chamber230, as described above). Moreover, in some instances, contaminants such as, for example, dermally residing microbes and/or any other contaminants can be entrained and/or included in the initial volume of the bodily fluid and thus, are sequestered in the sequestration chamber230when the initial volume is sequestered therein. As such, the negative pressure previously exerted on or through the fluid flow path216and through the second port218is now exerted on or through the outlet214and the inlet213via, for example, at least a portion of the fluid flow paths215and216and/or the flow channel252of the actuator250(FIG.11). In response, bodily fluid can flow from the inlet213, through the actuator portion212of the housing210, through the outlet214, and into the fluid collection device coupled to the outlet214. Accordingly, the device200can function in a manner substantially similar to that of the device100described in detail above with reference toFIG.1. FIGS.12-21illustrate a fluid control device300according to another embodiment. The fluid control device300(also referred to herein as “control device” or “device”) can be similar in at least form and/or function to the devices100and/or200described above.
For example, as described above with reference to the devices100and200, in response to being placed in fluid communication with a negative pressure source (e.g., a suction or vacuum source), the device300can be configured to (1) withdraw bodily fluid from a bodily fluid source into the device300, (2) divert and sequester a first portion or amount (e.g., an initial volume) of the bodily fluid in a portion of the device300, and (3) allow a second portion or amount (e.g., a subsequent volume) of the bodily fluid to flow through the device300—bypassing the sequestered initial volume—and into a fluid collection device fluidically coupled to the device300. As such, contaminants or the like can be sequestered in or with the initial volume of bodily fluid, leaving the subsequent volume of bodily fluid substantially free of contaminants. In some embodiments, portions and/or aspects of the control device300can be similar to and/or substantially the same as portions and/or aspects of the control device200described above with reference toFIGS.2-11. Accordingly, such similar portions and/or aspects may not be described in further detail herein. The fluid control device300includes a housing310, a flow controller340, and an actuator350. In some embodiments, the control device300or at least a portion of the control device300can be arranged in a modular configuration in which one or more portions of the housing310and/or the actuator350can be physically and fluidically coupled (e.g., by an end user) to collectively form the control device300. Similarly, in some embodiments, the control device300can be packaged, shipped, and/or stored independent of a fluid collection device (e.g., a sample reservoir, syringe, etc.) and/or an inlet device (e.g., a needle, catheter, PIV, PICC, etc.), which a user can couple to the control device300before or during use. In other embodiments, the control device300need not be modular. For example, in some embodiments, the control device300can be assembled during manufacturing and delivered to a supplier and/or end user as an assembled device. In some embodiments, the control device300can include and/or can be pre-coupled (e.g., during manufacturing and/or prior to being delivered to an end user) to a fluid collection device such as any of those described above. Similarly, in some embodiments, the control device300can include and/or can be pre-coupled to an inlet device such as any of those described herein. The housing310of the control device300can be any suitable shape, size, and/or configuration. The housing310includes an actuator portion312and a sequestration portion320. The actuator portion312receives at least a portion of the actuator350. The sequestration portion320is coupled to a cover335and includes, receives, houses, and/or at least partially defines a sequestration chamber330. As described in further detail herein, the housing310can include and/or can define a first port317and a second port318, each of which establishes fluid communication between the actuator portion312and the sequestration portion320of the housing310to selectively control and/or allow a flow of fluid through one or more portions of the housing310.
As shown inFIGS.12-16, the actuator portion312of the housing310includes an inlet313and an outlet314, and defines a fluid flow path315(e.g., a first fluid flow path) that is configured to selectively place the inlet313in fluid communication with the first port317and a fluid flow path316(e.g., a second fluid flow path) that is configured to selectively place the outlet314in fluid communication with the second port318. The inlet313of the housing310is configured to be placed in fluid communication with a bodily fluid source (e.g., in fluid communication with a patient via a needle, IV catheter, PICC line, etc.) to receive a flow of bodily fluid therefrom, as described in detail above. The outlet314is configured to be fluidically coupled to a fluid collection device such as any of those described above (e.g., a sample reservoir, a syringe, a culture bottle, an intermediary bodily fluid transfer device or adapter, and/or the like). The fluid collection device can define and/or can be manipulated to define a vacuum or negative pressure that results in a negative pressure differential between desired portions of the housing310when the fluid collection device is coupled to the outlet314. In addition, after an initial volume of bodily fluid has been transferred into the sequestration chamber330, fluid communication can be established between the fluid flow paths315and316to allow a subsequent volume of bodily fluid (e.g., a bodily fluid sample) to flow through the device300and into the fluid collection device. Accordingly, the actuator portion312of the housing310can be substantially similar in at least form and/or function to the actuator portion212of the housing210and thus, is not described in further detail herein. The sequestration portion320of the housing310can be any suitable shape, size, and/or configuration. As shown, for example, inFIGS.16-18, the sequestration portion320includes and/or forms an inner surface, a portion of which is arranged and/or configured to form a first contoured surface321. At least a portion of the first contoured surface321can form and/or define a portion of the sequestration chamber330, as described in further detail herein. Furthermore, the first port317and the second port318are configured to form and/or extend through a portion of the first contoured surface321to selectively place the sequestration chamber330in fluid communication with the fluid flow paths315and316, as described in further detail herein. The sequestration portion320is configured to include, form, and/or house a contour member325and the flow controller340. More particularly, as shown inFIGS.16-18, the sequestration portion320receives and/or is coupled to the contour member325such that the flow controller340is disposed therebetween. In some embodiments, the contour member325can be fixedly coupled to the sequestration portion320via an adhesive, ultrasonic welding, and/or any other suitable coupling method. In some embodiments, the contour member325, the sequestration portion320, and the flow controller340can collectively form a substantially fluid tight and/or hermetic seal that isolates the sequestration portion320from a volume outside of the sequestration portion320. As shown, a cover335is configured to be disposed about the contour member325such that the cover335and the sequestration portion320of the housing310enclose and/or house the contour member325and the flow controller340.
In some embodiments, the cover335can be coupled to the contour member325and/or the sequestration portion320via an adhesive, ultrasonic welding, one or more mechanical fasteners, a friction fit, a snap fit, a threaded coupling, and/or any other suitable manner of coupling. In some embodiments, the cover335can define an opening, window, slot, etc. configured to allow visualization of at least a portion of the sequestration chamber330. While the contour member325and the cover335are described above as being separate pieces and/or components, in other embodiments, the contour member325can be integrated and/or monolithically formed with the cover335. The contour member325includes and/or forms a second contoured surface326. The arrangement of the contour member325and the sequestration portion320of the housing310can be such that at least a portion of the first contoured surface321is aligned with and/or opposite a corresponding portion of the second contoured surface326of the contour member325(see e.g.,FIG.18). As such, a space, volume, opening, void, chamber, and/or the like defined between the first contoured surface321and the second contoured surface326forms and/or defines the sequestration chamber330. Moreover, the flow controller340is disposed between the first contoured surface321and the second contoured surface326and can be configured to transition between a first state and a second state in response to a negative pressure differential and/or suction force applied to at least a portion of the sequestration chamber330, as described in further detail herein. The ports317and318of the housing310can be any suitable shape, size, and/or configuration. As described in detail above with reference to the first port217, the first port317is in fluid communication with a first portion of the sequestration chamber330defined between the second contoured surface326and a first side of the flow controller340and is configured to provide and/or transfer a flow of bodily fluid from the inlet313and/or the fluid flow path315to the first portion of the sequestration chamber330in response to the flow controller340transitioning from a first state to a second state. As described above with reference to the second port218, the second port318is in fluid communication with a second portion of the sequestration chamber330defined between the first contoured surface321and a second side of the flow controller340(e.g., opposite the first side). As such, the second port318can be configured to expose the second portion of the sequestration chamber330to a negative pressure differential and/or suction force resulting from the fluid collection device operable to transition the flow controller340from its first state to its second state, as described in detail above with reference to the device200. In addition, the second port318can include and/or can be coupled to a restrictor319configured to limit and/or restrict a flow of fluid (e.g., air or gas) between the second portion of the sequestration chamber330and the fluid flow path316, thereby modulating and/or controlling a magnitude of a pressure differential and/or suction force applied on or experienced by the flow controller340, as described in detail above with reference to the restrictor219of the device200. The flow controller340disposed in the sequestration portion320of the housing310can be any suitable shape, size, and/or configuration. 
Similarly, the flow controller340can be formed of any suitable material (e.g., any suitable biocompatible material such as those described herein and/or any other suitable material). For example, the flow controller340can be a fluid impermeable bladder configured to be transitioned from a first state and/or configuration to a second state and/or configuration. In some embodiments, the flow controller340(e.g., bladder) can include any number of relatively thin and flexible portions configured to deform in response to a pressure differential across the flow controller340. In some embodiments, the flow controller340can be substantially similar in at least form and/or function to the flow controller240described in detail above with reference toFIGS.2-11. For example, in some embodiments, the flow controller340can be formed of or from any suitable material and/or can have any suitable durometer such as the materials and/or durometers described above with reference to the flow controller240. Similarly, the flow controller340can have a size, shape, surface finish, and/or material property(ies) configured to facilitate, encourage, and/or otherwise result in fluid flow with a desired set of flow characteristics, as described above with reference to the flow controller240. Accordingly, portions of the flow controller340may not be described in further detail herein. In the embodiment shown inFIGS.12-21, the flow controller340is a bladder formed of or from silicone having a durometer of about 30 Shore A. The flow controller340(e.g., bladder) includes a first deformable portion341and a second deformable portion342. In addition, the flow controller340defines an opening344configured to receive at least a portion of the first port317, as described above with reference to the flow controller240. In some embodiments, the flow controller340can include one or more portions configured to form one or more seals with and/or between the flow controller340and each of the contoured surfaces321and326, as described in further detail herein. The deformable portions341and342of the flow controller340can be relatively thin and flexible portions configured to deform in response to a pressure differential between the first side of the flow controller340and the second side of the flow controller340. More particularly, the deformable portions341and342can each have a thickness of about 0.005″. As shown, for example, inFIGS.18and20, the deformable portions341and342of the flow controller340correspond to and/or have substantially the same general shape as at least a portion of the contoured surfaces321and/or326. As such, the deformable portions341and342and the corresponding portion(s) of the contoured surfaces321and/or326can collectively form and/or define one or more channels, volumes, and/or the like, which in turn, can receive the initial volume of bodily fluid, as described in further detail herein. As described above with reference to the flow controller240, the flow controller340is configured to transition between a first state and a second state. For example, when the flow controller340is in its first state, the first deformable portion341can be disposed adjacent to and/or substantially in contact with a first recess327formed by the second contoured surface326and the second deformable portion342can be disposed adjacent to and/or substantially in contact with a second recess328formed by the second contoured surface326.
As such, the first portion of the sequestration chamber330(e.g., the portion defined between the second contoured surface326and the first surface of the flow controller340) can have a relatively small and/or relatively negligible volume. In contrast, when the flow controller340is transitioned from its first state to its second state (e.g., in response to a negative pressure applied and/or transmitted via the second port318), the first deformable portion341can be disposed adjacent to and/or substantially in contact with a first recess322formed by the first contoured surface321and the second deformable portion342can be disposed adjacent to and/or substantially in contact with a second recess323formed by the first contoured surface321. Accordingly, a volume of the first portion of the sequestration chamber330is larger when the flow controller340is in its second state than when the flow controller is in its first state. As described in detail above with reference to the sequestration chamber230and flow controller240, the increase in the volume of the first portion of the sequestration chamber330can result in a negative pressure or vacuum therein that can be operable to draw a volume of air or gas as well as the initial volume of bodily fluid into the sequestration chamber330. While the flow controller340is particularly shown and described, in other embodiments, the flow controller340and/or the sequestration chamber330can have any suitable configuration and/or arrangement. For example, in some embodiments, the contoured surfaces321and/or326can include more or fewer recesses (e.g., the recesses322and323and the recesses327and328). In other embodiments, a depth of one or more recesses can be modified. Similarly, the flow controller340can be modified in any suitable manner to substantially correspond to a shape and/or configuration of the contoured surfaces321and/or326. While the flow controller340is described as being a bladder or the like including a number of deformable portions, in other embodiments, a flow controller can be arranged and/or configured as, for example, a bellows, a flexible pouch, an expandable bag, an expandable chamber, a plunger (e.g., similar to a syringe), and/or any other suitable reconfigurable container or the like. In addition, the sequestration chamber330at least partially formed by the flow controller340can have any suitable shape, size, and/or configuration. The actuator350of the control device300can be any suitable shape, size, and/or configuration. At least a portion of the actuator350is disposed within the actuator portion312of the housing310and is configured to be transitioned between a first state, configuration, and/or position and a second state, configuration, and/or position. In the embodiment shown inFIGS.12-21, the actuator350is configured as an actuator rod or plunger configured to be moved relative to the actuator portion312of the housing310. The actuator350includes a set of seals355and defines a flow channel352. 
The actuator350further includes an end portion351disposed outside of the housing310and configured to be engaged by a user to transition the actuator350between its first state, in which the fluid flow path315can establish fluid communication between the inlet313and the first port317, and its second state, in which (1) the first port317(and thus, the sequestration chamber330) are sequestered and/or fluidically isolated and (2) the inlet313and the outlet314are placed in fluid communication via at least a portion of the fluid flow paths315and316and/or the flow channel352of the actuator350. As such, the actuator350is similar in form and/or function to the actuator250described above with reference toFIGS.2-11. Thus, the actuator350is not described in further detail herein. The device300can be used to procure a bodily fluid sample having reduced contamination (e.g., contamination from microbes such as, for example, dermally residing microbes, microbes external to the bodily fluid source, and/or the like) in a manner substantially similar to the manner described above with reference to the device200. For example, prior to use, the device300can be in its first, initial, and/or storage state or operating mode, in which each of the flow controller340and the actuator350is in its respective first or initial state. With the device300in the first state, a user such as a doctor, physician, nurse, phlebotomist, technician, etc. can manipulate the device300to establish fluid communication between the inlet313and the bodily fluid source (e.g., a vein of a patient). Once the inlet313is placed in fluid communication with the bodily fluid source, the outlet314can be fluidically coupled to a fluid collection device (not shown inFIGS.12-21). In the embodiment shown inFIGS.12-21, for example, the fluid collection device can be an evacuated container, a culture bottle, a sample reservoir, a syringe, and/or any other suitable container or device configured to define or produce a negative pressure, suction force, vacuum, and/or energy potential. When the actuator350is in the first position and/or configuration, the inlet313of the housing310is in fluid communication with, for example, the fluid flow path315, which in turn, is in fluid communication with the first port317(see e.g.,FIGS.17and18). The outlet314of the housing310is in fluid communication with the fluid flow path316, which in turn, is in fluid communication with the second port318(see e.g.,FIGS.17and18). As described in detail above, when the control device300is in the first state or operating mode (e.g., when the actuator350and the flow controller340are each in their first state), fluidically coupling the fluid collection device to the outlet314generates and/or otherwise results in a negative pressure differential and/or suction force within at least a portion of the fluid flow path316and, in turn, within the portion of the sequestration chamber330defined between a surface of the flow controller340(e.g., a first surface) and the first contoured surface321of the housing310. The flow controller340is in the first state and/or configuration prior to the fluid collection device being coupled to the outlet314. In the embodiment shown inFIGS.12-21, the flow controller340is a fluid impermeable bladder and/or the like that can have a flipped, inverted, collapsed, and/or empty configuration (e.g., the first state and/or configuration) prior to coupling the fluid collection device to the outlet314.
For example, as shown inFIG.18, the flow controller340can be disposed adjacent to and/or in contact with the second contoured surface326when the flow controller340is in its first state and/or configuration. As described above, the flow controller340is configured to transition from its first state and/or configuration to its second state and/or configuration in response to the negative pressure differential and/or suction force generated within the portion of the sequestration chamber330defined between the flow controller340and the first contoured surface321. For example, the flow controller340can be disposed adjacent to and/or in contact with the second contoured surface326when the flow controller340is in its first state (FIG.18) and can be transitioned, moved, “flipped”, placed, and/or otherwise reconfigured into its second state in which the flow controller340is disposed adjacent to and/or in contact with the first contoured surface321(FIG.20). Moreover, the control device300is placed in its second state and/or configuration when the actuator350is in its first state and the flow controller340is in its second state. The transitioning of the flow controller340results in an increase in an inner volume of the portion of the sequestration chamber330defined between a surface of the flow controller340(e.g., a second surface opposite the first surface) and the second contoured surface326. As described in detail above with reference to the device200, the increase in the inner volume can, in turn, result in a negative pressure differential between the portion of the sequestration chamber330(defined at least in part by the flow controller340) and, for example, the inlet313that is operable in drawing at least a portion of an initial flow, amount, or volume of bodily fluid from the inlet313, through the fluid flow path315and the first port317, and into the portion of the sequestration chamber330. In some instances, the initial volume and/or flow of bodily fluid can be transferred into the sequestration chamber330until, for example, the flow controller340is fully expanded, flipped, and/or transitioned, until the negative pressure differential is reduced and/or equalized, and/or until a desired volume of bodily fluid is disposed within the portion of the sequestration chamber330. Moreover, the restrictor319can be configured to restrict, limit, control, and/or modulate a magnitude of the negative pressure differential and/or suction force generated within the sequestration chamber330and/or on a surface of the flow controller340, which in turn, can modulate a suction force within one or more flow paths and/or within the bodily fluid source (e.g., the vein of the patient), as described above with reference to the device200. In other embodiments, the second port318and/or any suitable portion of the device300can be configured to modulate a suction force within one or more portions of the sequestration chamber330in any suitable manner such as, for example, those described above with reference to the device200. In some embodiments, the shape, size, and/or arrangement of the sequestration chamber330and/or the flow controller340, the magnitude of the negative pressure differential or suction force, and/or the way in which the negative pressure differential or suction force is exerted can dictate and/or control a rate and/or manner in which the flow controller340is transitioned from the first state to the second state. 
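The magnitude of the draw produced by the flip can be bounded with a simple isothermal gas estimate. This sketch is not part of the disclosed embodiments: it applies Boyle's law (P1*V1 = P2*V2) to the sealed portion of the sequestration chamber as its volume increases, using hypothetical placeholder volumes rather than dimensions from this disclosure.

# Illustrative sketch only; not part of the patented embodiments.
# Idealized upper bound on the vacuum created when the flipping flow
# controller enlarges the sealed chamber volume (isothermal, Boyle's law).

P_ATM = 101_325.0  # Pa

def pressure_after_expansion(v_initial_ml, v_final_ml, p_initial=P_ATM):
    """Boyle's law: pressure after the sealed volume expands."""
    return p_initial * v_initial_ml / v_final_ml

v0, v1 = 0.05, 1.0  # hypothetical: near-negligible volume -> expanded volume
p = pressure_after_expansion(v0, v1)
print(f"chamber pressure ~{p / 1000:.1f} kPa "
      f"({(P_ATM - p) / 1000:.1f} kPa below atmospheric)")
# In practice the inflowing air and bodily fluid equalize the pressure as
# the chamber fills, and the restrictor319limits how quickly the suction
# that drives the flip is applied, so the actual differential is smaller.

The near-vacuum figure is an idealized bound; the venting, filling, and restrictor behavior described above keep the actual differential, and thus the force exerted on the vein, well below it.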
As one example of such an arrangement, while the flow controller240is described above as including the first deformable portion241, the second deformable portion242, and the third deformable portion243, the flow controller340included in the embodiment shown inFIGS.12-21includes only the first deformable portion341and the second deformable portion342. Moreover, as shown inFIGS.18and20, the recesses322and323of the first contoured surface321have substantially the same depth. In some embodiments, such an arrangement can, for example, limit and/or reduce an amount of negative pressure and/or suction force sufficient to transition and/or flip the first and second deformable portions341and342of the flow controller340relative to the amount of negative pressure and/or suction force sufficient to transition and/or flip the first, second, and third deformable portions241,242, and243of the flow controller240. As described above, in some embodiments, the first deformable portion341can have a thickness and/or stiffness that is greater than a thickness and/or stiffness of the second deformable portion342such that the second deformable portion342completes or substantially completes its transition and/or flip before the first deformable portion341completes or substantially completes its transition and/or flip. In other embodiments, the flow controller340can include any suitable feature, structure, material property, surface finish, and/or the like, and/or any other portion of the device300can include any suitable feature, structure, etc. configured to control an order and/or manner in which the flow controller340transitions from the first state to the second state, such as any of those described above with reference to the flow controller240. In some embodiments, the arrangement of the flow controller340may result in the device300being compatible with fluid collection devices having a relatively low amount of negative pressure. In some embodiments, such an arrangement may also facilitate and/or simplify one or more manufacturing processes and/or the like. In some instances, controlling the rate, order, and/or manner in which the flow controller340transitions can result in one or more desired flow characteristics associated with a flow of air, gas, and/or bodily fluid into and/or through at least a portion of the sequestration chamber330. As described above with reference to the deformable portions241and242, the first deformable portion341and the first recess327of the second contoured surface326(e.g., a first volume of the sequestration chamber330) can be configured to receive a volume of air that was within the fluid flow path between the bodily fluid source and the sequestration chamber330prior to the fluid flow path receiving and/or being filled with the flow of bodily fluid. In other words, the transitioning of the flow controller340can vent or purge air or gas from the fluid flow path between the bodily fluid source and the sequestration chamber330, which can then be stored or contained within the first volume of the sequestration chamber330. On the other hand, a portion of the sequestration chamber330collectively defined by the second deformable portion342and the second recess328of the second contoured surface326(e.g., a second volume of the sequestration chamber330) can be configured to receive the initial volume of bodily fluid that flows through the fluid flow path between the bodily fluid source and the sequestration chamber330after the air or gas is vented and/or purged. Thus, as described above with reference to the device200, the initial volume can be transferred into the sequestration chamber330.
In some instances, the arrangement of the sequestration chamber330and/or the flow controller340can result in an even flow of the initial volume of bodily fluid into, for example, the second volume of the sequestration chamber330. For example, as described in detail above with reference to the device200, the sequestration chamber330and/or the flow controller340can be configured and/or arranged such that bodily fluid flows into and/or through at least a portion of the sequestration chamber330(e.g., the second volume of the sequestration chamber330) with a uniform flow front and substantially without mixing with a volume of air in the sequestration chamber330. In other embodiments, a flow controller can have any other suitable arrangement to result in a desired rate, manner, and/or order of conveying the initial volume of bodily fluid into one or more portions or volumes of the sequestration chamber330such as, for example, any of those described above with reference to the device200. Having transferred the initial volume of bodily fluid into the sequestration chamber330, a force can be exerted on the end portion351of the actuator350to transition and/or place the actuator350in its second position, state, operating mode, and/or configuration, as described above. In some instances, prior to exerting the force on the end portion351of the actuator350, the actuator350may be transitioned from a locked configuration or state to an unlocked configuration or state. In the embodiment shown inFIGS.12-21, the transition of the actuator350can be achieved by and/or can otherwise result from user interaction and/or manipulation of the actuator350. In other embodiments, however, the transition of the actuator350can occur automatically in response to negative pressure and/or associated flow dynamics within the device300, and/or can be enacted by or in response to an external energy source that generates one or more dynamics or states that result in the transitioning of the actuator350. As shown inFIGS.19-21, the control device300is placed in its third state when each of the flow controller340and the actuator350is in its second state. When the actuator350is transitioned to its second state, position, and/or configuration, the inlet313and the outlet314are placed in fluid communication (e.g., via the fluid flow path316and/or the flow channel352) while the fluid flow path315and/or the first port317is/are sequestered, isolated, and/or otherwise not in fluid communication with the inlet313and/or the outlet314. As such, the initial volume of bodily fluid is sequestered in the portion of the sequestration chamber330(e.g., the second volume of the sequestration chamber330, as described above). Moreover, in some instances, contaminants such as, for example, dermally residing microbes and/or any other contaminants can be entrained and/or included in the initial volume of the bodily fluid and thus, are sequestered in the sequestration chamber330when the initial volume is sequestered therein. As such, the negative pressure previously exerted on or through the fluid flow path316and through the second port318is now exerted on or through the outlet314and the inlet313via, for example, at least a portion of the fluid flow paths315and316and/or the flow channel352of the actuator350(FIG.21). In response, bodily fluid can flow from the inlet313, through the actuator portion312of the housing310, through the outlet314, and into the fluid collection device coupled to the outlet314.
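The three operating states walked through above reduce to a small lookup over the positions of the flow controller and the actuator. The sketch below is organizational only and not part of the disclosed embodiments; the enum and state labels are hypothetical naming aids.

# Illustrative sketch only; not part of the patented embodiments.
# The device-level operating states described above, keyed on the positions
# of the flow controller340and the actuator350.
from enum import Enum

class Position(Enum):
    FIRST = 1
    SECOND = 2

STATES = {
    (Position.FIRST, Position.FIRST):
        "first state: storage; inlet313 -> first port317 path available",
    (Position.SECOND, Position.FIRST):
        "second state: initial volume drawn into the sequestration chamber330",
    (Position.SECOND, Position.SECOND):
        "third state: chamber sequestered; inlet313 -> outlet314 open",
}

def device_state(flow_controller: Position, actuator: Position) -> str:
    return STATES.get((flow_controller, actuator), "not part of normal use")

print(device_state(Position.SECOND, Position.SECOND))
# (FIRST, SECOND) would sequester an empty chamber and does not occur in the
# normal sequence described above.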
Accordingly, the device300can function in a manner substantially similar to that of the devices100and/or200described in detail above. FIGS.22-27illustrate a fluid control device400according to another embodiment. The fluid control device400(also referred to herein as “control device” or “device”) can be similar in at least form and/or function to the devices100,200, and/or300described above. For example, as described above with reference to the devices100,200, and/or300, in response to being placed in fluid communication with a negative pressure source (e.g., a suction or vacuum source), the device400can be configured to (1) withdraw bodily fluid from a bodily fluid source into the device400, (2) divert and sequester a first portion or amount (e.g., an initial volume) of the bodily fluid in a portion of the device400, and (3) allow a second portion or amount (e.g., a subsequent volume) of the bodily fluid to flow through the device400—bypassing the sequestered initial volume—and into a fluid collection device fluidically coupled to the device400. As such, contaminants or the like can be sequestered in or with the initial volume of bodily fluid, leaving the subsequent volume of bodily fluid substantially free of contaminants. In some embodiments, portions and/or aspects of the control device400can be similar to and/or substantially the same as portions and/or aspects of at least the control device200described above with reference toFIGS.2-11. Accordingly, such similar portions and/or aspects may not be described in further detail herein. The fluid control device400includes a housing410, a flow controller440, and an actuator450. In some embodiments, the control device400or at least a portion of the control device400can be arranged in a modular configuration (e.g., including one or more independent or separate components that are later assembled) or can be arranged in an integrated or at least partially integrated configuration (e.g., including one or more components that are pre-assembled or pre-coupled), as described above with reference to the device200. For example, in some embodiments, the control device400can include and/or can be coupled to a fluid collection device and/or an inlet device such as any of those described above. The housing410of the control device400can be any suitable shape, size, and/or configuration. In general, the housing410can be substantially similar in at least form and/or function to the housing210. Accordingly, while certain components, features, aspects, and/or functions of the housing410are identified in the drawings and discussed below, such components, features, aspects, and/or functions are not described in further detail herein and should be considered substantially similar to the corresponding components, features, aspects, and/or functions described above with reference to the device200unless explicitly described to the contrary. The housing410includes an actuator portion412and a sequestration portion420. The actuator portion412receives at least a portion of the actuator450. The sequestration portion420is coupled to a cover435and includes, receives, houses, and/or at least partially defines a sequestration chamber430. As described in further detail herein, the housing410can include and/or can define a first port417and a second port418, each of which establishes fluid communication between the actuator portion412and the sequestration portion420of the housing410to selectively control and/or allow a flow of fluid through one or more portions of the housing410.
The actuator portion412of the housing410includes an inlet413and an outlet414, and defines a fluid flow path415(e.g., a first fluid flow path) that is configured to selectively place the inlet413in fluid communication with the first port417and a fluid flow path416(e.g., a second fluid flow path) that is configured to selectively place the outlet414in fluid communication with the second port418. The actuator portion412of the housing410can be substantially similar in at least form and/or function to the actuator portion212of the housing210and thus, is not described in further detail herein. The sequestration portion420of the housing410can be any suitable shape, size, and/or configuration. The sequestration portion420is configured to include, form, and/or house a contour member425and the flow controller440. More specifically, a cover435is configured to be disposed about the contour member425such that the cover435and the sequestration portion420of the housing410enclose and/or house the contour member425and the flow controller440. The sequestration portion420of the housing410and/or components thereof or coupled thereto can be substantially similar in at least form and/or function to the sequestration portion220of the housing210(and/or components thereof or coupled thereto) and thus, is/are not described in further detail herein. As shown, for example, inFIGS.24-27, the sequestration portion420includes and/or forms an inner surface, a portion of which is arranged and/or configured to form a first contoured surface421. At least a portion of the first contoured surface421can form and/or define a portion of the sequestration chamber430, as described in further detail herein. Furthermore, the first port417and the second port418are configured to form and/or extend through a portion of the first contoured surface421to selectively place the sequestration chamber430in fluid communication with the fluid flow paths415and416, as described above with reference to the device200. The first contoured surface421can be any suitable shape, curvature, and/or texture, and can, for example, be substantially similar to the first contoured surface221of the housing210. For example, the first contoured surface421includes and/or forms at least a first recess422and a second recess423. The first contoured surface421can differ from the first contoured surface221, however, by including any number of ventilation ridges424, as shown inFIGS.24-27. The ventilation ridges424can be distributed on the first contoured surface421in any number of arrangements. For example, the first contoured surface421can have one ventilation ridge424, multiple ventilation ridges424, multiple concentric ventilation ridges424, etc. disposed within and/or formed by the first recess422and/or one ventilation ridge424, multiple ventilation ridges424, or multiple concentric ventilation ridges424disposed within and/or formed by the second recess423of the first contoured surface421, as shown inFIGS.25and27. In some implementations, the ventilation ridges424are configured to reduce and/or control the ability or the likelihood of the flow controller440or portions thereof forming a seal when placed in contact with the first contoured surface421in response to a negative pressure applied and/or transmitted via the second port418(e.g., a negative pressure in a volume between the first contoured surface421and the flow controller440).
Said another way, the ventilation ridges424can form discontinuities along one or more portions of the first contoured surface421that, for example, can prevent air from being trapped in localized areas between the flow controller440and one or more portions of the first contoured surface421by allowing air to flow freely between the flow controller440and one or more portions of the first contoured surface421, as described in further detail herein. As shown inFIGS.24-27, the sequestration portion420receives and/or is coupled to the contour member425such that the flow controller440is disposed therebetween. In some embodiments, the contour member425can be substantially similar in at least form and/or function to the contour member225described above with reference to the device200. For example, the contour member425includes and/or forms a second contoured surface426. The second contoured surface426can be any suitable shape, curvature, and/or texture, and can, for example, be substantially similar to the second contoured surface226of the contoured member225. For example, the second contoured surface426includes and/or forms a first recess427, a second recess428, and a third recess429. The second contoured surface426can differ from the second contoured surface226, however, by including any number of ventilation channels431, as shown inFIGS.24-27. The ventilation channels431can be distributed on the second contoured surface426in any number of arrangements. For example, the second contoured surface426can be configured to have one ventilation channel431, multiple ventilation channels431, or multiple concentric ventilation channels431disposed within and/or formed by the first recess427and/or one ventilation channel431, multiple ventilation channels431, or multiple concentric ventilation channels431disposed within and/or formed by the second recess428of the second contoured surface426, as shown inFIGS.25and27. The ventilation channels431are configured to reduce and/or control the ability or the likelihood of the flow controller440or portions thereof forming a seal when placed in contact with the second contoured surface426in response to a positive pressure (e.g., in a volume between the first contoured surface421and the flow controller440), as described above with reference to the ventilation ridges424. While the first contoured surface421is described above as including the ventilation ridges424and the second contoured surface426is described above as including the ventilation channels431, it should be understood that the ventilation ridges424and the ventilation channels431have been presented by way of example only and not limitation. Various alternatives and/or combinations are contemplated. For example, in some embodiments, the first contoured surface421can include ventilation channels while the second contoured surface426can include ventilation ridges. In other embodiments, the first contoured surface421and/or the second contoured surface426can include a combination of ventilation channels and ventilation ridges. As such, the contoured surfaces421and426can include one or more discontinuities having any suitable shape, size, and/or configuration that can allow for and/or otherwise ensure that air can flow between the flow controller440and the contoured surfaces421and426.
Moreover, while each of the contoured surfaces421and426is shown as including a ventilation feature or discontinuity, in other embodiments, the first contoured surface421can include a ventilation feature or discontinuity while the second contoured surface426does not, or vice versa. The flow controller440disposed in the sequestration portion420of the housing410can be any suitable shape, size, and/or configuration. Similarly, the flow controller440can be formed of any suitable material (e.g., any suitable biocompatible material such as those described herein and/or any other suitable material). For example, the flow controller440can be a fluid impermeable bladder configured to be transitioned from a first state and/or configuration to a second state and/or configuration. In some embodiments, the flow controller440(e.g., bladder) can include any number of relatively thin and flexible portions configured to deform in response to a pressure differential across the flow controller440. In some embodiments, the flow controller440can be substantially similar in at least form and/or function to the flow controller240described in detail above with reference toFIGS.2-11. For example, in some embodiments, the flow controller440can be formed of or from any suitable material and/or can have any suitable durometer such as the materials and/or durometers described above with reference to the flow controller240. Similarly, the flow controller440can have a size, shape, surface finish, and/or material property(ies) configured to facilitate, encourage, and/or otherwise result in fluid flow with a desired set of flow characteristics, as described above with reference to the flow controller240. Accordingly, portions of the flow controller440may not be described in further detail herein. In the embodiment shown inFIGS.22-27, the flow controller440is a bladder formed of or from silicone having a durometer of about 30 Shore A. The flow controller440(e.g., bladder) includes a first deformable portion441, a second deformable portion442, and a third deformable portion443. In addition, the flow controller440defines an opening444configured to receive at least a portion of the first port417, as described above with reference to the flow controller240. In some embodiments, the flow controller440can include one or more portions configured to form one or more seals with and/or between the flow controller440and each of the contoured surfaces421and426. For example, as shown inFIGS.24-27, the deformable portions441,442, and443of the flow controller440correspond to and/or have substantially the same general shape as at least a portion of the contoured surfaces421and/or426. As such, the deformable portions441,442, and443and the corresponding portion(s) of the contoured surfaces421and/or426can collectively form and/or define one or more volumes and/or the like, which in turn, can receive the initial volume of bodily fluid, as described in further detail herein. As described above with reference to the flow controller240, the flow controller440is configured to transition between a first state and a second state.
For example, when the flow controller440is in its first state, the first deformable portion441can be disposed adjacent to and/or substantially in contact with a first recess427formed by the second contoured surface426, the second deformable portion442can be disposed adjacent to and/or substantially in contact with a second recess428, and the third deformable portion443can be disposed adjacent to and/or substantially in contact with a third recess429formed by the second contoured surface426. As such, the first portion of the sequestration chamber430(e.g., the portion defined between the second contoured surface426and the first surface of the flow controller440) can have a relatively small and/or relatively negligible volume. In contrast, when the flow controller440is transitioned from its first state to its second state (e.g., in response to a negative pressure applied and/or transmitted via the second port418), at least the deformable portions441,442, and443are disposed adjacent to and/or substantially in contact with the first contoured surface421. More specifically, the first deformable portion441can be disposed adjacent to and/or substantially in contact with a first recess422formed by the first contoured surface421, the second deformable portion442can be disposed adjacent to and/or substantially in contact with a second recess423formed by the first contoured surface421, and the third deformable portion443can be disposed adjacent to and/or substantially in contact with, for example, a non-recessed portion of the first contoured surface421, as described above with reference to the flow controller240. The actuator450of the control device400can be any suitable shape, size, and/or configuration. At least a portion of the actuator450is disposed within the actuator portion412of the housing410and is configured to be transitioned between a first state, configuration, and/or position and a second state, configuration, and/or position. In the embodiment shown inFIGS.22-27, the actuator450is configured as an actuator rod or plunger configured to be moved relative to the actuator portion412of the housing410. The actuator450includes a set of seals455and defines a flow channel452. The actuator450further includes an end portion451disposed outside of the housing410and configured to be engaged by a user to transition the actuator450between its first state, in which the fluid flow path415can establish fluid communication between the inlet413and the first port417, and its second state, in which (1) the first port417(and thus, the sequestration chamber430) are sequestered and/or fluidically isolated and (2) the inlet413and the outlet414are placed in fluid communication via at least a portion of the fluid flow paths415and416and/or the flow channel452of the actuator450. As such, the actuator450is similar in form and/or function to the actuator250described above with reference toFIGS.2-11. Thus, the actuator450is not described in further detail herein. The device400can be used to procure a bodily fluid sample having reduced contamination (e.g., contamination from microbes such as, for example, dermally residing microbes, microbes external to the bodily fluid source, and/or the like) in a manner substantially similar to the manner described above with reference to the device200. For example, prior to use, the device400can be in its first, initial, and/or storage state or operating mode, in which each of the flow controller440and the actuator450is in its respective first or initial state.
With the device400in the first state, a user such as a doctor, physician, nurse, phlebotomist, technician, etc. can manipulate the device400to establish fluid communication between the inlet413and the bodily fluid source (e.g., a vein of a patient). Once the inlet413is placed in fluid communication with the bodily fluid source, the outlet414can be fluidically coupled to a fluid collection device (not shown inFIGS.22-27). In the embodiment shown inFIGS.22-27, for example, the fluid collection device can be an evacuated container, a culture bottle, a sample reservoir, a syringe, and/or any other suitable container or device configured to define or produce a negative pressure, suction force, vacuum, and/or energy potential. When the actuator450is in the first position and/or configuration, the inlet413of the housing410is in fluid communication with, for example, the fluid flow path415, which in turn, is in fluid communication with the first port417. The outlet414of the housing410is in fluid communication with the fluid flow path416, which in turn, is in fluid communication with the second port418(see e.g.,FIG.24). As described in detail above, when the control device400is in the first state or operating mode (e.g., when the actuator450and the flow controller440are each in their first state), fluidically coupling the fluid collection device to the outlet414generates and/or otherwise results in a negative pressure differential and/or suction force within at least a portion of the fluid flow path416and, in turn, within the portion of the sequestration chamber430defined between a surface of the flow controller440(e.g., a first surface) and the first contoured surface421of the housing410. The flow controller440is in the first state and/or configuration prior to the fluid collection device being coupled to the outlet414. In the embodiment shown inFIGS.22-27, the flow controller440is a fluid impermeable bladder and/or the like that can have a flipped, inverted, collapsed, and/or empty configuration (e.g., the first state and/or configuration) prior to coupling the fluid collection device to the outlet414. For example, as shown inFIGS.24and25, the flow controller440can be disposed adjacent to and/or in contact with the second contoured surface426when the flow controller440is in its first state and/or configuration. As described above, the flow controller440is configured to transition from its first state and/or configuration to its second state and/or configuration in response to the negative pressure differential and/or suction force generated within the portion of the sequestration chamber430defined between the flow controller440and the first contoured surface421. For example, the flow controller440can be disposed adjacent to and/or in contact with the second contoured surface426when the flow controller440is in its first state (FIGS.24and25) and can be transitioned, moved, "flipped", placed, and/or otherwise reconfigured into its second state in which the flow controller440is disposed adjacent to and/or in contact with the first contoured surface421(FIGS.26and27).
Moreover, the ventilation channels431formed by the second contoured surface426can allow air to flow between the second contoured surface426and the flow controller440, which can, in some instances, reduce a likelihood of pockets of air being trapped between the second contoured surface426and the flow controller440if and/or when a positive pressure is applied in a volume between the flow controller440and the first contoured surface421via the port418(e.g., a positive pressure that drives and/or urges the flow controller440toward the second contoured surface426such as during manufacturing, testing, and/or use). The control device400is placed in its second state and/or configuration when the actuator450is in its first state and the flow controller440is in its second state. The transitioning of the flow controller440results in an increase in an inner volume of the portion of the sequestration chamber430defined between a surface of the flow controller440(e.g., a second surface opposite the first surface) and the second contoured surface426. As described in detail above with reference to the device200, the increase in the inner volume can, in turn, result in a negative pressure differential between the portion of the sequestration chamber430(defined at least in part by the flow controller440) and, for example, the inlet413that is operable in drawing at least a portion of an initial flow, amount, or volume of bodily fluid from the inlet413, through the fluid flow path415and the first port417, and into the portion of the sequestration chamber430. In some instances, the initial volume and/or flow of bodily fluid can be transferred into the sequestration chamber430until, for example, the flow controller440is fully expanded, flipped, and/or transitioned, until the negative pressure differential is reduced and/or equalized, and/or until a desired volume of bodily fluid is disposed within the portion of the sequestration chamber430. Moreover, the restrictor419can be configured to restrict, limit, control, and/or modulate a magnitude of the negative pressure differential and/or suction force generated within the sequestration chamber430and/or on a surface of the flow controller440, which in turn, can modulate a suction force within one or more flow paths and/or within the bodily fluid source (e.g., the vein of the patient), as described above with reference to the device200. In other embodiments, the second port418and/or any suitable portion of the device400can be configured to modulate a suction force within one or more portions of the sequestration chamber430in any suitable manner such as, for example, those described above with reference to the device200. In some embodiments, the shape, size, and/or arrangement of the sequestration chamber430and/or the flow controller440, the ventilation channels431and/or the ventilation ridges424, the magnitude of the negative pressure differential or suction force, and/or the way in which the negative pressure differential or suction force is exerted can dictate and/or control a rate and/or manner in which the flow controller440is transitioned from the first state to the second state. In some instances, controlling the rate, order, and/or manner in which the flow controller440is transitioned can result in one or more desired flow characteristics associated with a flow of air, gas, and/or bodily fluid into and/or through at least a portion of the sequestration chamber.
For example, the arrangement included in this embodiment can be such that a transitioning and/or flipping of the third deformable portion443of the flow controller440is completed prior to completion of the transitioning and/or flipping of the first and second deformable portions441and442. Moreover, the arrangement of the ventilation ridges424along the first contoured surface421can increase a likelihood and/or can ensure that the flow controller440transitions and/or flips in a desired manner or sequence by preventing potential flow restrictions and/or seals that may otherwise prevent the negative pressure differential or suction force from transitioning and/or flipping a portion of the flow controller440disposed on an opposite side of the restriction or seal. This arrangement can be such that a portion of the sequestration chamber430collectively defined by the first deformable portion441and the first recess427of the second contoured surface426(e.g., a first volume of the sequestration chamber430) receives at least a portion of a volume of air that was within the fluid flow path between the bodily fluid source and the sequestration chamber430prior to the fluid flow path receiving and/or being filled with bodily fluid. Similarly, a portion of the sequestration chamber430collectively defined by the second deformable portion442and the second recess428of the second contoured surface426(e.g., a second volume of the sequestration chamber430) can receive at least a portion of the volume of air that was within the fluid flow path. Alternative arrangements of the sequestration chamber430and/or the flow controller440can be similar in form and function to those described above with reference to the sequestration chamber230and/or the flow controller240, and thus they are not described in further detail herein. Having transferred the initial volume of bodily fluid into the sequestration chamber430, a force can be exerted on the end portion451of the actuator450to transition and/or place the actuator450in its second position, state, operating mode, and/or configuration, as described above. In some instances, prior to exerting the force on the end portion451of the actuator450, the actuator450may be transitioned from a locked configuration or state to an unlocked configuration or state. In the embodiment shown inFIGS.22-27, the transition of the actuator450can be achieved by and/or can otherwise result from user interaction and/or manipulation of the actuator450. In other embodiments, however, the transition of the actuator450can occur automatically in response to negative pressure and/or associated flow dynamics within the device400, and/or can be enacted by or in response to an external energy source that generates one or more dynamics or states that result in the transitioning of the actuator450. As shown inFIGS.26and27, the control device400is placed in its third state when each of the flow controller440and the actuator450is in its second state. When the actuator450is transitioned to its second state, position, and/or configuration, the inlet413and the outlet414are placed in fluid communication (e.g., via the fluid flow path416and/or the flow channel452) while the fluid flow path415and/or the first port417is/are sequestered, isolated, and/or otherwise not in fluid communication with the inlet413and/or the outlet414. As such, the initial volume of bodily fluid is sequestered in the portion of the sequestration chamber430.
Moreover, in some instances, contaminants such as, for example, dermally residing microbes and/or any other contaminants can be entrained and/or included in the initial volume of the bodily fluid and thus, are sequestered in the sequestration chamber430when the initial volume is sequestered therein. As such, the negative pressure otherwise exerted on or through the fluid flow path416and through the second port418is now exerted on or through the outlet414and the inlet413via, for example, at least a portion of the fluid flow paths415and416and/or the flow channel452of the actuator450. In response, bodily fluid can flow from the inlet413, through the actuator portion412of the housing410, through the outlet414, and into the fluid collection device coupled to the outlet414. Accordingly, the device400can function in a manner substantially similar to that of the devices100and/or200described in detail above. Referring now toFIG.28, a flowchart is presented illustrating a method10of using a fluid control device to obtain a bodily fluid sample with reduced contamination according to an embodiment. The fluid control device can be similar to and/or substantially the same as any of the fluid control devices100,200,300, and/or400described in detail above. Accordingly, the fluid control device (also referred to herein as “control device” or “device”) can include a housing, a flow controller, and an actuator. The method10includes establishing fluid communication between a bodily fluid source and an inlet of the housing, at11. For example, in some embodiments, a user can manipulate the fluid control device to physically and/or fluidically couple the inlet to a lumen-containing device (e.g., a needle, IV, PICC line, etc.), which in turn, is in fluid communication with a patient. In other embodiments, the bodily fluid source can be a source of bodily fluid other than a patient (e.g., a reservoir, container, etc.). A fluid collection device is coupled to an outlet of the housing, at12. The coupling of the fluid collection device to the outlet is configured to result in and/or otherwise generate a negative pressure differential within at least a portion of the fluid control device, as described in detail above with reference to the devices100,200,300, and/or400. In some embodiments, for example, the fluid collection device can be an evacuated container, a sample or culture bottle that defines a negative pressure, a syringe, and/or the like. The flow controller of the control device is transitioned from a first state to a second state in response to a suction force exerted by the fluid collection device to increase a volume of a sequestration chamber collectively defined by the flow controller and a portion of the housing, at13. For example, in some embodiments, the flow controller can be a fluid impermeable bladder or the like—similar to the flow controllers240,340, and/or440described in detail above—that is disposed within the sequestration chamber. The flow controller (e.g., bladder) can define any number of deformable portions configured to transition, deform, flip, and/or otherwise reconfigure in response to a suction force. In some embodiments, a first portion of the sequestration chamber can be associated with and/or at least partially defined by a first deformable portion of the flow controller and a second portion of the sequestration chamber can be associated with and/or at least partially defined by a second deformable portion of the flow controller. 
In some embodiments, the arrangement of the flow controller within the sequestration chamber can be such that the first portion and the second portion of the sequestration chamber are on a first side of the flow controller (e.g., fluid impermeable bladder) and a third portion of the sequestration chamber is on a second side of the flow controller opposite the first side. As described above with reference to at least the devices200,300, and/or400, the arrangement of the housing, flow controller, and actuator can be such that when the actuator is in a first state and/or configuration, the inlet is in fluid communication with the first and/or second portions of the sequestration chamber (e.g., via a port similar to the first ports217,317, and/or417described above) and the outlet is in fluid communication with the third portion of the sequestration chamber (e.g., via a port similar to the second ports218,318, and/or418described above). As such, the third portion of the sequestration chamber can be exposed to at least a portion of the suction force generated by the fluid collection device, which in turn, is operable to transition the flow controller from its first state to its second state. The first portion of the sequestration chamber receives a volume of air contained in a flow path defined between the bodily fluid source and the sequestration chamber in response to the increase in the volume of the sequestration chamber, at14. For example, in some embodiments, the inlet of the housing can be fluidically coupled to a needle or lumen-containing device that is, in turn, inserted into a portion of the patient. As such, the flow path can be collectively defined by, for example, a lumen of the needle or lumen-containing device, a lumen of the inlet of the housing, and a lumen of one or more flow paths, channels, openings, ports, etc. defined by the housing. In other words, the control device can be configured to purge the flow path of air prior to transferring bodily fluid into the sequestration chamber. In some embodiments, the first portion of the sequestration chamber can be, for example, a center or central portion of the sequestration chamber. In some embodiments, the first portion of the sequestration chamber can be collectively formed by any number of regions, volumes, and/or sections (e.g., similar to the sequestration chambers230and/or430described above). In other embodiments, the first portion of the sequestration chamber can be a single and/or continuous portion (e.g., similar to the sequestration chamber330described above). In still other embodiments, the first portion of the sequestration chamber and the second portion of the sequestration chamber can be "inline" such that the entire sequestration chamber or substantially the entire sequestration chamber is a single and/or continuous volume. For example, in some embodiments, the sequestration chamber can have a shape and/or arrangement similar to those described in detail in U.S. Patent Publication Serial No. 2019/0076074 entitled, "Fluid Control Devices and Methods of Using the Same," filed Sep. 12, 2018 (referred to herein as "the '074 Publication"), the disclosure of which is incorporated herein by reference in its entirety. The second portion of the sequestration chamber receives an initial volume of bodily fluid in response to the increase in the volume of the sequestration chamber, at15.
More specifically, the second portion of the sequestration chamber can receive the initial volume of bodily fluid after the first portion of the sequestration chamber receives the volume of air. In some embodiments, the initial volume of bodily fluid can be a volume sufficient to substantially fill the second portion of the sequestration chamber. In other embodiments, the initial volume of bodily fluid can be a volume or amount of bodily fluid that flows into the second portion of the sequestration chamber while a negative pressure differential (e.g., resulting from the increase in volume) is below a threshold magnitude or amount. In other embodiments, bodily fluid can flow into the second portion of the sequestration chamber until pressures within the sequestration chamber and/or within the flow path between the bodily fluid source and the sequestration chamber are equalized. In still other embodiments, the initial volume can be any suitable amount or volume of bodily fluid such as any of the amounts or volumes described in detail herein. In some instances, the filling or substantial filling of the second portion of the sequestration chamber can be operable to sequester, retain, and/or fluidically lock the volume of air in the first portion of the sequestration chamber. After receiving the initial volume of bodily fluid, the actuator of the device is transitioned from a first configuration to a second configuration to (1) sequester the sequestration chamber and (2) allow a subsequent volume of bodily fluid to flow from the inlet to the outlet in response to the suction force, at16. In some embodiments, the actuator can transition from a first state to a second state to automatically sequester the initial volume of bodily fluid in the sequestration portion. In other embodiments, the actuator can transition from a first state to a second state in response to a force exerted by a user, as described above with reference to the actuators250,350, and/or450. For example, in some embodiments, the actuator can be a rod or plunger that includes one or more seals or the like that can (1) fluidically isolate at least a portion of a flow path between the inlet and the sequestration chamber, (2) fluidically isolate at least a portion of a flow path between the outlet and the sequestration chamber, and (3) establish fluid communication between the inlet and the outlet to allow the subsequent volume of bodily fluid to flow therebetween. With the fluid collection device fluidically coupled to the outlet of the housing, the subsequent volume of bodily fluid (e.g., one or more sample volumes) can be conveyed into the fluid collection device and used, for example, in any suitable testing such as those described herein. As described in detail above, in some instances, sequestering the initial volume of bodily fluid in the sequestration portion of the device can sequester any contaminants contained in the initial volume. Accordingly, contaminants in the subsequent volume of bodily fluid that may otherwise lead to false or inaccurate results in testing can be reduced or substantially eliminated. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Where schematics and/or embodiments described above indicate certain components arranged in certain orientations or positions, the arrangement of components may be modified. 
While the embodiments have been particularly shown and described, it will be understood that various changes in form and details may be made. Although various embodiments have been described as having particular features, concepts, and/or combinations of components, other embodiments are possible having any combination or sub-combination of any features, concepts, and/or components from any of the embodiments described herein. In some embodiments, the specific configurations of the various components can also be varied. For example, the size and specific shape of the various components can be different from the embodiments shown, while still providing the functions as described herein. In some embodiments, varying the size and/or shape of such components may reduce an overall size of the device and/or may increase the ergonomics of the device without changing the function of the device. In some embodiments, the size and/or shape of the various components can be specifically selected for a desired or intended usage. For example, in some embodiments, a device such as those described herein can be configured for use with or on seemingly healthy adult patients. In such embodiments, the device can include a sequestration chamber that has a first volume (e.g., about 0.5 ml to about 5.0 ml). In other embodiments, a device such as those described herein can be configured for use with or on, for example, very sick patients and/or pediatric patients. In such embodiments, the device can include a sequestration chamber that has a second volume that is less than the first volume (e.g., less than about 0.5 ml). Thus, it should be understood that the size, shape, and/or arrangement of the embodiments and/or components thereof can be adapted for a given use unless the context explicitly states otherwise. Any of the embodiments described herein can be used in conjunction with any suitable fluid transfer, fluid collection, and/or fluid storage device such as, for example, the fluid reservoirs described in the '420 patent. In some instances, any of the embodiments described herein can be used in conjunction with any suitable transfer adapter, fluid transfer device, fluid collection device, and/or fluid storage devices such as, for example, the devices described in the '783 Patent, the '510 Publication, the '074 Publication, and/or any of the devices described in U.S. Pat. No. 8,535,241 entitled, “Fluid Diversion Mechanism for Bodily-Fluid Sampling,” filed Oct. 22, 2012; U.S. Pat. No. 9,060,724 entitled, “Fluid Diversion Mechanism for Bodily-Fluid Sampling,” filed May 29, 2013; U.S. Pat. No. 9,155,495 entitled, “Syringe-Based Fluid Diversion Mechanism for Bodily-Fluid Sampling,” filed Dec. 2, 2013; U.S. Patent Publication No. 2016/0361006 entitled, “Devices and Methods for Syringe Based Fluid Transfer for Bodily-Fluid Sampling,” filed Jun. 23, 2016; U.S. Patent Publication No. 2018/0140240 entitled, “Systems and Methods for Sample Collection with Reduced Hemolysis,” filed Nov. 20, 2017; and/or U.S. Pat. No. 9,950,084 entitled, “Apparatus and Methods for Maintaining Sterility of a Specimen Container,” filed Sep. 6, 2016, the disclosures of which are incorporated herein by reference in their entireties. 
While the control devices100,200,300, and/or400are described as transferring a bodily fluid into the device as a result of a negative pressure within a fluid collection device, in other embodiments, the devices described herein can be used with any suitable device configured to establish a negative pressure differential, suction force, and/or the like such as, for example, a syringe or pump. In other embodiments, a control device can include a pre-charged sequestration chamber, a vented sequestration chamber, a manually activated device configured to produce a negative pressure, an energy source (e.g., a chemical energy source, a kinetic energy source, and/or the like), and/or any other suitable means of defining and/or forming a pressure differential within a portion of the control device. Moreover, a control device can be coupled to such a collection device by a user (e.g., doctor, nurse, technician, physician, etc.) or can be coupled or assembled during manufacturing. In some embodiments, pre-assembling a control device and a collection device (e.g., a sample container or syringe) can, for example, force compliance with a sample procurement protocol that calls for the sequestration of an initial amount of bodily fluid prior to collecting a sample volume of bodily fluid. While some of the embodiments described above include a flow controller and/or an actuator having a particular configuration and/or arrangement, in other embodiments, a fluid control device can include any suitable flow controller and/or actuator configured to selectively control a flow of bodily fluid through one or more portions of the fluid control device. For example, while some embodiments include an actuator having one or more seals arranged as an o-ring or an elastomeric over-mold, which is/are moved with the actuator and relative to a portion of the device (e.g., an inner surface of a housing or the like), in other embodiments, a fluid control device can include one or more seals having any suitable configuration. For example, in some embodiments, a fluid control device can include one or more seals arranged as an elastomeric sheet or the like that is/are fixedly coupled to a portion of the control device. In such embodiments, a portion of an actuator such as a pin or rod can extend through an opening defined in the one or more elastomeric sheets, which in turn, form a substantially fluid tight seal with an outer surface of the pin or rod. As such, at least a portion of the actuator can move relative to the one or more elastomeric sheets, which in turn, remain in a substantially fixed position relative to the portion of the control device. In some embodiments, removal of the portion of the actuator from the opening defined by the one or more elastomeric sheets can allow a flow of fluid through the opening that was otherwise occluded by the portion of the actuator. Accordingly, the one or more elastomeric sheets can function in a similar manner as any of the seals described herein. Moreover, in some embodiments, such an arrangement may, for example, reduce an amount of friction associated with forming the desired fluid tight seals, which in turn, may obviate the use of a lubricant otherwise used to facilitate the movement of the seals within the control device. In some embodiments, a device and/or a flow controller can include one or more vents, membranes, members, semi-permeable barriers, and/or the like configured to at least partially control a flow of fluid through the device, flow controller, and/or actuator. 
For example, while portions of the sequestration chamber230are described above as receiving and retaining a volume of air evacuated, vented, and/or purged from the fluid flow path between the bodily fluid source and the sequestration chamber230, in other embodiments, a sequestration chamber230can include a vent or selectively permeable member configured to allow the air to exit the sequestration chamber230. For example, in some embodiments, a bladder or diaphragm (or portion thereof) can be formed of or from a semi-permeable material that can allow air but not bodily fluid to flow therethrough. In other embodiments, a semi-permeable material can be disposed in or along a fluid flow path between the sequestration chamber and at least one of an outlet or an inlet to selectively allow air and/or bodily fluid to flow therebetween. In some embodiments, a fluid control device can include a semi-permeable member and/or membrane that can be similar in form and/or function to the semi-permeable members and/or membranes (e.g., flow controllers) described in the '074 Publication incorporated by reference hereinabove. While the flow controllers240,340, and440are described above as being bladders configured to transition, move, flip, and/or otherwise reconfigure in response to an amount of negative pressure exerted on a surface of the bladder exceeding a threshold amount of negative pressure, in other embodiments, a fluid control device can include any suitable flow controller, actuator, semi-permeable member (e.g., air permeable and liquid impermeable), and/or the like configured to transition, move, flip, and/or otherwise reconfigure in any suitable manner in response to being exposed to a desired and/or predetermined amount of negative pressure. In other embodiments, a control device can include a bladder (or flow controller) that is configured to "flip" (e.g., relatively quickly and/or substantially uniformly transition) or configured to gradually transition (e.g., unroll, unfold, unfurl, and/or otherwise reconfigure) from the first state to the second state in response to being exposed to a negative pressure differential. In some instances, controlling a rate at which a bladder (or flow controller) is transitioned may allow for a modulation and/or control of a negative pressure differential produced within the sequestration chamber, and in turn, a magnitude of a suction force exerted within a patient's vein and/or other suitable bodily fluid source. While some of the embodiments described above include a flow controller and/or actuator that physically and/or mechanically sequesters one or more portions of a fluid control device, in other embodiments, a fluid control device need not physically and/or mechanically sequester one or more portions of the fluid control device. For example, in some embodiments, an actuator such as the actuator250can be transitioned from a first state in which an initial volume of bodily fluid can flow from an inlet to a sequestration chamber or portion, to a second state in which (1) the sequestration chamber or portion is physically and/or mechanically sequestered and (2) the inlet is in fluid communication with an outlet of the fluid control device.
In other embodiments, however, an actuator and/or any other suitable portion of a fluid control device can transition from a first state in which an initial volume of bodily fluid can flow from an inlet to a sequestration chamber or portion, to a second state in which the inlet is placed in fluid communication with the outlet without physically and/or mechanically sequestering (or isolating) the sequestration chamber or portion. When such a control device is in the second state, one or more features and/or geometries of the control device can result in a preferential flow of bodily fluid from the inlet to the outlet and the initial volume of bodily fluid can be retained in the sequestration chamber or portion without physically and/or mechanically being sequestered or isolated. While the restrictor219is described above as modulating and/or controlling a magnitude of negative pressure applied on or through at least a portion of the device200(e.g., within the sequestration chamber230and/or otherwise on the flow controller240), in other embodiments, a control device can include any suitable feature, mechanism, and/or device configured to modulate, create, and/or otherwise control one or more pressure differentials through at least a portion of the control device. For example, in some embodiments, a user can transition and/or move an actuator to change (e.g., reduce or increase) the size of one or more portions of a fluid flow path or fluid flow interface within a portion of the control device to manually modulate and/or otherwise control an amount or magnitude of negative pressure within one or more portions of a control device. Although not shown, any of the devices described herein can include an opening, port, coupler, septum, Luer-Lok, gasket, valve, threaded connector, standard fluidic interface, etc. (referred to for simplicity as a "port") in fluid communication with the sequestration chamber. In some such embodiments, the port can be configured to couple to any suitable device, reservoir, pressure source, etc. For example, in some embodiments, the port can be configured to couple to a reservoir, which in turn, can allow a greater volume of bodily fluid to be diverted and/or transferred into the sequestration chamber. In other embodiments, the port can be coupled to a negative pressure source such as an evacuated container, a pump, a syringe, and/or the like to collect a portion or the full volume of the bodily fluid in the sequestration chamber, channel, reservoir, etc., and that volume of bodily fluid (e.g., the pre-sample volume) can then be used for additional clinical and/or in vitro diagnostic testing purposes. In other embodiments, the port can be configured to receive a probe, sampling tool, testing device, and/or the like that can be used to perform one or more tests (e.g., tests not sensitive to potential contamination) on the initial volume while the initial volume is disposed or sequestered in the sequestration chamber. In still other embodiments, the port can be coupled to any suitable pressure source or infusion device configured to infuse the initial volume of bodily fluid sequestered in the sequestration chamber back into the patient and/or bodily fluid source (e.g., in the case of pediatric patients, very sick patients, patients having a low blood volume, and/or the like).
In other embodiments, the sequestration channel, chamber, and/or reservoir can be configured with the addition of other diagnostic testing components integrated into the chamber (e.g., a paper test) such that the initial bodily fluid is used for that test. In still other embodiments, the sequestration chamber, channel, and/or reservoir can be designed, sized, and configured to be removable and compatible with testing equipment and/or specifically accessible for other types of bodily fluid tests commonly performed on patients with suspected conditions. By way of example, a patient with suspected sepsis commonly has blood samples collected for lactate testing, procalcitonin testing, and blood culture testing. All of the fluid control devices described herein can be configured such that the sequestration chamber, channel, reservoir, etc. can be removed (e.g., after receiving the initial volume of bodily fluid) and the bodily fluid contained therein can be used for these additional testing purposes before or after the subsequent sample is collected for microbial testing. Although not shown, in some embodiments, a fluid control device can include one or more lumens, channels, flow paths, etc. configured to selectively allow for a "bypass" flow of bodily fluid, where an initial amount or volume of bodily fluid can flow from the inlet, through the lumen, channel, flow path, etc. to bypass the sequestration chamber, and into the collection device. In some embodiments, the fluid control device can include an actuator having, for example, at least three states—a first in which bodily fluid can flow from the inlet to the sequestration chamber, a second in which bodily fluid can flow from the inlet to the outlet after the initial volume is sequestered in the sequestration chamber, and a third in which bodily fluid can flow from the inlet, through the bypass flow path, and to the outlet. In other embodiments, the control device can include a first actuator configured to transition the device between a first and second state, as described in detail above with reference to specific embodiments, and can include a second actuator configured to transition the device to a bypass configuration or the like. In still other embodiments, the control device can include any suitable device, feature, component, mechanism, actuator, controller, etc. configured to selectively place the fluid control device in a bypass configuration or state. In some embodiments, a method of using a fluid control device such as those described herein can include the ordered steps of establishing fluid communication between a bodily fluid source (e.g., a vein of a patient or the like) and an inlet of a fluid control device. An outlet of the fluid control device is then placed in fluid communication with and/or otherwise engages a negative pressure source. Such a negative pressure source can be a sample reservoir, a syringe, an evacuated container, an intermediate transfer device, and/or the like. The fluid control device can be in a first state or operating mode when the outlet is coupled to the negative pressure source and, as such, a negative pressure differential is applied through the fluid control device that draws an initial volume of bodily fluid into a sequestration chamber of the fluid control device. For example, a negative pressure within a sample reservoir can be operable in drawing an initial volume of bodily fluid from a patient and into the sequestration chamber.
Once the initial volume of bodily fluid is disposed in the sequestration chamber, the fluid control device is transitioned, either automatically or via user intervention, from the first state or operating mode to a second state or operating mode such that (1) the initial volume is sequestered in the sequestration chamber and (2) fluid communication is established between the inlet and the outlet. The sequestration of the initial volume can be such that contaminants entrained in the flow of the initial volume are likewise sequestered within the sequestration chamber. With the initial volume of bodily fluid sequestered in the sequestration chamber and with fluid communication established between the inlet and the outlet, subsequent volumes of bodily fluid that are substantially free of contamination can be collected in one or more sample reservoirs. While the method of using the fluid control device is explicitly described as including the recited ordered steps, in other embodiments, the ordering of certain events and/or procedures in any of the methods or processes described herein may be modified and such modifications are in accordance with the variations of the invention. Additionally, certain events and/or procedures may be performed concurrently in a parallel process when possible, as well as performed sequentially as described above. Certain steps may be partially completed or may be omitted before proceeding to subsequent steps. For example, while the devices are described herein as transitioning from a first state to a second state in a discrete operation or the like, it should be understood that the devices described herein can be configured to automatically and/or passively transition from the first state to the second state and that such a transitioning may occur over a period of time. In other words, the transitioning from the first state to the second state may, in some instances, be relatively gradual such that as a last portion of the initial volume of bodily fluid is being transferred into the sequestration chamber, the device begins to transition from the first state to the second state. In some instances, the rate of change when transitioning from the first state to the second state can be selectively controlled to achieve one or more desired characteristics associated with the transition. Moreover, in some such instances, the inflow of the last portion of the initial volume can limit and/or substantially prevent bodily fluid already disposed in the sequestration chamber from escaping therefrom. Accordingly, while the transitioning from the first state to the second state may occur over a given amount of time, the sequestration chamber can nonetheless sequester the volume of bodily fluid disposed therein. | 200,468 |
11857322 | In the drawings, the same reference numbers and any acronyms identify elements or acts with the same or similar structure or functionality for ease of understanding and convenience. To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
DETAILED DESCRIPTION
The present disclosure is described with reference to the attached figures, where like reference numerals are used throughout the figures to designate similar or equivalent elements. The figures are not drawn to scale, and are provided merely to illustrate the instant disclosure. Several aspects of the disclosure are described below with reference to example applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the disclosure. One having ordinary skill in the relevant art, however, will readily recognize that the disclosure can be practiced without one or more of the specific details, or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the disclosure. The present disclosure is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in accordance with the present disclosure. Aspects of the present disclosure can be implemented using one or more suitable processing devices, such as general-purpose computer systems, microprocessors, digital signal processors, micro-controllers, application-specific integrated circuits (ASIC), programmable logic devices (PLD), field-programmable logic devices (FPLD), field-programmable gate arrays (FPGA), mobile devices such as mobile telephones or personal digital assistants (PDA), a local server, a remote server, wearable computers, tablet computers, or the like. Memory storage devices of the one or more processing devices can include a machine-readable medium on which is stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions can further be transmitted or received over a network via a network transmitter receiver. While the machine-readable medium can be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable medium" can also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term "machine-readable medium" can accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
A variety of different types of memory storage devices, such as a random access memory (RAM) or a read-only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, flash, or other computer-readable medium that is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processing device, can be used for the memory or memories.
Overview
While many black box algorithms exist that are able to stratify patients and identify appropriate treatments, clinicians cannot determine the basis for the decisions made by the algorithm and therefore cannot validate the choices. This is particularly difficult because certain regulations require doctors to validate the decisions of clinical support systems in order to pass regulatory scrutiny, and generally to validate that the approach is clinically sound. With a black box machine learning algorithm, clinicians would understand little more than what features (in some cases) were the input, but not which features were the most important. For deep learning algorithms, the features identified may not even be apparent, and therefore the only knowledge a clinician may have about the decision is what types of data were input into the algorithm and/or were used to train it. While some interpretable and/or explainable algorithms exist, they may be limited to only one type of data (e.g., one modality, such as clinical scales) and/or may be overly complex, and therefore not useful. Furthermore, they generally are types of algorithms, such as decision trees, that may not be as accurate and/or only have the ability to take into account a fraction of the features and/or attributes that could be useful. Additionally, developing interpretable classification algorithms in the neurobehavioral space is challenging given the large number of biotypes within certain neurobehavioral classifications (e.g., depression, schizophrenia, and/or other indications). Thus, it can be extraordinarily difficult to develop an explainable algorithm that would be understandable by a human, taking into account multiple modalities and/or sources of data, yet still accurate enough to be useful. Accordingly, systems and methods disclosed herein provide for stratifying patients in the neurobehavioral space using, for example, multiple modalities and/or short interpretable algorithms. These algorithms are able to process an extraordinary number of features and/or attributes, but only output a relatively short rule list that is easily interpretable, yet still classifies patients with accuracy. According to some implementations of the present disclosure, some of these rule lists also take into account multiple modalities including clinical scales questions, tasks, wet biomarkers, or the like, or any combination thereof. This is very advantageous and unexpected, because it is extraordinarily difficult to balance multiple modalities and/or large numbers of different features and to understand how they interact, and the disclosed models are able to incorporate them into the rule lists. Furthermore, these rule lists are able to identify higher responders to certain neurobehavioral drugs that would otherwise be applied broadly to patients generally diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders ("DSM") categories (e.g., depression, schizophrenia, etc.).
In some cases, the neurobehavioral drugs did not show an improved response over the placebo when applied to patients classified broadly in the DSM categories, but they did when patients were classified using the disclosed rule lists. This is very advantageous, as these rule lists allow these drugs to be given to the right patients, they are interpretable, and the rule lists are short enough to be efficient and practical when applied. Thus, this represents an entirely new paradigm in the stratification of patients in the neurobehavioral space. Accordingly, the disclosed systems and methods are able to analyze categorical datasets with a large number of attributes—a property that is prevalent in the clinical psychiatry community. Particularly, the disclosed systems and methods may utilize a rule mining method that outputs a set of rules, and/or a Bayesian Rule List model that processes the set of rules and generates Bayesian Decision Lists or rule lists that may be utilized to classify patients. Furthermore, in some examples, a feature selection method may first identify the most important features before a rule mining method is applied to the features.
Decision List Generation
According to some implementations of the present disclosure, decision lists are generated that stratify patients into different categories. One or more "if→then" statements are applied to specific input features. These decision lists are easily understandable by clinicians and thus may be easily validated. Disclosed herein are examples of how these decision lists may be generated. In some examples, the disclosed systems and methods utilize one or more of the following models to generate decision lists: (i) feature selection models, (ii) rule mining models, and (iii) Bayesian Rule List models. In some examples, only rule mining models and Bayesian Rule List models may be utilized. In other examples, only Bayesian Rule List models may be utilized to generate a rule list to stratify patients.
Bayesian Rule List Models
Bayesian Rule List models ("BRL model" and/or "BRL algorithm") are a framework proposed by Letham et al. in "Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model," Annals of Applied Statistics, Vol. 9, No. 3, 2015, the content of which is incorporated herein by reference in its entirety. The BRL model may be utilized to build lists of rules for data sample classification. An example of a BRL model output trained on a commonly used Titanic survival data set is depicted inFIG.1. As shown,FIG.1illustrates pseudo-code showing a Bayesian Decision List output from a BRL model on a Titanic survival data set, wherein θ denotes a probability of survival. An additional example of BRL models is described in a book by Christoph Molnar, "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable," Chapter 4, Interpretable Models, the content of which is incorporated herein by reference in its entirety. As described by Molnar, BRL models generate a decision list using a selection of pre-mined rules, and in many cases prioritize few rules and short conditions for each rule. This may be performed by defining a distribution of decision lists with prior distributions for the length of conditions and the number of rules. The posteriori probability distribution of lists allows the model to evaluate potential decision lists for their probability. In some examples, the model identifies a decision list that maximizes the posterior probability.
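By way of illustration only (the following sketch is not part of the original disclosure, and all names, attribute keys, and probability values in it are hypothetical), a decision list of the kind shown inFIG.1can be represented as an ordered collection of "if→then" rules plus a default rule, with a sample classified by the first rule whose condition evaluates to True:

```python
# Illustrative sketch only: a decision list is an ordered set of "if -> then"
# rules plus a default rule; a sample is classified by the first rule whose
# condition holds. All names and values here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    condition: Callable[[dict], bool]  # Boolean test on a sample's attributes
    theta: float                       # probability of the positive label
    text: str                          # human-readable form for clinicians


def classify(rules: List[Rule], default_theta: float, sample: dict) -> float:
    """Return the positive-class probability of the first applicable rule."""
    for rule in rules:
        if rule.condition(sample):
            return rule.theta
    return default_theta  # default rule: applies when nothing else applies


# A Titanic-style list mirroring the structure (not the values) of FIG. 1.
decision_list = [
    Rule(lambda s: s["sex"] == "female" and s["class"] in (1, 2), 0.95,
         "IF female AND first/second class THEN survival probability 95%"),
    Rule(lambda s: s["age"] <= 12, 0.60,
         "IF child THEN survival probability 60%"),
]

print(classify(decision_list, default_theta=0.10,
               sample={"sex": "female", "class": 2, "age": 30}))  # -> 0.95
```

Because evaluation stops at the first matching rule, the ordering of the rules carries meaning, which is why the BRL model searches over rule orderings as well as rule membership.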
In some examples, the BRL model will: (i) generate an initial decision list randomly drawn from the priori distribution over lists; (ii) iteratively modify the initial decision list by adding, removing, or moving rules in the list, as long as the new list follows the posterior distribution of lists; and (iii) select the modified list with the highest probability according to the posteriori distribution. In some examples, and specifically, the BRL model may be applied to a set of rules that were pre-mined using an FP-Growth algorithm, an MCA rule miner disclosed herein, and/or other rule mining techniques. The BRL model may rely on assumptions about the distribution of the output label, and/or the distribution of the parameters that define the distribution of the output label. Thus, the Bayesian approach combines existing knowledge or requirements (so-called priori distributions) while also fitting to the data. In the case of decision lists, the Bayesian model favors decision lists that are short, with short rules. In some examples, the goal is to sample decision lists from the posteriori distribution:

$$\underbrace{p(d \mid x, y, A, \alpha, \lambda, \eta)}_{\text{posteriori}} \;\propto\; \underbrace{p(y \mid x, d, \alpha)}_{\text{likelihood}} \cdot \underbrace{p(d \mid A, \lambda, \eta)}_{\text{priori}}$$

where d is a decision list, x is the features, y is the output, A is the set of pre-mined conditions, λ is the prior expected length of the decision lists, η is the prior expected number of conditions in a rule, and α is the prior pseudo-count for the positive and negative classes, which is best fixed at (1,1). In some examples, the following equation represents the probability of a decision list, given the data and priori assumptions:

$$p(d \mid x, y, A, \alpha, \lambda, \eta)$$

This is proportional to the likelihood of the outcome y given the decision list and the data, times the probability of the list given prior assumptions and the pre-mined conditions. In some examples, the following equation represents the likelihood of the outcome y, given the decision list and data:

$$p(y \mid x, d, \alpha)$$

BRL may assume that y is generated by a Dirichlet-Multinomial distribution. The better the decision list d explains the data, the higher the likelihood. In some examples, the following equation represents the prior distribution of the decision lists:

$$p(d \mid A, \lambda, \eta)$$

The equation may combine a truncated Poisson distribution (parameter λ) for the number of rules in the list and a truncated Poisson distribution (parameter η) for the number of feature values in the conditions of the rules. A decision list has a high posterior probability if it explains the outcome y well, and is also likely according to the prior assumptions. According to some implementations of the present disclosure, estimations in Bayesian statistics may be performed by first drawing candidates, evaluating them, and updating posteriori estimates using a Markov chain Monte Carlo method. For decision lists, one or more lists from the distribution of decision lists are drawn. The BRL model may first draw an initial decision list, and iteratively modify it to generate samples of decision lists from the posterior distribution of the lists (e.g., a Markov chain of decision lists). The results are potentially dependent on the initial decision list, so it is advisable to repeat this procedure to ensure a great variety of lists. For example, a default number of iterations in a software implementation is ten times.
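The factors above can be made concrete with a simplified sketch (not part of the original disclosure): a truncated-Poisson prior over the number of rules and their lengths, multiplied by a Dirichlet-Multinomial marginal likelihood over the label counts captured by each rule. The truncation normalizations and the prior over which pre-mined conditions are selected are omitted, so this is an approximation for intuition rather than the exact computation of any particular BRL implementation:

```python
# Simplified sketch of the BRL posterior factors; truncation constants and
# the prior over condition choice are deliberately omitted.
import math


def log_poisson(k: int, rate: float) -> float:
    """Log Poisson pmf (truncation normalization omitted)."""
    return k * math.log(rate) - rate - math.lgamma(k + 1)


def log_prior(rule_lengths, lam: float, eta: float) -> float:
    """p(d | A, lambda, eta): favors short lists composed of short rules."""
    lp = log_poisson(len(rule_lengths), lam)              # number of rules
    lp += sum(log_poisson(l, eta) for l in rule_lengths)  # conditions per rule
    return lp


def log_likelihood(counts_per_rule, alpha=(1.0, 1.0)) -> float:
    """p(y | x, d, alpha): Dirichlet-Multinomial marginal over the
    (positive, negative) label counts captured by each rule and the default."""
    ll = 0.0
    for counts in counts_per_rule:
        n, a = sum(counts), sum(alpha)
        ll += math.lgamma(a) - math.lgamma(n + a)
        ll += sum(math.lgamma(c + al) - math.lgamma(al)
                  for c, al in zip(counts, alpha))
    return ll


# Two rules of lengths 2 and 1, plus a default rule, with (pos, neg) counts:
score = (log_prior([2, 1], lam=3.0, eta=1.0)
         + log_likelihood([[40, 5], [10, 2], [3, 60]]))
print(score)  # a higher log-posterior indicates a better candidate list
```

Comparing this quantity across candidate lists is what drives the sampling procedure described next.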
In some examples, one or more of the following steps may be utilized to identify an initial decision list:
1) Pre-mine patterns or a set of rules;
2) Sample the list length parameter m from a truncated Poisson distribution;
3) For the default rule: sample the Dirichlet-Multinomial distribution parameter of the outcome value (e.g., the rule that applies when nothing else applies);
4) For decision list rule j=1, . . . , m:
a. Sample the rule length parameter $l_j$ (number of conditions) for rule j;
b. Sample a condition of length $l_j$ from the pre-mined conditions;
c. Sample the Dirichlet-Multinomial distribution parameter for the THEN-part (e.g., for the distribution of the target outcome given the rule);
5) For each observation in the dataset:
a. Find the rule from the decision list that applies first (top to bottom);
b. Draw the predicted outcome from the probability distribution (Binomial) suggested by the rule that applies.
Once the initial decision list is identified, the BRL model may generate many new lists starting from the identified initial list (e.g., an initial sample) to obtain many samples from the posterior distribution of decision lists.
Markov Chain Monte Carlo Sampling ("MCMC")
According to some implementations of the present disclosure, Metropolis-Hastings sampling of d may be performed. Particularly, the new decision lists may be sampled by starting from the initial list and then randomly making one or more modifications. The one or more modifications include (i) moving a rule to a different position in the list, (ii) adding a rule to the current decision list from the pre-mined conditions, (iii) removing a rule from the decision list, or (iv) any combination thereof. In some implementations, which of the rules is switched, added, or deleted is chosen at random. In some implementations, the algorithm evaluates the posteriori probability of the decision list (e.g., accuracy, shortness, or both) at each step. In some examples, the BRL model may utilize various algorithms to ensure that the sampled decision lists have a high posterior probability. This procedure provides many samples from the distribution of decision lists. The BRL algorithm may select the decision list of the samples with the highest posterior probability.
Rule Mining Models
According to some implementations of the present disclosure, a set of rules is first mined from a data set. For instance, as disclosed in Letham et al. (2015) and referenced herein, an FP growth miner is used for first mining a set of rules from a data set. The BRL model searches over a configuration space of combinations of the prescribed set of rules using an MCMC algorithm or other suitable algorithms as disclosed herein. In some implementations, rule mining methods may be utilized to generate a set of rules that a BRL model may process to generate and output a decision list. In some such implementations, the rule mining methods include an MCA-based rule mining method. The MCA-based rule mining method can offer favorable scaling properties across a plurality of categorical attributes, and may utilize a new implementation of the BRL algorithm using multi-core parallel execution. This new implementation using multi-core parallel execution was applied to the CNP dataset for psychiatric disorders and resulted in rule-based interpretable classifiers capable of screening patients using self-reported questionnaire data (e.g., scales data). The results not only show the viability of building interpretable models for state-of-the-art clinical psychiatry datasets, but also that these models can be scaled to larger datasets to understand the interactions and differences between these disorders.
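By way of illustration (not part of the original disclosure), the Metropolis-Hastings search described above can be sketched as follows, where `log_posterior` stands in for the prior-times-likelihood evaluation and symmetric proposal probabilities are assumed for simplicity; a complete implementation would also correct for the asymmetry of the add/remove/move proposals:

```python
# Illustrative Metropolis-Hastings step over decision lists: propose a
# neighbor by randomly moving, adding, or removing a rule, then accept it
# with probability min(1, posterior ratio). Symmetric proposals are assumed.
import math
import random


def propose(d: list, pre_mined: list) -> list:
    """Return a neighbor of decision list `d` produced by one random edit."""
    d = list(d)  # copy; candidate rules come from the pre-mined set
    move = random.choice(["move", "add", "remove"])
    if move == "move" and len(d) >= 2:
        i, j = random.sample(range(len(d)), 2)
        d.insert(j, d.pop(i))              # move a rule to another position
    elif move == "add":
        unused = [r for r in pre_mined if r not in d]
        if unused:
            d.insert(random.randrange(len(d) + 1), random.choice(unused))
    elif move == "remove" and d:
        d.pop(random.randrange(len(d)))    # drop a randomly chosen rule
    return d


def mh_step(d: list, pre_mined: list, log_posterior) -> list:
    """One sampling step: accept or reject the proposed neighbor list."""
    d_new = propose(d, pre_mined)
    if math.log(random.random()) < log_posterior(d_new) - log_posterior(d):
        return d_new
    return d
```

Running many such steps, possibly as several independent chains on separate CPU cores as described herein, yields the samples from which the highest-posterior list is selected.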
The results not only show the viability of building interpretable models for state-of-the-art clinical psychiatry datasets, but also that these models can be scaled to larger datasets to understand the interactions and differences between these disorders. Relevant notations and definitions used throughout this disclosure are introduced here. An attribute, denoted α, is a categorical property of each data sample, which can take a discrete and finite number of values, denoted |α|. A literal is a Boolean statement checking whether an attribute takes a given value; e.g., given an attribute α with categorical values {c1, c2}, the following literals can be defined: "α is c1" and "α is c2". Given a collection of attributes {α_i}_{i=1}^p, a data sample is a list of categorical values, one per attribute. A rule, denoted r, is a collection of literals, with length |r|, which is used to produce Boolean evaluations of data samples as follows: a rule evaluates to True whenever all of its literals are True, and evaluates to False otherwise. This disclosure considers the problem of efficiently building rule lists for data sets with a large total number of categories among all attributes (e.g., Σ_{i=1}^p |α_i|), a common situation among data sets related to health care or pharmacology, including neurobehavioral health disorders. In one example, given n data samples, a data set can be represented as a matrix X with dimensions n×p, where X_{i,j} is the category assigned to the i-th sample for the j-th attribute. A categorical label for each data sample is also considered, collectively represented as a vector Y with length n. The number of label categories is denoted ℓ, where ℓ ≥ 2. If ℓ = 2, then a standard binary classification problem is present. If, instead, ℓ > 2, then a multi-class classification problem is solved. Conventional rule mining methods often fail to execute on data sets with a large total number of categories, due to either unacceptably long computation time or prohibitively high memory usage. The present disclosure includes a novel rule mining model based on Multiple Correspondence Analysis ("MCA") that is both computationally and memory efficient, enabling the application of a BRL model on datasets with a large total number of categories. According to some implementations of the present disclosure, an MCMC search method in the BRL model may be parallelized by executing individual Markov chains in separate CPU cores of a computer. In some implementations, the convergence of the multiple chains may be periodically checked using a generalized Gelman & Rubin convergence criterion, thereby stopping the execution once the criterion is met. As shown in FIG. 4, for example, this implementation is faster than the original single-core version, enabling the study of more data sets with longer rules and/or a large number of features. MCA is a method that applies the power of Correspondence Analysis ("CA") to categorical data sets. According to some implementations of the present disclosure, MCA is an application of CA to an indicator matrix of all categories in a set of attributes, thereby generating principal vectors projecting each of those categories into a Euclidean space. The generated principal vectors are used to build a heuristic merit function over the set of all available rules given the categories in a data set. Moreover, the structure of the merit function allows for efficient mining of the best rules.
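A minimal sketch of this MCA step, assuming numpy and pandas, is shown below; it computes principal coordinates for every category of an extended matrix Z = [X Y] via an SVD of the standardized indicator matrix. It is a simplified stand-in for a full MCA implementation (Benzécri-style correction factors, among other details, are omitted), and all names are illustrative.

```python
import numpy as np
import pandas as pd

def mca_principal_vectors(df, n_components=2):
    # df: categorical data frame, e.g., the extended matrix Z = [X Y]; dummy
    # columns derived from the label column give the label vectors w_k, and
    # the remaining dummy columns give the categorical vectors v_j.
    dummies = pd.get_dummies(df.astype(str))
    Z = dummies.to_numpy(dtype=float)
    P = Z / Z.sum()                                       # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                   # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardized residuals
    _, s, Vt = np.linalg.svd(S, full_matrices=False)
    coords = (Vt.T * s) / np.sqrt(c)[:, None]             # column principal coordinates
    return dict(zip(dummies.columns, coords[:, :n_components]))
```

Applying this to the extended matrix yields one vector per category; these serve as the categorical and label vectors used in the scoring below.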
Rule Score Calculation
In some implementations, a methodology is disclosed for determining a score related to the usefulness of a rule and/or any number of rules. However, any other suitable methodologies may be utilized. An extended data matrix may be defined by concatenating X and Y, denoted Z = [X Y], with dimensions n×(p+1). The MCA principal vectors are computed for each category present in Z. The MCA principal vectors associated with the categorical values of the attributes are called categorical vectors, denoted by $\{v_j\}_{j=1}^{\sum_i |\alpha_i|}$, where $\{\alpha_i\}_{i=1}^{p}$ is the set of attributes in the data set X. The MCA principal vectors associated with the label categories are called label vectors, denoted by $\{\omega_k\}$. Each category can be mapped to a literal statement. The principal vectors serve as a heuristic to evaluate the quality of a given literal for predicting a label. Therefore, a score between each categorical vector v_j and each label vector ω_k is calculated as the cosine of their angle:

$$\rho_{j,k} = \cos \angle (v_j, \omega_k) = \frac{\langle v_j, \omega_k \rangle}{\lVert v_j \rVert_2 \, \lVert \omega_k \rVert_2} \tag{1}$$

In the context of random variables, ρ_{j,k} is equivalent to the correlation between the two principal vectors. The score between a rule r and a label category k, denoted μ_k(r), is calculated as the average of the scores between the literals in r and the same label category, e.g.:

$$\mu_k(r) = \frac{1}{|r|} \sum_{l \in r} \rho_{l,k} \tag{2}$$

The configuration space of rules r built using the combinations of all available literals in a data set is searched such that |r| ≤ r_max, and those with the highest scores for each label category are identified. These top rules are the output of the disclosed miner, and are passed to the BRL method as the set of rules from which rule lists will be built.
Rule Pruning
In some implementations, because the number of rules generated by all combinations of all available literals up to length r_max can be large even for modest values of r_max, the disclosed technology may include different methods of pruning and/or eliminating a portion of the generated rules. In some such implementations, for example, the present disclosure includes two conditions under which rules are efficiently eliminated from consideration. First, rules whose support over each label category is smaller than a user-defined threshold s_min can be eliminated. The support of a rule r for label category k, denoted supp_k(r), is the fraction of data samples for which the rule evaluates to True among the total number of data samples associated with a given label. Given a rule r, note that the support of every other rule r̂ containing the collection of literals in r satisfies supp_k(r̂) ≤ supp_k(r). Hence, once a rule r fails the minimum support test, all rules longer than r that also contain all the literals in r may be removed from consideration. Second, rules whose score is smaller than a user-defined threshold μ_min can be eliminated. Now, suppose that a new rule r̂ is to be built by taking a rule r and adding a literal l. In that case, given a category k, the score of this rule must satisfy:

$$\mu_k(\hat{r}) = \frac{|r| \, \mu_k(r) + \rho_{l,k}}{|r| + 1} \geq \mu_{min} \tag{3}$$

Let $\bar{\rho}_k = \max_l \rho_{l,k}$ for label category k among all available literals; then an extension of r can be predicted to have a score greater than μ_min only if:

$$\mu_k(r) \geq \frac{(|r| + 1)\, \mu_{min} - \bar{\rho}_k}{|r|} = m_k(r) \tag{4}$$

Given the maximum number of rules to be mined per label, M, μ_min is recomputed as the system iterates through combining literals to build new rules. Indeed, the scores for the temporary list of candidate rules are periodically sorted, and μ_min is set equal to the score of the M-th rule in the sorted list.
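A minimal sketch of this scoring-and-pruning logic (Equations (1)-(4)) follows; cat_vecs and label_vecs would come from the MCA step above, and all names are illustrative rather than taken from the reference implementation.

```python
import numpy as np

def literal_scores(cat_vecs, label_vecs):
    # rho[l, k] = cosine between categorical vector l and label vector k (Eq. 1)
    a = cat_vecs / np.linalg.norm(cat_vecs, axis=1, keepdims=True)
    b = label_vecs / np.linalg.norm(label_vecs, axis=1, keepdims=True)
    return a @ b.T

def rule_score(rule, rho, k):
    # mu_k(r): average literal score for label category k (Eq. 2)
    return rho[list(rule), k].mean()

def can_extend(rule, rho, k, mu_min):
    # Eq. (4): extend r only if adding the best possible literal could push
    # the score of the extension at or above mu_min (Eq. 3 rearranged).
    rho_bar = rho[:, k].max()
    bound = ((len(rule) + 1) * mu_min - rho_bar) / len(rule)
    return rule_score(rule, rho, k) >= bound
```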
As μ_min increases due to better candidate rules becoming available, the condition in Equation (4) becomes more restrictive, resulting in fewer rules being considered and therefore in faster overall mining. FIGS. 2A-2C depict pseudocode and flowcharts of an MCA-based rule mining algorithm disclosed herein. The loop iterating over label categories in line three (3) may be easily parallelized as a multi-core computation, significantly reducing the mining time as shown in FIG. 3. The flowchart illustrated in FIGS. 2B and 2C represents the pseudocode presented in FIG. 2A. The process may be performed by a computer that includes a memory that stores computer instructions, and a processor that executes the computer instructions to perform actions. The actions performed by the computer include those computer operations that result in the computer executing the illustrated process. FIGS. 2B-2C illustrate a process for the MCA-based rule mining model as disclosed herein. The process begins, at block 202 in FIG. 2B, where a data set is obtained. The data set includes a plurality of attribute statements for a plurality of label categories. The process proceeds to block 204, where a score for each combination of each attribute statement with each label category is determined, such as described above with respect to Equation (1). The process continues at loop block 206a, where each target category in the plurality of categories of the data set is processed as described below until loop block 206b. The process proceeds next to loop block 208a, where each target attribute statement of the plurality of attribute statements of the data set is processed as described below until loop block 208b. The process continues next at decision block 210, where a determination is made whether two parameters are true: (i) the score for the target category and the target attribute statement is greater than a first user-defined threshold, and (ii) the support for the attribute statement is greater than a second user-defined threshold. If both parameters are true, the process proceeds to block 212; otherwise, the process proceeds to loop block 208b. At block 212, the rule set for the target category is updated to include the target attribute statement, after which the process flows to loop block 208b. At loop block 208b, the process loops to loop block 208a until each attribute statement of the plurality of attribute statements is processed. The process then proceeds to loop block 214a in FIG. 2B. At loop block 214a, each rule of a plurality of rules in a rule set associated with the data set is processed as described below until loop block 214b. The process continues at loop block 216a, where each target rule in the rule set for the target category is processed as described below until loop block 216b. The process proceeds to block 218, where the first user-defined threshold is updated to the maximum score in the rule set for the target category. The process continues at loop block 220a, where each target attribute statement of the plurality of attribute statements is processed as described below until loop block 220b. The process proceeds to block 222, where a new rule is set as the target rule with the target attribute. The process continues next at decision block 224, where a determination is made whether two additional parameters are true: (i) the score between the new rule and the target category is above the current first user-defined threshold, and (ii) the support for the new rule is greater than the second user-defined threshold.
If both parameters are true, the process proceeds to block 226; otherwise, the process proceeds to loop block 220b. At block 226, the rule set for the target category is updated to include the new rule, after which the process flows to loop block 220b. At loop block 220b, the process loops to loop block 220a until each attribute statement of the plurality of attribute statements is processed, and the process then proceeds to loop block 216b. At loop block 216b, the process loops to loop block 216a until each target rule in the rule set for the target category is processed, and the process then proceeds to loop block 214b. At loop block 214b, the process loops to loop block 214a until each rule is processed, and the process then proceeds to block 228. At block 228, the rule set is updated to keep the top M rules sorted by score. The process then continues at loop block 206b, where the process loops to loop block 206a in FIG. 2B until each category of the plurality of categories is processed, and the process then terminates or otherwise returns to a calling process to perform other actions.
Feature Selection
Prior to application of the rule mining techniques disclosed herein, in some examples, various methods are disclosed to identify the features from which the rules may be mined. This allows for identification of the most important features, making the rule mining process more efficient and reducing the number of literals or rules that contribute noise to the rule lists. In some examples, forward selection techniques are implemented for identifying relevant features from the datasets, and for stratifying patients. For instance, logistic regression models with elastic net regularization are utilized in some cases to identify the most important features from the data. Then, either logistic or linear regression models can be utilized to stratify patients from these features, and/or the rule mining techniques can be applied to the identified features (in combination with linear regression, in one example). The data (including the features) are processed by a Bayesian Rule List algorithm, which in turn outputs a Bayesian Decision List that can stratify patients. According to some implementations of the present disclosure, these decision lists or rule lists may be applied for (i) screening healthy groups from patients, (ii) separating patients into diagnostic categories, (iii) identifying patients that are higher responders to certain drugs, and/or (iv) any combination thereof.
Model Fitting and Feature Importance Weighting
The goals of machine learning analyses may include (i) establishing robust classifiers, (ii) identifying important features that can be used to stratify patients, or (iii) both (i) and (ii). To achieve the first goal of establishing robust classifiers, a logistic regression model can be utilized. Separate logistic regression models may be independently trained using each of, or various combinations of, the above extracted feature modalities as inputs. In some implementations, the performance of each model can be evaluated. If the number of features is relatively large, an elastic net regularization term can be added to all of the logistic regression models to prevent overfitting. The elastic net regularization is a linear combination of the L1 and L2 regularization terms and has been shown to have advantages over L1 and L2 regularization when dealing with high-dimensional data with small sample size and correlated features.
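A minimal sketch of such an elastic net regularized logistic regression using scikit-learn follows; the data and hyperparameter values are synthetic and illustrative, not those of the disclosed studies.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)
model = LogisticRegression(
    penalty="elasticnet",
    solver="saga",      # the scikit-learn solver that supports elastic net
    l1_ratio=0.5,       # mixing ratio between the L1 and L2 terms
    C=1.0,              # inverse of the overall regularization strength
    max_iter=5000,
).fit(X, y)

# The L1 component zeroes out unimportant features (the sparsity effect
# described above); the surviving indices are the selected features.
selected = np.flatnonzero(model.coef_[0])
print(f"{selected.size} features retained:", selected)
```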
The use of elastic net regularization in these models also enabled feature selection, as the regularization induces sparse models via the grouping effect, where all the important features are retained and the unimportant ones are set to zero. This allows for the identification of predictive features. The elastic net regularized logistic regression implemented in the scikit-learn toolbox contains two hyperparameters: the overall regularization strength and the mixing ratio between the L1 and L2 terms. The following procedure can be utilized to determine the best regularization parameters. First, the input data can be randomly partitioned into a development set and an evaluation set. The development set can contain 80% of the data, upon which a grid search with a 3-fold cross-validation procedure can be implemented to determine the best hyperparameters. Then the model can be trained on the entire development set using the best hyperparameters and can be further tested on the remaining 20% evaluation set, which the model has never seen before, to obtain testing performance. All features can be standardized to have zero mean and unit variance within the training data (the training folds in the 3-fold cross-validation, or the development set), and the mean and variance from the training data can be used to standardize the corresponding test data (the testing fold, or the evaluation set) to avoid information spill-over from test data to training data. The entire process can be implemented ten (10) times on ten (10) different random partitions of the development and evaluation sets, or any other suitable number of times. The following metrics can be used to quantify the model performances: area under the receiver operating characteristics curve (AUC), accuracy, sensitivity, and specificity. From the above trained models, one can assess how predictive each feature is, since the weights of the logistic regression model in the transdiagnostic classifiers represent the relationship between a given feature and the logarithm of the odds ratio of an observation being a patient. For each feature, its corresponding mean model weight can be calculated and divided by the standard deviation across the ten (10) model implementations as a proxy for feature importance. Such a feature importance measure is analogous to Cohen's d effect size measure and thus favors features with large weights and small standard deviations across the ten (10) model implementations. Features with large importance values from the transdiagnostic classifiers are potentially symptoms, traits, and neuropathological mechanisms shared across patient groups but distinct from healthy controls, or other relevant traits related to responding to certain medications.
Feature Importance-Guided Sequential Model Selection
If the feature dimension of the input data is high compared to the sample size in the dataset, the transdiagnostic classifiers using the full feature sets are likely to be subjected to a substantial amount of noise, as well as features that are not predictive. The presence of those noisy features, especially when the sample size is small, might impede the ability of the models to achieve their best performances. To investigate whether improved classification performances can be achieved from a reduced set of the most predictive features, the following feature importance-guided sequential model selection procedure can be utilized.
Specifically, the features in the classifiers may first be rank ordered according to their feature importance measures. Next, a series of truncated models may be built such that each model takes only the top k most predictive features as inputs to perform the same transdiagnostic classification problems. Let k range from the top 1 most predictive feature to all available features in steps of 1 for clinical phenotype features, MRI features, task-based features, or other combinations of features. For any feature or feature combinations involving fMRI correlations, because of the significantly increased feature dimension, the k's were chosen from a geometric sequence with a common ratio of two (e.g., 1, 2, 4, 8, 16, . . . ). Model performances can be obtained for each truncated model and can be evaluated as a function of the number of top features (k) included in each truncated model. To statistically test whether a model's performance is significantly above chance level, a random permutation test can be performed, where the labels in the data are shuffled 100 times, or any other suitable number of times. The model can be trained on these label-shuffled data using exactly the same approach as described herein. The performances from the 100 models can be used to construct the empirical null distribution against which the model performance from the actual data is then compared.
Generating Rule Lists from Identified Features
According to some implementations of the present disclosure, once the top features for separating a set of groups are identified using forward selection, a rule miner and the BRL algorithm can be utilized to develop rules to separate those groups. For instance, the output labels for the data used by a rule miner can be derived from the groups separated by, for instance, a linear regression model used in forward selection. The rule miner can then output a set of rules derived from the features. Lastly, a Bayesian Rule List model can be applied to the set of rules to develop decision lists that would separate the patients into the same groups.
Methods of Generating Decision Lists
According to some implementations of the present disclosure, the systems, methods, and models may be utilized in various combinations to generate rule lists or Bayesian Decision Lists that are capable of stratifying patients. For instance, the Bayesian Decision Lists may be capable of screening patients for mental health disorders, of diagnosing patients, or of matching patients to the right neurobehavioral treatments (e.g., certain drugs or other treatments). FIG. 21 is a flow chart illustrating a process for generating a Bayesian Decision List as disclosed herein. First, a patient database may be provided 2100 that includes labelled data with different attributes associated with certain outcomes. The patient database may include a variety of different modalities of data, including MRI data of a patient's brain, responses to clinical scales questionnaires, data relating to levels of biochemical markers tested from a patient's body ("wet biomarkers"), demographic data (e.g., age, sex, etc.), task data (e.g., output from various tasks disclosed herein), and audio/facial expression data. Then, in some examples, the data may first be processed with a feature selection model 2110 as disclosed herein. In some examples, this may include model-dependent feature selection 2107 (e.g.,
elastic net, LASSO), forward feature selection 2109 as disclosed herein, backward feature selection 2111, or other suitable feature selection models. In other examples, the data may not be processed with a feature selection model 2109 to first narrow down the features to which a rule miner would be applied. Next, in some examples, a rule mining model may be applied 2120 to the data, or to the selected features and associated outcomes from step 2110. Various suitable rule mining models may be utilized, including the novel MCA rule mining model 2113 disclosed herein. In other examples, FP-Growth 2114, Apriori 2115, or other rule mining methods may be utilized. This may output a set of rules for further processing. Next, a Bayesian Rule List model may be applied 2130 to the set of rules output by the rule miner 2120. In other examples, a Bayesian Rule List model could be applied 2130 to all possible rules, or applied to a set of rules identified using a method other than a rule miner 2120. The Bayesian Rule List model may be applied based on the examples disclosed herein, or any other suitable application of the model and/or framework generally. In some examples, it may include the MCMC algorithm 2134 described herein. Next, the process will output a Bayesian Decision List 2140 capable of classifying the data. In the disclosed examples, these will primarily relate to classifying individuals or patients into neurobehavioral categories, including screening healthy individuals from patients, diagnosing patients with mental disorders, and identifying a specific treatment for a specific patient. This decision list may be saved in a memory of a computer, displayed on a display, or both.
Systems and Data Acquisition
FIG. 22 illustrates various example systems that may be utilized to implement the disclosed technology. For instance, the system may include a computing device 2210 with a display and/or interface 2212, a network 2220, a patient 2200, a server 2250, and a database 2240. In some examples, the interface may include a microphone and speaker. In some examples, the speaker may provide instructions, questions, or other information to the patient, and the microphone may capture the patient's answers, responses, and vocal features. The computing device 2210 may be any suitable computing device, including a computer, laptop, mobile phone, tablet, etc. The network 2220 may be wired, wireless, or various combinations of wired and wireless. The server 2250 and database 2240 may be local or remote, may be various combinations of servers 2250 and databases 2240, or could be local processors and memory. The computing device 2210 and server 2250 may include a control system with one or more processors. In some examples, all of the processing may be performed on the computing device 2210, or portions of the processing may be performed on the computing device 2210 and other portions of the processes may be performed on the server 2250. The display and/or interface 2212 may be a touchscreen interface and display, a keyboard and display, or any other suitable interface to implement the technology disclosed herein, including a microphone and speaker. For instance, certain tasks disclosed herein or utilized in the art may include certain interface features. Additionally, certain biochemical tests and instruments (not pictured) may be utilized to test certain biochemical markers of the patient 2200. These include various blood tests known in the art for testing for tumor necrosis factor.
For instance, ELISA tests may be utilized with various plate readers to quantify the levels of certain molecules or biochemical moieties in a patient 2200. Furthermore, magnetic resonance or other machines may be utilized to scan patients and output MRI data or brain functional data that is utilized by the disclosed models to stratify patients. MRI data may correspond to a set of MRI images of a biological structure. In some examples, the MRI data corresponds to MRI data for a patient's brain. The MRI data can include task-based fMRI data, rs-fMRI data, and/or sMRI data, among others. The MRI data may be acquired using a variety of methods, including, for instance, using 3T Siemens Trio scanners. In one example, sMRI data may be T1-weighted and acquired using a magnetization-prepared rapid gradient-echo (MPRAGE) sequence with the following acquisition parameters: TR=1.9 s, TE=2.26 ms, FOV=250 mm, matrix=256×256, 176 1-mm-thick slices oriented along the sagittal plane. As an example, the resting-state fMRI scan may be a single run lasting 304 s. However, these are examples only, and a variety of other acquisition methods could be utilized.
Methods of Applying Decision Lists to Stratify Patients
FIGS. 23 and 24 illustrate flow charts showing example methods of stratifying individual patients using the disclosed Bayesian Decision Lists. For instance, FIG. 23 illustrates a method of stratifying a patient using received patient data 2300. For instance, the patient data may include MRI data 2303; questionnaire data 2305 (e.g., clinical scales); profile and/or demographic data 2307 that may include, for instance, age, sex, weight, ethnicity, or others; task data 2309 that may include a variety of tasks such as those disclosed herein; or biochemical biomarker levels in a patient 2311, such as tumor necrosis factor or others. Next, the patient data may be processed with a model 2310. In many examples, a Bayesian Decision List 2313 may be utilized to process the data. This provides an interpretable result that a clinician may validate. In other examples, other machine learning models 2315, decision lists, or similar models may be utilized to stratify patients in the neurobehavioral space. In some examples, the disclosed rule miner may be utilized to stratify patients outside the neurobehavioral space, especially given its potential for multi-modal (or data type) utilization. Next, the system may output a patient classification 2320, which may then be displayed 2330 on a display or interface, and/or stored in a memory with a reference to the patient (or an identifier for the patient). Accordingly, the rule list utilized may also be displayed, including how the patient was classified according to the rule list and which rules the patient fell under to reach the classification. This provides an interpretable classification of the patient. The classification may be used: (i) as a screening tool to determine whether the patient is healthy or has a mental disorder, (ii) to diagnose a mental health disorder, (iii) to determine a probability that a patient has a certain mental health disorder, and/or (iv) to recommend a treatment. The treatment may include pharmaceutical drugs, cognitive behavioral therapy (including software-based versions of the therapy), or other suitable therapies. In some examples, a clinician may also treat the patient 2340. This may include prescribing a pharmaceutical that may be administered to the patient or that the patient may be instructed to take.
In other examples, this could be a recommended software program, including software-based versions of cognitive behavioral therapy. FIG. 24 illustrates a similar process but additionally includes further details on acquiring scales-related data from the patient using a computing device 2210 such as a tablet or mobile phone. For instance, the scales or questionnaire data 2305 may be acquired by displaying a series of text-based questions on a display 2400 and receiving a patient selection of answers 2410 through an interface 2212, which may include multiple-choice answers or other inputs. In other examples, the patient may fill out a paper-based questionnaire and the data may be entered into the disclosed systems and methods.
EXAMPLES
The following examples are provided to better illustrate the claimed disclosure and are not intended to be interpreted as limiting the scope of the disclosure. To the extent that specific materials or steps are mentioned, it is merely for purposes of illustration and is not intended to limit the disclosure. One skilled in the art may develop equivalent means or reactants without the exercise of inventive capacity and without departing from the scope of the disclosure.
Example 1: Benchmark Datasets
The MCA-miner method disclosed herein in FIGS. 2A-2C, when used together with BRL, offers the power of rule list interpretability while maintaining the predictive capabilities of already established machine learning methods. The performance and computational efficiency of the new MCA-miner is benchmarked against the "Titanic" dataset, as well as the following five (5) datasets available in the UCI Machine Learning Repository: "Adult," "Autism Screening Adult," "Breast Cancer Wisconsin (Diagnostic)," "Heart Disease," and "HIV-1 protease cleavage," which are designated as Adult, ASD, Cancer, Heart, and HIV, respectively. These datasets represent a wide variety of real-world experiments and observations, thus enabling the improvements described herein to be compared against the original BRL implementation using the FP-Growth miner. All six benchmark datasets correspond to binary classification tasks. The experiments were conducted using the same setup for each of the benchmarks. First, the dataset is transformed into a format that is compatible with the disclosed BRL implementation. Second, all continuous attributes are quantized into either two (2) or three (3) categories, while keeping the original categories of all other variables. It is worth noting that, depending on the dataset and how its data was originally collected, the existing taxonomy and expert domain knowledge are prioritized in some instances to generate the continuous variable quantization. A balanced quantization is generated when no other information is available. Third, a model is trained and tested using 5-fold cross-validation, reporting the average accuracy and Area Under the ROC Curve (AUC) as model performance measurements. Table 1 presents the empirical result of comparing both implementations. The notation in the table follows the definitions above. To strive for a fair comparison between both implementations, the parameters r_max = 2 and s_min = 0.3 are fixed for both methods, and in particular, for the MCA-miner, μ_min = 0.5 and M = 70 are also set. The multi-core implementations of both the new MCA-miner and BRL were executed on six parallel processes, and stopped when the Gelman & Rubin parameter satisfied R̂ ≤ 1.05. All the experiments were run using a single AWS EC2 c5.18xlarge instance with 72 cores.
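The balanced quantization step described above can be sketched as follows, assuming pandas; the helper and column names are hypothetical, not the disclosure's code.

```python
import pandas as pd

def quantize(df, continuous_cols, n_bins=3):
    # Bin each continuous column into n_bins roughly balanced categories
    # (quantiles); the resulting integer codes act as categorical values
    # for the rule miner and BRL.
    out = df.copy()
    for col in continuous_cols:
        out[col] = pd.qcut(df[col], q=n_bins, labels=False, duplicates="drop")
    return out

df = pd.DataFrame({"age": [22, 38, 26, 35, 54, 61], "sex": list("MFFMFM")})
print(quantize(df, ["age"], n_bins=3))
```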
TABLE 1
Performance evaluation of FP-Growth against MCA-miner when used with BRL on benchmark datasets. t_train is the full training wall time.

                                  FP-Growth + BRL              MCA-miner + BRL
Dataset   n       p   Σ_i|α_i|    Accuracy  AUC   t_train[s]   Accuracy  AUC   t_train[s]
Adult     45,222  14  111         0.81      0.85  512          0.81      0.85  115
ASD       248     21  89          0.87      0.90  198          0.87      0.90  16
Cancer    569     32  150         0.92      0.97  168          0.92      0.94  22
Heart     303     13  49          0.82      0.86  117          0.82      0.86  15
HIV       5,840   8   160         0.87      0.88  449          0.87      0.88  36
Titanic   2,201   3   8           0.79      0.76  118          0.79      0.75  10

It is clear from the experiments in Table 1 that the new MCA-miner matches the performance of FP-Growth in each case, while significantly reducing the computation time required to mine rules and train a BRL model.
Example 2: Transdiagnostic Screener for Mental Health
The disclosed systems and methods for stratifying patients were applied to a data set from the Consortium for Neuropsychiatric Phenomics ("CNP"). CNP is a research project aimed at understanding shared and distinct neurobiological characteristics among multiple diagnostically distinct patient populations. Four groups of subjects are included in the study: healthy controls (HC, n=130), Schizophrenia patients (SCHZ, n=50), Bipolar Disorder patients (BD, n=49), and Attention Deficit and Hyperactivity Disorder patients (ADHD, n=43). The total number of subjects in the dataset is n=272. The goal in analyzing the CNP dataset was to develop interpretable and effective screening tools to identify the diagnosis of these three psychiatric disorders in patients.
CNP Self-Reported Instruments Dataset
Among other data modalities, the CNP study includes responses to p=578 individual questions, belonging to 13 self-report clinical questionnaires, per subject. The total number of categories generated by the 578 questions is Σ_{i=1}^p |α_i| = 1350. The 13 questionnaires are the following (in alphabetical order):
Adult ADHD Self-Report Screener (ASRS),
Barratt Impulsiveness Scale (Barratt),
Chapman Perceptual Aberration Scale (ChapPer),
Chapman Physical Anhedonia Scale (ChapPhy),
Chapman Social Anhedonia Scale (ChapSoc),
Dickman Functional and Dysfunctional Impulsivity Inventory (Dickman),
Eysenck's Impulsivity Inventory (Eysenck),
Golden & Meehl's 7 MMPI Items Selected by Taxonomic Method (Golden),
Hopkins Symptom Check List (Hopkins),
Hypomanic Personality Scale (Hypomanic),
Multidimensional Personality Questionnaire, Control Subscale (MPQ),
Temperament and Character Inventory (TCI), and
Scale for Traits that Increase Risk for Bipolar II Disorder (BipolarII).
The individual questions are abbreviated using the name in parentheses in the list above together with the question number. For example, Hopkins#57 denotes the 57th question in the "Hopkins Symptom Check List" questionnaire. Depending on the particular clinical questionnaire, each question results in a binary answer (e.g., True or False) or a rating integer (e.g., from 1 to 5). Each question is used as a literal attribute, resulting in a range from two (2) to five (5) categories per attribute.
Performance Benchmark
Rather than prune the number of attributes a priori to reduce the search space for both the rule miner and BRL, the new MCA-miner described herein was employed to identify the best rules over the complete search space of literal combinations. Note that this results in a challenging problem for most machine learning algorithms, since this is a wide dataset with more features than samples, e.g., Σ_{i=1}^p |α_i| >> p >> n. Indeed, just generating all rules with three (3) literals from this dataset results in approximately 23 million rules. FIG. 3 is a graph that compares the wall execution time of the new MCA-miner against three popular associative mining methods: FP-Growth, Apriori, and Carpenter, all using the implementations in the PyFIM package. All samples in the plot were obtained by training on the same features from the CNP dataset with each method. Times in the plot are an average of five (5) runs. Black circles denote the last successful execution of a method. Executions were automatically canceled for wall times longer than 12 hours. As shown in FIG. 3, while the associative mining methods are reasonably efficient on datasets with few features, they are incapable of handling more than roughly 70 features from the CNP dataset, resulting in out-of-memory errors or impractically long executions even for large-scale compute-optimized AWS EC2 instances. In comparison, the MCA-miner empirically exhibits a growth rate compatible with datasets much larger than CNP, as it runs many orders of magnitude faster than the associative mining methods. It is worth noting that while FP-Growth is shown as the fastest associative mining method, its scaling behavior vs. the number of attributes is practically the same as Apriori in some experiments. Indeed, the magnitude of the feature space grows exponentially. Expressed mathematically, given d unique features, the total number of possible rules is:

$$R = \sum_{k=1}^{d-1} \left[ \binom{d}{k} \times \sum_{j=1}^{d-k} \binom{d-k}{j} \right] = 3^d - 2^{d+1} + 1$$

The MCA process filters through this space and generates a much smaller rule space. The BRL process then constructs the rule list to fit the model. As disclosed herein, the CNP dataset includes about 578 features, which generate approximately 23 million effective rules. The disclosed MCA algorithm can process this large set of rules, while the traditional algorithms (e.g., Apriori, FP-Growth) can only handle those with about 100 features. In addition to the increased performance due to the new MCA-miner, the implementation of the BRL training MCMC algorithm is improved by running parallel Markov chains simultaneously in different CPU cores, as explained herein. FIG. 4 shows the BRL training time comparison, given the same rule set and both using six chains, between the new multi-core implementation and the original single-core implementation. Times in the plot of FIG. 4 are an average of five (5) runs. Also, FIG. 5 shows that the multi-core implementation convergence wall time t_multi-core scales linearly with the number of Markov chains, with t_single-core ≈ ½ N_chains · t_multi-core. The number of cores used in the multi-core implementation is equal to the number of MCMC chains.
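Returning to the rule-space formula above, the closed form follows because each of the d features is in one of three states (absent, in the first part, or in the second part), giving 3^d assignments, from which the 2^d assignments with an empty first part and the 2^d with an empty second part are removed and the doubly empty assignment is added back: 3^d − 2·2^d + 1 = 3^d − 2^{d+1} + 1. A quick enumeration check for small d (a hypothetical snippet, not from the disclosure):

```python
from math import comb

def rule_count(d):
    # Two disjoint, non-empty feature subsets: k in the first, j in the second.
    return sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
               for k in range(1, d))

for d in range(2, 10):
    assert rule_count(d) == 3**d - 2**(d + 1) + 1
print("closed form verified for d = 2..9")
```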
Times in the plot of FIG. 5 are an average of five (5) runs. While both implementations display a similar growth rate as the rule set size increases, the new multi-core implementation is roughly three (3) times faster in this experiment.
Interpretable Transdiagnostic Classifiers
In the interest of building the best possible transdiagnostic screening tool for the three types of psychiatric patients present in the CNP dataset, three different classifiers were built. First, a binary classifier is built to separate HC from the set of Patients, defined as the union of the SCHZ, BD, and ADHD subjects. Second, a multi-class classifier is built to directly separate all four original categorical labels available in the dataset. Finally, the performance of the multi-class classifier is evaluated by repeating the binary classification task and comparing the results. In addition to using accuracy and AUC as performance metrics, Cohen's κ coefficient (Cohen 1960) is reported as another indication of the effect size of the new classifier. Cohen's κ is compatible with both binary and multi-class classifiers. It ranges from −1 (complete misclassification) to 1 (perfect classification), with 0 corresponding to a chance classifier. To avoid a biased precision calculation, the dataset is sub-sampled to balance out each label, resulting in n=43 subjects for each of the four classes, with a total of n=172 samples. Finally, 5-fold cross-validation is used to ensure the robustness of the training and testing methodology.
Binary Classifier
Besides using the new MCA-miner described herein together with BRL to build an interpretable rule list, its performance is benchmarked against other commonly used machine learning algorithms compatible with categorical data, which were applied using the scikit-learn (Pedregosa et al. 2011) implementations and default parameters. As shown in Table 2, the method described herein is statistically as good as, if not better than, the other methods compared against.

TABLE 2
HC vs. Patient binary prediction performance comparison for different machine learning models.

Classifier        Accuracy  AUC   Cohen's κ
MCA-miner + BRL   0.79      0.82  0.58
Random Forest     0.75      0.85  0.51
Boosted Trees     0.79      0.87  0.59
Decision Tree     0.71      0.71  0.43

The rule list generated using the MCA-miner and BRL is shown in FIG. 6, which depicts a rule list for a transdiagnostic screening of psychiatric disorders, classifying between Healthy Controls vs. Patients. Also, a breakdown analysis of the number of subjects classified by each rule in the list is shown in FIG. 7. The detailed description of the questions in FIG. 6 is shown in Table 3. Note that most of the subjects are classified with high probability by the top two rules, which is a very useful feature in situations where fast clinical screening is required.
Multi-Class Classifier
FIG. 8 shows the output rule list after training a BRL model using all four labels in the CNP dataset, as explained above. Each rule can be used to infer the diagnosis of a subject. The rule list accuracy is 0.54 and Cohen's κ is 0.40. Note that the rules in FIG. 8 emit the maximum likelihood estimate corresponding to the multinomial distribution generated by the same rule in the BRL model, since this is the most useful output for practical clinical use. After 5-fold cross-validation, the new MCA-miner with BRL classifier has an accuracy of 0.57 and a Cohen's κ of 0.38. FIG. 10 shows the average confusion matrix for the multi-class classifier using all five (5) cross-validation testing cohorts.
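Operationally, applying a learned rule list such as those of FIGS. 6 and 8 to a new subject reduces to a first-match scan from top to bottom; a minimal sketch follows, with entirely hypothetical rule contents (the actual rules appear in the figures).

```python
def classify(subject, rule_list, default):
    # rule_list: ordered (condition, label, probability) triples, where
    # condition is a dict {question: required answer}; the first rule whose
    # condition the subject satisfies emits its label and probability.
    for condition, label, prob in rule_list:
        if all(subject.get(q) == a for q, a in condition.items()):
            return label, prob
    return default  # the default rule applies when nothing else does

rules = [({"Hypomanic#8": True}, "Patient", 0.9),          # hypothetical
         ({"Barratt#12": 4}, "Healthy Control", 0.8)]      # hypothetical
print(classify({"Hypomanic#8": True}, rules, ("Patient", 0.5)))
```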
The actual questions referenced in the rule list in FIG. 8 are shown in detail in Table 3.

TABLE 3
Questions from the CNP dataset singled out by rule list classifiers in FIGS. 6 and 8

Barratt#12: "I am a careful thinker" (1 (rarely) to 4 (almost always))
BipolarII#1: "My mood often changes, from happiness to sadness, without my knowing why" (Boolean)
BipolarII#2: "I have frequent ups and downs in mood, with and without apparent cause" (Boolean)
ChapSoc#9: "I sometimes become deeply attached to people I spend a lot of time with" (Boolean)
ChapSoc#13: "My emotional responses seem very different from those of other people" (Boolean)
Dickman#22: "I don't like to do things quickly, even when I am doing something that is not very difficult." (Boolean)
Dickman#28: "I often get into trouble because I don't think before I act" (Boolean)
Dickman#29: "I have more curiosity than most people" (Boolean)
Golden#1: "I have not lived the right kind of life" (Boolean)
Eysenck#1: "Weakness in parts of your body" (Boolean)
Hopkins#39: "Heart pounding or racing" (0 (not at all) to 3 (extremely))
Hopkins#56: "Weakness in parts of your body" (0 (not at all) to 3 (extremely))
Hypomanic#1: "I consider myself to be an average kind of person" (Boolean)
Hypomanic#8: "There are often times when I am so restless that it is impossible for me to sit still" (Boolean)
TCI#231: "I usually stay away from social situations where I would have to meet strangers, even if I am assured that they will be friendly" (Boolean)

The interpretability and transparency of the rule list in FIG. 8 enables us to obtain further insights regarding the population in the CNP dataset. Indeed, similar to the binary classifier, FIG. 9 shows the mapping of all CNP subjects using the 4-class rule list. While the accuracy of the rule list as a multi-class classifier is not perfect, it is worth noting how just 7 questions out of a total of 578 are enough to produce a relatively balanced output among the rules, while significantly separating the label categories. Also note that even though each of the 13 questionnaires in the dataset has been thoroughly tested in the literature as a clinical instrument to detect and evaluate different traits and behaviors, the 7 questions picked by the rule list do not favor any of the questionnaires in particular. This is an indication that transdiagnostic classifiers are better obtained from different sources of data, and likely improve their performance as other modalities, such as mobile digital inputs, are included in the dataset.
Binary Classification Using Multi-Class Rule List
The performance of the multi-class classifier of FIG. 8 is further evaluated by using it as a binary classifier, i.e., the ADHD, BD, and SCHZ labels are replaced with Patients. Using the same 5-fold cross-validated models obtained in the multi-class section above, their performance is computed as binary classifiers, obtaining an accuracy of 0.77, an AUC of 0.80, and a Cohen's κ of 0.54. These values are on par with those in Table 2, showing that the method does not decrease performance by adding more categorical labels.
Example 3: Treatment Response to BTRX-246040
The disclosed systems and methods were used in a randomized, placebo-controlled study to identify patients that would respond to BTRX-246040 (LY2940094), a nociceptin receptor antagonist ("NOPA").
Details about the chemical structure and other properties, uses, and indications for BTRX-246040 are disclosed in J. M. Witkin et al., "Therapeutic Approaches for NOP Receptor Antagonists in Neurobehavioral Disorders: Clinical Studies in Major Depressive Disorder and Alcohol Use Disorder with BTRX-246040," the content of which is incorporated herein by reference in its entirety. Additionally, BTRX-246040, its uses, indications, treatments, and forms are disclosed in U.S. Pat. No. 8,232,289, filed Nov. 10, 2010, titled "Spiropiperidine Compounds as ORL-1 Receptor Antagonists," and U.S. Publication No. 2012/0214784, filed Aug. 23, 2012, titled "Spiropiperidine Compounds as ORL-1 Receptor Antagonists," both of which are incorporated by reference in their entirety herein. During the study disclosed herein, BTRX-246040 was administered once daily in patients with major depressive disorder without anhedonia. The study included 73 patients, with 38 randomized to BTRX-246040 and 35 randomized to the placebo. The BTRX group had 17 responders and the placebo group had 15 responders. The study included the following methods:
28-day screening period;
Eight (8) weeks of active treatment;
Off-drug follow-up after one to two weeks;
104 MDD patients randomized;
1:1 ratio stratified by SHAPS ≤ 4 and SHAPS > 4;
Dosage: 40 mg the first week, then 80 mg onwards when tolerated.
Additionally, the study included the schedule of assessments summarized in Table 4:

TABLE 4
Schedule of Assessments
Visits 1 through 8 correspond to screening (Day −28 to Day −7), baseline (Day 1, Week 0), treatment visits (Days 8, 15, 29, and 43; Weeks 1, 2, 4, and 6), end of treatment (Day 57, Week 8), and follow-up (Weeks 9 to 10). MADRS/CGI was assessed at all eight visits; HAMA/HADS/DARS at four visits; SHAPS and the pain question at five visits each; traumatic events at one visit; PRT/EEfRT and FERT at three visits each; and age/sex/ethnicity at two visits.

Accordingly, the patients received various assessments during various visits over the 8 weeks of the study. These included:
Clinical Scales
The following known clinical questionnaires were utilized at the indicated time points above:
Montgomery-Asberg Depression Rating Scale (MADRS);
Hamilton Anxiety Scale (HAMA);
Hospital Anxiety and Depression Scale (HADS-A/HADS-D);
Snaith-Hamilton Pleasure Scale (SHAPS);
Dimensional Anhedonia Rating Scale (DARS).
Tasks
The following tasks were administered to the patients, including mobile- or tablet-based versions of the tasks that gave the patient instructions and requested input from the patients through a user interface.
Probabilistic Reward Task (PRT)
The PRT task assesses objective measures of reward responsiveness. FIG. 11 is a schematic illustration of the task design for this study. For each trial, the subjects' task was to decide whether a short (11.5 mm) or a long (13 mm) mouth was displayed on a previously mouthless cartoon face presented on a display by a control system, by pressing either the 'z' or the 'I' key of a keyboard connected to a computer processor and the display of the user interface. In other examples, the keys could be displayed on a touch-screen interface of a tablet or a mobile device. When the subject pressed the correct response, they would sometimes be rewarded with a message such as "Correct!! You won 5 Cents". The subjects were told that the goal is to win as much money as possible and that not all correct responses would be rewarded. To evaluate response bias, asymmetric reinforcement was utilized: correct identification of either the short or the long mouth was rewarded three times more frequently (the "rich" stimulus) than correct identification of the other mouth (the "lean" stimulus).
The reinforcement allocation and key assignments were counterbalanced across subjects. The task was administered in three (3) blocks or sessions of 50 long versus 50 short mouths. The rich/lean versus long/short associations are balanced across subjects. FIG. 12 illustrates an example of the task implemented in a user interface of a mobile device with a touch screen. A patient would be presented with the image shown in FIG. 12, and the patient would then select, using the touchscreen user interface, the circle with the text "Short" on the left or "Long" on the right. In other examples, the stimulus and response buttons may appear as illustrated in FIG. 11. The local processor would then receive the user input, time stamp it, and record the information in a local memory and/or a database to accumulate the patient's responses. For instance, the control system would determine the time between when the mouth was displayed and the time stamp of receiving the patient's response to assess the patient's reaction time. Additionally, the control system would determine the PRT outcome measures described below with reference to FIG. 13, especially the response bias between the hit rates of the rich and lean stimuli. These measures may then be processed as input features to the various models disclosed herein. FIG. 13 illustrates the PRT outcome measures per block of the PRT task. Measures used include response bias, discriminability, reaction time, hit rate (rich), and hit rate (lean).
Effort-Expenditure for Rewards Task (EEfRT)
The EEfRT task measures the objective motivation component of reward processing. The patient chooses a hard or an easy task: (i) hard: the display requests the user click 100 times in 21 seconds using the non-dominant little finger; and (ii) easy: the display requests the user click 30 times in 7 seconds using the dominant index finger. Once the assessment is initiated, the control system sends instructions to display directions for the user to click a certain number of times after the user selects the hard or the easy assessment. Then, the control system sends instructions to display a reward amount and probability:
1) Amounts: $1 (easy); $1.24-$4.30 (hard)
2) Probabilities: 12% (low); 50% (medium); 88% (high)
Then, once the control system initiates the test, the clicks from the user's mouse or screen taps are recorded and time stamped to determine how many clicks the user finished within the time periods. FIG. 14 illustrates a schematic example of the displayed user interface items and the linear progression of the task. For instance, the user starts the task, views the probability, and selects easy or hard; once they are ready, they select ready and press the correct button. The control system will then determine how much money the user won.
Facial Expression Recognition Task (FERT)
The FERT task measures the bias in emotion recognition and processing. The control system sends instructions to the display to show images of humans with six different basic emotions (plus neutral):
Happiness;
Fear;
Anger;
Disgust;
Sadness;
Surprise; and
Neutral.
The subject is displayed buttons on the interface (in some examples) that allow the patient to select the emotion the patient believes matches the emotion expressed on the face in the image. In this example, ten (10) intensity levels of each emotion were presented. The outcomes measured by the test include:
the accuracy, overall and per intensity level;
misclassification;
average reaction time;
target sensitivity; and
response bias.
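Returning to the PRT outcome measures of FIG. 13, the log-transformed response bias and discriminability computed below follow one standard signal-detection formulation from the PRT literature; it is provided as an assumed formulation for illustration, not quoted from this disclosure.

```python
import math

def prt_measures(rich_hit, rich_miss, lean_hit, lean_miss):
    # Add 0.5 to every cell to guard against zeros (a common convention).
    rh, rm, lh, lm = (x + 0.5 for x in (rich_hit, rich_miss, lean_hit, lean_miss))
    response_bias = 0.5 * math.log10((rh * lm) / (rm * lh))      # log b
    discriminability = 0.5 * math.log10((rh * lh) / (rm * lm))   # log d
    hit_rate_rich = rich_hit / (rich_hit + rich_miss)
    hit_rate_lean = lean_hit / (lean_hit + lean_miss)
    return response_bias, discriminability, hit_rate_rich, hit_rate_lean

# Example: 40/10 rich hits/misses, 28/22 lean hits/misses in one block
print(prt_measures(40, 10, 28, 22))
```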
Demographics
In some examples, the interface requested the patient provide their demographic information (or it could be retrieved from a database). In some examples, this information was used as input into the classifiers:
Age;
Sex.
The primary outcome of the study was the MADRS total clinical scale at week 8. Predictive models were built using the disclosed systems and methods that label as high responders those patients whose MADRS score decreased by 50% from their initial baseline. In some models, the feature set included MADRS, HADS-A, HADS-D, Age, PRT, FERT, and EEfRT, with the scales and tasks at week 0 input as features.
Biochemical Biomarkers
In one example, biochemical markers or biomarkers, including Tumor Necrosis Factor, were also utilized to determine whether they could be useful to stratify patients as part of the disclosed Bayesian Decision Lists. The biochemical biomarkers tested included:
Nociceptin;
Interleukin 6;
Interleukin 1 Beta;
Interferon Gamma;
Interleukin 10;
Interleukin 2;
Tumor Necrosis Factor; and
C-Reactive Protein.
As discussed herein, these biomarkers were processed by the models disclosed herein for generating a Bayesian Rule List. Accordingly, in at least one example, Tumor Necrosis Factor was output as a rule in a Bayesian Decision List, as described further below.
Models to Stratify Patients
To build the models to stratify the patients from the data, forward selection using logistic regression with elastic net regularization was first utilized, as disclosed herein. This identified, from the full feature set that included the tasks, scales, and demographics, the top features having the greatest ability to separate patients into three groups: (i) BTRX-246040 responders, (ii) placebo responders, and (iii) non-responders. In this example, linear regression was first utilized to separate the groups using the top identified features as inputs. This was done in part by simulating a multi-verse scenario where each patient goes through both the drug arm and the placebo arm, and then taking the difference in the predicted Week 8 outcome scores across the two simulated arms (see Webb et al., "Personalized prediction of antidepressant v. placebo response: Evidence from the EMBARC study," Psychological Medicine, 49(7), 1118-1127 (2019), which is incorporated by reference herein in its entirety). As illustrated in FIG. 16, the linear regression model resulted in good identification of patients that were higher responders to BTRX-246040. The cutoff between groups was determined by a compromise between maximizing the effect size and maintaining an adequate sample size within each subgroup. In some examples, the top features derived from the forward selection model could be utilized to build predictive models that could separate new patients into the different responding groups. In those examples, the features could be pre-processed from the tasks, demographic data, and scales answers, and then input into a linear or logistic regression model to output classifications of new patients. In other examples, a rule miner and the BRL algorithm could be utilized to develop rules to separate the groups identified using forward selection. In that example, the output labels for the data used by the rule miner could be derived from the three groups separated by the linear regression model (groups and data shown in FIG. 16).
Then, literal rules could be developed using a Bayesian Rule List model that would separate the patients into these categories based on the features identified in forward selection, to output a Bayesian Decision List. Those rule lists could then be utilized to separate new patients based on whether they would respond to BTRX-246040, placebo, or neither. Thus, high responders to BTRX-246040 could be identified and treated with the drug. In other examples, the disclosed systems and methods could stratify patients to identify high responders to other neuropsychiatric drugs. The resulting algorithms could be saved on a remote database server, or locally, including on the memory of a handheld computing device that administers the scales and/or tasks disclosed herein. Accordingly, patients could be administered scales questionnaires and tasks, and submit demographic information, on a mobile device or other computing device. Next, the computing device and control system may process the data to be input as features into a Bayesian Rule List, and then output whether or not the patient is likely a high responder to BTRX-246040, or to other drugs in other examples.
Results
The group-level treatment effects in the disclosed study were similar between the treatment and placebo groups. Accordingly, in this study, BTRX-246040 did as well as the placebo across all subjects, as illustrated in FIGS. 15A-15B, which depict graphs and tables showing treatment effects. In addition, as illustrated in FIGS. 16A-16B, the disclosed classifiers were able to identify patients that would be higher responders to BTRX-246040 by a greater than 5-point change on the MADRS scale after the 8-week study. Additionally, the disclosed classifiers were able to identify patients that would be higher responders to the placebo. Furthermore, the table below illustrates that the logistic regression models built with forward selection (with elastic net regularization) had good accuracy and AUC in separating the high responders to BTRX-246040 and the high responders to placebo:

TABLE 5
Assessment of Models
BTRX Model (ROC-AUC = 0.72, Acc = 0.63): Age; HADS-A Total Score; HADS-D Total Score
PLA Model (ROC-AUC = 0.87, Acc = 0.81): EEfRT Completion Rate-Low; PRT Response Bias-Block 2; PRT Response Bias-Block 3

Additionally, FIGS. 17A-17D illustrate that the response prediction from baseline data is stable over time. Specifically, the models utilized only baseline data to make a prediction about the subjects at week 8, without access to intervening data after the subjects began the study. With only baseline data, the subjects identified by the models as responders to BTRX-246040 and placebo both maintained improved MADRS scores at weeks 1, 2, 4, and 6. Thus, with only baseline data, the models were able to identify higher responders to placebo and BTRX-246040 in a manner that was very consistent over time, a surprisingly accurate and stable stratification of treatment and placebo responders. FIGS. 18A-18F illustrate the top features that separated responders from non-responders to the placebo and to the BTRX-246040 treatment.
FIGS. 18A-18F illustrate the top features that separated responders from non-responders to the placebo and BTRX-246040 treatment. For example, FIG. 18A depicts a graph showing a top feature of HADS-A total score; FIG. 18B depicts a graph showing a top feature of HADS-D total score; FIG. 18C depicts a graph showing a top feature of PRT response bias (Block 2); FIG. 18D depicts a graph showing a top feature of PRT response bias (Block 3); FIG. 18E depicts a graph showing a top feature of age; and FIG. 18F depicts a graph showing a top feature of EEfRT completion rate. These features were identified with the Forward Feature Selection methods disclosed herein.

Based on the results, some of the classifiers disclosed herein increased accuracy when different modalities were included in the Bayesian Rule Lists, for instance clinical scales and task-based assessments. Accordingly, given the numerous modalities available for neuropsychiatric testing, there is a plethora of features available for input into various models. The disclosed systems and methods have an unprecedented ability to identify the most predictive features using forward feature selection, and then process those features with a rule miner to output understandable rule lists for accurately stratifying patients based on those features. In other examples, a rule miner could potentially be used on the broader list of features to output a rule list to stratify patients.

FIGS. 19A-19B illustrate an example of this combination approach that has been found to be very advantageous. As illustrated, the bar graphs in FIG. 19B depict the patient groups that were stratified using Forward Feature Selection and linear regression. However, based on those separations alone, it is not clear which features are most important to stratify the patients for each group. For instance, in FIGS. 18E-18F, it is not known which scales, tasks, or other input features would be most important to identifying patients that are higher responders to BTRX-246040 based on the outputs of the linear regression model. Therefore, the rule miner and BRL algorithm were applied to the results, and a set of rules was identified that specifically could identify higher responders to BTRX-246040 (see FIGS. 18A-18F). Interestingly, as illustrated in FIGS. 19A-19B, this only included the FERT task response bias to the angry expression, and a specific threshold of a HADS-A score. Accordingly, the rule miner and BRL algorithm are extraordinarily valuable in interpreting the basis for stratifying the groups, which also could allow one to design more efficient screening systems for patients (only certain tasks and scales would need to be administered in the future to screen patients, instead of administering all of the questions and all of the tasks).

FIGS. 20A-20B further illustrate additional Bayesian Decision Lists extracted from the data set and output by the BRL model after being assigned labels from the outputs of the Forward Feature Selection and linear regression models. These include Bayesian Decision Lists to identify patients that would respond to placebo. Thus, the disclosed systems and methods can generate Bayesian Rule Lists to screen out patients that are higher responders to placebo, in the design of clinical trials for instance. A sketch of how such a learned decision list could be applied at screening time follows.
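Once learned, a Bayesian Decision List is simply an ordered sequence of if-then rules, which makes deployment on a handheld screening device straightforward. The sketch below is hypothetical: the thresholds and the "fert_angry_bias"/"hads_a_total" feature names are illustrative stand-ins, not the actual rules shown in FIGS. 19A-19B.

    from typing import Callable

    # A decision list is an ordered list of (condition, label, posterior) entries;
    # the first matching condition fires, and the final entry is the default rule.
    Rule = tuple[Callable[[dict], bool], str, float]

    decision_list: list[Rule] = [
        (lambda p: p["fert_angry_bias"] > 0.2 and p["hads_a_total"] >= 11,
         "BTRX-246040 high responder", 0.78),
        (lambda p: p["prt_bias_block2"] < 0.05,
         "placebo high responder", 0.66),
        (lambda p: True, "non-responder", 0.54),   # default rule
    ]

    def classify(patient: dict) -> tuple[str, float]:
        # Walk the ordered rules; the first condition that holds decides.
        for condition, label, posterior in decision_list:
            if condition(patient):
                return label, posterior
        raise AssertionError("default rule guarantees a match")

    print(classify({"fert_angry_bias": 0.3, "hads_a_total": 12,
                    "prt_bias_block2": 0.1}))

Because only the features referenced by the fired rules are needed, a screening workflow could administer just those tasks and scales, as noted above.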
FIGS. 25A-25B illustrate a Bayesian Decision List that incorporated Tumor Necrosis Factor, a biochemical or wet biomarker, into the rule list. More specifically, FIG. 25A depicts a further pseudocode for a BRL output incorporating Tumor Necrosis Factor into the rule list, according to some implementations of the present disclosure; and FIG. 25B depicts graphs showing treatment responses between the BTRX group and the Rest group, and a bar graph showing a number of subjects identified by the rules, according to some implementations of the present disclosure. This is surprising, as this rule list combines four different and disparate types of modalities into a single, short Bayesian Decision List capable of stratifying patients: (i) demographics, (ii) clinical scales, (iii) tasks, and (iv) biochemical markers. Additionally, the rule list reliably separates higher responders to BTRX-246040. Accordingly, this data demonstrates that the disclosed systems and methods for generating Bayesian Decision Lists may surprisingly and accurately take into account even biochemical markers in combination with a variety of other biomarker modalities.

Example 4: Treatment Response to CERC-501

In another example, a phase 2a study known as FAST-MAS was run to evaluate the Kappa Opioid Receptor ("KOR") as a target for the treatment of mood and anxiety spectrum disorders. Additionally, CERC-501, its uses, indications, treatments, and forms are disclosed in PCT Publication No. WO2018170492, filed Mar. 16, 2018, titled "Kappa Opioid Receptor Antagonists and Products and Methods Related Thereto," which is incorporated by reference herein in its entirety. During the trial, CERC-501 was tested to see whether it engaged key neural circuitry related to the hedonic response. The FAST-MAS trial included a 30-day screening period, followed by 8 weeks of active treatment and 12 weeks of off-drug follow-up after baseline. The study included 89 patients randomized (of 163 enrolled), with 45 on CERC-501 and 44 on placebo. The patients received 10 mg daily for the 8 weeks of active treatment. Patients were eligible for enrollment if they met both:
(i) DSM-IV-TR criteria for at least one of:
MDD
Bipolar I or II Depressed
GAD
Social Phobia
Panic Disorder
PTSD; and
(ii) a SHAPS score of ≥20.

The diagnosis breakdown was according to the following table:

TABLE 6
Primary Diagnosis Breakdown
MINI DIAGNOSIS             TOTAL    PLACEBO    TREATMENT
MDD                        33       17         16
GAD                        11       5          6
BD I                       5        3          2
BD II                      4        0          4
Social Anxiety Disorder    4        2          2
PTSD                       2        2          0
Panic Disorder             2        1          1
Total                      61       30         31

Furthermore, the primary and secondary outcome measures were those included in Table 7 below. These include the fMRI, the SHAPS scale, and the PRT task as disclosed herein.

TABLE 7
Primary Outcome Measures
TYPE         MEASURE       DESCRIPTION
Primary      fMRI MID      Change in Ventral Striatal Activation Occurring in
             Task          Anticipation of Reward During the Monetary Incentive
                           Delay Task Measured by fMRI
Secondary    SHAPS         Clinical Anhedonia Measured by the Snaith-Hamilton
                           Pleasure Scale (SHAPS; Total Score)
Secondary    PRT Task      Change in Behavioral Measure of Anhedonia Using the
                           Probabilistic Reward Task

Furthermore, the schedule of scales assessed included the following timeline:

TABLE 8
Schedule of Scales Assessed
Visit:  1 (Screening), 2 (Baseline), 3 (Phone), 4, 5, 6, 7 (End of Treatment), 8 (Follow-Up)
Day:    D-30 to D-1, D0, D7, D14, D28, D42, D56, D84 (later visits +/−4 days)
Week:   0 (Baseline), 1, 2, 4, 6, 8, 12
SHAPS/CGI/TEPS/VAS/PRISE: assessed at seven of the eight visits
HAM-D/HAM-A/CPFQ: assessed at two of the visits
CSSRS: assessed at all eight visits

Results

When applied to the whole patient cohort, CERC-501 illustrated a difference in the outcome and treatment response. The data was first analyzed using a personalized advantage index and by identifying the top features through forward feature selection.
The following table indicates the top features identified using the forward feature selection process as disclosed herein.

TABLE 9
Top Features Identified Using Forward Feature Selection
KORA MODEL                               PBO MODEL
(ROC-AUC = 0.80, Acc = 0.73)             (ROC-AUC = 0.90, Acc = 0.82)
HAMD 3 (−): Suicide                      HAMD 4 (−): Initial Insomnia
TEPS 1 (−): Can't wait to see movie      HAMA 7 (−): General somatic
w/favorite actor                         symptoms
HAMD 16 (−): Weight loss                 SHAPS 13 (−): Get pleasure from
                                         helping others
CPFQ 7 (−): Mental acuity                HAMD 16 (+): Weight loss
SHAPS 5 (−): Enjoy a warm bath or        HAMA 4 (−): Insomnia
refreshing shower
HAMA 10 (−): Respiratory symptoms        HAMA 11 (+): GI symptoms

Interestingly, in this example the top features were all scales-modality features. Next, these features were processed using the disclosed systems and methods to output the Bayesian Decision List illustrated in FIGS. 26A-26B. More specifically, FIG. 26A depicts yet another pseudocode for a BRL output using a disclosed rule mining technique and a BRL model, according to some implementations of the present disclosure; and FIG. 26B depicts a graph showing treatment responses between the KORA group and the Rest group, according to some implementations of the present disclosure. For instance, a rule miner was applied to the features and outcomes to output a rule set, and a BRL model was applied to the rule set to output the decision list. This list reliably separated patients that responded to CERC-501, as illustrated in FIGS. 26A-26B. The impact of CERC-501 was greater on the patients identified using the Bayesian Decision Lists than the impact generally on the patients that received the active treatment, confirming that the disclosed systems and methods can reliably identify higher responders, including for drugs that target KOR.

Furthermore, FIGS. 27A, 27B, and 28 show additional rule lists that were generated according to the disclosed systems and methods and that were capable of identifying higher responders for CERC-501. More specifically, FIG. 27A depicts a pseudocode for a BRL output using a disclosed rule mining technique and a BRL model, according to some implementations of the present disclosure; FIG. 27B depicts a graph showing treatment responses between the KORA group and the Rest group, according to some implementations of the present disclosure; and FIG. 28 depicts an additional pseudocode for a BRL output using a disclosed rule mining technique and a BRL model, according to some implementations of the present disclosure. In some of these rule lists, task data was included in the rules, including the PRT task disclosed herein. Accordingly, the FAST-MAS study confirms that the disclosed technology may be utilized to generate rule lists that are capable of stratifying patients to identify higher responders to drugs that target the Kappa Opioid Receptor.

Example 5: Voice and Facial Modalities

In some examples, the disclosed technology may utilize data and features from audio and video recordings of patients performing speaking tasks. For instance, in some examples, the disclosed Bayesian Decision Lists may incorporate features from these speaking tasks to stratify patients (possibly in combination with other disclosed modalities, including scales).
Specifically, the speaking features may include the following modalities:
1) audio features from the patient's voice extracted from the recordings during a speaking task;
2) text features from the words and sentences spoken by the patient during a speaking task; and
3) video features from the facial expressions of the patient recorded during a speaking task.

Example Speaking Tasks

Accordingly, systems and methods may be utilized to record audio and video data while a patient is performing a speaking task. For instance, a patient may be asked to read aloud a passage, paragraph, or other text while a microphone and video camera record the patient speaking. The system may display the instructions on a display or provide audio instructions on the speaker of an interface. This will allow the system to identify audio and visual features relating to how a patient communicates certain passages. In other examples, a display may present questions to the patient (or questions may be asked of the patient over a speaker), and the microphone and video camera may record the answer provided by the patient. In this example, in addition to analyzing the audio and visual features of the response, the systems and methods may also analyze the answers and words chosen by the patient, and these may be inputs into the models disclosed herein.

Systems and Methods for Acquiring Audio and Visual Features of Speaking Tasks

Following are example systems and methods for capturing the audio, visual, and textual features during the speaking tasks. In some examples, a mobile device application will be used to perform the test and capture the data. In other examples, a variety of other computing devices could be utilized in a system that includes a microphone, speaker, camera, display, and interface. In some examples, only audio or only video data may be captured and/or input into the disclosed algorithms. For instance, a Bayesian Decision List may only include an audio feature or may only include a video (e.g., facial expression) feature. Therefore, to stratify patients, only audio or video data would respectively need to be recorded.

FIG. 29 presents an example system 700A, which can be configured to perform various methods of capturing audio and visual data during various tasks disclosed herein. In particular, system 700A includes a display 702; a user 704; a camera 706; a camera field of view 706a; a user interface 708 including a speaker; a remote computing device 710; and a microphone 712. The camera 706 captures visual data of an area in front of the camera (area 706a) and, in some examples, transmits the visual data to the display 702 and the remote computing device 710. As shown in FIG. 29, a user 704 may position the camera so that their head or face is in the view of the camera 706. In such an example, the camera 706 captures footage of the face of the user 704. In some examples, the camera 706 can be configured to take live video footage, photographs, or images/videos in non-visual wavelengths. In some examples, the camera 706 is configured to start or stop recording based on instructions from the remote computing device 710 or a local processor or computing device. For instance, the application or program running the process may be performed by a remote server, computing device, or a local processor. The camera 706 is communicatively coupled to the display 702 and the remote computing device 710 or a local computing device. In some examples, a smartphone will perform each of these functions.
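As a rough illustration of such a capture loop, the sketch below records synchronized video frames and audio on a commodity computer using OpenCV and the sounddevice library, while mirroring the camera feed to the display so the user can keep their face in frame. It is a minimal sketch under those assumed libraries, not the disclosed mobile application.

    import cv2
    import sounddevice as sd

    FS = 16_000  # audio sample rate (Hz)

    def record_speaking_task(seconds: int = 10, out_path: str = "task.avi"):
        cam = cv2.VideoCapture(0)                       # default camera
        ok, frame = cam.read()
        if not ok:
            raise RuntimeError("camera not available")
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"MJPG"),
                                 30.0, (w, h))

        # Start a non-blocking audio recording spanning the whole task.
        audio = sd.rec(int(seconds * FS), samplerate=FS, channels=1)

        t_end = cv2.getTickCount() + seconds * cv2.getTickFrequency()
        while cv2.getTickCount() < t_end:
            ok, frame = cam.read()
            if not ok:
                break
            writer.write(frame)                           # save the frame
            cv2.imshow("Keep your face in frame", frame)  # live preview
            if cv2.waitKey(1) & 0xFF == ord("q"):         # allow early stop
                break

        sd.wait()                                         # finish audio capture
        cam.release()
        writer.release()
        cv2.destroyAllWindows()
        return audio.reshape(-1)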
The user interface 708 is configured to receive input from a user 704. For example, the user interface 708 may include a keyboard, a touchscreen, a speaker, a mobile device, or any other device for receiving input, as known in the art. The user 704 enters data on the user interface 708 in response to prompts on the display 702, or may speak their answers, which are recorded by the microphone 712. For example, the display 702 outputs a series of mental health questions (or the questions may be asked over the speaker), and the user 704 inputs an answer to each question on the user interface 708 through various methods. The user interface 708 is configured to directly display the input on display 702 and is configured to relay the data to the remote computing device 710.

The microphone 712 is configured to receive auditory input, for example, from the user 704. The microphone is configured to start or stop recording based on instructions from the remote computing device 710. The microphone is configured to transmit audio data to the remote computing device 710. In some examples, the microphone can be on a user's smartphone.

The display 702 is configured to receive data from the camera 706, the remote computing device 710, and the user interface 708. For example, the display 702 displays the visual data captured by the camera 706. In another example, the display 702 displays input received from the user interface. The display 702 is directly coupled to the camera 706 and the microphone 712 in some examples; in other examples, the camera 706 and the microphone 712 send their data to the remote computing device 710, which then processes the data and instructs the display 702 according to the processed data. In other examples, the display 702 displays data received from the remote computing device 710. Example data from the remote computing device 710 includes questions from a mental health questionnaire, answer boxes, answer options, answer data, a mental health indicator, or any other information. In some examples, the display 702 is on a smartphone. The present disclosure also contemplates that more than one display 702 can be used in system 700A, as would be readily contemplated by a person skilled in the art. For example, one display can be viewable by the user 704, while additional displays are visible to researchers and not to the user 704. The multiple displays can output identical or different information, according to instructions by the remote computing device 710.

A remote computing device 710 can be communicatively coupled to a display 702, a camera 706, a user interface 708, and a microphone 712. For example, the communication can be wired or wireless. The remote computing device 710 can process and/or store input from the display 702, the camera 706, the user interface 708, and the microphone 712. In some examples, system 700A can be a user 704 with a unitary device, for example, a smartphone. The smartphone can have a display 702, a camera 706, a user interface 708, a computing device 710, and a microphone 712. For example, the user 704 can hold the smartphone in front of his or her face while reading text on the display 702 and responding to the mental health questionnaires.

Referring briefly to FIG. 30, an example interface design is shown. Similar labels are used for elements corresponding to FIG. 29. A first screen 1000A of the interface design displays text for a user to read. A second screen 1000B of the interface design displays a face of the user as video data is being recorded.
In some implementations, the first screen 1000A and the second screen 1000B are the same physical screen of an electronic device having the display 702 and the user interface 708. For example, the first screen 1000A and the second screen 1000B are displayed at two different points in time. FIG. 30 demonstrates how the disclosed system and methods can be performed on a local device, with ease of access for the user.

Test Application for Voice/Facial Recognition During Screening

FIG. 31 shows a flow chart of an example method 700B for executing a speaking task application on a user device and recording the audio and visual data during the test of the user's voice and facial expressions. First, at step 720, the system may control execution and termination of a test application. The test application can be a software application stored on a computing device (e.g., the remote computing device 710 of FIG. 29). Step 720 provides for executing the test application upon receiving an indication to initiate a test. In some examples, the indication comes from a user interface (e.g., the user interface 708 of FIG. 29) communicatively coupled to the computing device. Step 720 provides for executing the test application until the computing device receives an indication to stop the test. In some examples, this indication comes from the user interface. In some examples, the indication to stop the test includes determining, by the computing device, that the user's face is not within an image captured by a camera.

While the test is being executed according to step 720, method 700B proceeds to step 721. Step 721 provides for displaying a series of questions. An example series of questions includes questions from mental health questionnaires, and includes both text and answers for each question, or open-ended questions that will allow the patient to provide their own answers. In other examples, the system will display text for the user to read verbatim. In other examples, the system will provide questions using an audio modality over a speaker.

While the test is being executed according to step 720, method 700B can provide for step 722. Step 722 provides for displaying live video data. In some examples, live video data is collected from a camera positioned to capture an image in front of a display (e.g., camera 706 capturing visual data of user 704 positioned in front of the display 702, as shown in FIG. 30). In some examples, live video data is recorded and then displayed at a display; in other examples, live video data is simultaneously recorded and displayed. The display can be facing the user. This will allow the user to line up their face so that it is in the frame or field of view of the camera.

Step 723 provides for recording test video data and test audio data (e.g., from camera 706 and microphone 712 of FIG. 29). In some examples, the audio data and the video data are recorded in segments corresponding to the display of questions at step 721; in other examples, the data is collected in an uninterrupted stream while the questions or text are presented at step 721. In some examples, a microphone (e.g., microphone 712 of FIG. 29) records audio data upon determining, by the computing device, that the user is speaking. In some examples, the microphone stops recording audio data when the computing device determines that the user is not speaking.
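Before turning to the remaining steps, the face-in-frame stop condition of step 720 can be approximated with a stock face detector. The following is a minimal sketch assuming OpenCV's bundled Haar cascade; a production application might use a more robust detector.

    import cv2

    # OpenCV ships a pretrained frontal-face Haar cascade with the library.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_present(frame_bgr) -> bool:
        # Return True if at least one frontal face is detected in the frame.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return len(faces) > 0

    def run_until_face_lost(max_missed: int = 30):
        # Stop the test once the face is absent for max_missed frames (~1 s).
        cam, missed = cv2.VideoCapture(0), 0
        while missed < max_missed:
            ok, frame = cam.read()
            if not ok:
                break
            missed = 0 if face_present(frame) else missed + 1
            # ... record/display the frame here, as in steps 722-723 ...
        cam.release()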
Step 724 provides for receiving answers for each of the series of questions (the questions provided for in step 721). The answers are received at a user interface. In some examples, the answers include selection of a multiple-choice question, a textual response, or any other user input as contemplated by one skilled in the art. In other examples, the system will record the verbatim reading of the text. In some examples, answers to questions may be received through the microphone.

Step 725 provides for processing the answers and/or the audio and visual data of the user reading text received at step 724, and the test video data and the test audio data recorded at step 723. In some examples, the processing is performed at a computing device using a machine learning model and results in outputting a mental health indication of the user. In some examples of the present disclosure, step 725 performs processing of the answers, the test video data, and the test audio data. In some examples, the output mental health indication identifies a likelihood of the user having any one of several mental health disorders. The mental health disorders include a neuropsychiatric disorder, schizophrenia, and a bipolar disorder. In some examples, the mental health indication identifies whether the user is a patient or a healthy control. This model can then be used as a diagnostic tool. For example, additional mental health questionnaire data, voice data, and/or video data can be input into the model to determine a mental health indication of a patient.

Data Separation and Feature Identification

After the data is captured and recorded using the above systems and methods, the data may first be pre-processed to separate various modalities and features of the data. FIG. 32 illustrates a flowchart of an example data processing pipeline for reading-task-related data. In some examples, questions or text will first be displayed 3200 for the patient to answer or read aloud. Next, the data will be recorded 3210 while the user is speaking, as disclosed herein, through the microphone and, for instance, a front-facing camera on a smartphone or tablet in some examples. In some implementations, the data recorded at step 3210 includes visual data 3203 and audio data 3205. Then the data will be segmented 3220 so that it can be pre-processed to identify features 3240. For instance, the audio data 3205 may be segmented into audio and/or speech data 2409 of the user speaking, and this data may be processed using language processing algorithms to identify the textual information or answers 2410. Additionally, the video data may be processed to identify the face and facial features 2411. The data may be time-stamped so that audio, facial, and textual features may be linked in time, to form higher-level features.

Next, the features in each of the modalities must be identified 3240. Various algorithms for each of the modalities may be utilized to identify the features 2310. In some implementations, the data and the answers are processed at step 2310 using one or more models, such as the Bayesian rule list 2313 and/or any suitable machine learning method 2314. The output from processing the data and the answers can include a patient classification 2320. At step 2330, the output patient classification can be displayed on any suitable display device. Finally, the patient may be treated at step 2340 as described herein.
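The segmentation step 3220 can be illustrated with a simple energy-based voice activity detector that splits the recorded audio into speech segments and time-stamps them for later alignment with the video frames. This is only a sketch under assumed parameters; real pipelines typically use trained voice-activity and diarization models.

    import numpy as np

    def speech_segments(audio: np.ndarray, fs: int = 16_000,
                        frame_ms: int = 30, thresh_db: float = -35.0):
        # Return (start_s, end_s) spans where short-time energy exceeds a threshold.
        hop = int(fs * frame_ms / 1000)
        n = len(audio) // hop
        frames = audio[: n * hop].reshape(n, hop)
        # Short-time log energy per frame, relative to full scale.
        energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
        active = energy_db > thresh_db
        segments, start = [], None
        for i, a in enumerate(active):
            if a and start is None:
                start = i
            elif not a and start is not None:
                segments.append((start * frame_ms / 1000, i * frame_ms / 1000))
                start = None
        if start is not None:
            segments.append((start * frame_ms / 1000, n * frame_ms / 1000))
        return segments  # time stamps usable to align audio, text, and video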
Following are some low-level and high-level features that may be identified; however, this is just an example and is not comprehensive.

Audio/Speech Features

The audio features identified may include local features, some global waveform-level features, phoneme rate, demographics (gender, etc.), duration, speaking ratio, voice ratio, prosodic features, glottal and spectral features, or other suitable features. Some high-level features may include statistical functionals, regression functions, and local maxima/minima related functionals. Additionally, dimensionality reduction may be performed on these features using a variety of methods, which may include brute-force methods and principal component analysis ("PCA").

Text Features

The text features identified may include number of sentences, number of words, word embeddings, dictionary-based methods, and session-level features, for instance those discussed by Pampouchidou in "Depression Assessment by Fusing High and Low Level Features from Audio, Video, and Text," AVEC 2016, the content of which is incorporated herein by reference in its entirety.

Video/Facial Features

The video features identified may include facial action units, facial landmarks, or gaze direction, as described by Valstar et al. in "AVEC 2016: Depression, Mood, and Emotion Recognition Workshop and Challenge," the content of which is incorporated herein by reference in its entirety. Additional high-level features may include geometric features described by Syed Mohammed's dissertation for the University of Auburn, Alabama, in 2017, titled "The Application of Data Mining and Machine Learning in the Diagnosis of Mental Disorders," the content of which is incorporated herein by reference in its entirety. Additional high-level features may include correlation and covariance matrices. Additionally, dimensionality reduction may be performed on these features using a variety of methods, which may include brute-force methods and PCA.

Processing Features with Model

After features have been identified, they may be processed with a model 2310 as described herein, for instance a Bayesian Rule Decision List 2313. Accordingly, prior to processing these features, a Bayesian Rule List model and other processing models would be applied to generate a Bayesian Decision List that utilized rules with these features. After processing with the list, the technology may output the patient classification 2320 and display it on a display 2330 as previously described herein. Finally, the patient may be treated 2340 as described herein.

Additional Implementations of Computer & Hardware Implementation of Disclosure

It should initially be understood that the disclosure herein may be implemented with any type of hardware and/or software, and may be a pre-programmed general purpose computing device. For example, the system may be implemented using a server, a personal computer, a portable computer, a thin client, or any suitable device or devices. The disclosure and/or components thereof may be a single device at a single location, or multiple devices at a single, or multiple, locations that are connected together using any appropriate communication protocols over any communication medium, such as electric cable, fiber optic cable, or in a wireless manner. It should also be noted that the disclosure is illustrated and discussed herein as having a plurality of modules which perform particular functions. It should be understood that these modules are merely schematically illustrated based on their function for clarity purposes only, and do not necessarily represent specific hardware or software.
In this regard, these modules may be hardware and/or software implemented to substantially perform the particular functions discussed. Moreover, the modules may be combined together within the disclosure, or divided into additional modules based on the particular function desired. Thus, the disclosure should not be construed to limit the present disclosure, but merely be understood to illustrate one example implementation thereof. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some implementations, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server. Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks). Implementations of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer programs, e.g., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices). 
The operations described in this specification can be implemented as operations performed by a "data processing apparatus" on data stored on one or more computer-readable storage devices or received from other sources. The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.

A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices.
Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. CONCLUSION While various examples of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. Numerous changes to the disclosed examples can be made in accordance with the disclosure herein without departing from the spirit or scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above described examples. Rather, the scope of the disclosure should be defined in accordance with the following claims and their equivalents. Although the disclosure has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. The terminology used herein is for the purpose of describing particular examples only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, to the extent that the terms “including,” “includes,” “having,” “has,” “with,” or variants thereof, are used in either the detailed description and/or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising.” Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Furthermore, terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. | 112,500 |
11857323 | DETAILED DESCRIPTION Embodiments will now be described with reference to the figures. For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the Figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein. Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: “or” as used throughout is inclusive, as though written “and/or”; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; “exemplary” should be understood as “illustrative” or “exemplifying” and not necessarily as “preferred” over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description. Any module, unit, component, server, computer, terminal, engine or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Further, unless the context clearly indicates otherwise, any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified. Any method, application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media and executed by the one or more processors. The following relates generally to detection of human stress and more specifically to a system and method for camera-based stress determination. 
It has been determined that an individual's stress can be observed by measuring heart rate variability, including respiratory sinus arrhythmia. Given a stressful situation where an individual encounters a perceived threat, the autonomic nervous system generally works to adjust the internal state of the individual's body and react to the situation. The two branches of the autonomic nervous system, the sympathetic and parasympathetic nervous systems, contribute to the stress reaction. The sympathetic nervous system is generally concerned with challenges from the external environment, for example triggering the fight-or-flight response in stressful situations. The parasympathetic nervous system is generally concerned with returning the body to a resting state, or the state of homeostasis. It has been determined that stress generally occurs when the parasympathetic nervous system fails to maintain homeostasis. Thus, a determination of stress can be obtained by examining the level of homeostasis.

As part of the parasympathetic nervous system, the vagus nerve generally plays a large role in the regulation of homeostasis because it is responsible for signaling the heart, lungs, and digestive tract to slow down and relax. The activity of the vagus nerve, otherwise known as vagal tone, can thus be indicative of the level of homeostasis within the body. Generally, with increased vagal tone, the heart slows down, homeostasis is maintained, and stress level decreases. Generally, with decreased vagal tone, the heart quickens, homeostasis is disrupted, and stress level increases. It has been shown that parasympathetic vagal activity, as measured by an electrocardiogram (ECG), decreases during sessions involving stress. In addition, it has been shown that irregular increase and decrease of vagal tone can indicate chronic stress.

Although vagal tone can provide insight into an individual's stress level, changes in vagal tone generally cannot be measured directly. Rather, it has been found that vagal tone, and corresponding information involving stress, can be measured indirectly but reliably by one or more heart rate variability indices, for example respiratory sinus arrhythmia (RSA). RSA is the rhythmic increase and decrease in the beating of the heart, which occurs in the presence of breathing. Typically, heart rate increases with inhalation and decreases with exhalation. It has been shown that a decrease in resting RSA is indicative of increased stress.

As part of an approach to measuring RSA, a measurement of variations in heartbeat can first be obtained. In a particular approach, ECG can be used to observe heart rate variability (HRV), analyzing the time period in milliseconds between each R-wave to obtain an R-R interval (RRI). With information from the RRI, reliable inferences can be made about stress. An increasing RRI variation can indicate excitation of the vagus nerve as it works to decrease heart rate, and thus can indicate that stress level is low. A decreasing RRI variation can indicate an inhibited vagus nerve, allowing heart rate to increase, and thus can indicate that stress level is high. However, assessment of RRI alone may not be enough to determine vagal tone, because respiration is typically not the only contributor to variations in heart rate. As an example, there may be oscillations at frequencies slower than that of respiration, such as Traube-Hering-Mayer waves, which can provide information regarding the sympathetic nervous system rather than the parasympathetic nervous system. Thus, data from ECG recordings typically has to be filtered to obtain various heart rate variability (HRV) features, including a measurement of RSA, which in effect is an estimate of vagal tone and can provide information regarding an individual's stress level.
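To make the RRI-to-RSA step concrete, the sketch below computes two common vagally mediated HRV quantities from a series of R-R intervals: RMSSD in the time domain, and high-frequency (respiratory-band, roughly 0.15-0.4 Hz) spectral power, which is often taken as an RSA estimate. This is a generic illustration using SciPy, not the patent's specific filtering chain.

    import numpy as np
    from scipy.signal import welch
    from scipy.interpolate import interp1d

    def rmssd(rri_ms: np.ndarray) -> float:
        # Root mean square of successive R-R interval differences (ms).
        return float(np.sqrt(np.mean(np.diff(rri_ms) ** 2)))

    def hf_power(rri_ms: np.ndarray, fs: float = 4.0) -> float:
        # High-frequency (0.15-0.4 Hz) power of the RRI series, an RSA proxy.
        t = np.cumsum(rri_ms) / 1000.0                 # beat times (s)
        grid = np.arange(t[0], t[-1], 1.0 / fs)        # uniform resampling grid
        rri_even = interp1d(t, rri_ms, kind="cubic")(grid)
        f, pxx = welch(rri_even - rri_even.mean(), fs=fs,
                       nperseg=min(256, len(grid)))
        band = (f >= 0.15) & (f <= 0.4)
        return float(np.trapz(pxx[band], f[band]))     # band power (ms^2)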
Use of an ECG can be effective and reliable at assessing individual stress level; however, there are generally limitations with its utilization: ECG is generally expensive, invasive, and inconvenient. First, ECG is typically expensive because it requires the utilization of specialized equipment (for example, ECG electrodes, leads, and a machine). In addition, the interpretation of electrocardiographs typically requires specially trained medical professionals, whose time and expertise can be expensive. Second, ECG is typically invasive because its utilization of electrodes requires attachment of said electrodes to the human body, which can cause discomfort. Third, ECG is typically inconvenient because the application of electrodes typically necessitates the preparation of the skin surface to reduce skin impedance in order to obtain a clean ECG signal. The combination of these limitations means that ECG is particularly inconvenient because it cannot be used in all settings. In many cases, these limitations are problematic for the assessment of stress because individuals commonly experience stress at various times in their day, such as at work, home, or school. Yet, with ECG, individuals are typically limited to assessments of their stress during occasional and cumbersome visits to a medical facility with an ECG device in order to determine whether their stress level has reached an unhealthy state.

Referring now to FIG. 1, a system for camera-based heart rate tracking 100 is shown. The system 100 includes a processing unit 108, one or more video-cameras 100, a storage device 101, and an output device 102. The processing unit 108 may be communicatively linked to the storage device 101, which may be preloaded and/or periodically loaded with video imaging data obtained from one or more video-cameras 100. The processing unit 108 includes various interconnected elements and modules, including a transdermal optical imaging (TOI) module 110, a filtering module 112, a data science module 114, a bitplane module 116, a transformation module 118, a reconstruction module 120, a stress module 122, and an output module 124. In a particular case, the TOI module includes an image processing unit 104 and a filter 106. The video images captured by the video-camera 105 can be processed by the filter 106 and stored on the storage device 101. In further embodiments, one or more of the modules can be executed on separate processing units or devices, including the video-camera 105 or the output device 102. In further embodiments, some of the features of the modules may be combined or run on other modules as required.

The term "video", as used herein, can include sets of still images. Thus, "video camera" can include a camera that captures a sequence of still images.

Using transdermal optical imaging (TOI), the TOI module 110 can isolate hemoglobin concentration (HC) from raw images taken from a traditional digital camera. Referring now to FIG. 3, a diagram illustrating the re-emission of light from skin is shown. Light 301 travels beneath the skin 302, and re-emits 303 after travelling through different skin tissues. The re-emitted light 303 may then be captured by optical cameras 100. The dominant chromophores affecting the re-emitted light are melanin and hemoglobin.
Since melanin and hemoglobin have different color signatures, it has been found that it is possible to obtain images mainly reflecting HC under the epidermis, as shown in FIG. 4. Using transdermal optical imaging (TOI), the TOI module 110, via the image processing unit 104, obtains each captured image or video stream from the camera 105 and performs operations upon the image to generate a corresponding optimized hemoglobin concentration (HC) image of the subject. From the HC data, the HC can be determined. The image processing unit 104 isolates HC in the captured video sequence. In an exemplary embodiment, the images of the subject's face are taken at 30 frames per second using a digital camera 105. It will be appreciated that this process may be performed with alternative digital cameras, lighting conditions, and frame rates.

In a particular case, isolating HC can be accomplished by analyzing bitplanes in the sequence of video images to determine and isolate a set of the bitplanes that approximately maximize the signal-to-noise ratio (SNR). The determination of high-SNR bitplanes is made with reference to a first training set of images constituting the captured video sequence, in conjunction with blood pressure wave data gathered from the human subjects. In some cases, this data is supplied by other devices, for example, ECG, pneumatic respiration, continuous blood pressure, or laser Doppler devices, collected from the human subjects and received in order to provide ground-truth blood flow data to train the training set for HC change determination. A blood flow training data set can consist of blood pressure wave data obtained from human subjects by using one or more continuous blood pressure measurement devices as ground truth data; for example, an intra-arterial blood pressure measurement approach, an auscultatory approach, or an oscillometric approach. The selection of the training data set based on one of these three exemplary approaches depends on the setting in which the continuous blood pressure measurement system is used; as an example, if the human subject is in a hospital intensive care setting, the training data can be received from an intra-arterial blood pressure measurement approach.

Bitplanes are a fundamental aspect of digital images. Typically, a digital image consists of a certain number of pixels (for example, a width×height of 1920×1080 pixels), with each pixel of the digital image having one or more channels (for example, color channels red, green, and blue (RGB)), and each channel having a dynamic range, typically 8 bits per pixel per channel, but occasionally 10 bits per pixel per channel for high dynamic range images. An array of such bits makes up what is known as a bitplane. In an example, for each image of a color video, there can be three channels (for example, red, green, and blue (RGB)) with 8 bits per channel. Thus, for each pixel of a color image, there are typically 24 layers with 1 bit per layer. A bitplane in such a case is a view of a single 1-bit map of a particular layer of the image across all pixels. For this type of color image, there are therefore typically 24 bitplanes (i.e., a 1-bit image per plane). Hence, for a 1-second color video with 30 frames per second, there are at least 720 (30×24) bitplanes.
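As a concrete illustration of bitplane decomposition (not taken from the patent itself), the following NumPy sketch slices an 8-bit RGB frame into its 24 one-bit planes:

    import numpy as np

    def bitplanes(frame: np.ndarray) -> np.ndarray:
        # Decompose an 8-bit image of shape (H, W, C) into (C*8, H, W) bitplanes.
        # Plane index c*8 + b holds bit b (0 = least significant) of channel c.
        h, w, c = frame.shape
        planes = np.empty((c * 8, h, w), dtype=np.uint8)
        for ch in range(c):
            for bit in range(8):
                planes[ch * 8 + bit] = (frame[:, :, ch] >> bit) & 1
        return planes

    # Example: a random 1080p RGB frame yields 24 bitplanes of shape (1080, 1920).
    frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    print(bitplanes(frame).shape)  # (24, 1080, 1920)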
FIG. 8 is an exemplary illustration of bitplanes for a three-channel image (an image having red, green, and blue (RGB) channels). Each stack of layers is multiplied for each channel of the image; for example, as illustrated, there is a stack of bitplanes for each channel in an RGB image. In the embodiments described herein, Applicant recognized the advantages of using bit values for the bitplanes rather than using, for example, merely the averaged values for each channel. Thus, a greater level of accuracy can be achieved for making predictions of HC changes, and thus for the continuous blood pressure measurements disclosed herein. In particular, a greater accuracy is possible because employing bitplanes provides a greater data basis for training the machine learning model.

TOI signals can be taken from regions of interest (ROIs) of the human subject, for example the forehead, nose, and cheeks, and the ROIs can be defined manually or automatically for the video images. The ROIs are preferably non-overlapping. These ROIs are preferably selected on the basis that their HC is particularly indicative of the blood pressure measurement. Using the native images that consist of all bitplanes of all three R, G, B channels, signals that change over a particular time period (for example, 10 seconds) on each of the ROIs are extracted. The raw signals can be pre-processed using one or more filters, depending on the signal characteristics. Such filters may include, for example, a Butterworth filter, a Chebyshev filter, or the like. Using the filtered signals from two or more ROIs, machine learning is employed to systematically identify bitplanes that will significantly increase the signal differentiation (for example, where the SNR improvement is greater than 0.1 dB) and bitplanes that will contribute nothing or decrease the signal differentiation. After discarding the latter, the remaining bitplane images can optimally determine HC and HC changes.

With respect to bitplanes, a digital image consists of a certain number of pixels, typically referred to as a configuration of width-times-height (for example, 1920W×1080H). Each pixel has one or more channels associated with it. Each channel has a dynamic range, typically 8 bits per pixel per channel, but occasionally 10 bits per pixel per channel for high dynamic range images. For color videos, each image typically has three channels; for example, Red, Green, and Blue (RGB). In a particular case, there are 8 bits per channel. In some cases, additional channels may be available, such as thermal and depth. As such, a bitplane is a view of a single bit of an image across all pixels; i.e., a 1-bit image per bit per channel.

Machine learning approaches (for example, a Long Short Term Memory (LSTM) neural network, or a suitable alternative such as a non-linear Support Vector Machine) and deep learning may be used to assess the existence of common spatial-temporal patterns of hemoglobin changes across subjects. The machine learning process involves manipulating the bitplane vectors (for example, 24 bitplanes×30 fps) using the bit value in each pixel of each bitplane along the temporal dimension. In one embodiment, this process requires subtraction and addition of each bitplane to maximize the signal differences in all ROIs over the time period. In some cases, to obtain reliable and robust computational models, the entire dataset can be divided into three sets: the training set (for example, 80% of the whole subject data), the test set (for example, 10% of the whole subject data), and the external validation set (for example, 10% of the whole subject data).
The time period can vary depending on the length of the raw data (for example, 15 seconds, 60 seconds, or 120 seconds). The addition or subtraction can be performed in a pixel-wise manner. A machine learning approach, the Long Short Term Memory (LSTM) neural network, or a suitable alternative thereto, is used to efficiently obtain information about the improvement of differentiation in terms of accuracy, which bitplane(s) contribute the best information, and which do not, in terms of feature selection. The Long Short Term Memory (LSTM) neural network allows us to perform group feature selections and classifications. The LSTM machine learning algorithm is discussed in more detail below. From this process, the set of bitplanes to be isolated from image sequences to reflect temporal changes in HC is obtained for determination of blood pressure.

To extract facial blood flow data, facial HC change data on each pixel or ROI of each subject's body part image is extracted as a function of time when the subject is being viewed by the camera 103. In some cases, to increase the signal-to-noise ratio (SNR), the subject's body part can be divided into a plurality of regions of interest (ROIs). The division can be according to, for example, the subject's differential underlying physiology, such as the autonomic nervous system (ANS) regulatory mechanisms. In this way, data in each ROI can be averaged. The ROIs can be manually selected or automatically detected with the use of face tracking software. The machine learning module 112 can then average the data in each ROI. This information can then form the basis for the training set. As an example, the system 100 can monitor stationary HC changes contained by a selected ROI over time, by observing (or graphing) the resulting temporal profile (for example, shape) of the selected ROI HC intensity values over time. In some cases, the system 100 can monitor more complex migrating HC changes across multiple ROIs by observing (or graphing) the spatial dispersion (HC distribution between ROIs) as it evolves over time.

A Long Short Term Memory (LSTM) neural network, or a suitable alternative thereto, can be used to efficiently obtain information about the improvement of differentiation in terms of accuracy, which bitplane(s) contribute the best information, and which do not, in terms of feature selection. The Long Short Term Memory (LSTM) neural network allows the system 100 to perform group feature selections and classifications. The LSTM machine learning algorithm is discussed in more detail below. From this process, the set of bitplanes to be isolated from image sequences to reflect temporal changes in HC is obtained. An image filter is configured to isolate the identified bitplanes in subsequent steps described below.

To extract facial blood flow data, HC change data on each pixel of each subject's face image is extracted as a function of time when the subject is being viewed by the camera 105. In some other cases, to increase the signal-to-noise ratio (SNR) and reduce demand on computational resources, the system 100 can also use a region-of-interest approach. In this approach, the system 100 defines regions of interest on the image and, for each bitplane, sums the bit values of all pixels in each region and divides the sum by the number of pixels in that region. This gives the average bit value for each ROI in each bitplane.
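That per-ROI, per-bitplane averaging step can be written compactly with NumPy; the following sketch (illustrative only, with hypothetical boolean masks per ROI) pairs with the "bitplanes" helper shown earlier:

    import numpy as np

    def roi_bitplane_means(planes, masks):
        # Average each 1-bit plane over each ROI.
        # planes: (P, H, W) array of 0/1 bitplanes for one frame.
        # masks:  ROI name -> boolean (H, W) array selecting that region's pixels.
        # Returns ROI name -> length-P vector of mean bit values.
        return {name: planes[:, mask].mean(axis=1) for name, mask in masks.items()}

    # Example with hypothetical rectangular ROIs on a 1080p frame:
    planes = np.random.randint(0, 2, (24, 1080, 1920), dtype=np.uint8)
    forehead = np.zeros((1080, 1920), dtype=bool)
    forehead[100:250, 700:1200] = True
    cheek = np.zeros((1080, 1920), dtype=bool)
    cheek[500:700, 400:700] = True
    feats = roi_bitplane_means(planes, {"forehead": forehead, "cheek": cheek})
    print(feats["forehead"].shape)  # (24,) one mean per bitplane

Repeating this per frame yields, for each ROI, a 24-dimensional time series at the video frame rate, which is the signal that the bitplane selection and LSTM stages described herein operate on.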
The subject's face can be divided into a plurality of regions of interest (ROIs) according to, for example, their anatomy or differential underlying physiology. Machine learning approaches, including deep learning algorithms, (such as a Long Short Term Memory (LSTM) neural network or a suitable alternative such as a non-linear Support Vector Machine) may be used to assess the existence of common spatial-temporal patterns of hemoglobin changes across subjects. The Long Short Term Memory (LSTM) neural network or an alternative is trained on the transdermal data from a portion of the subjects (e.g., 70%, 80%, 90%) to obtain a multi-dimensional computational model for the facial blood flow. The models are then tested on the data from the remaining subjects. Thus, it is possible to obtain a video sequence of any subject and apply the HC extracted from selected bitplanes to the computational models to determine blood flow waves. For long-running video streams with changes in blood flow and intensity fluctuations, changes of the estimation and intensity scores over time may be reported, relying on HC data from a moving time window (e.g., 10 seconds). In an example using the Long Short Term Memory (LSTM) neural network, the LSTM neural network comprises at least three layers of cells. The first layer is an input layer, which accepts the input data. The second (and perhaps additional) layer is a hidden layer, which is composed of memory cells (seeFIG.5). The final layer is an output layer, which generates the output value based on the hidden layer using Logistic Regression. Each memory cell, as illustrated, comprises four main elements: an input gate, a neuron with a self-recurrent connection (a connection to itself), a forget gate and an output gate. The self-recurrent connection has a weight of 1.0 and ensures that, barring any outside interference, the state of a memory cell can remain constant from one time step to another. The gates serve to modulate the interactions between the memory cell itself and its environment. The input gate permits or prevents an incoming signal from altering the state of the memory cell. On the other hand, the output gate can permit or prevent the state of the memory cell from having an effect on other neurons. Finally, the forget gate can modulate the memory cell's self-recurrent connection, permitting the cell to remember or forget its previous state, as needed. The equations below describe how a layer of memory cells is updated at every time step $t$. In these equations, $x_t$ is the input array to the memory cell layer at time $t$.
In our application, this is the blood flow signal at all ROIs, $\vec{x}_t = [x_{1t}, x_{2t}, \ldots, x_{nt}]$; $W_i$, $W_f$, $W_c$, $W_o$, $U_i$, $U_f$, $U_c$, $U_o$ and $V_o$ are weight matrices; and $b_i$, $b_f$, $b_c$ and $b_o$ are bias vectors. First, we compute the values for $i_t$, the input gate, and $\tilde{C}_t$, the candidate value for the states of the memory cells at time $t$:

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$$
$$\tilde{C}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$$

Second, we compute the value for $f_t$, the activation of the memory cells' forget gates at time $t$:

$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$$

Given the value of the input gate activation $i_t$, the forget gate activation $f_t$ and the candidate state value $\tilde{C}_t$, we can compute $C_t$, the memory cells' new state at time $t$:

$$C_t = i_t * \tilde{C}_t + f_t * C_{t-1}$$

With the new state of the memory cells, we can compute the value of their output gates and, subsequently, their outputs:

$$o_t = \sigma(W_o x_t + U_o h_{t-1} + V_o C_t + b_o)$$
$$h_t = o_t * \tanh(C_t)$$

Based on the model of memory cells, for the blood flow distribution at each time step, we can calculate the output from memory cells. Thus, from an input sequence $x_0, x_1, x_2, \ldots, x_n$, the memory cells in the LSTM layer will produce a representation sequence $h_0, h_1, h_2, \ldots, h_n$. The goal is to classify the sequence into different conditions. The Logistic Regression output layer generates the probability of each condition based on the representation sequence from the LSTM hidden layer. The vector of the probabilities at time step $t$ can be calculated by:

$$p_t = \mathrm{softmax}(W_{output} h_t + b_{output})$$

where $W_{output}$ is the weight matrix from the hidden layer to the output layer, and $b_{output}$ is the bias vector of the output layer. The condition with the maximum accumulated probability will be the predicted condition of this sequence. The heart rate tracking approach, used by the system100on the HC change data from the TOI module110, utilizes adaptive weighting of pixels or multiple regions-of-interest (ROIs), and uses noise-minimizing criteria to control the weights. The heart rate tracking approach also utilizes a Hilbert transform to extract a coherent signal for the heartbeat. Advantageously, the accuracy when measured against 'ground truth' electrocardiogram (ECG) data indicates that the estimated "beats-per-minute" (BPM) of the heartbeat recovery approach is typically consistent within +/−2 BPM of the ECG data. The HC data captured by the TOI module110, as described herein, of a human subject's face, as either 'live' or previously recorded, is used as the source data for determining the subject's heart rate. The facial blood flow data can then be used for estimation of related parameters such as the average heart rate in BPM. In order to estimate the BPM of the human subject, the TOI module110detects, recovers and tracks the valid occurrences of the subject's heartbeat. The system100, through its various modules as described herein, then converts these periodic occurrences into an instantaneous statistic representing the average count as BPM. This instantaneous statistic is then continuously updated. Advantageously, this approach has a data-sampling rate that is equal to the video acquisition frame rate, specified as "frames-per-second" (FPS). This provides a continuous per-frame estimation of the instantaneous heart rate. Advantageously, the embodiments described herein can employ the hemoglobin activity captured by the TOI module110to gather information regarding, for example, an individual's heart rate, RRI, and stress level from determining facial hemoglobin activity that is at least partially controlled by the autonomic nervous system (ANS).
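Returning to the memory-cell update above, the following is a minimal NumPy sketch of a single LSTM step implementing exactly those equations, including the $V_o C_t$ term in the output gate; the parameter-dictionary layout and dimensions are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One memory-cell layer update, following the equations above.

    x_t: input vector (e.g., blood flow signal at all ROIs) at time t.
    h_prev, c_prev: hidden state and cell state from time t-1.
    p: dict of parameters W_*, U_*, V_o (matrices) and b_* (vectors).
    """
    i_t = sigmoid(p["W_i"] @ x_t + p["U_i"] @ h_prev + p["b_i"])      # input gate
    c_hat = np.tanh(p["W_c"] @ x_t + p["U_c"] @ h_prev + p["b_c"])    # candidate state
    f_t = sigmoid(p["W_f"] @ x_t + p["U_f"] @ h_prev + p["b_f"])      # forget gate
    c_t = i_t * c_hat + f_t * c_prev                                  # new cell state
    o_t = sigmoid(p["W_o"] @ x_t + p["U_o"] @ h_prev
                  + p["V_o"] @ c_t + p["b_o"])                        # output gate
    h_t = o_t * np.tanh(c_t)                                          # cell output
    return h_t, c_t

def softmax_output(h_t, W_out, b_out):
    """Logistic Regression output layer: condition probabilities p_t."""
    z = W_out @ h_t + b_out
    e = np.exp(z - z.max())                 # numerically stable softmax
    return e / e.sum()
```

Running lstm_step over an input sequence $x_0, \ldots, x_n$ produces the representation sequence $h_0, \ldots, h_n$, from which softmax_output yields per-condition probabilities at each time step, as described above.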
As ANS can be involved in responding to stress, certain regions of the individual's face can reflect these responses. In a particular case, the sympathetic branch of ANS controls facial blood flow of the eyelids, cheeks, and chin. The parasympathetic branch controls facial blood flow of the nose and ears. In some embodiments, given that the parasympathetic branch has been determined to play a role in maintaining homeostasis, and thus can be responsible for changes in stress level, particular attention can be paid to hemoglobin activities in the nose and ears of an individual. In the embodiments described herein, TOI images of hemoglobin activity can be used to determine heart rate and RRI. This information can be plotted, such as on a Poincare scatter plot, and analyzed to determine stress level. Advantageously, the present inventors have determined that TOI can be used to obtain accurate measures of individual stress level based on facial blood flow information. Turning toFIG.2, a flowchart for a method for camera-based stress determination200is shown. At block202, blood flow information is extracted from a video captured by the camera105using transdermal optical imaging of a human individual by the TOI module110, as described herein, for HC at defined regions-of-interest (ROI). In a particular case, the ROIs are located on the individual's face. In addition, the TOI module110records dynamic changes of such HC over time. For each video, the TOI module110determines heart rate based on blood flow information extracted through the transdermal optical imaging (TOI) approach described herein. Melanin and hemoglobin are typically the primary chromophores that influence light-tissue interaction in the visible spectrum, approximately 400-700 nm. It has been determined that absorbance of hemoglobin, whether oxygenated or deoxygenated, generally decreases sharply in the red spectral region (approximately >590-600 nm). It has also been determined that absorbance of melanin generally follows a monotonic decrease in absorption with increased wavelength. This characteristic difference in absorption between hemoglobin and melanin permits the TOI module110to separate images reflecting skin hemoglobin concentration from those reflecting skin melanin concentration. The camera105captures images in multiple bitplanes in the Red, Green, and Blue (RGB) channels (seeFIG.3). The TOI module110generally selects bitplanes that are most reflective of the hemoglobin concentration changes and discards those that are not based on the color signature differences of hemoglobin and melanin (as described herein). In some cases, cardiovascular data from a physiological measurement system, such as an ECG, can be used as ground truth data for selection of the bitplanes. In this case, given that the facial vasculature is generally an integral part of the cardiovascular system, the hemodynamic changes in the face can correspond closely to the cardiovascular activities obtained from the physiological measurement system. At block204, in order to select the bitplanes, the TOI module110reduces dimensionality to defined regions of interest (ROIs). ROIs can be defined based on how blood flows and diffuses in the face or another part of the human skin surface, or according to other human anatomical features. For example, for the face, the TOI module can define nine ROIs: Forehead Small, Nose Between Eyes, Nose Bridge Full, Nose Tip Small, Right Cheek Narrow, Left Cheek Narrow, Upper Lip, Lower Lip, Chin Small. 
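As a concrete (and purely illustrative) rendering of such ROI definitions, the sketch below encodes the nine named facial ROIs as rectangles in coordinates normalized to a detected face bounding box. Every coordinate value is a hypothetical placeholder, since the disclosure does not specify ROI geometry; the resulting masks can feed the roi_average_bit_values sketch given earlier.

```python
import numpy as np

# Each ROI is an (x0, y0, x1, y1) rectangle in coordinates normalized to a
# detected face bounding box (0..1 in each axis). Values are hypothetical.
FACE_ROIS = {
    "Forehead Small":     (0.35, 0.05, 0.65, 0.18),
    "Nose Between Eyes":  (0.42, 0.30, 0.58, 0.38),
    "Nose Bridge Full":   (0.44, 0.30, 0.56, 0.55),
    "Nose Tip Small":     (0.45, 0.55, 0.55, 0.62),
    "Right Cheek Narrow": (0.12, 0.45, 0.30, 0.65),
    "Left Cheek Narrow":  (0.70, 0.45, 0.88, 0.65),
    "Upper Lip":          (0.38, 0.66, 0.62, 0.72),
    "Lower Lip":          (0.38, 0.74, 0.62, 0.80),
    "Chin Small":         (0.42, 0.84, 0.58, 0.95),
}

def roi_mask(shape, rect):
    """Boolean mask for one ROI rectangle over an (H, W) face crop."""
    h, w = shape
    x0, y0, x1, y1 = rect
    mask = np.zeros((h, w), dtype=bool)
    mask[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)] = True
    return mask
```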
An example of these ROIs is illustrated inFIG.6. For each ROI, the TOI module110obtains a raw temporal signal for the specific bitplane by averaging image values on each bitplane of each channel to reduce dimensionality. In this approach, the TOI module110defines regions of interest on the image. For each bitplane, the TOI module110sums the bit values of all pixels in each region and divides the sum by the number of pixels in that region. This gives the average bit value for each ROI in each bitplane. Machine learning techniques, as described herein, can then be applied to obtain the best weights for all the ROIs in all the bitplanes, such that the system100can optimally predict the individual's stress level. In some cases, the HC data from each ROI are treated as an independent signal. Thus, the HC data for each ROI is routed through a separate, individual corresponding signal processing path (also known as a chain) which handles the specific TOI signal originating from a unique location on the facial image. In this way, multiple ROIs generate multiple signals which are independently yet concurrently processed. At block206, the filtering module112band pass filters the raw signals in the pulse band (approximately 0.5 Hz to 2.5 Hz) from each channel. The present inventors have determined that if a particular bitplane contains information about systemic cardiovascular activity, such information can manifest itself in this band. At block208, the data science module114trains a machine learning model using the band pass filtered raw data from the RGB channels as the input and the ground truth pulse data from the physiological system as the target. A matrix of bitplane composition weights for an individual is obtained. At block210, the bitplane module116uses each individual's matrix of bitplane composition weights to select bitplanes from each frame of the individual's video images. In some cases, the TOI module110and/or the bitplane module116can track the individual's face in each frame and define the ROIs automatically. At block212, with the bitplanes selected, the TOI module110obtains the individual's raw facial blood flow signals from each ROI from the camera105. At block214, in some cases, the transformation module118applies transformations to the filtered ROI signal to provide a principal frequency component of the TOI signal. This component can correspond to a periodic heart band frequency. In a particular case, the transformation can comprise using a fast Fourier transform (FFT) and band pass filtering around the heart rate band (for example, 0.5 Hz to 2 Hz). At block216, using the principal frequency component, the reconstruction module120can reconstruct peaks of the individual's heartbeat to determine heart rate and determine intervals between heartbeats (i.e., RRI). Having determined the peaks of heartbeat and determined RRIs, the stress module122determines a stress level for the individual based on approaches using the frequency domain, the time domain, or dynamic systems. In an example, at block218, the stress module122plots the RRIs, for example on a Poincare plot for indexing heart rate variability (HRV). In a particular case, the stress module122plots each RRI against the next RRI on the plot, with RR(n) on the x-axis vs. RR(n+1) on the y-axis.
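Before the walkthrough resumes with blocks 220 to 226, the following is a minimal sketch of the band-pass filtering of block 206 and the FFT-based principal-frequency extraction of block 214 (Python with SciPy; the Butterworth design and filter order are assumptions consistent with the filters named earlier in the disclosure, not requirements of it).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_band_filter(signal, fs, low=0.5, high=2.5, order=4):
    """Band-pass filter a raw ROI signal in the pulse band (~0.5-2.5 Hz)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)            # zero-phase filtering

def principal_frequency(signal, fs, low=0.5, high=2.0):
    """Return the dominant frequency (Hz) within the heart rate band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= low) & (freqs <= high)
    return freqs[band][np.argmax(spectrum[band])]

# Example: a 60 fps, 10 second ROI signal with a ~1.2 Hz (72 BPM) pulse.
fs = 60.0
t = np.arange(0, 10, 1 / fs)
roi_signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
bpm = principal_frequency(pulse_band_filter(roi_signal, fs), fs) * 60
```

Multiplying the dominant pulse-band frequency by 60 converts it to an instantaneous heart rate in BPM, which block 216 then refines by reconstructing individual heartbeat peaks.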
At block220, the stress module122determines a second standard deviation of points along a line of identity to obtain "SD2." At block222, the stress module122determines a first standard deviation of points perpendicular to the line of identity to obtain "SD1." In an example, the line of identity can be obtained using regression analysis or another suitable approach. At block224, the stress module122determines an indicator of stress by dividing SD2 by SD1. At block226, the output module124outputs the stress determination to an output device102; for example, to a computer monitor, a touchscreen, an LCD screen on a wearable device, an audible device, or the like. The present inventors determined, through scientific testing, that TOI can non-invasively and accurately measure individual stress levels. As an example of such testing, individuals were presented short films, a neutral film for their resting period and a film to elicit a high-arousal emotion. Each individual's skin surface (in this case, their face) was recorded while they viewed the films. Transdermal facial blood flow data was extracted from pixels of each frame of the videos capturing the individuals' faces, as described herein. As a control, an ECG was also attached to the individuals as they watched the films to compare the data. In an example of such testing, seventy-nine healthy adults above 18 years of age (34 males; Mean Age=23.704, SD: 7.367) participated. Of the 79 participants, 19 participants completed the study twice and 20 participants completed the study thrice. Participants were told that they would be presented with a relaxing film, the film being an animated depiction of clouds moving through the sky for two minutes. In this example, ECG data was acquired using a BIOPAC™ physiological measurement system with an electrocardiogram amplifier module (ECG100C) connected at a 250-Hz sampling rate. Electrodes were placed on participants based on Einthoven's triangle: near the right shoulder, left shoulder, and right hip. In this example, TOI image sequences were captured using a CCD camera angled to record the participants' faces at 60 frames per second. In this example, the accuracy of the TOI approach of the embodiments described herein was compared with measurements obtained with the BIOPAC ECG. Correlation coefficients of TOI and BIOPAC measurements were determined, specifically for measures of heart rate and standard deviation 2 (SD2) divided by standard deviation 1 (SD1); i.e., mental stress. These stress scores were transformed into a stress index. In this case, the Fisher z-transformation was used to transform the correlation coefficients into z-values. A z-value is a standard score that represents the number of standard deviations the raw score lies away from the population mean. This allows an examination of the data on a normal distribution curve and allows for a determination of where an individual's stress score falls on a stress index. For example, the stress index can assume a mean of zero and a standard deviation of 1. A stress index of zero indicates an average stress level, a stress index of 1 indicates a person's stress level is 1 standard deviation above the average, and a stress index of −2 indicates a person's stress level is 2 standard deviations below the average. After obtaining stress indexes based on TOI and/or BIOPAC ECG, correlation coefficients of stress indexes were calculated to determine the correspondence between standard scores of heart rate and SD2/SD1 as obtained by TOI and the BIOPAC ECG.
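A short sketch of the standardization just described follows: the Fisher z-transformation maps a correlation coefficient to a z-value, and a raw SD2/SD1 score is mapped onto a zero-mean, unit-variance stress index. The use of an explicit population mean and standard deviation to anchor the index is an illustrative assumption; the numeric values shown are not from the study data.

```python
import numpy as np

def fisher_z(r):
    """Fisher z-transformation of a correlation coefficient r."""
    return np.arctanh(r)

def stress_index(score, population_mean, population_sd):
    """Standard score: SDs by which a raw SD2/SD1 score departs from the mean.

    An index of 0 is average stress; +1 is one standard deviation above
    average; -2 is two standard deviations below average.
    """
    return (score - population_mean) / population_sd

# Illustrative values only (not from the study data).
print(stress_index(score=2.4, population_mean=2.0, population_sd=0.4))  # -> 1.0
```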
A correlational analysis was conducted to examine the relationship between physiological measurements obtained from the embodiments described herein, using TOI, and those obtained with the BIOPAC ECG. A correlation between heart rate measurements obtained from TOI and BIOPAC was determined. It was found that there was a positive correlation between the two instruments, r=0.981. This extremely strong, positive correlation between measurements of heart rate obtained from TOI and those obtained from the BIOPAC ECG seems to indicate that TOI was able to detect heart rate approximately as accurately as the BIOPAC ECG (seeFIG.7A). The correlation between mental stress measurements obtained from TOI and BIOPAC was also determined. SD1 and SD2 were obtained from both instruments. SD1 can be defined as the dispersion (standard deviation) between points in the direction perpendicular to the line of identity on the Poincare plot. SD1 reflects the short-term variation of heart rate caused by RSA; thus it can indicate the activation of the sympathetic nervous system. SD1 measurements can be obtained using the following formula:

$$SD1 = \frac{\sqrt{2}}{2}\, SD(RR_n - RR_{n+1})$$

SD2 can be defined as the dispersion (standard deviation) between points along the line of identity on the Poincare plot. SD2 reflects the long-term variation of heart rate caused by RSA; thus it can indicate the activities of the sympathetic and parasympathetic nervous system. SD2 measurements were obtained using the following formula:

$$SD2 = \sqrt{2\, SD(RR_n)^2 - \frac{1}{2}\, SD(RR_n - RR_{n-1})^2}$$

SD2/SD1 was determined as the ratio of dynamic change in the heart rate variability time series. SD2/SD1 reflects the relationship between the sympathetic and parasympathetic nervous system, which can be used as an indicator of individual stress. It was found that there was a positive correlation between the measurements of mental stress obtained from TOI and BIOPAC, r=0.903. This strong, positive correlation between measurements of mental stress obtained from TOI and BIOPAC seems to indicate that the TOI was able to determine mental stress approximately as accurately as the BIOPAC (seeFIG.7B). Thus, there were strong, positive correlations between physiological measurements obtained from TOI and those obtained from the BIOPAC ECG. Advantageously, the embodiments described herein were found to provide a non-invasive approach to determine changes in human physiology, specifically heart rate and stress level, with at least the same accuracy as other invasive and expensive approaches. Measurements of SD2/SD1 using the embodiments described herein corresponded strongly with those from the BIOPAC approach, signifying that the present approach is able to determine stress at least as accurately as the BIOPAC approach. The present embodiments can advantageously be used, for example, to avoid much of the cost, inconvenience, and expense currently incurred in determining heart rate variability (HRV) and stress by other approaches, such as with an ECG. ECG, in particular, is invasive in that it requires preparation of the patient's skin and involves the attachment of electrodes, which can be uncomfortable for some individuals. It can also be difficult to attach ECG electrodes onto certain individuals with a tendency to sweat excessively (e.g., those with diaphoresis) and at extremely humid locations, causing spontaneous detachment of electrodes from the individual and resulting in noisy and likely inaccurate ECG data.
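Returning to the SD1 and SD2 formulas above, they translate directly into code; a minimal sketch follows (Python/NumPy, with the sample standard deviation chosen as an assumption, since the disclosure does not specify the estimator).

```python
import numpy as np

def poincare_sd1_sd2(rri):
    """SD1, SD2 and the SD2/SD1 stress indicator from an RRI series.

    rri: 1-D array of R-R intervals (e.g., in milliseconds).
    SD1 is the dispersion perpendicular to the line of identity;
    SD2 is the dispersion along it, per the formulas above.
    """
    rri = np.asarray(rri, dtype=float)
    diff = rri[:-1] - rri[1:]                # successive RRI differences
    sd1 = (np.sqrt(2) / 2) * np.std(diff, ddof=1)
    sd2 = np.sqrt(2 * np.std(rri, ddof=1) ** 2
                  - 0.5 * np.std(diff, ddof=1) ** 2)
    return sd1, sd2, sd2 / sd1

# Example with a short synthetic RRI series (milliseconds).
sd1, sd2, stress = poincare_sd1_sd2([812, 790, 805, 821, 798, 810, 795])
```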
ECG equipment is also very expensive such that it is not commonly included in regular health examinations around the world, meaning that many people do not have easy access to procedures that inform them of their cardiovascular health or stress level. The present embodiments advantageously provide an approach that is non-invasive, not susceptible to individual sweatiness, and relatively inexpensive. The present embodiments are non-invasive in that they require neither the preparation of the patient's skin nor the attachment of anything to the patient's body. This can minimize the amount of time medical staff spend preparing patients for their physiological assessments. In addition, fewer people are likely to have reservations regarding examinations of their cardiovascular health. Since the present embodiments do not require the attachment of electrodes onto the human body, they also do not require the individual to be assessed under specific conditions (for example, devoid of any skin condition and in a non-humid environment). Thus, more people can have the opportunity to measure their stress level. The present embodiments also generally require less expensive equipment to operate, and can be readily implemented in various settings, allowing stress to be monitored on a regular basis. In various embodiments, the camera can be directed to the skin of any body part or parts, such as for example the hand, the abdomen, the feet, or the like. In these cases, the ROIs can be determined based on the structure of such body part. From these body areas, the system may also extract dynamic hemoglobin changes to determine stress level as described herein. The foregoing embodiments may be applied to a plurality of fields. In one embodiment, the system may be installed in a smartphone device to allow a user of the smartphone to measure their stress level. In another embodiment, the system can be used in police stations and border stations to monitor the stress levels of suspects during interrogation. In yet further embodiments, the system can be used in medical or psychiatric clinics for practitioners to monitor patients. Other applications may become apparent. Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the claims appended hereto. The entire disclosures of all references recited above are incorporated herein by reference.
11857324 | DETAILED DESCRIPTION A first system10for monitoring the operational state of an aircraft12crew is schematically depicted inFIGS.1and2. The system10is intended to be connected to or integrated with a central avionics system14comprising a central avionics unit16, at least one display unit18located in a control interface of the aircraft12and at least one human-machine interface19enabling the crew to interact with the central avionics unit14, in particular to command the functional systems of the aircraft12. The control interface of the aircraft12is for example located in the aircraft12itself (in the cockpit), or in a remote control room of the aircraft12(in a ground station). In particular, the central avionics unit16is connected to aircraft12equipment, a component of the aircraft's functional systems. The functional systems of the aircraft12include, for example, systems20for measuring the state of the aircraft, systems22for external communication, systems24for activating aircraft commands, and systems23for navigation and operational mission management. The measurement systems20comprise, for example, components comprising sensors for measuring parameters external to the aircraft, such as temperature, pressure or speed, sensors for measuring parameters internal to the aircraft and its various functional systems and positioning sensors, such as GPS sensors, inertial units, and/or an altimeter. The external communication systems22include, for example, components comprising radio, VOR/LOC, ADS, DME, ILS, radar systems, and/or satellite communication systems. The control systems24include components comprising actuators for operating aircraft controls, such as flaps, control surfaces, pumps, mechanical, electrical and/or hydraulic systems, and software actuators for configuring the avionics states of the aircraft. The navigation and operational mission management systems23include, for example, a flight management system, and possibly a mission task management system. These systems are capable of identifying the flight phase and assisting pilots in navigating the aircraft along a trajectory, and in managing the tasks associated with navigation. They are suitable for producing data for use by the monitoring system10such as identification of the current flight phase, aircraft altitude, time of day, etc. The various systems20to24are connected to the central avionics unit14, for example digitally, by at least one data bus25running on an internal network within the aircraft12. The data bus25is able to carry in particular the data required by the system10to operate. These data are, for example, avionics data, in particular control activation data on the interface19, Cursor Control Device (CCD) data for controlling the interface19, or other user interaction data with the interface19, in particular on the aircraft's displays18. When at least one crew member performs actions interacting with the central avionics system14, for example operates commands, or interacts with pointing and cursor designating devices, or with touch screens on the aircraft, first high design assurance level data is generated. The initial high design assurance level data thus generated usually identifies which crew member is performing these interaction actions. The monitoring system10is also connected to a crew-monitoring sensor system26, which is not integrated into the central avionics system14. 
Here, the sensors of the crew monitoring sensor system26produce data, which may be high design assurance level data and/or lower-level design assurance data, and which may also be carried by the data bus25. As mentioned above, the data design assurance level is for example defined by ARP4754. In the example shown inFIG.1, the sensor system26advantageously includes a seat presence detection sensor28, arranged on a seat intended to receive a crew member in the aircraft control interface, a camera system30and/or at least one additional sensor32. The seat presence sensor28comprises for example a pressure gauge34, which is able to detect that a member of the crew is exerting pressure on the seat cushion and/or the seat back. The camera system30comprises at least one camera36, and an analysis device38of the images produced by the camera36, for determining presence and movement data of a crew member present in the aircraft control interface, as well as a direction of vision of the crew member and advantageously other physiological and cognitive parameters associated with the crew member (such as drowsiness, distraction, etc.). The camera36is for example a camera operating in the visible, near-infrared, thermal, and/or time-of-flight spectrum, in 2D and/or 3D. The additional sensors32comprise for example a sensor40for measuring the heart rate of a crew member, in particular an electrocardiography (ECG) sensor, and sensors42for measuring mental activity of the crew member, in particular functional near-infrared (FNIR) sensors or electroencephalography (EEG) sensors. The additional sensors32advantageously include wrist sensors, such as connected watches44or other miscellaneous sensors that can be interfaced with the monitoring system10within the crew-monitoring sensor system26. With reference toFIG.2, the crew operational state monitoring system10comprises a first interface50for exclusively receiving first crew monitoring data having a high design assurance level, and a second interface52for receiving second crew monitoring data having a lower level of design assurance. The crew operational state monitoring system10further comprises a unit54for determining at least one pilot state based on the first monitoring data received from the first receiving interface50and the second monitoring data received from the second interface52. It further comprises a display56and a display manager58on the display56, suitable for displaying at least one crew monitoring window. The crew operational state monitoring system10advantageously further comprises an information and/or alarm generator60, and at least one human/machine interface62to allow the crew to interact with the crew operational state monitoring system10. The pilot state determination unit54is suitable for determining a pilot state, or preferably for determining several pilot states in parallel. The pilot state is for example a state of presence at his/her work station, a state of aircraft operability, in particular incapacitation and/or sleep, a state of alertness, in particular drowsiness, distraction, or inattention, a state of work overload, a state of mental overload, a state of stress, a state of present situation awareness, a state of task engagement, a state of physical activity level, and/or a state of pilot activity consistency and mission-relevance. The pilot state determination unit54comprises, for example, a processor, and at least one memory for receiving software modules suitable for execution by the processor to perform functions.
Alternatively, the computer54comprises programmable logic components or dedicated integrated circuits to perform the functions of the modules described below. With reference toFIG.2, the pilot state determination unit54comprises, for the or each monitored pilot state, a first evaluation module70of the first high design assurance level data, suitable for implementing a deterministic algorithm to obtain a high-level pilot state, from the first high design assurance level data only, without taking into account the second lower-level design assurance data. The pilot state determination unit54further comprises, for the or each monitored pilot state, a second lower level evaluation module72, suitable for determining a low-level pilot state, in particular from the second lower level design assurance data. The pilot state determination unit54also comprises, for the or each monitored pilot state, a consolidation module74, which is adapted to determine, in an active configuration, a consolidated pilot state on the basis of the high-level pilot state and the low-level pilot state. It further comprises a command76suitable for switching the consolidation module74to an inactive configuration of the second evaluation module72so that the consolidation module74determines the pilot state based on the high-level pilot state only. The or each pilot state determined by the consolidation module74is a state selected from a normal state and at least one degraded state. The degraded state is selected, for example, from a state of absence of the pilot from their work station, a state of incapacitation or sleep of the pilot preventing the pilot from operating the aircraft, a state of light or heavy drowsiness, a state of loss of alertness, a state of overwork and/or a state of overcommitment to a task. Further degraded states can be defined depending on the sensors present in the sensor system26. For each monitored pilot state, a corresponding normal state is defined alongside the degraded state. The normal state is, for example, a state of presence of the pilot at their station, a state of ability or alertness of the pilot to operate the aircraft, a state of alertness, a state of normal workload and/or a state of normal commitment to a task. The states of absence, incapacity and/or sleep, in which the pilot is unable to report via the interface19after a request, are likely to generate information, for example messages or notifications, and possibly even alerts by the information and/or alarm generator60. By way of example, the absence condition may be detected from the first high-level data, in the absence of activity on the avionics controls by a crew member for a predefined period of time (e.g. 5 minutes) and, cumulatively, when no pilot reaction is observed in response to a prompt window issued at a display18of the avionics system14. The absence state may also be detected by the absence of pressure on the seat exerted by the crew member for a predefined period of time (e.g. 30 seconds), detected by the pressure gauge34of the presence detection sensor28, or by the absence of facial recognition performed by the image analysis device38coupled to the camera36of the camera system sensor30, for a predefined period of time (e.g. 1 minute). The incapacitated and/or sleeping state may be detected by the absence of pilot activity on the avionics controls for a predefined period of time (e.g. 5 minutes), and cumulatively by the absence of pilot response to a prompting window after a predefined period of time (e.g. 30 seconds).
Generally, incapacitation and/or sleep are not detectable at the pressure gauge34of the seat position sensor28, which does detect the presence of the crew member. On the other hand, incapacitation and/or sleep are also detectable by the image analysis device38linked to the camera36of the camera system30in the absence of pilot movement, or by the detected posture of the pilot's body. The state of drowsiness is detected, for example, by measuring the eye activity of the seated pilot with the camera36and the image analysis device38. The state of inattention is measured using the camera system30, by measuring the position of the user's gaze, in particular if this position is away from the positions the user should be looking at in relation to the interfaces or cockpit, for example because they are consulting a mobile phone or tablet. Furthermore, this is corroborated by a lack of pilot activity on the controls, and possibly a lack of pilot response to a prompt. Degraded states of absence and inactivity (incapacitation and/or sleep) are intended to be detected at least by the high level evaluation module70. They are able to be detected in parallel by the lower level evaluation module72. In this example, degraded states of drowsiness and/or inattention are likely to be detected primarily by the lower level assessment module72. The first high level evaluation module70is connected to the first interface50, to exclusively receive first high design assurance level monitoring data. It comprises at least a first state machine80, which is able to implement a deterministic algorithm based on the first high design assurance level monitoring data. The first state machine80is intended to determine the high-level pilot state, between a normal state and a degraded state as defined above. Preferably, the first high-level evaluation module70presents a state machine80for each pilot state to be determined, between the normal pilot state and the degraded pilot state. It therefore comprises at least one state machine80for the absence state and at least one state machine80for the incapacity and/or sleep state. The first evaluation module70is thus adapted, for the or each pilot state monitored by the determination unit54, to obtain a high-level pilot state, chosen between a normal high-level pilot state and a degraded high-level pilot state. The state machine80implements a deterministic algorithm, i.e. an algorithm that, in response to a given set of monitoring data, always gives the same result. An example of a state machine80for determining an absence state is shown inFIG.3. The state machine80is adapted to determine an absence state or a presence state of the pilot in the control interface in the form of a Boolean flag here referred to as ABS. By default, the detected condition is an initial condition of absence ("initial condition: Absence (ABS=1)"). The state machine80is adapted to retrieve first high-level data from the interface50to determine a pilot detection Boolean flag AVCSabs. The flag has a value of 1 when, for example, the first monitoring data indicates pilot interaction actions with the central avionics system14in the recent past. The presence indicator is elaborated within the pilot state determination unit54from the set of data received through the interface50; these data can only vary upon action by the pilot, and they thus form an indicator of presence and human activity in the cockpit.
In this case, the transition condition being true, the absence state changes to a zero value and the pilot is detected as present. The high-level pilot state is then a normal high-level pilot state. On the contrary, if the pilot detection Boolean flag AVCSabsis zero, i.e. the pilot has not acted on the controls for a predefined time, the state machine80returns to the absence state, which corresponds to a high-level degraded state. In this example, the second lower level evaluation module72is adapted to receive second lower level design assurance monitoring data from the second interface52, and advantageously also first high design assurance level monitoring data from the first interface50. The second evaluation module72is further adapted to determine the functional status of the sensors of the sensor system26, and to eliminate data from sensors of the sensor system26that are non-functional. It is suitable for determining a low-level pilot state only on the basis of the sensors of the monitoring sensor system26that are operational. The second lower level evaluation module72here comprises a second deterministic state machine82associated with each first state machine80. The second state machine82is adapted to implement a deterministic algorithm based on the second monitoring data received from the second interface52, and possibly first monitoring data received from the first interface50. In the example shown inFIG.4, the second state machine82is suitable for determining a low-level pilot state for the or each pilot state determined by the determination unit54, which corresponds to the high-level pilot state determined by the first state machine80ofFIG.3. It is suitable, for example, for determining an absence or presence state of the pilot in the control interface in the form of a Boolean flag here referred to as ABS. By default, the detected condition is an initial condition of absence ("initial condition: Absence (ABS=1)"). In this example, the second machine82uses second lower level monitoring data to obtain a Boolean flag SEATPAD from the pressure gauge34and a Boolean flag CAMERA from the analysis device38of images produced by the camera36. It also uses the pilot detection Boolean flag AVCSabsobtained from the first high-level monitoring data. Starting from an absence state ABS=1, if the second monitoring data from each sensor is declared valid and if the Boolean flag SEATPAD and the Boolean flag CAMERA simultaneously indicate that the pilot is present on the seat (value equal to 1 inFIG.4) for more than a given fixed duration τabs(e.g. 5 seconds), then the normal presence state is obtained (ABS=0). This is also the case if the Boolean flag SEATPAD alone indicates that the pilot is present on the seat (value equal to 1 inFIG.4) for more than a given time τabs, the camera data being invalid, and if the pilot detection Boolean flag AVCSabsindicates a pilot presence. This is also the case if the camera-related Boolean flag CAMERA alone indicates that the pilot is present on the seat (value equal to 1 inFIG.4) for more than a given time τabs, the seat data being invalid, and if the pilot detection Boolean flag AVCSabsindicates a pilot presence. This is also the case if the pilot detection Boolean flag AVCSabsalone indicates a pilot presence when both the seat and camera sensors are declared invalid. Similar conditions allow a degraded state of absence to be detected from the normal state of presence.
The result given by the second state machine82is therefore suitable for switching between a normal low-level pilot state and at least one degraded low-level pilot state, and for providing this data to the consolidation module74. The consolidation module74is adapted to consolidate the high-level and low-level pilot states produced by the first high-level evaluation module70and the second lower-level evaluation module72respectively to produce the consolidated pilot state. The consolidation module74is adapted to apply consolidation logic between the high-level pilot state received from the first evaluation module70and the low-level pilot state received from the second evaluation module72, in the activation configuration of the second lower level evaluation module72. The logic is, for example, an “OR” type logic. Thus, if at least one of the high-level or low-level pilot states is a high-level degraded state or a low-level degraded state respectively, then the consolidated pilot state is switched to the degraded state. Instead, when the first evaluation module70and the second evaluation module72each determine a normal state, then the consolidated pilot state produced by the crew operational state monitoring system10is a normal pilot state. Thus, with an architecture of the above type, which provides for segregation of monitoring data according to their design assurance level, between a high level and a lower level, each evaluation module70,72takes as input the interfaces50,52of a level adapted to the pilot state it is to generate, and has its own state machines80,82to each determine a proper pilot state. The first high-level evaluation module70guarantees a high design assurance level of the resulting pilot state, since it uses exclusively high design assurance level data as input, with a deterministic algorithm. This ensures that, in the event of a certification of the monitoring module10, the main degraded states, in particular the absence state and the incapacitated and/or sleep state, are always detected. However, the performance of the first state machine80is more limited, especially in terms of detection time and/or the type of states detected. The second lower-level evaluation module72is designed to ensure operational efficiency of the system by speeding up and improving the detection scope and by integrating lower-level data, thus enriching the possible states that can be detected. The combination of the states produced by the two modules70,72, within the consolidation module74, covers the objectives that the monitoring system10has to fulfil, namely alerting the crew in case of incapacitation, sleep or absence of the pilot, but also operational efficiency. Preferably, the first reception interface50receives, together with each first data item, a validity level associated with the first data item, which corresponds to a (possibly Boolean) quantification of the validity of the first data item, depending, for example, on faults or failures of the system(s) producing and transmitting the first data item. Similarly, the second reception interface52receives, together with each second data item, a validity level associated with the second data item, which corresponds to a (possibly Boolean) quantification of the validity of the second data item, depending, for example, on faults or failures of the system or systems producing and transmitting the second data item. 
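As an illustration of the two-level detection and consolidation just described, including the per-sensor validity just introduced, the sketch below models a simplified FIG.4-style low-level presence check and the "OR"-type consolidation with the deactivation command; all names, the coarse debounce handling, and the simplifications are assumptions for illustration, not the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    value: bool   # e.g., SEATPAD or CAMERA presence flag
    valid: bool   # validity derived from the sensor's functional status

def low_level_present(seatpad: SensorReading, camera: SensorReading,
                      avcs_abs: bool, held_longer_than_tau: bool) -> bool:
    """Simplified FIG.4-style presence logic (returning True means present).

    Presence requires the available valid sensors to agree for longer
    than the debounce duration tau_abs; with both sensors invalid, the
    high-level activity flag AVCSabs alone decides.
    """
    if seatpad.valid and camera.valid:
        return seatpad.value and camera.value and held_longer_than_tau
    if seatpad.valid:
        return seatpad.value and held_longer_than_tau and avcs_abs
    if camera.valid:
        return camera.value and held_longer_than_tau and avcs_abs
    return avcs_abs

def consolidate(high_level_degraded: bool, low_level_degraded: bool,
                low_level_enabled: bool) -> bool:
    """'OR'-type consolidation: degraded if either level reports degraded.

    When the crew deactivates the lower-level module (command 76), only
    the high design assurance level result is considered.
    """
    if not low_level_enabled:
        return high_level_degraded
    return high_level_degraded or low_level_degraded
```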
In this case, the high-level evaluation module70is adapted to calculate a high-level validity level associated with the or each determined high-level pilot state, depending on the validity level of the first data used. The low-level evaluation module72is adapted to calculate a low-level validity level associated with the determined low-level pilot state, depending on the validity level of the second data and possibly the first data used. The consolidation module74is adapted to calculate a consolidated validity level of the consolidated pilot state, based on the high and low validity levels respectively, for example by taking the maximum, the average, or a weighting of the high and low validity levels respectively. Advantageously, the algorithms implemented by the high-level and low-level evaluation modules70,72are able to take into account the levels of validity of the first data and the second data, in order to adapt the logic for determining the high and low-level pilot states. This makes them robust to the loss of sensors or input data. Advantageously, the high-level evaluation module70is adapted to calculate a high-level confidence level associated with the or each determined high-level pilot state. The low-level evaluation module72is adapted to calculate a low-level confidence level associated with the determined low-level pilot state. Each confidence level is obtained for example from a table and/or a calculation algorithm. The consolidation module74is adapted to calculate a consolidated confidence level on the basis of the high and low confidence levels respectively, for example by taking the maximum, the average, or a weighting of the high and low confidence levels respectively. The switch76is adapted to be controlled from the human machine interface62to switch the consolidation module74between the configuration of activating the second lower level evaluation module72and the configuration of inactivating the second lower-level evaluation module72. In the activation configuration, the pilot state is determined from the high-level and low-level pilot states by the consolidation module74, whereas in the deactivated configuration, only the high-level pilot states are considered by the consolidation module74to determine the or each pilot state. The presence of the switch76makes the system10more reliable. When all the sensors are operating, the system10uses the data from all the sensors (including the low-level design assurance data), which expands the functional scope of the system10and is more responsive than systems producing the high design assurance level data, which are more reliable. If the sensors producing the lower level of design assurance data start to interfere with the system10due to their lower design assurance level, the crew can manually switch them off, refocusing the system10only on the high design assurance level data. Although the system10is then more limited in function and responsiveness, it remains safe and reliable, consistent with the certification requirements. As will be described below, the need to switch from one configuration to another is detectable by the pilot, for example by looking at the display56. The display manager58comprises, for example, a processor, and at least one memory for receiving software modules that can be executed by the processor to perform functions. 
It is suitable for recovering the pilot state determined by the consolidation module of the determination unit54, in particular a normal state and/or a degraded state of absence, incapacity and/or sleep, drowsiness or inattention. It is adapted to generate and display on the display56at least a first pilot state monitoring window90, an example of which is shown inFIG.5. The monitoring window90comprises for example a first indicator92indicating the presence of the pilot (“DETECTED” meaning that the pilot is detected), a second indicator94indicating whether pilot activity is observed, resulting from the absence of incapacitation and/or sleep, and a third indicator96indicating the level of pilot alertness, in particular by means of a message98which makes it possible to determine whether drowsiness (“DROWSINESS”) or inattention is detected. The display manager58is further adapted to advantageously generate a second window99to retrieve the history of each of the pilot states over time. For example, inFIG.6, the pilot's seat presence history (“HISTORY”) is displayed as a histogram showing the pilot's presence (“PRESENT”), supposed presence (“SUPPOSED”), supposed absence, and absence (“ABSENT”), with the level of certainty derived from the confidence level associated with the consolidated state from the consolidation module74. The display manager58is further adapted to display a window100for monitoring the operational state of a crew10, visible inFIG.7. The window100comprises buttons102for testing the individual sensors28,30of the sensor system26, a control system104of the command76for deactivating the second lower level evaluation module72, in case of observed malfunction of the sensors28,30, and optionally a button106for silencing the information and/or alarm generators60. With reference toFIG.8, the display manager58is further adapted to display, in case of detection of a degraded pilot state, a pilot prompting window110to which the pilot must respond to confirm or deny the assumed degraded state detected by the system. This window includes an acknowledgement button112, and a counter114which indicates the time remaining before the information and/or alarm generator60generates an alarm. In case the pilot state is an absence, incapacity and/or sleep state, the information and/or alarm generator60is adapted to issue an item of information, e.g. a notification or a message or even an alarm in the control interface to the second pilot or another crew member. This alarm is for example a visual and/or audible and/or vibratory alarm. These alarms are always generated when a state of absence, incapacity and/or sleep is detected. In addition, in the event of drowsiness or inattention, a visual, audible, or vibratory alarm is generated for the pilot concerned. The operation of the crew operational state monitoring system10according to the present disclosure will now be described. The system10is intended to operate preferably during the entire flight. Once the crew operational state monitoring system10is activated, the first interface50for receiving first high design assurance level data continuously receives first monitoring data from, for example, the avionics CPU16over the data bus25. This initial monitoring data reflects, among other things, pilot activity on the avionics controls and provides a high design assurance level. 
Similarly, the sensors28,30of the monitoring sensor system26are activated to detect the presence of a pilot in their seat by means of the pressure gauge34, and/or to film the pilot, by means of the camera system30, the data from the camera36being continuously analysed by the image analysis device38. This second monitoring data, of a lower design assurance level, is transmitted to the second receiving interface52. At any time, the first evaluation module70receives exclusively the first data from the first interface50, and implements, for each monitored pilot state, the deterministic algorithm present in the first state machine80corresponding to the monitored pilot state. For each monitored pilot state, the implementation of the algorithm, based on the first data, leads to the determination of a high-level pilot state, which can be a high-level normal state or a high-level degraded state. Simultaneously, in an activation configuration, the second lower level evaluation module72receives the second data from the second interface52, optionally accompanied by first data from the first interface50. For each monitored pilot state, it then implements the deterministic algorithm of the second state machine82. The second state machine82is thus able to produce, for each monitored pilot state, a low-level pilot state, which may be a normal low-level state, or a degraded low-level state. The consolidation module74then receives, for each monitored pilot state, the high-level pilot state and the low-level pilot state and determines a consolidated pilot state. If both the high-level and low-level pilot states are normal states, the consolidation module74determines that the consolidated pilot state is a normal state. If instead at least one of the low-level and high-level pilot states is a degraded state, the consolidation module74determines that the consolidated pilot state is a degraded state. Each pilot state assigned by the consolidation module74is then transmitted to the display manager58to generate and display the monitoring window90with the or each pilot state determined at the given time. Furthermore, each pilot state is also transmitted to the information and/or alarm generator60. In the event of a degraded pilot state, the information and/or alarm generator60first activates the display manager58to generate and display the pilot prompting window100, and then, in the absence of a pilot response to this window100, activates the alarms defined above. If the pilot feels that the generated alarm is unjustified or is a nuisance, they activate the parameter setting window99. They can then test the correct operation of the detection sensors28or the camera system30using the buttons102. Alternatively, the operator may activate the command76to disable the second lower level evaluation module72and prevent a low-level pilot state from being transmitted to the consolidation module74. The consolidation module74then determines the pilot state solely on the basis of the high-level pilot state. The monitoring system10according to the present disclosure therefore very reliably detects via the first high-level evaluation module70, within a certain time period (e.g. a typical duration of about 5 minutes), that a pilot on duty is absent from their station or is incapacitated or in deep sleep.
This maximum delay is reduced by the presence of the lower-level evaluation module72, which detects degraded pilot states much more sensitively, even though the sensors that are used for this detection are of a lower design assurance level. Thus, by segregating the data received by the pilot determination unit54between the first data of a high design assurance level, and the second data of a lower design assurance level, the dual objectives of safety and responsiveness of the monitoring system10can be met. The monitoring system10according to the present disclosure has a very high level of security thanks to the segregation into two levels of detection, one reliable but of reduced performance, and a more responsive but potentially harmful or disruptive one, and the ability to isolate this second level. The monitoring system10according to the present disclosure is furthermore responsive, since the second lower-level evaluation module72receives richer data than that used by the first evaluation module70, allowing for finer and often faster detection of degraded states. In any case, even if the second lower-level data is of a lower design assurance level than the first higher-level data, it can be automatically excluded by the lower-level evaluation module72when a sensor that produces this data is detected as non-operational. This data can also be voluntarily overridden by the pilot, who can use the command76to disable consideration of the low-level pilot state. Thus, the monitoring system10is reliable enough to be certifiable, yet responsive enough to be operational for pilots when everything is working nominally, which is the usual situation. The monitoring system10is able to operate in a degraded manner by selecting the remaining available sensors, while providing the user with the ability to disregard the data produced by the low design assurance level sensors if they give erroneous results. In addition, the monitoring system10allows the integration of data from enhanced design assurance level sensors, for example enhanced seat sensors, into the high design assurance level monitoring data via the interface50. It also allows for new sensors to be taken into account in addition to the detection sensor28and the camera system30, even if they are less reliable. Similarly, the monitoring system10may use data from the lower design level avionics bus19that is retrieved by the second interface52. In one embodiment (not shown), the algorithm implemented by the second lower level evaluation module72is not a deterministic algorithm, but is for example an algorithm that operates by learning. The algorithm implemented by the first high-level evaluation module70remains a deterministic algorithm. In another embodiment, the consolidated pilot state obtained for the or each determined pilot state is suitable for switching from a normal state to a plurality of degraded states of different levels.
11857325 | DETAILED DESCRIPTION In the following disclosure, embodiments will be described with reference to an insulin injection device. The present disclosure is however not limited to such application and may equally well be deployed with injection devices that eject other medicaments, or with any other kind of medical device, including drug pumps, meters for monitoring the condition of a patient, blood glucose meters, meters for indirectly measuring blood glucose level, blood pressure monitors, pulse monitors, intelligent electronic pill boxes, and the like. Where a medical device is referred to, this may refer to the medical device itself, or may refer to a supplementary device designed to attach to a medical device and to derive information from the medical device. Where a condition of a medical device is referred to, this may refer to any condition referred to herein. The condition of the medical device includes but is not limited to: a date and time of medicament delivery, a quantity of medicament delivery, the type of medicament delivered, the identity of the medicament batch, the medicament expiry date, and any combination thereof. The condition of the medical device may also include but is not limited to: the information gathered or determined by the medical device, for instance blood glucose levels, blood pressure, pulse rate, or any combination thereof. A smart key may be a device configured to allow a user to access a vehicle and/or activate a vehicle. Activating a vehicle may include starting an engine and/or activating the ignition. FIG.1is an exploded view of an injection device1, which may for instance represent Sanofi's Solostar® insulin injection pen or Sanofi's AllStar® insulin injection pen; however, the present disclosure is also compatible with other types and makes of injection pens as described below. The injection device1ofFIG.1is a pre-filled, disposable injection pen that comprises a housing10and contains an insulin container14, to which a needle15can be affixed. The needle is protected by an inner needle cap16and an outer needle cap17, which in turn can be covered by a cap18. An insulin dose to be ejected from injection device1can be selected by turning the dosage knob12, and the selected dose is then displayed via dosage window13, for instance in multiples of so-called International Units (IU), wherein one IU is the biological equivalent of about 45.5 micrograms of pure crystalline insulin (1/22 mg). An example of a selected dose displayed in dosage window13may for instance be 30 IUs, as shown inFIG.1. It should be noted that the selected dose may equally well be displayed differently. A label (not shown) is provided on the housing10. The label includes information about the medicament included within the injection device, including information identifying the medicament. The information identifying the medicament may be in the form of text. The information identifying the medicament may also be in the form of a color. The information identifying the medicament may also be encoded into a barcode, QR code or the like. The information identifying the medicament may also be in the form of a black and white pattern, a color pattern or shading. The dosage window13may be in the form of an aperture in the housing10, which permits a user to view a limited portion of a number sleeve70that is configured to move when the dosage knob12is turned, to provide a visual indication of a currently programmed dose.
Alternatively, the number sleeve70may remain stationary during the dose dialing phase, and the dosage window13may move as a dose is dialed in to reveal the number corresponding to the dialed dose. In either case, the number sleeve70may be a component which rotates when a dose is being dispensed from the injection device1. The injection device1may be configured so that turning the dosage knob12causes a mechanical click sound to provide acoustical feedback to a user. The number sleeve70mechanically interacts with a piston in insulin container14. When needle15is stuck into a skin portion of a patient, and then injection button11is pushed, the insulin dose displayed in dosage window13will be ejected from injection device1. When the needle15of injection device1remains for a certain time in the skin portion after the injection button11is pushed, a high percentage of the dose is actually injected into the patient's body. Ejection of the insulin dose may also cause a mechanical click sound, which is however different from the sounds produced when using dosage knob12. In some other embodiments, the injection device1does not have a separate injection button11and a user depresses the entire dosage knob12, which moves longitudinally relative to the housing10, in order to cause the medicament to be dispensed. In the various embodiments, during delivery of the insulin dose, the dosage knob12is returned to its initial position in an axial movement, that is to say without rotation, while the number sleeve70is rotated to return to its initial position, e.g. to display a dose of zero units. Injection device1may be used for several injection processes until either insulin container14is empty or the expiration date of injection device1(e.g. 28 days after the first use) is reached. Furthermore, before using injection device1for the first time, it may be necessary to perform a so-called "prime shot" to remove air from insulin container14and needle15, for instance by selecting two units of insulin and pressing injection button11while holding injection device1with the needle15upwards. For simplicity of presentation, in the following, it will be exemplarily assumed that the ejected doses substantially correspond to the injected doses, so that, for instance when making a proposal for a dose to be injected next, this dose equals the dose that has to be ejected by the injection device. Nevertheless, differences (e.g. losses) between the ejected doses and the injected doses may of course be taken into account. FIG.2shows an injection monitoring device2(also referred to as an add-on device, supplementary device or dosage monitoring device herein) according to some embodiments. The injection monitoring device2is configured to be releasably secured to the injection device1and is shown attached to the injection device1inFIG.2.FIG.2illustrates some of the major internal and external components of the injection monitoring device2. Externally, the injection monitoring device2comprises a display unit4, a user input6, and a battery compartment102. Internally, the injection monitoring device2comprises electronics24. The electronics24comprise at least a processor25and memory. The electronics24may comprise both a program memory and a main memory. The processor25may for instance be a microprocessor, a Digital Signal Processor (DSP), Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA) or the like. The processor25executes program code (e.g. 
software or firmware) stored in the program memory, and uses a main memory, for instance to store intermediate results. The main memory may also be used to store a logbook on performed ejections/injections. The program memory may for instance be a Read-Only Memory (ROM), and the main memory may for instance be a Random Access Memory (RAM). The injection monitoring device2also comprises a wireless unit28, which is configured to transmit and/or receive information to/from another device in a wireless fashion. Such transmission may for instance be based on radio transmission or optical transmission. In some embodiments, the wireless unit28is a Bluetooth transceiver. Alternatively, wireless unit28may be substituted or complemented by a wired unit configured to transmit and/or receive information to/from another device in a wire-bound fashion, for instance via a cable or fiber connection. When data is transmitted, the units of the data (values) transferred may be explicitly or implicitly defined. For instance, in the case of an insulin dose, International Units (IU) may always be used; otherwise, the unit used may be transferred explicitly, for instance in coded form. The transmitted data also includes a time stamp associated with an injection. The injection monitoring device2may also calculate, store and transmit other data relating to the user's medicament regime and resulting physiological condition. The injection monitoring device2also comprises an audio module104configured to provide audio feedback to a user of the injection monitoring device2. Both the wireless unit28and audio module104may be coupled to and controlled by the electronics24. The injection monitoring device2may also comprise an optical sensor26for reading information identifying the medicament. The information identifying the medicament may be the color of the housing10of the injection device, or the color of an area of the housing or a label affixed to the housing. In these embodiments, the optical sensor26may be a simple photometer configured to detect the color. In some other embodiments, the information identifying the medicament may be a QR code, or other similar encoded information and the optical sensor26may be a camera or QR code reader. Further, one or more light sources may be provided to improve reading of optical sensor26. The light source may provide light of a certain wavelength or spectrum to improve color detection by optical sensor26. The light source may be arranged in such a way that unwanted reflections, for example due to the curvature of the housing10, are avoided or reduced. In an example embodiment, the optical sensor26is a camera unit configured to detect a code (for instance a bar code, which may for instance be a one- or two-dimensional bar code) related to the injection device and/or the medicament contained therein. This code may for instance be located on the housing10or on a medicament container contained in injection device1, to name but a few examples. This code may for instance indicate a type of the injection device and/or the medicament, and/or further properties (for instance an expiration date). This code may be a QR code. The QR code is in general black and white and thus no color detection is required on the part of the optical sensor26. This allows the optical sensor26to be simple and cheap to manufacture. 
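Returning to the logbook and the explicit unit coding described above, a minimal sketch of a logbook entry and its serialization for the wireless unit28might look as follows; the field names and the JSON encoding are assumptions for illustration, not part of the disclosure:

```python
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class DoseRecord:
    """One logbook entry; field names are assumed, not from the disclosure."""
    timestamp: datetime  # time stamp associated with the injection
    dose: float          # administered dose value
    unit: str = "IU"     # unit transferred explicitly, in coded form

def encode(record: DoseRecord) -> bytes:
    """Serialize a record for transmission via the wireless unit28."""
    payload = asdict(record)
    payload["timestamp"] = record.timestamp.isoformat()
    return json.dumps(payload).encode("utf-8")

# Example: encode(DoseRecord(datetime(2023, 5, 1, 8, 30), 30.0))
```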
The processor25may be configured to check the information read by the optical sensor26against pre-stored information in order to verify that the user is injecting the correct medicament. If the processor25does not recognize the information or recognizes the information as indicating a different medicament to that which the user should be receiving at that time, then the injection monitoring device2may produce an alarm signal. The alarm signal may comprise words or graphics displayed on the display unit4or sound produced by the audio module104. Alternatively, or in addition, the injection monitoring device2may send an alarm signal to an external device via wireless unit28. The injection monitoring device2comprises an injection device status sensor110(also referred to herein as a non-contact sensor or first non-contact sensor). The status sensor110may take a number of forms. The status sensor110is configured to output signals indicative of the positions of one or more components within the injection device1. The status sensor110may be referred to as a non-contact sensor, since it is able to sense the absolute position and/or movement of components within the injection device1without contact between the sensor110and any of the components sensed. The electronics24receive these signals and infer an operational state of the injection device1and cause information regarding the timing of the operation of the injection device1to be recorded in the main memory and/or transmitted to an external device via the wireless unit28. The exact position of the status sensor110within the injection monitoring device2depends upon the position and movement range of the moveable component of the injection device being measured. The moveable component may be close to the cylindrical part of the housing10of the injection device1. Therefore, the status sensor110is positioned adjacent the cylindrical part of the housing10. The status sensor110may be an optical sensor configured to observe the number sleeve70through the window13and thereby to read the dose dialed into the injection device1. Alternatively, the status sensor110may be an infrared sensor and the injection monitoring device2may comprise a separate infrared light source. The status sensor110may then observe the movement of components within the injection device1through an area of the housing10, dosage knob12or injection button11which is opaque to visible light and infer the dialed or delivered dose of medicament from the observed movements. In some alternative embodiments, the status sensor110may use another non-contact sensing technology, such as capacitive displacement sensing, magnetic induction sensing or eddy current sensing in order to measure the movement of the internal components of the injection device1. In any case, the injection monitoring device2measures the amount of medicament injected from the injection device1and records the dose history. In some embodiments, the injection monitoring device2is further configured to use the determined dose history and other stored information about the user of the injection device1to determine the due time and date for the user's next dose and/or the amount of the user's next dose. FIGS.3and4illustrate schematically two different systems in which the present disclosure can be used. Referring firstly toFIG.3, a system200is shown in which the injection monitoring device2communicates wirelessly with a device300for managing a medication regime of a user. 
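A minimal sketch, under assumed interfaces, of the medicament check-and-alarm behavior described above for the processor25and the optical sensor26; the identifier string and every method name here are invented for illustration only:

```python
from typing import Optional

EXPECTED_MEDICAMENT_ID = "insulin_glargine_100"  # pre-stored reference (assumed)

def verify_medicament(scanned_id: Optional[str]) -> bool:
    """True only if the scanned code matches the pre-stored medicament."""
    return scanned_id == EXPECTED_MEDICAMENT_ID

def on_scan(scanned_id: Optional[str], display, audio, wireless) -> None:
    """Raise the alarm on every available channel on a mismatch."""
    if verify_medicament(scanned_id):
        return  # correct medicament; no alarm
    display.show("Wrong or unrecognized medicament")  # display unit
    audio.play_alarm()                                # audio module104
    wireless.send({"event": "medicament_mismatch", "code": scanned_id})
```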
The device300may for example be a smartphone storing a medical monitoring application. The medical monitoring application may be programmed to receive dosing information from the injection monitoring device2, the dosing information comprising at least a date and time of the most recent injection and data representing the administered medicament dose. The injection monitoring device2may be configured to transmit the dosing information to the device300whenever a new injection is performed, or alternatively only in response to a user input. The system200also comprises a smart key400. The smart key400is configured to communicate wirelessly with the device300. The smart key400is also configured to at least partially control and to communicate wirelessly with an associated vehicle500. The vehicle500may comprise a number of electronic systems, for example related to locking/unlocking the vehicle and starting/stopping the vehicle. The vehicle500may also comprise an electronic warning system, configured to emit alerts under certain circumstances, or even to prevent the engine of the vehicle from being started. Use of a smart key400as an interface between an external device and a vehicle500can increase the safety of the system. This is because at the point-of-delivery to the user the smart key400is already specific to the user's vehicle500, and hence it is easier to include security systems to avoid, for instance, hacking. This may be important, as the vehicle500can include safety assistant systems described herein, which are capable of directly affecting the vehicle500, for instance changing the vehicle's speed. A second advantage is that users do not always connect third party devices, such as smartphones, to their vehicle. By including the smart key400within the system, it is possible to configure the system so that the user is forced to activate and/or connect the necessary devices. The medical monitoring application on the device300stores and manages the user dose history. The medical monitoring application is configured to use the stored dose history to determine a due date and time for the user's next medication dose and/or dose volume/units. The medical monitoring application may also infer a physiological condition of the user by comparing the current time with the determined due time for the next medication dose. For example, where the user's medicament is insulin used to treat diabetes, if the medical monitoring application determines that the current time is later than the due time for the next dose, it may be inferred that the user's blood glucose levels are low. If the medical monitoring application determines that the current time is much later than the due time, for example later than a pre-determined threshold, then it may be inferred that the user has hypoglycemia. The user's physiological condition may be expressed in terms of their fitness to drive a vehicle. This may also be referred to as a "wellness parameter" which may define the likely level of the user's impairment. The medical monitoring application can control the device300to present information to the user, including the due date and time for their next medication dose administration and any warnings should the current time be later than the determined due date and time. The medical monitoring application may also be programmed to control the device300to transmit at least the determined due date and time for the user's next medication dose to the smart key400. 
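The time-comparison inference described above reduces, as a rough sketch, to a small function mapping dose lateness to a wellness parameter; the two-hour threshold below is an invented placeholder for the pre-determined threshold mentioned in the text:

```python
from datetime import datetime, timedelta

HYPO_THRESHOLD = timedelta(hours=2)  # placeholder for the pre-determined threshold

def wellness_parameter(due: datetime, now: datetime) -> int:
    """0 = fit to drive, 1 = dose overdue (possible impairment),
    2 = overdue beyond the threshold (hypoglycemia inferred)."""
    if now <= due:
        return 0
    return 2 if (now - due) > HYPO_THRESHOLD else 1

# Example: a dose due at 08:00 checked at 11:00 yields level 2.
# wellness_parameter(datetime(2023, 5, 1, 8, 0), datetime(2023, 5, 1, 11, 0))
```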
In some embodiments, the device300may also transmit an indication of the user's physiological condition and/or fitness to drive a vehicle. The smart key400receives and stores the due time for the user's next medication dose administration. Whenever the smart key400is in range of the vehicle500, or alternatively whenever the smart key400is used to unlock the vehicle500, the smart key communicates the due time to the electronic warning system of the vehicle500. If the user's next dose is overdue (i.e. if the current time is later than the due time), the electronic warning system is configured to emit a warning. This warning may take a number of forms, for example an audible alarm, which may be a spoken communication, or a visual indication. The vehicle500may be provided with an internal display screen which may display warning text such as "Your next insulin dose is overdue and your ability to drive may be impaired". Additional traffic symbols or the standardized symbols from pharmaceutical packaging can also be used (FIG.9). In addition, the display screen may specify when the user's next dose was due. The display screen may also indicate the severity of the user's potential impairment, for example using a scale of 1 to 3 and color coding, such as yellow, orange and red. In some embodiments, the electronic warning system only emits a warning when the vehicle is actually started, or it may emit a different warning when the vehicle is started, for example an audible warning. The vehicle500may provide the user with advice, and this advice may be provided visually or audibly. For instance, the internal display screen may advise a user who is inferred to be impaired based upon their dose history to take action to alleviate their condition. This advice can include instructions or a reminder to take a dose of medicament or to take any recommended steps. The vehicle500may include a navigation system. When the navigation system is in use for navigation, the system may calculate, based upon the due date and time for the next medication dose administration, a route that incorporates an appropriate stop allowing the user to administer their dose. The stop should be timed such that the user's fitness to drive is not impaired before the injection. The navigation system can be configured such that the stop is communicated to the user and the user is informed that a medicament dose should be administered. If the navigation system is not in use for navigation, the system may indicate, based upon the due date and time for the next medication dose administration, reminders of when the next medicament dose should be administered and may indicate appropriate possibilities for the user to stop, for instance a rest area. It is also possible for the navigation system to be comprised by device300, or comprised by a separate navigation device. In this case the navigation system can be configured as described above, and the necessary information is either already received by the device300or communicated to the separate navigation device. This communication may be sent from the device300, the smart key400, or the vehicle500. FIG.4illustrates schematically a different system250. In this system250, the injection monitoring device2communicates directly with the smart key400. In these embodiments the smart key400is programmed with the medical monitoring application described above. 
The smart key400may therefore receive dosing information from the injection monitoring device2directly, the dosing information comprising at least a date and time of the most recent injection and data representing the administered medicament dose. The injection monitoring device2may be configured to transmit the dosing information to the smart key400whenever a new injection is performed, or alternatively only in response to a user input. The smart key400may therefore use the medical monitoring application to store and manage the user dose history. The medical monitoring application is configured to use the stored dose history to determine a due time for the user's next medication dose. The smart key400may perform a plausibility check on the received data by communicating with the injection monitoring device2to confirm the accuracy of the information stored. The medical monitoring application may also infer a physiological condition of the user by comparing the current time with the determined due time for the next medication dose, as previously described. The user's physiological condition may be expressed in terms of their fitness to drive a vehicle. The smart key400then communicates some or all of this information directly to the vehicle500. For example, the smart key400may determine a fitness of the user to drive, and communicate only this information to the vehicle500. Alternatively, the smart key400may communicate the due time for the user's next medicament dose to the vehicle and the electronic warning system of the vehicle may perform the comparison. The smart key400may also have a small display screen. This screen can be used to present information to the user, including the due time for their next medication dose administration and any warnings should the current time be later than the determined due time. Whenever the smart key400is in range of the vehicle500, or alternatively whenever the smart key400is used to unlock the vehicle500, the smart key communicates the fitness of the user to drive and/or the due time to the electronic warning system of the vehicle500. The electronic warning system of the vehicle500may then behave as described above with reference toFIG.3. The smart key400may continue to check the plausibility of the stored data after the vehicle500is started, i.e. during driving. If, during driving, the time for the user's next medicament dose becomes due, then the electronic warning system of the vehicle500may notify the driver. FIG.5illustrates schematically a different system260. In this system260, the medical device is a permanent monitoring device261that is capable of continuously or regularly monitoring a condition, such as a physiological condition, of a user. For instance, the permanent monitoring device261may be a device that detects blood glucose levels either directly or indirectly. Alternatively, the permanent monitoring device261may be a device that monitors the user's blood pressure and/or pulse. At least a part of the information gathered by the permanent monitoring device261, for instance including the user's condition or physiological condition, may then be communicated directly to the smart key400. In these embodiments the smart key400is programmed with the medical monitoring application described above. The smart key400may therefore receive information, such as blood glucose levels, from the permanent monitoring device261directly. 
The permanent monitoring device261may be configured to transmit the information to the smart key400whenever a new measurement is performed, or whenever a deviation from a previous condition is detected. The smart key400may use the medical monitoring application to store and manage a history of the user's condition. The medical monitoring application may be configured to use the stored history to determine an inference of a user's fitness to drive a vehicle. The medical monitoring application may be configured to use the stored history to determine a due time for the user's next medication dose; for instance, blood glucose levels may allow the medical monitoring application to determine when the next insulin dose should be delivered. The smart key400then communicates some or all of this information directly to the vehicle500. For example, the smart key400may determine an inference of the fitness of the user to drive, and communicate only this information to the vehicle500. Alternatively, the smart key400may communicate the user's physiological condition, or the estimated due time for the next dose, to the vehicle and the electronic warning system of the vehicle may perform the comparison. The smart key400may also have a small display screen. This screen can be used to present information to the user, including their physiological condition, the estimated time for the next dose, their fitness to drive, and any warnings should they be unfit to drive. Whenever the smart key400is in range of the vehicle500, or alternatively whenever the smart key400is used to unlock the vehicle500, the smart key may communicate the fitness of the user to drive and/or the user's condition to the electronic warning system of the vehicle500. The electronic warning system of the vehicle500may then behave as described above with reference toFIG.3. For instance, the user may be provided with a warning relating to their impairment to drive or may be provided with advice relating to alleviating their condition. The advice may include reminders in relation to medicament doses or any other recommended steps. The smart key400may communicate the user's condition to the vehicle500after the vehicle500is started, i.e. during driving. If, during driving, the user's condition changes or if the user is determined to be unfit to drive, then the electronic warning system of the vehicle500may notify the driver and/or provide the driver with advice relating to alleviating the user's condition. The vehicle500may be configured to reduce speed slowly and/or stop based on the received information or received instructions from the smart key400and/or medical device. For instance, the vehicle500can comprise safety assistant systems such as a lane assistant, to control the directional stability using a camera, and a speed assistant to control the vehicle speed. The safety assistant systems may be controlled based upon the information or instructions received from the smart key400and/or medical device. The vehicle500may be able to activate hazard lights, headlights (dipped or high beam), and/or activate any audible warning such as a vehicle horn based upon the information or instructions received. The smart key400may be able to initiate an emergency call; for instance, the smart key400may be able to communicate with an external communication device either directly or via the vehicle500. Where communication is via the vehicle500, this may utilize the vehicle's existing connection to a mobile communication device, such as a "hands-free" system. 
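Pulling together the vehicle-side responses described above (the warning display with the 1-to-3 severity scale and color coding mentioned earlier, the safety assistant systems, hazard lights, horn, and emergency call), a hypothetical dispatch routine could look like the following; every method name on the assumed `vehicle` interface is an invention for illustration:

```python
SEVERITY_COLORS = {1: "yellow", 2: "orange", 3: "red"}  # scale described earlier

def handle_condition(severity: int, in_motion: bool, vehicle) -> None:
    """Dispatch the vehicle-side responses; all method names are assumed."""
    vehicle.display_warning(
        "Your next insulin dose is overdue and your ability "
        "to drive may be impaired",
        color=SEVERITY_COLORS.get(severity, "red"),
    )
    if severity >= 3 and in_motion:
        # Severe impairment while driving: engage the safety assistants.
        vehicle.assisted_slow_stop()   # lane + speed assistants bring it to rest
        vehicle.activate_hazard_lights()
        vehicle.sound_horn()
        vehicle.emergency_call(payload=["condition", "name", "last_dose"])
```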
Where an emergency call has been initiated, the smart key400may be able to communicate relevant information, such as the condition of the user, the user's name, the medical disorder from which the user suffers, the type of medicament used by the user, the dose history, the history of the user's physiological condition, and any combination thereof. In an illustrative example, a user who suffers from diabetes may be fitted with a connectable blood glucose monitoring (BGM) device. The BGM device communicates with a medical monitoring application which has been installed upon a smart key, and the communication comprises the user's blood glucose level. The medical monitoring application receives this information and performs a comparison in order to determine if the user is, for instance, hypoglycemic. If the medical monitoring application infers that the physiological condition of the user renders them unfit to drive, a communication will be sent to a vehicle. The communication comprises instructions to display a warning to the user. In addition, the vehicle may display advice to the user via the internal display screen; in this case, the advice may include the advice to consume food or drink likely to alleviate the user's condition, including food or drink comprising sugar, fruit, chocolate, orange juice, or the like. If the vehicle is stationary and the user's condition is of a pre-determined level of severity, the medical monitoring application sends instructions to the vehicle so that the engine is not allowed to start. If the vehicle is in motion and the user's condition is of a pre-determined level of severity, the medical monitoring application sends a communication to the vehicle to cause the safety assistant systems of the vehicle to slowly bring the vehicle to a stationary position, to activate the hazard lights, to sound the vehicle horn, and to initiate an emergency telephone call including the condition of the user, the user's blood glucose level, the user's name, the type of insulin used by the user, and the last dose of insulin received. In another illustrative example, a user taking painkillers, for instance a COX-2 inhibitor, may be monitored by medical devices such as a blood pressure gauge and/or a pulse meter. These medical devices communicate with a medical monitoring application which has been installed upon a smart key, and the communication comprises the user's physiological condition. Based on the physiological condition the medical monitoring application may perform any action described above. FIG.6illustrates schematically a different system270. In this system270, the medical device2communicates wirelessly with a device271. The device271may for example be a smartphone storing a medical monitoring application described above and also storing an application that enables communication with a vehicle. The medical monitoring application may be programmed to receive information from the medical device2. The information may include dosing information comprising at least a date and time of the most recent injection and data representing the administered medicament dose, and/or may include information comprising the user's condition or physiological condition. The medical device2may be configured to transmit the information to the device271whenever a new injection or measurement is performed, when a deviation from a previous condition is detected, or alternatively only in response to a user input. The device271may communicate the information to remote storage272. 
For instance, the remote storage272may be a cloud storage service. The remote storage272may be used to store and manage a history of the user's condition and dose history. The device271may be able to access the remote storage272in order to receive previous information including the time and date of any medicament doses, the type of medicament, the medicament dose quantity, or any recorded physiological conditions. The device271may also be able to access the remote storage272in order to receive the calculated or defined due time for the user's next medicament dose. The device271may be able to access the remote storage272in order to receive data indicating an inferred physiological condition of the user by comparing the current time with the determined due time for the next medication dose, as previously described. The device271may be able to access the remote storage272in order to receive data indicating the user's fitness to drive a vehicle. The device271then communicates some or all of this information directly to the vehicle500. For example, the device271may communicate a fitness of the user to drive to the vehicle500. Alternatively, the device271may communicate the due time for the user's next medicament dose to the vehicle and the electronic warning system of the vehicle may perform the comparison. The vehicle500may then behave as described above with reference toFIGS.3,4, or5. The device271may also have a display screen which can be used to present information to the user, including the due time for their next medication dose administration and any warnings should the current time be later than the determined due time. FIG.7illustrates schematically a different system280. In this system280, the medical device2communicates wirelessly with a device281. The device281is able to communicate with both a smart key400and remote storage272, which may be a cloud storage service. In some aspects, the system280operates similarly to that described in relation toFIG.3. However, device281may communicate the information received from the medical device2to the remote storage272. The remote storage272may be used to store and manage a history of the user's condition and dose history. The device281may be able to access and communicate with the remote storage272in the manner described forFIG.6. FIG.8illustrates schematically a different system290. In this system290, the medical device2communicates wirelessly with a device291. The communication between the medical device2and the device291is the same as described in relation toFIG.3above. The content of the information communicated to the vehicle500, and the vehicle's corresponding actions, are also the same as described in relation toFIG.3above. However, device291is able to directly communicate with vehicle500, and may comprise an application for communication with vehicle500. This communication may be wireless or wired; optionally, this communication is via Bluetooth. This communication may, for instance, be via an existing mechanism for pairing a smart phone with an entertainment system of a vehicle. For the systems described above, once the user's fitness to drive has been inferred, the information may be categorized under "driving ability level" categories. These categories may vary by severity; for instance, a first category may be assigned to users who are fit to drive. A second category may be assigned to users who may be suffering from mild impairment. A third category may be assigned to users who are suffering from severe impairment. 
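As sketched below (and extended by the further categories discussed in the next paragraph), the category-to-instruction mapping could be as simple as a lookup; the enum members and instruction payloads here are hypothetical, not taken from the disclosure:

```python
from enum import Enum

class DrivingAbility(Enum):
    FIT = 1                # first category: fit to drive
    MILD_IMPAIRMENT = 2    # second category: warn the user
    SEVERE_IMPAIRMENT = 3  # third category: prevent driving / assist

# Hypothetical instruction payloads sent by the device or smart key.
INSTRUCTIONS = {
    DrivingAbility.FIT: None,
    DrivingAbility.MILD_IMPAIRMENT: {"action": "warn"},
    DrivingAbility.SEVERE_IMPAIRMENT: {"action": "inhibit_or_assist"},
}

def instruction_for(category: DrivingAbility):
    """Instruction the smart key would transmit for a category, if any."""
    return INSTRUCTIONS[category]
```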
Further categories are envisaged indicating varying severity. The device or smart key may be configured to send different instructions depending on the category to which the user's driving ability level has been assigned. The vehicle may be configured to perform different actions depending on the category to which the user's driving ability level has been assigned. For instance, the vehicle may emit warnings to users in the second category, whereas it may prevent users in the third category from driving or may activate safety assistant systems as described above. Examples above relating to diabetic patients who require insulin are illustrative. The present disclosure is also applicable to any users who may become impaired, for instance patients who require cardiovascular medication or patients who require painkillers, such as a COX-2 inhibitor. While some examples of the injection monitoring device2are shown herein, the systems described above can be configured to work with any device configured to monitor the amount or dosages of medicament administered to a patient. For example, the above systems can accommodate injection devices having integrated injection monitoring solutions (e.g., injection devices which include an integrated dose monitoring solution carried on-board the injection device) as well as other types of injection monitoring devices meant to be retrofitted or added-on to existing injection devices (e.g., add-on injection monitoring devices which fit over and/or partially or completely encapsulate the injection button11of the injection device1). The systems described above can also accommodate one-time use or disposable injection devices that include their own integrated injection monitoring devices. The terms "drug" or "medicament" are used synonymously herein and describe a pharmaceutical formulation containing one or more active pharmaceutical ingredients or pharmaceutically acceptable salts or solvates thereof, and optionally a pharmaceutically acceptable carrier. An active pharmaceutical ingredient ("API"), in the broadest terms, is a chemical structure that has a biological effect on humans or animals. In pharmacology, a drug or medicament is used in the treatment, cure, prevention, or diagnosis of disease or used to otherwise enhance physical or mental well-being. A drug or medicament may be used for a limited duration, or on a regular basis for chronic disorders. As described below, a drug or medicament can include at least one API, or combinations thereof, in various types of formulations, for the treatment of one or more diseases. Examples of API may include small molecules having a molecular weight of 500 Da or less; polypeptides, peptides and proteins (e.g., hormones, growth factors, antibodies, antibody fragments, and enzymes); carbohydrates and polysaccharides; and nucleic acids, double or single stranded DNA (including naked and cDNA), RNA, antisense nucleic acids such as antisense DNA and RNA, small interfering RNA (siRNA), ribozymes, genes, and oligonucleotides. Nucleic acids may be incorporated into molecular delivery systems such as vectors, plasmids, or liposomes. Mixtures of one or more drugs are also contemplated. The drug or medicament may be contained in a primary package or "drug container" adapted for use with a drug delivery device. The drug container may be, e.g., a cartridge, syringe, reservoir, or other solid or flexible vessel configured to provide a suitable chamber for storage (e.g., short- or long-term storage) of one or more drugs. 
For example, in some instances, the chamber may be designed to store a drug for at least one day (e.g., 1 to at least 30 days). In some instances, the chamber may be designed to store a drug for about 1 month to about 2 years. Storage may occur at room temperature (e.g., about 20° C.), or at refrigerated temperatures (e.g., from about −4° C. to about 4° C.). In some instances, the drug container may be or may include a dual-chamber cartridge configured to store two or more components of the pharmaceutical formulation to-be-administered (e.g., an API and a diluent, or two different drugs) separately, one in each chamber. In such instances, the two chambers of the dual-chamber cartridge may be configured to allow mixing between the two or more components prior to and/or during dispensing into the human or animal body. For example, the two chambers may be configured such that they are in fluid communication with each other (e.g., by way of a conduit between the two chambers) and allow mixing of the two components when desired by a user prior to dispensing. Alternatively or in addition, the two chambers may be configured to allow mixing as the components are being dispensed into the human or animal body. The drugs or medicaments contained in the drug delivery devices as described herein can be used for the treatment and/or prophylaxis of many different types of medical disorders. Examples of disorders include, e.g., diabetes mellitus or complications associated with diabetes mellitus such as diabetic retinopathy, thromboembolism disorders such as deep vein or pulmonary thromboembolism. Further examples of disorders are acute coronary syndrome (ACS), angina, myocardial infarction, cancer, macular degeneration, inflammation, hay fever, atherosclerosis and/or rheumatoid arthritis. Examples of APIs and drugs are those as described in handbooks such as Rote Liste 2014, for example, without limitation, main groups 12 (anti-diabetic drugs) or 86 (oncology drugs), and Merck Index, 15th edition. Examples of APIs for the treatment and/or prophylaxis of type 1 or type 2 diabetes mellitus or complications associated with type 1 or type 2 diabetes mellitus include an insulin, e.g., human insulin, or a human insulin analogue or derivative, a glucagon-like peptide (GLP-1), GLP-1 analogues or GLP-1 receptor agonists, or an analogue or derivative thereof, a dipeptidyl peptidase-4 (DPP4) inhibitor, or a pharmaceutically acceptable salt or solvate thereof, or any mixture thereof. As used herein, the terms "analogue" and "derivative" refer to a polypeptide which has a molecular structure which formally can be derived from the structure of a naturally occurring peptide, for example that of human insulin, by deleting and/or exchanging at least one amino acid residue occurring in the naturally occurring peptide and/or by adding at least one amino acid residue. The added and/or exchanged amino acid residue can either be codable amino acid residues or other naturally occurring residues or purely synthetic amino acid residues. Insulin analogues are also referred to as "insulin receptor ligands". In particular, the term "derivative" refers to a polypeptide which has a molecular structure which formally can be derived from the structure of a naturally occurring peptide, for example that of human insulin, in which one or more organic substituents (e.g. a fatty acid) is bound to one or more of the amino acids. 
Optionally, one or more amino acids occurring in the naturally occurring peptide may have been deleted and/or replaced by other amino acids, including non-codeable amino acids, or amino acids, including non-codeable amino acids, may have been added to the naturally occurring peptide. Examples of insulin analogues are Gly(A21), Arg(B31), Arg(B32) human insulin (insulin glargine); Lys(B3), Glu(B29) human insulin (insulin glulisine); Lys(B28), Pro(B29) human insulin (insulin lispro); Asp(B28) human insulin (insulin aspart); human insulin, wherein proline in position B28 is replaced by Asp, Lys, Leu, Val or Ala and wherein in position B29 Lys may be replaced by Pro; Ala(B26) human insulin; Des(B28-B30) human insulin; Des(B27) human insulin and Des(B30) human insulin. Examples of insulin derivatives are, for example, B29-N-myristoyl-des(B30) human insulin, Lys(B29) (N-tetradecanoyl)-des(B30) human insulin (insulin detemir, Levemir®); B29-N-palmitoyl-des(B30) human insulin; B29-N-myristoyl human insulin; B29-N-palmitoyl human insulin; B28-N-myristoyl LysB28ProB29 human insulin; B28-N-palmitoyl-LysB28ProB29 human insulin; B30-N-myristoyl-ThrB29LysB30 human insulin; B30-N-palmitoyl-ThrB29LysB30 human insulin; B29-N-(N-palmitoyl-gamma-glutamyl)-des(B30) human insulin, B29-N-omega-carboxypentadecanoyl-gamma-L-glutamyl-des(B30) human insulin (insulin degludec, Tresiba®); B29-N-(N-lithocholyl-gamma-glutamyl)-des(B30) human insulin; B29-N-(ω-carboxyheptadecanoyl)-des(B30) human insulin and B29-N-(ω-carboxyheptadecanoyl) human insulin. Examples of GLP-1, GLP-1 analogues and GLP-1 receptor agonists are, for example, Lixisenatide (Lyxumia®), Exenatide (Exendin-4, Byetta®, Bydureon®, a 39 amino acid peptide which is produced by the salivary glands of the Gila monster), Liraglutide (Victoza®), Semaglutide, Taspoglutide, Albiglutide (Syncria®), Dulaglutide (Trulicity®), rExendin-4, CJC-1134-PC, PB-1023, TTP-054, Langlenatide/HM-11260C, CM-3, GLP-1 Eligen, ORMD-0901, NN-9924, NN-9926, NN-9927, Nodexen, Viador-GLP-1, CVX-096, ZYOG-1, ZYD-1, GSK-2374697, DA-3091, MAR-701, MAR709, ZP-2929, ZP-3022, TT-401, BHM-034, MOD-6030, CAM-2036, DA-15864, ARI-2651, ARI-2255, Exenatide-XTEN and Glucagon-Xten. An example of an oligonucleotide is, for example: mipomersen sodium (Kynamro®), a cholesterol-reducing antisense therapeutic for the treatment of familial hypercholesterolemia. Examples of DPP4 inhibitors are Vildagliptin, Sitagliptin, Denagliptin, Saxagliptin, and Berberine. Examples of hormones include hypophysis hormones or hypothalamus hormones or regulatory active peptides and their antagonists, such as Gonadotropine (Follitropin, Lutropin, Choriongonadotropin, Menotropin), Somatropine (Somatropin), Desmopressin, Terlipressin, Gonadorelin, Triptorelin, Leuprorelin, Buserelin, Nafarelin, and Goserelin. Examples of polysaccharides include a glucosaminoglycane, a hyaluronic acid, a heparin, a low molecular weight heparin or an ultra-low molecular weight heparin or a derivative thereof, or a sulphated polysaccharide, e.g. a poly-sulphated form of the above-mentioned polysaccharides, and/or a pharmaceutically acceptable salt thereof. An example of a pharmaceutically acceptable salt of a poly-sulphated low molecular weight heparin is enoxaparin sodium. An example of a hyaluronic acid derivative is Hylan G-F 20 (Synvisc®), a sodium hyaluronate. The term "antibody", as used herein, refers to an immunoglobulin molecule or an antigen-binding portion thereof. 
Examples of antigen-binding portions of immunoglobulin molecules include F(ab) and F(ab′)2 fragments, which retain the ability to bind antigen. The antibody can be polyclonal, monoclonal, recombinant, chimeric, de-immunized or humanized, fully human, non-human (e.g., murine), or single chain antibody. In some embodiments, the antibody has effector function and can fix complement. In some embodiments, the antibody has reduced or no ability to bind an Fc receptor. For example, the antibody can be an isotype or subtype, an antibody fragment or mutant, which does not support binding to an Fc receptor, e.g., it has a mutagenized or deleted Fc receptor binding region. The term antibody also includes an antigen-binding molecule based on tetravalent bispecific tandem immunoglobulins (TBTI) and/or a dual variable region antibody-like binding protein having cross-over binding region orientation (CODV). The terms "fragment" or "antibody fragment" refer to a polypeptide derived from an antibody polypeptide molecule (e.g., an antibody heavy and/or light chain polypeptide) that does not comprise a full-length antibody polypeptide, but that still comprises at least a portion of a full-length antibody polypeptide that is capable of binding to an antigen. Antibody fragments can comprise a cleaved portion of a full length antibody polypeptide, although the term is not limited to such cleaved fragments. Antibody fragments that are useful in the present disclosure include, for example, Fab fragments, F(ab′)2 fragments, scFv (single-chain Fv) fragments, linear antibodies, monospecific or multispecific antibody fragments such as bispecific, trispecific, tetraspecific and multispecific antibodies (e.g., diabodies, triabodies, tetrabodies), monovalent or multivalent antibody fragments such as bivalent, trivalent, tetravalent and multivalent antibodies, minibodies, chelating recombinant antibodies, tribodies or bibodies, intrabodies, nanobodies, small modular immunopharmaceuticals (SMIP), binding-domain immunoglobulin fusion proteins, camelized antibodies, and VHH containing antibodies. Additional examples of antigen-binding antibody fragments are known in the art. The terms "Complementarity-determining region" or "CDR" refer to short polypeptide sequences within the variable region of both heavy and light chain polypeptides that are primarily responsible for mediating specific antigen recognition. The term "framework region" refers to amino acid sequences within the variable region of both heavy and light chain polypeptides that are not CDR sequences, and are primarily responsible for maintaining correct positioning of the CDR sequences to permit antigen binding. Although the framework regions themselves typically do not directly participate in antigen binding, as is known in the art, certain residues within the framework regions of certain antibodies can directly participate in antigen binding or can affect the ability of one or more amino acids in CDRs to interact with antigen. Examples of antibodies are anti PCSK-9 mAb (e.g., Alirocumab), anti IL-6 mAb (e.g., Sarilumab), and anti IL-4 mAb (e.g., Dupilumab). Pharmaceutically acceptable salts of any API described herein are also contemplated for use in a drug or medicament in a drug delivery device. Pharmaceutically acceptable salts are for example acid addition salts and basic salts. 
Those of skill in the art will understand that modifications (additions and/or removals) of various components of the APIs, formulations, apparatuses, methods, systems and embodiments described herein may be made without departing from the full scope and spirit of the present disclosure, which encompass such modifications and any and all equivalents thereof. | 49,029 |
11857326 | DETAILED DESCRIPTION Discussed herein are various neural probes in the form of electrodes and other related devices, methods, and technologies that incorporate agent delivery of various kinds, including agent or drug elution. More specifically, the various embodiments disclosed or contemplated herein relate to improved systems, devices, and methods, and various components thereof, for monitoring, stimulating, and/or ablating brain tissue while also delivering an agent of some kind, and various components of such systems and devices. The agent can be delivered contemporaneously via drug delivery or over time (elution) via a coating or bioresorbable delivery of some kind, and can be a pharmaceutical drug or an agent for providing benefits to the patient, including, for example, enhancing the electrical features of the device. In some embodiments, the agent can be delivered or provided over time on the surface of the brain (via a cortical electrode, for example) or into the tissue of the brain (via a depth electrode, for example). In those implementations in which the agent is delivered or provided over time, it is understood that it can be any known period of time, from a relatively short period to a relatively long period. In addition, the controlled delivery or providing of the agent according to any embodiment herein can be self-controlled by a patient, doctor, other user, or computer or can be controlled autonomously. For purposes of this application, it is understood that the term "drug delivery" includes elution. The various drug delivery device embodiments disclosed or contemplated herein include any type of neural electrode, including, for example, a cortical electrode10as shown inFIG.1A, a depth electrode12as depicted inFIG.1B, or an electrode array14as shown inFIG.1C. Other exemplary devices can include, for example, a scalp electrode (not shown). Alternatively, the implementations herein are not limited to those specific, exemplary devices. Rather, any of the drug delivery features or components disclosed or contemplated herein can be incorporated into any known neural probe or electrode. In each of these device embodiments as shown or any other known device, the drug delivery can be in the form of a coating disposed on at least a portion of the device (such as any of devices10,12,14), with the coating containing a treatment agent or drug of some kind. Alternatively, as will be described in further detail below, the drug delivery can be accomplished via a sheath or other separate component that can be disposed over at least a portion of the device prior to implantation, deployed with the device, and released upon removal of the device to remain in position to continue to deliver a treatment agent over a predetermined period of time after the device has been removed. In a further alternative, the device itself can have an agent delivery component or feature, such as delivery openings in the device that allow for continual delivery over time or delivery upon actuation by a user. These agent delivery and elution embodiments and other such components or features are described in additional detail herein. In one embodiment as best shown inFIG.2, the neural probe device20has an outer surface22with an agent coating24coated on the outer surface22. 
It is understood that the neural probe device20can be any type of neural probe, including any of the exemplary devices discussed above, and it is further understood that the agent coating24embodiments as described herein can be incorporated into any device embodiment disclosed or contemplated herein. The outer surface22as shown can be any portion of the device20, such that the entire outer surface22of the device20can be coated, or any portion thereof. In one embodiment, the outer surface22of the device20is made of a polyimide, such as, for example, Kapton. In a further implementation, the outer surface22is made of parylene C, which is a coating that can be coated over the device20(including over a polyimide such as Kapton) to create the outer surface22made of parylene C. Alternatively, the outer surface22can be made of any known material that can be incorporated into a neural probe device20. In one specific embodiment, the coating24includes nitric oxide and at least one treatment agent. That is, the treatment agent is coated onto the outer surface22, and then the nitric oxide is coated over the treatment agent, thereby creating a two-layered agent coating24made up of the agent layer and the nitric oxide layer. After deployment of the device20to its desired location in the patient, the nitric oxide begins to slowly dissolve over time, resulting in the slow release (elution) of the treatment agent. Alternatively, the coating24can include any composition that can slowly dissolve when the device20is deployed to slowly release the treatment agent over time. For example, the coating24can be a dissolvable coating, a hydrophilic coating, a patterned coating with time release, or any other type of coating. Alternatively, the coating24can include any composition that allows for release of the treatment agent over any period of time, including immediately. In another exemplary implementation, the coating24is made up of a layer of time-release microspheres that contain the treatment agent and can release the treatment agent over a predetermined period of time that can be selected. That is, the microspheres can be engineered to release the agent over the desired time period, including, for example, a range from about one week to about one year. One specific example of such a microsphere that can be coated on the device20to create the coating24is the CHRONIJECT™ microsphere system for drug delivery, which is commercially available from Oakwood Labs. In use, the device20would be positioned on or in the brain tissue of the patient, depending on the type of device20. For example, a depth electrode device would be inserted into the brain tissue, while a cortical electrode would be positioned onto the surface of the brain tissue. Regardless of the type of device20, the coating24with the treatment agent(s) contained therein is disposed on the outer surface22in a location on the device20so as to maximize the contact between the coating24and the brain tissue in contact with the device20. In one embodiment, the treatment agent included in the coating24is intended to curtail the body's response to the presence of the device in the body. For example, the treatment agent can be Slipskin™ 90/10 or Medikote™ PVD Coating. In certain implementations, the coating24provides for extended time-release of the treatment agent during the period of time that the device20is in the brain, thereby preventing or reducing the body's natural response to the presence of the device20throughout the time that the device20is present. 
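As a back-of-the-envelope illustration of how a selectable release window might be characterized (this simple first-order model is an assumption for illustration and does not come from the disclosure), the coating or microsphere layer can be parameterized so that a chosen fraction of the agent has eluted by the target time:

```python
import math

def first_order_release(t_days: float, window_days: float,
                        released_at_window: float = 0.9) -> float:
    """Cumulative fraction of agent eluted after t_days.

    Assumes the coating is engineered so that `released_at_window`
    (here 90%) of the agent has eluted by the end of the chosen
    window, which per the text may range from one week to one year.
    """
    k = -math.log(1.0 - released_at_window) / window_days  # rate constant
    return 1.0 - math.exp(-k * t_days)

# Example: a coating tuned to a 30-day window has released roughly 54%
# of its agent after 10 days.
print(round(first_order_release(10, 30), 2))  # -> 0.54
```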
The time-release feature of the coating24can be accomplished by the specific nitric oxide or microsphere technologies and similar technologies as discussed above, or by any other known time-release technology. Alternatively, the treatment agent can be any known agent that could be beneficial for a neural probe in contact with brain tissue for any period of time. For example, the treatment agent can be heparin. In addition to the desired benefits of the treatment agent included in the coating24, additional benefits can arise from incorporation of the coating24onto the outer surface22of the device20. For example, in certain embodiments, the coating24can increase contact between the brain tissue and the device20by causing the brain tissue to be attracted to the coating24, thereby causing attraction of the brain tissue to the device20. For example, in one embodiment, the coating24can be hydrophilic such that the coating attracts water, thereby causing the water in the brain tissue to be drawn toward and/or adhere to the device20. According to another implementation, the coating24can be used to influence or change the behavior of the electrical activity of a neuron. For example, in one embodiment, the coating24can influence the way ion channels in a neuron function. More specifically, the coating24can include a composition that contains ions, such as ions in the form of sodium or potassium, for example. Alternatively, the ion composition can be any known composition containing ions. A seizure is caused by neuron cells "firing" as a result of an increase in the action potential of the cells. The "firing" is a spark created by each cell as the cell is reset to bring the action potential back to a normal state. In this specific embodiment, the ion composition is delivered to the neuron cells such that the ions can change the ionic state outside the target neuron cells, thereby reducing or eliminating the risk of the cells firing. That is, the ions can bring the action potential of the cells back to a normal state without the cells firing. Thus, the coating24containing ions can decrease the electrical activity of one or more neurons, thereby reducing or eliminating the risk of a seizure. In certain embodiments, the coating24can contain a treatment agent that has at least one drug in combination with ions. In accordance with an alternative implementation as shown inFIGS.10A and10B, a device160is provided that can deliver a fluidic agent/composition via a delivery mechanism to slow seizure activity in a fashion similar to the delivery of ions as described above, but this embodiment utilizes cold saline or similar fluids instead of ions. As best shown inFIG.10A, the device160in this implementation is a cortical electrode160having a connection structure162and an electrode array pad164. Further, the device has a fluid channel166defined through the connection structure162and a delivery channel168defined in the electrode array pad164to provide for passage of the fluidic agent through the channel166and delivery of the agent to the target area of the brain via the delivery channel168. Further, the fluid channel166extends proximally from the connection structure162and has a connector170on its proximal end to allow for coupling to a syringe or other source of agent fluid. In one implementation, the delivery channel168has four branches extending across the pad164as shown. Alternatively, the delivery channel168can have one, two, three, or five or more branches. 
In a further alternative, the channel168can have any configuration that provides for effective delivery of the fluidic agent to the target area of the brain. As best shown inFIG.10B, which depicts an expanded view of a portion of a delivery channel168according to one embodiment, the delivery channel168has a plurality of holes172formed therein such that the inner lumen (not shown) of the channel168is in fluidic communication with the area adjacent to the channel168. As such, the fluidic agent being delivered through the fluid channel166and into the delivery channel168can pass through the holes172and thereby be delivered to the target area adjacent to the pad164. It is understood that the holes172can be formed in the channel168along its entire length and all of its branches. Alternatively, the holes172can be formed in only a predetermined portion or length of the channel168and/or its branches. According to certain embodiments, the fluidic agent is cold saline that can slow seizure activity by delivery to the area of the brain that is the source of that activity. Alternatively, any other fluidic agent that can slow seizure activity can be used. In certain alternative implementations, the coating24can be added to or incorporated onto any type of neural tool (such that the outer surface22described above is an outer surface22of a related neural tool20, rather than any of the probe or electrode embodiments discussed herein) that is used with or during use of neural probes, such as wands, spatulas, or other such known tools and devices. Thus, the various embodiments and features as described herein with respect toFIG.2can also apply to a coating24on any such tool, including any coating embodiment with any treatment agent as described or contemplated herein. FIGS.3A and3Bdepict two embodiments of neural probe devices30,36having drug delivery components (or "structures")34,40disposed thereon. More specifically, the device30inFIG.3Ais a cortical electrode30having a contact array32, and the drug delivery structure34is disposed over the contact array32. Further, the device36inFIG.3Bis a depth electrode36having an elongate body38, and the drug delivery structure40is disposed over the elongate body38. It is understood that a similar structure can be disposed over any neural probe device disclosed herein or any other known neural probe device. In these exemplary embodiments, in use, each structure34,40can be a sheath, scaffold, or sheath-like structure34,40that can be physically disposed over the device (such as device30or36) prior to placement of the device on or in the patient's brain as necessary. Thus, when the device30,36is positioned in the patient, the structure34,40is also positioned in the patient. Further, in certain implementations, the structure34,40can be maintained in place in the patient when the device30,36is removed, thereby allowing the structure34,40to continue to deliver the desired treatment agent to the area after the device30,36is removed. In certain embodiments, the structure34,40(or any such structure for any type of neural probe device) can be a sheath or scaffold34,40made of dissolvable material containing a treatment agent such that the treatment agent is steadily released over time as the material dissolves. 
For example, in one implementation, the structure34,40is a commercially-available bioresorbable scaffold such as, or similar to, IGAKI-TAMAI™, DESOLVE®, DESOLVE® 100, IDEAL BIOSTENT™, REVA®, REZOLVE™, REZOLVE™ 2, FANTOM®, FORTITUDE®, MIRAGE™ BRMS, MERES™, XINSORB®, or ART 18AZ™ bioresorbable scaffolds. Alternatively, the structure34,40can be any structure that can be made of any known dissolvable material for timed release of a treatment agent. According to certain further implementations, any of the various device embodiments disclosed or contemplated herein can include a drug delivery component, structure, or feature that is integral to the device. For example, in one embodiment, a neural probe device50as depicted inFIG.4has a drug delivery lumen54defined in the body52of the device50such that the treatment agent can be delivered to the target area of the brain tissue via the drug delivery lumen54in the device50as shown. Further, as will be discussed in further detail below, in certain variations of this implementation, the device50can also have a delivery controller (similar to the controller/actuator214discussed in further detail below in relation toFIGS.12A and12B) associated with the device50that is in fluidic communication with the drug delivery lumen54such that the controller can actuate delivery of the treatment agent to the target area of the tissue via the lumen54. Various exemplary controller implementations are discussed in further detail below. It is understood that any device similar to device50having an agent delivery lumen such as lumen54can also be used for other types of fluid flow. That is, the fluidic access lumen54can be used to not only deliver a treatment agent, but also flush the treatment area with an appropriate known flushing fluid, or retract fluid from the treatment area via the lumen54. In one specific exemplary embodiment, the fluidic access lumen54can be used to apply suction, thereby assisting with retaining the device50in place via the suction. In further alternatives, instead of a delivery lumen (such as lumen54as discussed above), any device embodiment herein can have an array of small openings defined in the body of the device such that the treatment agent can be delivered to the treatment area via the small openings. One such exemplary device60is depicted inFIG.5, in which the device60has openings64defined in the body62as shown. In use, any agent disclosed or contemplated herein can be delivered to the brain tissue via the openings64. It is understood that the delivery can be accomplished via any delivery method or mechanism disclosed or contemplated herein. For example, any of the embodiments herein having small agent delivery openings can also have a controller/actuator similar to any of the exemplary embodiments discussed elsewhere herein such that delivery of the agent via the delivery openings can be controlled and actuated with precision. According to the various embodiments herein, the treatment agent can be provided in any of several different forms for use with any of the device implementations disclosed or contemplated herein. For example, as described above, the agent can be provided in a liquid form that can vary in viscosity in the coating embodiments discussed above with respect toFIG.2. Continuing with the coating embodiments ofFIG.2, it is understood that the agent can also be provided in a slurry form, a solid form, or any other form that a coating is known to take in the art. 
Alternatively, the agent can be provided in structured, water-soluble, non-fluid form in those embodiments in which the device has a drug delivery structure removably disposed thereon as discussed above with respect toFIGS.3A and3B. Alternatively, the agent can be provided in liquid form in those embodiments in which the device has a drug delivery structure, component, or feature integral to the device as discussed above with respect toFIGS.4and5. In a further embodiment, in the devices ofFIGS.4and5(or any devices with any type of delivery structures), the treatment agent can be formed into a solid or dry form, such as a pellet or a powder. For example, in certain embodiments, the treatment agent is formed into a solid, water soluble pellet that can be delivered to the target area in the patient via the delivery lumen54of the device50inFIG.4. In another example relating to the device60depicted inFIG.5, the solid form of the treatment agent can be disposed within the device60such that fluid from the patient can come into contact with the solid treatment agent via the openings64, thereby causing the solid treatment agent to begin to dissolve and thereby deliver the treatment agent to the target tissue via the openings64. In accordance with another implementation similar to the water-soluble solid form as described above, the treatment agent can be in a liquid or dry form and disposed within a capsule. According to a further alternative, the agent can take any known form.
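Where the agent takes a solid, water-soluble form as described above, the release rate is governed by how quickly the surrounding fluid dissolves the exposed material. The following is a minimal illustrative sketch, not taken from the source, that models such timed release with a simple first-order dissolution law in the spirit of the Noyes-Whitney relation; the function name and every parameter value are hypothetical assumptions chosen only for illustration.

```python
# Illustrative only: first-order dissolution of a solid agent pellet.
# All constants are assumed, not from the source.

def simulate_dissolution(mass_mg=5.0, k_per_hr=0.15, dt_hr=0.25, horizon_hr=48.0):
    """Return (time, remaining_mass) samples for a pellet dissolving at a
    rate proportional to its remaining mass (sink conditions assumed)."""
    t, m = 0.0, mass_mg
    samples = [(t, m)]
    while t < horizon_hr:
        m -= k_per_hr * m * dt_hr   # Euler step: dm/dt = -k * m
        t += dt_hr
        samples.append((t, max(m, 0.0)))
    return samples

if __name__ == "__main__":
    for t, m in simulate_dissolution()[::16]:
        print(f"t = {t:5.1f} h   remaining agent = {m:4.2f} mg")
```

Under these assumed values roughly half of the agent is released in the first five hours; a slower-dissolving matrix simply corresponds to a smaller rate constant.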
Other forms of drug delivery actuation are contemplated herein. For example, in certain implementations such as the exemplary implementation shown inFIG.6, a system70is provided that includes a magnetic actuation member74that can operate in combination with the neural probe device72such that the magnetic actuation member74is in magnetic communication with the device72. The magnetic communication allows the actuation member74to actuate the device72to release the treatment agent for delivery to the target area of the tissue. In this specific exemplary embodiment as shown, the probe72is a cortical electrode probe72having three layers78A,78B,78C that are coupled together to form the body78of the probe72. More specifically, the body78is made up of a first outer layer78A, a middle or inner layer78B, and a second outer layer78C such that the middle layer78B is disposed between and attached to the first and second outer layers78A,78C. The body78also has a cavity (also referred to herein as an "agent receptacle")80defined therein. More specifically, in this particular embodiment, the cavity80is formed via the absence of a length of the middle layer78B, thereby resulting in a cavity80defined by the first layer78A and the two opposing ends of the middle layer78B on both sides of the cavity80. Any treatment agent82according to any embodiment herein can be disposed in the cavity80as shown. The body78also has a deployable cover (or "flap")84that is disposed over the cavity80such that the cover84can be used to enclose the cavity80and thereby retain the agent82therein. Further, the flap84can be rotatably coupled to the body78by a joint86at one end of the flap84. In a further alternative, the deployable cover84can be any known device or mechanism for covering an opening and being actuable to move into an open position. In one embodiment, the flap84is tensioned such that the flap84is continuously urged toward the body78(in the direction indicated by arrow "A" toward the "closed" position in which the flap84is in contact with the body78and encloses the cavity80) by the tension. Thus, as the flap84is urged away from the body78(in the direction indicated by arrow "B"), the force urging the flap84toward the body78increases. In one specific implementation, the tensioning component (not shown) is a spring or piston-like component. Alternatively, the tensioning component can be any known tensioning component. Further, the flap84in this implementation can be made of a magnetic material or can have a magnet or other magnetic material disposed therein (not shown) such that the magnetic actuation member74can communicate magnetically with the flap84. Thus, in use, the device72can be positioned as needed in relation to the brain76of the patient. More specifically, in this example, the device72is disposed along the surface of the brain76. Once the device72is positioned as desired, the treatment agent (or the composition containing the treatment agent)82in the cavity80can be released by application of a magnetic field via the magnetic actuation member74. It is understood that the treatment agent82remains unreleased, or undelivered, until the magnetic field is applied to the device72. For example, as best shown inFIG.6, the magnetic actuation member74is either moved by a user toward and into closer proximity with the device72or otherwise is actuated such that a magnetic field is generated by the actuation member74that extends to the flap84. The flap84is repelled by the magnetic field, thereby causing the flap84to rotate on its joint86away from the body78(in the direction indicated by arrow "B"). Thus, the rotation away from the body78causes the flap84to move into its open position or configuration, which results in fluidic access to the cavity80. As a result, the magnetic actuation member74can be used to magnetically actuate the flap84to move into its open position, thereby releasing the agent82in the cavity80.
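One rough way to picture when the magnetically actuated flap deploys is to compare the repulsive magnetic force at the flap with the restoring tension urging it closed. The sketch below is a simplified, assumed model rather than anything specified in the source: it treats the actuation member and flap as a magnet pair with a steep force falloff over distance and the tensioning component as a constant preload, and all constants and names are hypothetical.

```python
# Illustrative threshold model for the magnetically actuated flap.
# Assumed physics: point-dipole-style repulsion ~ K / d**4 versus a constant
# spring preload holding the flap closed. All constants are hypothetical.

K_DIPOLE = 2.0e-9        # N*m^4, assumed magnet-pair coupling constant
SPRING_PRELOAD_N = 0.02  # N, assumed closing force from the tensioning component

def flap_opens(distance_m: float) -> bool:
    """True when the repulsive magnetic force exceeds the closing preload."""
    magnetic_force = K_DIPOLE / distance_m ** 4
    return magnetic_force > SPRING_PRELOAD_N

if __name__ == "__main__":
    for d_mm in (5, 10, 15, 20):
        print(f"member at {d_mm:2d} mm -> flap opens: {flap_opens(d_mm / 1000)}")
```

With these assumed constants the flap opens only once the actuation member is brought within roughly 15 mm, which mirrors the described behavior of moving the member into closer proximity to trigger release.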
In accordance with another drug delivery actuation embodiment, the treatment agent or the composition containing the treatment agent can be propelled or otherwise actuated to be delivered to the treatment area via iontophoresis. In one specific exemplary embodiment as depicted inFIG.7, a device100is provided that has a device body102with a contact104that is also an electrical actuation member104that is in electrical communication with the treatment agent/composition106to be delivered to the target area of the brain108. In this specific embodiment, the body102is made up of three layers: a first outer layer102A, a middle or inner layer102B, and a second outer layer102C such that the middle layer102B is disposed between and attached to the first and second outer layers102A,102C. A portion of the middle layer102B is made up of the contact104and an electrical lead110coupled to the contact104, as shown, such that an electrical current as represented by arrow D can be transmitted to the contact104via the lead110. Alternatively, the contact and the electrical actuation member can be two different components. The body102also has a cavity (also referred to herein as an "agent receptacle")112defined therein. More specifically, in this particular embodiment, the cavity112is formed via the absence of a length of the second outer layer102C, thereby resulting in a cavity112defined by the electrical actuation member/contact104and the two opposing ends of the second outer layer102C on both sides of the cavity112such that the cavity112contains the contact104, as discussed above. Any treatment agent106according to any embodiment herein that can be delivered via iontophoresis or any application of electrical current can be disposed in the cavity112as shown. Thus, in the instant implementation as shown, the treatment agent/composition106is disposed in or on the device100such that actuation of the contacts (including contact104) on the body102by providing an electrical current as represented by arrow D can interact with the ionically polarized treatment agent/composition106to propel the agent/composition106toward the treatment area of the brain108, as represented by arrows C. Alternatively, it need not be the contacts of the electrode that propel the agent/composition. Instead, the device100can have any known component that can apply an electrical current to the agent/composition to propel the agent/composition in a similar fashion. In certain embodiments, regardless of the source of the actuation, the amount of current and/or time can be varied to control the speed and distance that the ionically polarized treatment agent/composition travels. Various iontophoretic delivery device embodiments can be scalp electrodes that are positioned on the external scalp of the patient, rather than inside the skull or in direct contact with the brain tissue. In these specific embodiments, the iontophoretic delivery makes it possible to deliver the treatment agent through the skull.
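Since the amount of current and/or time can be varied to control how far and how fast the polarized agent travels, it may help to see the standard iontophoretic dose relationship (dose equals current multiplied by time) worked through. The sketch below is illustrative only; the delivery-efficiency figure and the function names are assumptions, not values from the source.

```python
# Illustrative iontophoretic dosing arithmetic. The dose unit mA*min is
# standard for iontophoresis; the delivery-efficiency factor is assumed.

def iontophoretic_dose_mA_min(current_mA: float, minutes: float) -> float:
    """Classic iontophoretic dose: applied current multiplied by time."""
    return current_mA * minutes

def delivered_mass_ug(dose_mA_min: float, ug_per_mA_min: float = 8.0) -> float:
    """Agent mass moved per unit dose; 8 ug/(mA*min) is a made-up efficiency."""
    return dose_mA_min * ug_per_mA_min

if __name__ == "__main__":
    # The same dose can be reached with low current for longer,
    # or higher current for a shorter time.
    for mA, mins in ((0.5, 40.0), (2.0, 10.0)):
        dose = iontophoretic_dose_mA_min(mA, mins)
        print(f"{mA} mA x {mins} min = {dose} mA*min "
              f"-> ~{delivered_mass_ug(dose):.0f} ug delivered (assumed efficiency)")
```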
In a further alternative drug delivery actuation embodiment as shown inFIG.8, the treatment agent or the composition containing the treatment agent can be propelled or otherwise actuated to be delivered to the treatment area via kinetic energy. More specifically, a device120is provided that has a device body122with a contact124that is also a kinetic actuation member124that is in contact with or otherwise in communication with the treatment agent/composition126to be delivered to the target area of the brain128. In this specific embodiment, the body122is made up of three layers: a first outer layer122A, a middle or inner layer122B, and a second outer layer122C such that the middle layer122B is disposed between and attached to the first and second outer layers122A,122C. A portion of the middle layer122B is made up of the contact124that can be made of a material that is responsive to kinetic energy such that kinetic energy as represented by arrow E can be transmitted to the contact124and cause the contact/kinetic actuation member124to urge the agent/composition126toward the brain128. Alternatively, the contact and the kinetic actuation member can be two different components. The body122also has a cavity (also referred to herein as an "agent receptacle")130defined therein. More specifically, in this particular embodiment, the cavity130is formed via the absence of a length of the second outer layer122C, thereby resulting in a cavity130defined by the kinetic actuation member/contact124and the two opposing ends of the second outer layer122C on both sides of the cavity130such that the cavity130contains the contact124, as discussed above. Any treatment agent126according to any embodiment herein that can be delivered via kinetic energy or any application thereof—such as vibration, tuned vibration, ultrasound, etc.—can be disposed in the cavity130as shown. Thus, in the instant implementation as shown, the treatment agent/composition126is disposed in or on the device120such that actuation of the contacts (including contact124) on the body122by providing kinetic energy as represented by arrow E can interact with the treatment agent/composition126to propel the agent/composition126toward the treatment area of the brain128, as represented by arrows F. Alternatively, it need not be the contacts of the electrode that propel the agent/composition. Instead, the device120can have any known component that can apply kinetic energy to the agent/composition to propel the agent/composition in a similar fashion. In certain embodiments, regardless of the source of the actuation, the amount of energy and/or time can be varied to control the speed and distance that the treatment agent/composition travels. Another embodiment of a drug delivery device is contemplated that does not require external actuation, but simply provides for delivery of the agent via fluidic access to the agent such that the agent is dissolved over time into the brain fluids that contact the agent. More specifically, as shown inFIGS.9A(cross-sectional side view) and9B(top view), a device140is provided that has a device body142with a contact144disposed within a cavity150that defines an agent channel152within the cavity150such that an agent/composition146can be disposed within the channel152. Please note that the agent/composition146is only depicted in a portion of the channel152inFIG.9Ain order to better depict the channel152in the portion not containing any agent146. In this specific embodiment, the body142is made up of four layers: a first outer layer142A, a middle or inner layer142B, a lead and contact layer142C, and a second outer layer142D such that the middle layer142B and the lead/contact layer142C are disposed between and attached to the first and second outer layers142A,142D. The lead/contact layer142C is made up of the contact144and a lead component148that is coupled to the contact144and delivers electrical stimulation thereto. The cavity150is formed via the absence of a length of the first outer layer142A and a length of the middle layer142B, thereby resulting in a cavity150defined by the contact144and the two opposing ends of the first outer layer142A and of the middle layer142B on both sides of the cavity150such that the cavity150contains the contact144, as discussed above. Further, the agent channel152, which is defined within the cavity150such that the channel152encircles the cavity150, is formed by the lip154that is formed via the first outer layer142A. More specifically, the first outer layer142A extends further towards the center of the cavity150in comparison to the middle layer142B such that the lip154is formed around the circumference of the cavity150. As such, the lip154forms the channel152such that the agent146can be disposed within the channel152and thereby be disposed around the outer circumference of the cavity150(and the contact144). Any treatment agent146according to any embodiment herein that can be delivered via fluidic access can be disposed in the cavity150as shown.
Thus, in the instant implementation as shown, the treatment agent/composition146is disposed within the channel152in the device140as described above such that liquid in the brain tissue can enter the cavity150and interact with the treatment agent/composition146, causing the agent/composition146to dissolve over some predetermined period of time and thereby be delivered to the target area of the brain adjacent to the cavity150. Alternatively, the device140can incorporate iontophoretic or kinetic energy delivery technology similar to that described above such that the agent/composition146in the channel152can be delivered by iontophoretic or kinetic energy actuation. Another form of drug delivery, according to a further implementation, relates to timed release of an agent via an agent reservoir, as depicted with respect to one example inFIGS.11A and11B. In this exemplary implementation, a drug reservoir180is provided that can be integrated into a contact of any neural probe or alternatively can be disposed elsewhere on the probe. The reservoir180has an enclosure (or "body")182with a wall184that defines an interior186that contains the desired agent188. Further, an actuable gate190is provided at some point along the wall184such that the gate190can be actuated to allow for release of some portion (or all) of the agent from the reservoir180at the desired time. As best shown inFIG.11B, according to one embodiment, the gate190has a body192, a conduit194defined through the body192, and a movable flap196. The movable flap196has a hinge198such that it is rotatably coupled to the body192and can move between a closed position (or "configuration") in which it is disposed over the conduit194and thereby seals the conduit194closed and an open position (or "configuration") in which it is positioned away from the body192as depicted, thereby allowing fluidic access to the conduit194and thus allowing passage of agent out of the interior186, through the conduit194, and out to the target tissue. In one embodiment, the flap196is operated magnetically and is tensioned to return to its closed position when any external forces are removed, in a fashion similar to the flap84described above. Alternatively, the flap196can be operated mechanically, electrically, or via any form of force. Further, while the specific gate190has been described in detail, it is understood that any type of known port or door that can provide both open and closed configurations can be incorporated into the reservoir for use herein. In addition, it is understood that an external actuation mechanism is in communication with the gate190such that a user can utilize the external actuation mechanism to control the gate190via any form of communication, including wired or wireless communication. Another drug delivery system210is depicted inFIGS.12A and12B, according to one exemplary implementation. The system210has a neural probe212that is coupled to a controller214via a connection line216. In one exemplary embodiment, the controller214is a known drug pump214as best shown inFIG.12B. The drug pump214can be any known drug/infusion pump214that can be used to deliver any agent to a patient over time and further can provide precise control of said delivery. Alternatively, any known controller/actuation device can be incorporated into the system210.
The controller214is coupled with the neural probe212via the connection line216such that the controller214can control the operation of the probe212, including controlling the delivery of an agent from the probe212to the brain of the patient. In one embodiment, the connection line216has at least one communication line (not shown) and at least one agent delivery lumen (not shown) disposed therein, such that the connection line216can be used to transmit electronic or electrical communications via the communication line and further can be used to transfer an agent via the agent delivery lumen. In one embodiment, the neural probe212is a depth electrode212. Alternatively, the neural probe212can be any known probe, including a cortical electrode or any other known probe. It is understood that any of the device embodiments with a drug delivery lumen as disclosed or contemplated herein, including those inFIGS.4,5,10A,10B,11A, and11B, can be coupled with and operate in conjunction with a controller/actuator such as the controller/actuator214described above. Further, any other embodiment having an actuable component can also be coupled with and operate in conjunction with a controller/actuator, including those inFIGS.6-9B. As such, any of the device implementations disclosed or contemplated herein can be incorporated into the system210or any similar system having a controller/actuator. It is further understood that any of the various embodiments disclosed or contemplated inFIGS.2-12Bcan be incorporated into or used in conjunction with any known neural probe embodiment, including the embodiments depicted inFIGS.1A-1C. In use, the various devices disclosed or contemplated herein, including those having a delivery lumen, delivery openings, iontophoretic delivery, kinetic energy delivery, fluidic access delivery, agent reservoirs, or similar structures or features, can be used to treat a seizure in real-time. That is, if a patient feels a seizure coming on, the patient can actuate a controller/actuator (similar to any of the controller/actuator embodiments discussed above, for example), or the controller can otherwise be triggered to actuate the implanted probe to deliver a treatment agent into the brain tissue via the agent delivery component (such as an agent delivery lumen or agent delivery openings as described above, for example) to eliminate or minimize the seizure. Alternatively, the various delivery components can be utilized in any known manner to deliver a treatment agent via a neural probe device as disclosed or contemplated herein.
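To make the real-time use case above concrete, here is a minimal sketch of the control flow: a trigger (a patient button press or an automatic detection) causes the controller to command a bolus through the probe's delivery component. Everything here, including the class names, the pump interface, the bolus size, and the lockout window (a common safety pattern for on-demand dosing, though not one the source specifies), is hypothetical scaffolding rather than a device API.

```python
# Illustrative control flow for real-time, on-demand agent delivery.
# All names and parameters are hypothetical; this is not a device API.

import time

class Pump:
    """Stand-in for a drug/infusion pump coupled to the probe's delivery lumen."""
    def deliver_bolus(self, microliters: float) -> None:
        print(f"delivering {microliters} uL bolus to target tissue")

class DeliveryController:
    def __init__(self, pump: Pump, bolus_ul: float = 50.0, lockout_s: float = 300.0):
        self.pump = pump
        self.bolus_ul = bolus_ul
        self.lockout_s = lockout_s        # refuse re-dosing inside this window
        self._last_dose_at = float("-inf")

    def trigger(self, source: str) -> bool:
        """Called on a patient button press or an automatic seizure detection."""
        now = time.monotonic()
        if now - self._last_dose_at < self.lockout_s:
            print(f"trigger from {source} ignored: lockout active")
            return False
        self._last_dose_at = now
        self.pump.deliver_bolus(self.bolus_ul)
        return True

if __name__ == "__main__":
    controller = DeliveryController(Pump())
    controller.trigger("patient button")    # delivers a bolus
    controller.trigger("seizure detector")  # suppressed by the lockout window
```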
Various treatment agents can be incorporated into any of the various drug delivery device embodiments disclosed or contemplated herein. For example, some of the exemplary treatment agents can include, but are not limited to, paclitaxel (available as ELUTAX™, BIOSTREAM™, PANTERA LUX™, etc.), acetazolamide, brivaracetam (available as BRIVIACT™), carbamazepine (also available as CARBAGEN™, TEGRETOL™, TEGRETOL PROLONGED RELEASE™), clobazam (also available as FRISIUM™, PERIZAM™, TAPCLOB™, ZACCO™), clonazepam, eslicarbazepine acetate (available as ZEBINIX™), ethosuximide, gabapentin (also available as NEURONTIN™), lacosamide (available as VIMPAT™), lamotrigine (also available as LAMICTAL™), levetiracetam (also available as DESITREND™, KEPPRA™), oxcarbazepine (also available as TRILEPTAL™), perampanel (available as FYCOMPA™), phenobarbital, phenytoin (also available as EPANUTIN™, PHENYTOIN SODIUM FLYNN™), piracetam (available as NOOTROPIL™), pregabalin (also available as ALZAIN™, AXALID™, LECAENT™, LYRICA™, REWISCA™), primidone, rufinamide (available as INOVELON™), sodium valproate (also available as EPILIM™, EPILIM CHRONO™, EPILIM CHRONOSPHERE™, EPISENTA™, EPIVAL™), stiripentol (also available as DIACOMIT™), tiagabine (available as GABITRIL™), topiramate (also available as TOPAMAX™), valproic acid (available as CONVULEX™, EPILIM CHRONO™, EPILIM CHRONOSPHERE™), vigabatrin (available as SABRIL™), zonisamide (also available as ZONEGRAN™), any cannabinoid, any antibiotic, and any stem cell composition. It is further understood that the agent can be any known seizure treatment agent or any other treatment agent that could benefit a patient into which an electrode device is being implanted. In certain embodiments, the treatment agent can be a seizure treatment agent that is attracted to electrical activity. For example, the agent can have an ionic attraction to the misfiring cells in the brain tissue, thereby resulting in the agent being drawn to the area that is the source of the seizure. In one example, the agent is an electrically charged, polarized molecular agent (similar to the types of agents used in iontophoretic delivery). Alternatively, the agent can take the form of an electrically charged sphere coated with the treatment agent. In a further alternative, the agent can be any known seizure treatment agent that is attracted to electrical activity, including any agents that have been polarized to be drawn to electrical potentials in a tissue. In these embodiments, the agent can be released or delivered by any of the device embodiments disclosed or contemplated herein and then will be attracted to an area of electrical activity, which is likely to be a seizure. Further, it is understood that, according to various implementations, the agent is an agent that helps to normalize the neuron action potential of the misfiring cells. As such, the agent is drawn to the seizure, where it treats the seizure. In further implementations, the agent is not a treatment agent, but instead is an indication agent or other type of agent that can be incorporated into or delivered via any embodiment disclosed or contemplated herein. In one specific example, the agent is a blood indication agent. That is, the agent is a composition that changes color in the presence of protein, thereby indicating the presence of blood. In one example, the blood indication agent can change color to notify a surgeon that there is blood present in the surgical area of the patient.
Although the present invention has been described with reference to preferred embodiments, persons skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention. | 40,358 |
11857327 | DETAILED DESCRIPTION This invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," "having," "containing," "involving," and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. FIG.1illustrates a medical monitoring and treatment device, such as a LifeVest® Wearable Cardioverter Defibrillator available from ZOLL Medical Corporation of Chelmsford, Mass. As shown, the medical monitoring and treatment device100includes a harness110having a pair of shoulder straps and a belt that is worn about the torso of a patient. The harness110is typically made from a material, such as cotton, nylon, spandex, or antron, that is breathable and unlikely to cause skin irritation, even when worn for prolonged periods of time. The medical monitoring and treatment device100includes a plurality of electrocardiographic (ECG) sensing electrodes112that are disposed by the harness110at various positions about the patient's body and electrically coupled (wirelessly or by a wired connection) to a portable treatment controller120via a connection pod130. The plurality of ECG sensing electrodes112are used by the portable treatment controller120to monitor the cardiac function of the patient and generally include a front/back pair of ECG sensing electrodes and a side/side pair of ECG sensing electrodes. It should be appreciated that additional ECG sensing electrodes may be provided, and the plurality of ECG sensing electrodes112may be disposed at varying locations about the patient's body. In addition, the plurality of ECG electrodes112may incorporate any electrode system, including conventional stick-on adhesive electrodes, dry-sensing capacitive ECG electrodes, radio transparent electrodes, segmented electrodes, or one or more long term wear electrodes that are configured to be continuously worn by a patient for extended periods (e.g., 3 or more days). One example of such a long term wear electrode is described in Application Ser. No. 61/653,749, titled "LONG TERM WEAR MULTIFUNCTION BIOMEDICAL ELECTRODE," filed May 31, 2012, which is hereby incorporated herein by reference in its entirety. The medical monitoring and treatment devices disclosed herein may incorporate sundry materials arranged in a variety of configurations to maintain a proper fit with the patient's body. For example, some embodiments include a garment as described in application Ser. No. 13/460,250, titled "PATIENT-WORN ENERGY DELIVERY APPARATUS AND TECHNIQUES FOR SIZING SAME," filed Apr. 30, 2012 (now U.S. Pat. No. 9,782,578), which is hereby incorporated herein by reference in its entirety. Thus embodiments are not limited to the configuration and materials described above with reference toFIG.1. The medical monitoring and treatment device100also includes a plurality of therapy electrodes114that are electrically coupled to the portable treatment controller120via the connection pod130and which are capable of delivering one or more therapeutic defibrillating shocks to the body of the patient, if it is determined that such treatment is warranted.
As shown, the plurality of therapy electrodes114includes a first therapy electrode114athat is disposed on the front of the patient's torso and a second therapy electrode114bthat is disposed on the back of the patient's torso. The second therapy electrode114bincludes a pair of therapy electrodes that are electrically coupled together and act as the second therapy electrode114b. The use of two therapy electrodes114a,114bpermits a biphasic shock to be delivered to the body of the patient, such that a first of the two therapy electrodes can deliver a first phase of the biphasic shock with the other therapy electrode acting as a return, and the other therapy electrode can deliver the second phase of the biphasic shock with the first therapy electrode acting as the return.
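The electrode role swap that produces the biphasic shock can be pictured as a simple sign reversal of the current path between the two therapy electrodes. Below is an illustrative sketch of such a waveform as seen from the first electrode; the amplitudes, phase durations, and function name are assumptions for illustration, not device specifications.

```python
# Illustrative biphasic waveform: electrode A sources current in phase 1,
# then acts as the return (sign flip) in phase 2. All values are assumed.

def biphasic_waveform(i1_amps=15.0, t1_ms=6.0, i2_amps=10.0, t2_ms=4.0, dt_ms=1.0):
    """Current at therapy electrode A over time; electrode B sees the negation."""
    samples = []
    t = 0.0
    while t < t1_ms:                 # phase 1: current flows A -> B
        samples.append((t, +i1_amps))
        t += dt_ms
    while t < t1_ms + t2_ms:         # phase 2: roles swapped, B -> A
        samples.append((t, -i2_amps))
        t += dt_ms
    return samples

if __name__ == "__main__":
    for t, i in biphasic_waveform():
        print(f"t = {t:4.1f} ms   I(A->B) = {i:+.1f} A")
```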
The connection pod130electrically couples the plurality of ECG sensing electrodes112and the plurality of therapy electrodes114to the portable treatment controller120, and may include electronic circuitry. For example, in one implementation the connection pod130includes signal acquisition circuitry, such as a plurality of differential amplifiers to receive ECG signals from different ones of the plurality of ECG sensing electrodes112and to provide a differential ECG signal to the portable treatment controller120based on the difference therebetween. The connection pod130may also include other electronic circuitry, such as a motion sensor or accelerometer by which patient activity may be monitored. In some embodiments, both the first therapy electrode114aand the second therapy electrode114bare disposed on the front of the patient's torso. For example, the first therapy electrode114amay be located external to the apex of the heart and the second therapy electrode114bmay be located along the parasternal line. Thus embodiments are not limited to a particular arrangement of therapy electrodes114. In some embodiments, the plurality of ECG sensing electrodes112are positioned and paired such that artifacts generated from electrical activity are decreased. In other embodiments, the electronic circuitry included in the portable treatment controller120may equalize artifacts measured at electrodes by changing a gain or impedance. Other techniques of decreasing or preventing artifacts within measured electrical activity that may be used in conjunction with the embodiments disclosed herein are explained in U.S. Pat. No. 8,185,199, titled "MONITORING PHYSIOLOGICAL SIGNALS DURING EXTERNAL ELECTRICAL STIMULATION," issued May 22, 2012, which is incorporated by reference herein in its entirety. As shown inFIG.1, the medical monitoring and treatment device100may also include a user interface pod140that is electrically coupled to the portable treatment controller120. The user interface pod140can be attached to the patient's clothing or to the harness110, for example, via a clip (not shown) that is attached to a portion of the interface pod140. Alternatively, the user interface pod140may simply be held in a person's hand. The user interface pod140typically includes one or more actionable user interface elements (e.g., one or more buttons, a fingerprint scanner, a touch screen, microphone, etc.) by which the patient, or a bystander, can communicate with the portable treatment controller120, and a speaker by which the portable treatment controller120may communicate with the patient or the bystander. In certain models of the LifeVest® Wearable Cardioverter Defibrillator, the functionality of the user interface pod140is incorporated into the portable treatment controller120. Where the portable treatment controller120determines that the patient is experiencing cardiac arrhythmia, the portable treatment controller120may issue an audible alarm via a loudspeaker (not shown) on the portable treatment controller120and/or the user interface pod140alerting the patient and any bystanders to the patient's medical condition. Examples of notifications issued by the portable treatment controller120are described in application Ser. No. 13/428,703, titled "SYSTEM AND METHOD FOR ADAPTING ALARMS IN A WEARABLE MEDICAL DEVICE," filed Mar. 23, 2012 (now U.S. Pat. No. 9,135,398), which is incorporated by reference herein in its entirety. The portable treatment controller120may also instruct the patient to press and hold one or more buttons on the portable treatment controller120or on the user interface pod140to indicate that the patient is conscious, thereby instructing the portable treatment controller120to withhold the delivery of one or more therapeutic defibrillating shocks. If the patient does not respond, the device may presume that the patient is unconscious, and proceed with the treatment sequence, culminating in the delivery of one or more defibrillating shocks to the body of the patient. The portable treatment controller120generally includes at least one processor, microprocessor, or controller, such as a processor commercially available from companies such as Texas Instruments, Intel, AMD, Sun, IBM, Motorola, Freescale and ARM Holdings. In one implementation, the at least one processor includes a power conserving processor arrangement that comprises a general purpose processor, such as an Intel® PXA270 processor, and a special purpose processor, such as a Freescale™ DSP56311 Digital Signal Processor. Such a power conserving processor arrangement is described in application Ser. No. 12/833,096, titled "SYSTEM AND METHOD FOR CONSERVING POWER IN A MEDICAL DEVICE," filed Jul. 9, 2010 (hereinafter the "'096 application", and now U.S. Pat. No. 8,904,214), which is incorporated by reference herein in its entirety. The at least one processor of the portable treatment controller120is configured to monitor the patient's medical condition, to perform medical data logging and storage, and to provide medical treatment to the patient in response to a detected medical condition, such as cardiac arrhythmia. Although not shown, the medical monitoring and treatment device100may include additional sensors, other than the ECG sensing electrodes112, capable of monitoring the physiological condition or activity of the patient. For example, sensors capable of measuring blood pressure, heart rate, heart sounds, thoracic impedance, pulse oxygen level, respiration rate, and the activity level of the patient may also be provided.
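The press-and-hold response check described above, in which therapy is withheld while the patient demonstrates consciousness and otherwise proceeds, is essentially a timeout state machine. The sketch below illustrates that logic only; the timing values, names, and polling interface are hypothetical rather than taken from the device.

```python
# Illustrative consciousness-check logic: therapy is withheld while the
# response button is held; if no response is seen for the full window,
# the patient is presumed unconscious and treatment proceeds.
# Timings and interfaces are assumed, not device specifications.

def run_response_check(button_held_at, response_window_s=25.0, tick_s=1.0):
    """button_held_at(t) -> bool; returns 'withhold' or 'treat'."""
    t, silent_for = 0.0, 0.0
    while t < response_window_s:
        if button_held_at(t):
            silent_for = 0.0          # conscious response: keep withholding
        else:
            silent_for += tick_s
        if silent_for >= response_window_s:
            break                     # window elapsed with no response
        t += tick_s
    return "withhold" if silent_for < response_window_s else "treat"

if __name__ == "__main__":
    print(run_response_check(lambda t: True))   # responsive -> withhold
    print(run_response_check(lambda t: False))  # no response -> treat
```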
FIG.2illustrates a portable treatment controller120that is configured to perform the critical functions of monitoring physiological information for abnormalities and initiating treatment of detected abnormalities. As shown, the portable treatment controller120can include the power conserving processor arrangement200described in the '096 application, a sensor interface212, a therapy delivery interface202, data storage204, a communication network interface206, a user interface208and a battery210. In this illustrated example, the battery210is a rechargeable 3 cell 2200 mAh lithium ion battery pack that provides electrical power to the other device components with a minimum 24 hour runtime between charges. Such a battery210has sufficient capacity to administer one or more therapeutic shocks and the therapy delivery interface202has wiring suitable to carry the load to the therapy electrodes114. Moreover, in the example shown, the battery210has sufficient capacity to deliver 5 or more therapeutic shocks, even at battery runtime expiration. The amount of power capable of being delivered to a patient during a defibrillating shock is substantial, for example up to approximately 200 Joules.
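As a back-of-the-envelope check on the figures just quoted (a 3 cell 2200 mAh lithium ion pack and shocks of up to approximately 200 Joules), the arithmetic below shows how small a multi-shock burst is relative to the pack's stored energy. The 3.7 V nominal cell voltage and the loss-free conversion are my assumptions, not specifications from the source.

```python
# Back-of-the-envelope pack energy vs. shock energy. Assumes 3.7 V nominal
# per Li-ion cell and ignores converter losses; both are simplifications.

CELLS, CAPACITY_AH, V_PER_CELL = 3, 2.2, 3.7
SHOCK_J, SHOCKS = 200.0, 5

pack_wh = CELLS * V_PER_CELL * CAPACITY_AH   # ~24.4 Wh
pack_j = pack_wh * 3600.0                    # ~87,900 J
burst_j = SHOCK_J * SHOCKS                   # 1,000 J for a 5-shock burst

print(f"pack energy  : {pack_wh:.1f} Wh ({pack_j / 1000:.0f} kJ)")
print(f"5-shock burst: {burst_j:.0f} J "
      f"({100 * burst_j / pack_j:.1f}% of pack energy)")
```

Under these assumptions a full five-shock sequence consumes on the order of one percent of the pack's energy, which is consistent with the text's point that shock capacity remains even at runtime expiration.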
The sensor interface212and the therapy delivery interface202are coupled to the power conserving processor arrangement200and more particularly to the critical purpose processor of the power conserving processor arrangement200as described in the '096 application. The data storage204, the network interface206, and the user interface208are also coupled to the power conserving processor arrangement200, and more particularly to the general purpose processor of the power conserving processor arrangement as also described in the '096 application. In the example shown, the data storage204includes a computer readable and writeable nonvolatile data storage medium configured to store non-transitory instructions and other data. The medium may, for example, be optical disk, magnetic disk or flash memory, among others and may be permanently affixed to, or removable from, the portable treatment controller120. As shown inFIG.2, the portable treatment controller120includes several system interface components202,206and212. Each of these system interface components is configured to exchange, i.e., send or receive data, with specialized devices that may be located within the portable treatment controller120or elsewhere. The components used by the interfaces202,206and212may include hardware components, software components or a combination of both. In the instance of each interface, these components physically and logically couple the portable treatment controller120to one or more specialized devices. This physical and logical coupling enables the portable treatment controller120to both communicate with and, in some instances, control the operation of specialized devices. These specialized devices may include physiological sensors, therapy delivery devices, and computer networking devices. According to various examples, the hardware and software components of the interfaces202,206and212employ a variety of coupling and communication techniques. In some examples, the interfaces202,206and212use leads, cables or other wired connectors as conduits to exchange data between the portable treatment controller120and specialized devices. In other examples, the interfaces202,206and212communicate with specialized devices using wireless technologies such as radio frequency or infrared technology. The software components included in the interfaces202,206and212enable the power conserving processor arrangement200to communicate with specialized devices. These software components may include elements such as objects, executable code and populated data structures. Together, these hardware and software components provide interfaces through which the power conserving processor arrangement200can exchange information with the specialized devices. Moreover, in at least some examples where one or more specialized devices communicate using analog signals, the interfaces202,206and212can include components configured to convert analog information into digital information, and vice-versa. As discussed above, the system interface components202,206and212shown in the example ofFIG.2support different types of specialized devices. For instance, the components of the sensor interface212couple the power conserving processor arrangement200to one or more physiological sensors such as body temperature sensors, respiration monitors and dry-capacitive ECG sensing electrodes. It should be appreciated that other types of ECG sensing electrodes may be used, as the present invention is not limited to any particular type of ECG sensing electrode. The components of the therapy delivery interface202couple one or more therapy delivery devices, such as capacitors and defibrillator electrodes, to the power conserving processor arrangement200. In addition, the components of the network interface206couple the power conserving processor arrangement200to a computer network via a networking device, such as a bridge, router or hub. The network interface206may support a variety of standards and protocols, examples of which include USB, TCP/IP, Ethernet, Wireless Ethernet, Bluetooth, ZigBee, M-Bus, IP, IPV6, UDP, DTN, HTTP, FTP, SNMP, CDMA, NMEA and GSM. To ensure data transfer is secure, in some examples, the portable treatment controller120can transmit data via the network interface206using a variety of security measures including, for example, TLS, SSL or VPN. In other examples, the network interface206includes both a physical interface configured for wireless communication and a physical interface configured for wired communication. The user interface208shown inFIG.2includes a combination of hardware and software components that allow the portable treatment controller120to communicate with an external entity, such as a user. These components are configured to receive information from actions such as physical movement, verbal intonation or thought processes. In addition, the components of the user interface208can provide information to external entities. Examples of the components that may be employed within the user interface208include keyboards, mouse devices, trackballs, microphones, electrodes, touch screens, printing devices, display screens and speakers. The LifeVest® wearable cardioverter defibrillator can monitor a patient's ECG signals, detect various cardiac arrhythmias, and provide life saving defibrillation treatment to a patient suffering a treatable form of cardiac arrhythmia such as Ventricular Fibrillation (VF) or Ventricular Tachycardia (VT). Applicants have appreciated that such a medical monitoring and treatment device can be configured to perform a variety of different types of cardiac pacing to treat a wide variety of different cardiac arrhythmias, such as bradycardia, tachycardia, an irregular cardiac rhythm, or asystole. Applicants have further appreciated that, in other embodiments, a medical monitoring and treatment device can be configured to perform pacing to treat pulseless electrical activity.
In accordance with an aspect of the present invention, the device can be configured to pace the heart of the patient at a fixed energy level and pulse rate, to pace the heart of the patient on demand with a fixed energy level and an adjustable rate responsive to the detected intrinsic activity level of the patient's heart, or to pace the heart of the patient using capture management with an adjustable energy level and rate responsive to the detected intrinsic activity level of the patient's heart and the detected response of the patient's heart. The various types of pacing may be applied to the patient externally by one or more of the therapy electrodes114a,114b(FIG.1). Various types of pacing that can be performed by a medical monitoring and treatment device, such as the LifeVest® wearable cardioverter defibrillator, can include asynchronous pacing at a fixed rate and energy, pacing on demand at a variable rate and fixed energy, and capture management pacing with an adjustable rate and adjustable energy level. In some embodiments, the medical monitoring and treatment device is configured to periodically assess the level of discomfort of the patient during pacing operation. In these embodiments, responsive to determining that the patient's discomfort level exceeds a threshold, the device attempts to adjust the attributes of the pacing activity to lessen the discomfort experienced by the patient. In one embodiment, the medical monitoring and treatment device provides a user interface through which the device receives information descriptive of the discomfort level experienced by a patient. Should this information indicate that the level of discomfort has transgressed a threshold level, the device adjusts characteristics of the pacing operation in an attempt to decrease the level of discomfort. In another embodiment, the medical monitoring and treatment device assesses the level of discomfort of the patient by monitoring and recording the patient's movement before, during, and after administration of a pacing pulse. The device may monitor the patient's movement using a variety of instrumentation including, for example, one or more accelerometers, audio sensors, etc. To assess the level of discomfort experienced by the patient during pacing pulses, the device may analyze the recorded history of the patient's movement and identify correlations between changes in the patient's movement and the pacing pulse. Strong correlations between pacing pulses and sudden patient movement, which may be representative of a flinch, and strong correlations between pacing pulses and a sudden stoppage of movement, may indicate that a patient is experiencing discomfort. Correlations having a value that transgresses a threshold value may be deemed to indicate discomfort and may cause the device to adjust the characteristics of a pacing pulse. In other embodiments, the device adjusts the characteristics of the pacing operation to lessen the discomfort level of the patient. The characteristics of the pacing operation that may be adjusted include, for example, the energy level of pacing pulses, the width of the pacing pulses, and the rate of the pacing pulses. In some embodiments, the device monitors the cardiac activity of the patient during this adjustment process to ensure that the pacing operation continues to effectively manage cardiac function. In these embodiments, the device may revert the characteristics of the pacing operation to their previous settings, should the pacing operation become ineffective.
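The movement-based discomfort assessment described above, correlating pacing pulse times with sudden changes in recorded motion, can be sketched as a simple before/after comparison around each pulse. The code below is an illustrative outline of that idea; the window lengths, threshold, and data format are assumptions rather than the device's actual analysis.

```python
# Illustrative discomfort detector: compares movement energy just after each
# pacing pulse with the baseline just before it. A consistent jump (flinch)
# or drop (sudden stillness) suggests discomfort. All parameters are assumed.

def mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

def discomfort_suspected(accel, pulse_idx, win=5, threshold=2.0):
    """accel: movement-magnitude samples; pulse_idx: sample indices of pulses.
    Returns True when post-pulse movement deviates from the pre-pulse
    baseline by the threshold ratio, averaged across pulses."""
    ratios = []
    for i in pulse_idx:
        before = mean(accel[max(0, i - win):i])
        after = mean(accel[i:i + win])
        if before > 0:
            ratios.append(after / before)
    if not ratios:
        return False
    avg = mean(ratios)
    return avg > threshold or avg < 1.0 / threshold  # flinch or stoppage

if __name__ == "__main__":
    quiet = [1.0] * 30
    flinching = quiet[:]
    for i in (10, 20):                 # movement spikes right after each pulse
        for j in range(i, i + 5):
            flinching[j] = 4.0
    print(discomfort_suspected(quiet, [10, 20]))      # False
    print(discomfort_suspected(flinching, [10, 20]))  # True
```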
1. Fixed Rate and Energy Pacing In accordance with an aspect of the present invention, a medical monitoring and treatment device, such as the LifeVest® wearable cardioverter defibrillator, can be configured to pace the heart of a patient at a fixed rate and fixed energy in response to various types of cardiac arrhythmias. Examples of these cardiac arrhythmias include bradyarrhythmia, a lack of sensed cardiac activity (spontaneous or post shock asystole) and pulseless electrical activity. In some cases, these cardiac arrhythmias may occur before or after one or more defibrillation shocks. For example, the device may be configured to provide pulses at a fixed energy level, a fixed pulse width, and a fixed frequency in response to detection of any of the above-noted events by the ECG sensing electrodes112. The energy level of the pacing pulses may be set to a fixed value by applying a desired current waveform for a determined duration of time by one or more of the therapy electrodes114a,114b. The maximum current level of the current waveform may be set to a value between approximately 0 mAmps and 200 mAmps, the pulse width may be set to a fixed value between approximately 0.05 ms and 2 ms, and the frequency of the pulses may be set to a fixed value between approximately 30 pulses per minute (PPM) and approximately 200 PPM. In accordance with one embodiment, a 40 ms square wave pulse is used. Exemplary pacing current waveforms, including a 40 ms constant current pulse, a 5 ms constant current pulse, and a variable current pulse, are shown inFIG.3. During pacing operation of the medical monitoring and treatment device, the device may periodically pause for a period of time to evaluate the patient via the ECG sensing electrodes to determine whether a normal sinus rhythm has returned. Where the device detects a normal sinus rhythm, the device may discontinue the application of pacing pulses and simply continue monitoring the patient's physiological signals, such as the patient's ECG, temperature, pulse oxygen level, etc. During an initial fitting of the medical monitoring and treatment device, the level of current, the pulse width, and the frequency of the pulses may be set to an appropriate level based on the input of a medical professional (such as the patient's cardiologist) and the physiological condition of the patient (e.g., based on the patient's normal resting heart rate, the patient's thoracic impedance, etc.). Alternatively, the level of current, the pulse width, and the frequency of the pulses may simply be set to an appropriate value based on typical impedance values for an adult or child, and typical resting heart rates for an adult or child. It should be appreciated that because pacing at a fixed rate may interfere with the patient's own intrinsic heart rate, the device can be configured to perform such fixed rate and energy pacing only in the event of a life-threatening bradyarrhythmia, a lack of any detected cardiac activity following shock, or in response to pulseless electrical activity following shock.
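The fixed rate and energy mode reduces to a simple loop: emit identical pulses at a set interval, periodically pausing to look for a returning sinus rhythm. The sketch below illustrates that control flow; the pause cadence, the pulse parameters, and the sinus_rhythm_detected callable are hypothetical stand-ins for the device's actual detection logic.

```python
# Illustrative fixed rate/energy pacing loop. Pulse parameters and the
# rhythm-check cadence are assumed; rhythm detection is supplied by the caller.

def fixed_rate_pacing(sinus_rhythm_detected, rate_ppm=70, current_mA=100,
                      pulse_ms=40, pulses_per_check=10, max_pulses=200):
    interval_s = 60.0 / rate_ppm       # fixed pulse-to-pulse interval
    delivered = 0
    while delivered < max_pulses:
        for _ in range(pulses_per_check):
            print(f"pulse: {current_mA} mA, {pulse_ms} ms, next in {interval_s:.2f} s")
            delivered += 1
        if sinus_rhythm_detected():    # periodic pause to re-evaluate the ECG
            print("normal sinus rhythm detected - pacing discontinued")
            break
    return delivered

if __name__ == "__main__":
    checks = iter([False, True])       # rhythm returns on the second check
    fixed_rate_pacing(lambda: next(checks), max_pulses=50)
```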
2. Demand (Adjustable Rate) Pacing In accordance with an aspect of the present invention, a medical monitoring and treatment device, such as the LifeVest® wearable cardioverter defibrillator, can also be configured to pace the heart of a patient at a variable rate and a fixed energy in response to various types of cardiac arrhythmias, including a bradyarrhythmia (i.e., an excessively slow heart rate), tachycardia (i.e., an excessively fast heart rate), an erratic heart rate with no discernible regular sinus rhythm, a lack of sensed cardiac activity (asystole), and pulseless electrical activity. Some of these cardiac arrhythmias may occur following one or more defibrillation shocks. As known to those skilled in the art, pacing at a fixed rate and energy may not be appropriate to the particular type of cardiac arrhythmia of the patient, and even where the rate and energy level is appropriate, pacing at a fixed rate can result in competition between the rate at which the pacing pulses are being applied and the intrinsic rhythm of the patient's heart. For example, pacing at a fixed rate may result in the application of a pacing pulse during the relative refractory period of the normal cardiac cycle (a type of R wave on a T wave effect) that could promote ventricular tachycardia or ventricular fibrillation. To overcome some of the disadvantages of fixed rate and energy pacing, the medical monitoring and treatment device can be configured to perform demand pacing, wherein the rate of the pacing pulses may be varied dependent on the physiological state of the patient. For example, during demand pacing, the device can deliver a pacing pulse only when needed by the patient. In general, during the demand mode of pacing, the device searches for any intrinsic cardiac activity of the patient, and if a heart beat is not detected within a designated interval, a pacing pulse is delivered and a timer is set to the designated interval. Where the designated interval expires without any detected intrinsic cardiac activity of the patient, another pacing pulse is delivered and the timer reset. Alternatively, where an intrinsic heart beat of the patient is detected within the designated interval, the device resets the timer and continues to search for intrinsic cardiac activity.
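The demand-mode behavior just described, where a pulse is delivered only if no intrinsic beat arrives within the designated interval and every sensed beat resets the timer, is captured by the short sketch below. The input format and function name are assumptions; the logic follows the description in the text.

```python
# Illustrative demand-pacing timer: a pacing pulse is issued only when no
# intrinsic beat is sensed within the designated interval; any sensed beat
# resets the timer. Input format and names are assumed.

def demand_pacing(beat_times_s, interval_s=1.2, horizon_s=10.0):
    """beat_times_s: sorted times of sensed intrinsic beats.
    Returns the times at which pacing pulses would be delivered."""
    pulses, deadline = [], interval_s
    beats = iter(beat_times_s + [float("inf")])
    next_beat = next(beats)
    while deadline <= horizon_s:
        if next_beat <= deadline:               # intrinsic beat arrived in time:
            deadline = next_beat + interval_s   # reset the timer, no pulse
            next_beat = next(beats)
        else:                                   # interval expired with no beat:
            pulses.append(deadline)             # pace, then restart the interval
            deadline += interval_s
    return pulses

if __name__ == "__main__":
    # Intrinsic beats at 1 Hz until t = 3 s, then a pause: pacing fills the gap.
    print([round(t, 2) for t in demand_pacing([1.0, 2.0, 3.0], interval_s=1.2)])
```

With the 1.2 second interval corresponding to the 50 BPM hysteresis rate discussed below, pacing begins only once the simulated beats stop, matching the on-demand behavior described in the text.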
FIG.4helps to illustrate some of the aspects of demand pacing and the manner in which demand pacing can be performed by the medical monitoring and treatment device. As illustrated inFIG.4, the device may have a variable pacing interval410corresponding to the rate at which pacing pulses are delivered to the patient in the absence of any detected intrinsic cardiac activity detected by the ECG sensing electrodes112and ECG monitoring and detection circuitry. For example, the rate at which pacing pulses are to be delivered to the patient (referred to as the "base pacing rate" herein) may be set at 60 PPM and, therefore, the corresponding base pacing interval410would be set to 1 second. The medical monitoring and treatment device may also have a hysteresis rate (not shown inFIG.4) corresponding to the detected intrinsic heart rate of the patient below which the device performs pacing. According to some embodiments, the hysteresis rate is a configurable parameter that is expressed as a percentage of the patient's intrinsic heart rate. In the above example, the hysteresis rate may correspond to 50 beats per minute (BPM). In this example, if the intrinsic heart rate of the patient fell to 50 BPM or below (e.g., more than approximately 1.2 seconds between detected beats), the device would generate and apply a pacing pulse to the patient. During application of a pacing pulse to the body of a patient and a short time thereafter, the medical monitoring and treatment device may intentionally blank out a portion of the ECG signals being received by the ECG monitoring and detection circuitry to prevent this circuitry (e.g., amplifiers, A/D converters, etc.) from being overwhelmed (e.g., saturated) by the pacing pulse. This may be performed in hardware, software, or a combination of both. This period of time, referred to herein as "the blanking interval"420, may vary (e.g., between approximately 30 ms and 200 ms), but is typically between approximately 40 ms and 80 ms in duration. In addition to the blanking interval420, the medical monitoring and treatment device can have a variable refractory period430that may vary dependent upon the base pacing rate. The refractory period430corresponds to a period of time in which signals sensed by the ECG sensing electrodes112and the ECG monitoring and detection circuitry are ignored, and includes the blanking interval. The refractory period430allows any generated QRS complexes or T waves induced in the patient by virtue of the pacing pulse to be ignored, and not interpreted as intrinsic cardiac activity of the patient. For example, where the base pacing rate is set to below 80 PPM, the refractory period might correspond to 340 ms, and where the base pacing rate is set above 90 PPM, the refractory period might correspond to 240 ms. For typical applications, the refractory period is generally between about 150 ms and 500 ms. In accordance with an aspect of the present invention, the sensitivity of the ECG monitoring and detection that is performed by the medical monitoring and treatment device may also be varied to adjust the degree by which the ECG sensing electrodes and associated ECG monitoring and detection circuitry can detect the patient's intrinsic cardiac activity. For example, where the amplitude of certain discernible portions (e.g., an R-wave) of a patient's intrinsic ECG signal is below that typically encountered, the voltage threshold over which this discernible portion can be detected as belonging to an ECG signal (and not attributed to noise or other factors) may be lowered, for example from 2.5 mV to 1.5 mV, to better detect the patient's intrinsic cardiac activity. For instance, during an initial fitting of the medical monitoring and treatment device, the sensitivity threshold of the device may be reduced to a minimal value (e.g., 0.4 mV) and the patient's intrinsic ECG signals may be monitored. The sensitivity threshold may then be incrementally increased (thereby decreasing the sensitivity of the device) and the patient's intrinsic ECG signals monitored until these ECG signals are no longer sensed. The sensitivity threshold may then be incrementally decreased (thereby increasing the sensitivity of the device) until the patient's intrinsic ECG signals are again sensed, and the sensitivity threshold of the device may be set to approximately half this value.
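The fitting procedure described above (start at a minimal threshold, raise it until the intrinsic ECG is no longer sensed, lower it until sensing returns, then set the working threshold to approximately half that value) is a small search loop. The sketch below mirrors those steps; the step size and the senses_ecg callable are assumed stand-ins for the fitting instrumentation.

```python
# Illustrative sensitivity-threshold fitting, mirroring the described steps:
# raise the threshold until sensing is lost, lower it until sensing returns,
# then use roughly half of that value. Step size and probe are assumed.

def fit_sensitivity(senses_ecg, start_mV=0.4, step_mV=0.1, limit_mV=5.0):
    """senses_ecg(threshold_mV) -> True if intrinsic ECG is still sensed."""
    threshold = start_mV
    while threshold < limit_mV and senses_ecg(threshold):
        threshold += step_mV          # increase until signals are lost
    while threshold > start_mV and not senses_ecg(threshold):
        threshold -= step_mV          # decrease until sensing returns
    return threshold / 2.0            # final setting: approximately half

if __name__ == "__main__":
    R_WAVE_MV = 2.3                   # pretend amplitude of the patient's R-wave
    setting = fit_sensitivity(lambda thr: thr <= R_WAVE_MV)
    print(f"sensitivity threshold set to {setting:.2f} mV")
```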
The maximum current level of the current waveform may be set to a value between approximately 10 mAmps and 200 mAmps, the pulse width may be set to a fixed value between approximately 20 ms and 40 ms, and the base rate of the pulses may be set to a fixed value between approximately 30 pulses per minute (PPM) and approximately 200 PPM, although the actual rate of the pacing pulses can vary based upon the intrinsic cardiac activity of the patient. In accordance with one embodiment, a 40 ms constant current pulse is used, and the current level is set to a fixed value based upon the input of a medical professional, such as the patient's cardiologist, and the physiological condition of the patient. The base pacing rate and the hysteresis rate may also be set based upon the input of the patient's cardiologist (or other medical professional) and the physiological condition of the patient, and the blanking interval and refractory period set to an appropriate time interval based upon the base pacing rate and/or the hysteresis rate.

Although the base pacing rate may be set to a particular value based on the physiological condition of the patient and input from a medical professional, the medical monitoring and treatment device can include a number of different pacing routines to respond to different cardiac arrhythmias, such as bradycardia, tachycardia, an erratic heart rate with no discernible regular sinus rhythm, asystole, or pulseless electrical activity. These pacing routines may be implemented using a variety of hardware and software components, and embodiments are not limited to a particular configuration of hardware or software. For instance, the pacing routines may be implemented using an application-specific integrated circuit (ASIC) tailored to perform the functions described herein.

A. Bradycardia

As discussed above, where bradycardia is detected and the intrinsic cardiac rate of the patient is below that of the hysteresis rate, the medical monitoring and treatment device will pace the patient at the pre-set base pacing rate. During this time, the device will continue to monitor the patient's intrinsic heart rate and will withhold pacing pulses in the event that an intrinsic heartbeat is detected within the designated interval corresponding to the hysteresis rate. This type of on-demand pacing is frequently termed "maintenance pacing."

B. Tachycardia

For responding to tachycardia, the medical monitoring and treatment device may additionally include another pacing rate, termed an "anti-tachyarrhythmic pacing rate" herein, above which the device will identify that the patient is suffering from tachycardia, and will pace the patient in a manner to bring the patient's intrinsic heart rate back toward the base pacing rate. For example, the device may employ a technique known as overdrive pacing wherein a series of pacing pulses (e.g., between about 5 and 10 pacing pulses) are delivered to the patient at a frequency above the intrinsic rate of the patient in an effort to gain control of the patient's heart rate. Once it is determined that the device is in control of the patient's heart rate, the rate (i.e., the frequency) of the pulses may be decremented, for example by lengthening the pacing interval by about 10 ms, and another series of pacing pulses delivered. This delivery of pulses and the decrease in frequency may continue until the detected intrinsic cardiac rate of the patient is below the anti-tachyarrhythmic pacing rate, or at the base pacing rate. This type of pacing is frequently termed "overdrive pacing" or "fast pacing."
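By way of illustration only, the overdrive routine of section 2B may be sketched as follows; the callbacks, the starting offset of 10 PPM above the intrinsic rate, and the train length are illustrative assumptions:

    def overdrive_pacing(intrinsic_rate_ppm, anti_tachy_rate_ppm,
                         deliver_pulse_train, control_gained,
                         pulses_per_train=8, interval_step_s=0.010):
        # Begin pacing slightly faster than the detected intrinsic rate.
        interval_s = 60.0 / (intrinsic_rate_ppm + 10)
        while 60.0 / interval_s > anti_tachy_rate_ppm:
            deliver_pulse_train(pulses_per_train, interval_s)
            if control_gained():
                # The heart is following the pulse train: slow the next
                # series by lengthening the pacing interval by ~10 ms.
                interval_s += interval_step_s
            # Otherwise another series is delivered at the same frequency
            # in a further effort to gain control of the heart rate.
        # The detected rate has been walked down below the
        # anti-tachyarrhythmic pacing rate; demand pacing resumes.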
C. Erratic Heart Rate

For responding to an erratic heart rate, the medical monitoring and treatment device may perform a type of pacing that is similar to a combination of the maintenance pacing and overdrive pacing discussed above. For example, where the medical monitoring and treatment device detects an erratic heart rate with no discernible sinus rhythm, the device may deliver a series of pacing pulses (e.g., between about 5 and 10 pacing pulses) to the patient at a particular frequency. This frequency may be one that is above a lower frequency of a series of detected intrinsic beats of the patient's heart and below an upper frequency of the detected intrinsic beats of the patient's heart. After delivering the series of pulses, the device may monitor the patient's heart to determine if it has synchronized to the rate of the series of delivered pulses. Where the intrinsic rate of the patient's heart is still erratic, the device may increase the frequency of the series of pulses and deliver another series. This may continue until it is established that the patient's heart is now in a more regular state. Upon determining that the patient's heart is now in a more regular state, the device may perform maintenance pacing if it is determined that the patient's intrinsic heart rate is too low, as discussed in section 2A above, or perform pacing at a decremented rate in the manner discussed in section 2B above, if such is warranted.

D. Asystole or Pulseless Electrical Activity

For responding to asystole or a detected condition of pulseless electrical activity, the medical monitoring and treatment device may perform maintenance pacing similar to that described in section 2A above. This type of pacing would be performed after a series of one or more defibrillating shocks that attempt to restore a normal sinus rhythm to the heart of the patient. In each of the above types of pacing, the medical monitoring and treatment device may be configured to perform a particular type of pacing only after a programmable delay after such cardiac arrhythmias are detected, or after a programmable period of time after one or more defibrillating shocks are delivered.

3. Capture Management

In accordance with an aspect of the present invention, a medical monitoring and treatment device, such as the LifeVest® wearable cardioverter defibrillator, can also be configured to pace the heart of a patient using capture management with an adjustable energy level and an adjustable rate in response to various types of cardiac arrhythmias. The various types of cardiac arrhythmias can include a bradycardia, tachycardia, an erratic heart rate with no discernible regular sinus rhythm, a lack of sensed cardiac activity (asystole) following or independent of one or more defibrillation shocks, a life-threatening bradyarrhythmia following one or more defibrillation shocks, or pulseless electrical activity following one or more defibrillation shocks. As known to those skilled in the art, capture management refers to a type of pacing in which the energy level of pacing pulses and the rate of delivery of those pacing pulses may be varied based upon the detected intrinsic activity level of the patient's heart and the detected response of the patient's heart to those pacing pulses. In cardiac pacing, the term "capture" is used to refer to the response of a patient's heart to a pulse of energy which results in ventricular depolarization.
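Returning briefly to the erratic-rate routine of section 2C above, a minimal sketch follows; the callbacks and the 5 PPM step are illustrative assumptions only:

    def pace_erratic_rhythm(detected_rates_ppm, deliver_pulse_train,
                            rhythm_is_regular, pulses_per_train=8,
                            rate_step_ppm=5):
        low, high = min(detected_rates_ppm), max(detected_rates_ppm)
        # Begin between the lower and upper frequencies of the detected
        # intrinsic beats.
        rate_ppm = (low + high) / 2
        while True:
            deliver_pulse_train(pulses_per_train, 60.0 / rate_ppm)
            if rhythm_is_regular():
                break                      # the heart has synchronized
            rate_ppm += rate_step_ppm      # still erratic: raise the rate
        # Once regular, fall back to maintenance pacing (section 2A) or a
        # decremented-rate routine (section 2B), as warranted.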
In cardiac pacing, it is desirable to limit the amount of energy in each pulse to the minimal amount required for capture, thereby minimizing the amount of discomfort associated with external pacing. In general, the manner in which the medical monitoring and treatment device can perform capture management pacing is similar to that of the demand pacing described above, in that it may adjust the rate at which pacing pulses are delivered based upon the detected intrinsic rate of cardiac activity of the patient. The sensitivity of the device to the patient's ECG may be adjusted in a similar manner to that described above with respect to demand pacing. Further, capture management pacing may be used to treat the same types of cardiac arrhythmias as the demand pacing described above, such as bradycardia, tachycardia, an erratic heart rate with no discernible sinus rhythm, asystole, or pulseless electrical activity.

However, in contrast to a device that performs demand pacing, a device that is configured to perform capture management pacing will typically have a refractory period 430 (see FIG. 4) that is significantly shorter than that of a device configured to perform demand pacing. Indeed, when using capture management pacing, there may be no refractory period 430 at all, but only a blanking interval 420. Alternatively, where there is a refractory period 430, the refractory period 430 may be similar in duration to the blanking interval 420. As would be appreciated by those skilled in the art, this is because during capture management pacing, the response of the patient's heart is monitored by the ECG sensing electrodes 112 and ECG monitoring and detection circuitry to detect whether the delivered pulse of energy resulted in capture. For this reason, while the ECG monitoring and detection circuitry may be switched off or effectively disabled during the delivery of energy pulses, it is important that it be switched back on or otherwise enabled shortly thereafter to detect whether the delivered pulse resulted in capture. In one embodiment in which a 40 ms constant current pulse is used, the blanking interval 420 may be set to approximately 45 ms to avoid saturation of the ECG monitoring and detection circuitry while ensuring that any intrinsic electrical activity of the patient's heart that was induced by the pacing pulse is detected.

During capture management pacing, the medical monitoring and treatment device can deliver a pulse of energy at a determined energy level and monitor the patient's response to determine if capture resulted. Where it is determined that the delivered pulse did not result in capture, the energy level of the next pulse may be increased. For example, where the device is a medical monitoring and treatment device that is external to the patient, the initial setting may be configured to provide a 40 ms rectilinear and constant current pulse of energy at a current of 40 mAmps, and to increase the amount of current in increments of 2 mAmps until capture results. The next pacing pulse may be delivered at an increased current relative to the first pacing pulse and at a desired rate relative to the first pacing pulse in the absence of any detected intrinsic cardiac activity of the patient. Where the next pacing pulse does not result in capture, the energy may be increased until capture is detected. The medical monitoring and treatment device may then continue pacing at this energy level and at a desired rate in the absence of any detected intrinsic cardiac activity of the patient.
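By way of illustration only, the up-titration just described may be sketched as follows; the callbacks are hypothetical, and the 200 mAmp ceiling is borrowed from the maximum current level noted earlier:

    def titrate_up_to_capture(deliver_pulse_ma, capture_detected,
                              start_ma=40, step_ma=2, limit_ma=200):
        """deliver_pulse_ma(i) delivers one 40 ms constant current pulse
        at i mAmps; capture_detected() reports whether the monitored
        response shows ventricular depolarization."""
        current_ma = start_ma
        while current_ma <= limit_ma:
            deliver_pulse_ma(current_ma)
            # The response is observed once the ~45 ms blanking interval
            # has elapsed.
            if capture_detected():
                return current_ma          # pace onward at this level
            current_ma += step_ma          # no capture: raise the energy
        raise RuntimeError("capture not achieved within the current limit")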
While pacing at this level, the device monitors the patient's cardiac response to the pacing pulses, and may increment the energy level further should it be determined over one or more subsequent pulses that capture did not result. In an alternative configuration, the medical monitoring and treatment device may apply a series of pulses at an initial energy level and rate, and monitor the patient's response to determine if capture resulted. Where capture did not result, or where capture resulted in response to some of the pulses, but not all, the device may increase the energy of a next series of pulses until capture results for each pulse.

Alternatively, the device may be configured to identify a minimum amount of energy that results in capture during capture management pacing. Where it is determined that the delivered pulse did result in capture, the energy level of the next pulse may be decreased. For example, where the device is a medical monitoring and treatment device that is external to the patient, the initial setting may be configured to provide a 40 ms constant current pulse of energy at a current of 70 mAmps. Where it is determined that the delivered pulse resulted in capture, subsequent pacing pulses may be delivered at currents decreased in increments of 5 mAmps, and at a desired rate relative to the first pacing pulse in the absence of any detected intrinsic cardiac activity of the patient, until capture is no longer achieved. Where the next pacing pulse does not result in capture, the energy setting may be increased to the last current known to produce a pulse resulting in capture, and a pulse delivered at the higher energy setting, thus delivering the minimal amount of energy required for capture. The medical monitoring and treatment device may then continue pacing at this energy level and at a desired rate in the absence of any detected intrinsic cardiac activity of the patient. During this period of time, a similar routine may be re-performed at predetermined intervals to ensure that the minimum amount of energy is being delivered for capture. In addition, during this period of time, the device monitors the patient's cardiac response to the pacing pulses, and may increase the energy level should it be determined over one or more subsequent pulses that capture did not result.

It should be appreciated that in the various embodiments described above, an external medical monitoring and treatment device has been described which may not only provide life-saving defibrillation or cardioversion therapy, but may also provide a wide variety of different pacing regimens. Because the medical monitoring and treatment device can monitor a patient's intrinsic cardiac activity, the patient's thoracic impedance, and other physiological parameters of the patient, the device may be configured to recommend various settings to a medical professional for review and approval. The various settings that may be recommended may include a recommended base pacing rate, a recommended hysteresis rate, a recommended anti-tachyarrhythmic pacing rate, a recommended energy level (or initial energy level if capture management is used), a recommended blanking interval and/or refractory period, and a recommended sensitivity threshold. In the case of a medical monitoring and treatment device such as the LifeVest® cardioverter defibrillator, this initial recommendation may be performed when the patient is being fitted for and trained on the use of the device.
Although the ability to recommend such settings to a medical professional for their review and approval is particularly well suited to a medical monitoring and treatment device, such as the LifeVest® cardioverter defibrillator, such functionality could also be implemented in an Automated External Defibrillator (AED) or an Advanced Life Support (ALS) type of defibrillator, such as the M Series defibrillator, R Series ALS defibrillator, R Series Plus defibrillator, or E Series defibrillator manufactured by the ZOLL Medical Corporation of Chelmsford, MA. It should be appreciated that monitoring the patient's intrinsic cardiac activity and other physiological parameters and making recommendations to a trained medical professional for their review and approval (or possible modification) could reduce the amount of time that is spent manually configuring such devices prior to use on the patient.

Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the scope of the invention. Accordingly, the foregoing description and drawings are by way of example only. | 45,960 |
11857328 | DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 schematically illustrates an active electrode design according to an embodiment of the invention. A bio-potential signal Vin(t) is sensed by a capacitive electrode (not shown) and fed to an input of an integrated amplifier 10. Impedance, Zs, denotes the skin-electrode impedance. On the input of the integrated amplifier 10, the bio-potential signal Vin(t) is modulated with a modulation signal (chopper clock), m(t), in a first mixer 11 in front of the integrated amplifier 10. The integrated amplifier 10 has a gain Av equal to one, whereby the integrated amplifier 10 acts as a buffer, and by applying the same chopper modulation signal, m(t), in a second mixer 12 on the output of the integrated amplifier 10, too, the integrated amplifier 10 and the two mixers 11 and 12 provide a buffered path outputting an output signal Vout(t). The modulation signal m(t) employed in the embodiment shown in FIG. 1 is illustrated as a pulse-width modulated signal having a duty-cycle of 50%, and assumes a unity amplitude of +1 and −1. The chopper frequency, fchop, is selected to ensure that flicker noise in the low frequency range will be substantially eliminated. Impedance, Zin, denotes the finite input impedance. The choppered buffer output Vout(t) is used for driving the active shield placed near said electrode and being electrically insulated from said electrode.

With reference to FIG. 2, the noise spectrum composed of white and pink noise for a semiconductor based amplifier is illustrated. The corner frequency fc characterizes the border between the region dominated by low-frequency flicker noise (pink noise) and the region dominated by thermal noise, which dominates as the higher-frequency "flat-band" noise (white noise). Flicker noise occurs in most electronic devices, and provides a limitation on the signal level a circuit may handle. This is illustrated in FIG. 2, where log10(f) is depicted on the x-axis, and the voltage squared is depicted on the y-axis. In the current embodiment, the integrated amplifier is realized in a MOSFET transistor layout, and a corner frequency in the level of approximately 200 Hz has been observed. The corner frequency, fcorner, is the transition between the regions dominated by the low-frequency flicker noise and the higher-frequency "flat-band" noise, respectively. Therefore the chopper frequency, fchop, has to be chosen well above the corner frequency, so that the frequency shift introduced prior to the integrated amplifier is sufficient to escape the flicker noise region of the integrated amplifier. The modulation frequency providing the frequency shift is greater than the corner frequency, and according to the illustrated embodiment the chopper frequency fchop has been chosen to be in the range from 200 Hz to 2 kHz. Preferably, the chopper frequency fchop is in the range from 400 Hz to 1 kHz. When the chopper frequency fchop is higher, the power consumption will be adversely affected.

For an ear-EEG application the sense electrode will pick up a bio-potential signal Vin(t) having an amplitude of approximately 1 μV. The bio-potential signal Vin(t) will in a first use situation have a spectral distribution in a basic frequency range between 0 and 35 Hz—which is schematically illustrated in FIG. 6a. Once modulated with the chopper signal m(t) in the mixer 11, the bio-potential signal Vin(t) will be shifted in frequency so it appears around the chopper frequency at e.g. 1 kHz, as is illustrated in FIG. 6b.
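By way of numerical illustration only, the chopper round trip of FIGS. 6a-6d (the up-modulation just described, and the demodulation described in the next paragraph) may be sketched as follows; the tone frequency, sample rate, and the omission of amplifier noise are simplifying assumptions:

    import numpy as np

    fs, fchop, fsig = 100_000, 1_000, 10          # sample/chopper/signal rates
    t = np.arange(0, 1.0, 1 / fs)
    v_in = 1e-6 * np.sin(2 * np.pi * fsig * t)    # ~1 uV bio-potential signal
    m = np.where(np.sin(2 * np.pi * fchop * t) >= 0, 1.0, -1.0)  # m(t) = +/-1

    v_mod = v_in * m       # FIG. 6b: signal shifted to around fchop
    v_buf = 1.0 * v_mod    # unity-gain buffer (flicker noise omitted here)
    v_out = v_buf * m      # FIG. 6d: signal restored to the base band

    # Because m(t)*m(t) = 1, v_out equals v_in; flicker noise added by the
    # buffer between the two mixers would instead end up around fchop,
    # where later low-pass filtering removes it.
    assert np.allclose(v_out, v_in)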
The integrated amplifier will introduce flicker noise in the spectrum up to the corner frequency, e.g. at 200 Hz, while the frequency range above the corner frequency—including the frequency-shifted bio-potential signal—will only be affected by white, thermal noise. This is illustrated in FIG. 6c. In the mixer 12, the output from the integrated amplifier 10 is modulated with the chopper signal m(t), whereby the bio-potential signal is brought back to the basic frequency range again, while the flicker noise of the amplifier is positioned around the chopper frequency. This is illustrated in FIG. 6d. An appropriate low-pass filtering at a later signal processing stage will remove the flicker-originated noise now present in the frequency range around the chopper frequency.

An active electrode design according to the invention may be designed with low input-referred noise, high input impedance and low bias current, low input-referred offset, low output impedance, high CMRR and PSRR, and low power consumption. The actual implementation of an active electrode may be optimized for different applications, for example an implantable neural probe array or fabric-based use (dry-contact electrode).

FIG. 3 shows schematically a bio-potential monitoring system employing an active electrode design according to an embodiment of the invention. A plurality of electrodes is arranged in a mesh 30 positioned on the scalp 35 of a user. In another embodiment, the electrodes may be provided on an earplug, and data may be collected from the ear canal and processed in a battery-driven data processor placed behind the ear. Electrodes 31 and 32 each include a probe 34, being a capacitive sense electrode, and an active shield electrode placed near but spaced apart from the capacitive sense electrode. The input signal picked up by the probes 34 is led to respective amplifiers 10, each preferably arranged as a unity gain amplifier. The closed-loop unit-gain amplifiers 10 are connected between the sense electrode and the active shield electrode. With this arrangement, the parasitic capacitance of the sense electrode is effectively reduced, thereby increasing sensitivity. The output from the closed-loop unit-gain amplifier 10 with chopper modulation is, via a shielded cable 13, e.g. a coax cable, fed to a Variable-Gain Amplifier 14 varying the gain based on a control voltage, and further to an Analog-to-Digital Converter 15 converting the amplified VBio signal into a digital representation for further processing. The Variable-Gain Amplifier 14 is a differential amplifier. Shielding is preferred, but not strictly necessary, between the front-end integrated circuit containing the closed-loop unit-gain amplifier 10 and the back-end integrated circuit containing the Variable-Gain Amplifier 14 and the Analog-to-Digital Converter 15.

In the following there is provided a technical description of the active electrode using a choppered buffer according to the invention. FIG. 4 illustrates that there exist several major parasitic-capacitance contributors in an active electrode concept. A shield 44 is placed near (and substantially in parallel with) said electrode 43, and the shield 44 is electrically insulated from said electrode 43. There is an electric insulator (not shown) between the electrode 43 and the shield 44. This arrangement will cause a capacitive coupling between the electrode 43 and the shield 44. The electrode 43 is connected to the integrated circuit via input pads 43a, and a capacitive parasitic coupling therebetween may be observed.
The shield 44 is via input pads 48a connected to a shield 48 enclosing the integrated circuit, and also here there will be a capacitive parasitic coupling. With shielding by a buffer, electrode parasitic capacitances 40 at the sensing electrode and parasitic capacitances 41 caused by capacitive couplings between the input pads 48a and 43a can be compensated. The active electrode concept shown in FIG. 4 illustrates that the amplifier is implemented as MOSFET transistors on a substrate (the integrated circuit). The amplifier is connected to a power supply 46 and ground 45 via respective contact pads 46a and 45a, and has an output terminal 47. The output terminal 47 of the integrated amplifier is connected to the shield 48, while the contact pads 45a and 46a are electrically isolated therefrom. The shield 48 is connected to the output terminal 47 of the integrated amplifier to actively drive the electrical potential of the shield 48, thereby providing an active shielding of the electrode 43.

Some capacitances are difficult to compensate, because shielding to their bottom node cannot be applied. This is the case for the parasitic capacitance 42a between the input pad 49a and the substrate, the parasitic capacitance between the transistor gate and the substrate—the gate-to-substrate capacitance 42b, the parasitic capacitance referred between gate and source of the transistor—the gate-to-source capacitance 42c, and the parasitic capacitance between gate and drain of the transistor—the gate-to-drain capacitance 42d. In these circumstances, the objective is to design the circuitry such that the values of these capacitances will be as small as possible.

As shown in FIG. 5, a choppered buffer according to the invention is implemented based on a closed-loop unit-gain amplifier 10 according to the illustrated embodiment. An input transistor pair, M1 and M2, of the closed-loop unit-gain amplifier 10 is minimized in size in order to reduce the input parasitic capacitances and thereby obtain high impedance. Flicker noise of the input transistors M1 and M2 is a non-dominant noise source due to the employed chopper modulation. A constant current is maintained through the input transistor pair, M1 and M2, through the use of the current source 50 formed by a transistor MN and a voltage source Vbattery. The voltage source Vbattery may according to one embodiment be a coin-cell battery of the type used for hearing aids, having a nominal supply voltage of approximately 1.2 V. A bias, Vbp, is applied to the gate of the MOSFET transistor MN, controlling the current from the voltage source Vbattery fed to the sources of the input transistor pair, M1 and M2. By maintaining a constant current through the input transistor pair, M1 and M2, and applying the negative feedback of the unity gain configuration, the gate-to-source capacitance 42c (FIG. 4) and the gate-to-substrate capacitance 42b (FIG. 4) at the sensor input are minimized.

The illustrated embodiment for implementing the closed-loop unit-gain amplifier 10 with chopper modulation according to the invention employs three chopper switches, CHOP1, CHOP2, and CHOP3. The sizes of the chopper switches CHOP1, CHOP2, and CHOP3 are optimized for speed and noise, and in this topology, the chopper switches CHOP2 and CHOP3 are arranged inside of the closed-loop unit-gain amplifier 10. Hereby, by using the inherent differential nodes, no extra differential nodes will be required. Furthermore, this will not limit the bandwidth of the closed-loop unit-gain amplifier 10 with chopper modulation.
The input chopper switch CHOP1 receives the sensed bio-potential signal Vin as a first input signal, and the output signal Vout from the closed-loop unit-gain amplifier 10, via a feedback branch 53, as a second input signal. The chopper switch CHOP1 operates at a 1 kHz chopper frequency. The chopper signal alternates between +1 and −1 at a 50% duty cycle. The bio-potential signal Vin has a low bandwidth (normally between 0-40 Hz), but the chopper frequency shall be above the corner frequency, fcorner (FIG. 2). Choosing the chopper frequency too high will adversely affect the power consumption of the overall electrode assembly.

The gates of the input transistor pair, M1 and M2, receive respective outputs from the input chopper switch, CHOP1. The constant current from the current source is passed through the input transistor pair, M1 and M2, via respective source terminals, and the drains of the input transistor pair, M1 and M2, are connected to respective terminals on the second chopper switch, CHOP2. The two outputs from the chopper switch CHOP2 are connected to respective source terminals of a MOSFET transistor pair, M3 and M4. The transistor pair, M3 and M4, forms a source follower (common-drain amplifier), a field effect transistor amplifier topology typically used as a voltage buffer. MOSFET transistors M5, M6, M7 and M8 form a cascoded current mirror circuit, which would be recognized by a person skilled in the art as a standard component in an operational amplifier. The cascoded current mirror circuit is a two-stage amplifier composed of a transconductance amplifier followed by a current buffer. The third chopper switch CHOP3 is arranged in between the two stages of the cascoded current mirror circuit. The cascoded current mirror circuit improves input-output isolation, as there is no direct coupling from the output to the input. Three MOSFET transistors MNC, M9 and M10 are arranged as an extra source follower. The three MOSFET transistors MNC, M9 and M10 are connected to the voltage source Vbattery and operated as a level shifter providing a lower dc bias to the transistors M3 and M4 forming the source follower.

FIG. 7 illustrates one embodiment of the chopper switches CHOP1, CHOP2, CHOP3 used in the choppered buffer topology shown in FIG. 5. The chopper switch has a pair of input terminals 80 and a pair of output terminals 81. The chopper switch is shielded by bulks by means of a shield 82, and includes four transistor switches 84, 85, 86, and 87—all controlled by a clock signal, Clk. For the transistor switches 85 and 87, the clock signal, Clk, is received via respective inverters 88 and 89, whereby the transistor switches 84 and 86 close when the clock signal is high, and the transistor switches 85 and 87 close when the clock signal is low. The inverters 88 and 89 are NOT gates implementing logical negation. Thereby, the four transistor switches 84, 85, 86, and 87 ensure that a first terminal of the pair of input terminals 80 is alternately connected to a first and a second terminal of the pair of output terminals 81. The second terminal of the pair of input terminals 80 is alternately connected to the second and the first terminal of the pair of output terminals 81. By connecting the bulks or the shield 82 to the buffer output 90, all the transistor switches 84, 85, 86, and 87 are shielded to eliminate the body effects and extra current flow through the bulk nodes. Therefore the bias current will not significantly change when the input common-mode voltage varies.
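By way of illustration only, the routing performed by the four transistor switches may be modeled behaviorally as follows (the function name is illustrative; switch resistance and charge injection are ignored):

    def chop_switch(in_a, in_b, clk_high):
        # Switches 84 and 86 close when the clock is high (straight path);
        # switches 85 and 87, driven through inverters 88 and 89, close
        # when the clock is low (crossed path).
        return (in_a, in_b) if clk_high else (in_b, in_a)

    # Applying the switch twice in the same clock phase restores the
    # original pair—this is what lets the same clock both modulate and
    # demodulate: chop_switch(*chop_switch(x, y, c), c) == (x, y).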
In addition, the chopping clock is bootstrapped to keep the overdrive voltage, and thereby the "ON" resistance of the transistor switches 84, 85, 86, and 87, substantially constant. In such a way, the current noise and the thermal noise are insensitive to the input common-mode voltage. A bootstrap circuitry 83 receives the clock signal and the output buffer signal via the shield 82 and delivers on its output the chopper clock signal, Clk, used to open and close the transistor switches 84, 85, 86, and 87. The bootstrap circuitry 83 is deliberately intended to alter the input impedance.

As illustrated in FIG. 10, the noise issues may be separated into two categories: voltage-domain noise and current-domain noise. FIG. 10 shows the excess noise sources at the input of the chopper amplifier shown in FIG. 5. A person skilled in the art will understand that the problems caused by current noise are highly dependent on the value of the source impedance. Since the skin-electrode impedance in the dry-contact acquisition analog front-end is relatively large, and subject to large variations, both types of noise deserve attention.

Voltage-Domain Noise

FIG. 10 shows the excess noise sources at the input of the chopper amplifier. Rsx denotes the skin-electrode resistance. The main voltage noise contributors 60 include the input transistor pair, M1 and M2, and the final transistor pair, M7 and M8, in the cascoded current mirror circuit, due to the high voltage gain from the gates of these MOSFETs. The voltage noise may be expressed as a result of the flicker noise and the thermal noise, and the offset and flicker noise are theoretically removed from the low-frequency signal band due to the chopper modulation. The dominant component of the residual noise is the thermal noise, Vnoise, of the MOSFETs, and may be expressed as follows:

Vnoise ≈ √((1 + gm7/gm1)(8kT/gm1)·BW)  (1)

where BW denotes the bandwidth of interest, gmi the transconductance of MOSFET Mi, k is Boltzmann's constant, and T is the absolute temperature of the component. Within the bandwidth between 0.5-100 Hz, the integrated noise is approximately 0.29 μVrms when the chopper frequency is selected to be 1 kHz and the transconductance of the dominating MOSFETs has been optimized in order to minimize the thermal noise. From equation (1) it is seen that the noise voltage Vnoise is proportional to the reciprocal of the square root of the transistor transconductance. A current consumption of 8 μA and a noise of 29 nV/√Hz have been found acceptable in terms of the power and noise budget of the system, which has been indicated in FIG. 5 by operating the closed-loop unit-gain amplifier 10 through the use of the current source 50 providing 8 μA.

Current-Domain Noise

Bias current 61 gives rise to offset across the source impedance. For bio-signal sensor amplifiers, the major sources of bias current include leakage in ESD protection circuitry, gate leakage of input MOSFETs and base current of bipolar junction transistors, chopping activities, as well as PCB leakage. The dominant contributors include leakage of the ESD protection circuitry and current flow caused by periodic chopping activities. The leakage of the ESD protection circuitry is highly dependent on the ESD techniques and circuit properties. Therefore it exists in all amplifiers and is hard to avoid completely. Periodic chopping activities give rise to dynamic current flow through the chopper switches and the switched-capacitor resistance. By definition, the bias current is the average current over a relatively long time at the input node.
For CMOS chopper amplifiers, this kind of current can be the dominant bias current source over the others. It has been observed that the excess noise normally can be regarded as negligible in amplifiers with a low source impedance, for instance the 10 kΩ to 20 kΩ of wet electrodes. However, for a high source impedance, for instance the several hundred kilohms to several megohms of a dry-contact electrode, imperfections like dc offset and the corresponding output noise will be regarded as problematic.

By applying appropriate design optimization strategies, all the switches may advantageously be shielded to eliminate the body effects and extra current flow through the bulk nodes. Therefore the bias current will not significantly change when the input common-mode voltage varies. In addition, the chopping clock is bootstrapped to keep the overdrive voltage, and thereby the "on" resistance of the switches, approximately constant. In such a way, the current noise and the thermal noise are insensitive to variation in the input common-mode voltage. With the choppered buffer well optimized according to one embodiment of the invention, the bias current is quite low. The chopper switches are naturally shielded by the buffer, and there exist no significant potential differences between sources and drains, as well as bulks, in the switches. Therefore there is no current path in the chopper. The current noise for the choppered buffer according to one embodiment of the invention has been observed at a level of about 0.3 fA/√Hz. With a 1 MΩ resistor connected, the excess noise density contribution would be 0.3 nV/√Hz.

The Common-Mode Rejection Ratio (CMRR) of a differential amplifier is the rejection by the device of unwanted input signals common to both input leads, relative to the wanted difference signal. A high CMRR is required when a differential signal must be amplified in the presence of a disturbing common-mode input. The Power Supply Rejection Ratio (PSRR) is defined as the ratio of the change in supply voltage of the op-amp to the equivalent (differential) output voltage it produces. The output voltage will depend on the feedback circuit. Chopper modulation has been found to not only reduce the noise but also to contribute to the improvement of the CMRR and the PSRR. The Common-Mode Rejection (CMR) of the amplifier without chopper modulation has been observed to be −73.3 dB, and with chopper modulation the CMR is improved to −107.9 dB, an almost 35 dB enhancement in CMRR. Furthermore, the Power Supply Rejection (PSR) has been improved with chopper modulation from −48 dB to −97.3 dB, an almost 49 dB enhancement in PSRR. This has been observed with a capacitive load of 10 pF at the output node for a frequency band (wanted input signal) below 100 Hz.
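The quoted noise figures can be checked numerically from equation (1) and the current-domain figures above; the transconductance values below are assumptions chosen to reproduce the quoted densities and are not taken from this description:

    import math

    k, T = 1.380649e-23, 300.0      # Boltzmann's constant, temperature [K]
    gm1, gm7 = 42e-6, 4e-6          # assumed transconductances [S]
    bw = 100.0 - 0.5                # bandwidth of interest [Hz]

    v_density = math.sqrt((1 + gm7 / gm1) * (8 * k * T / gm1))
    v_rms = v_density * math.sqrt(bw)
    print(f"{v_density*1e9:.0f} nV/rtHz, {v_rms*1e6:.2f} uVrms")  # 29, 0.29

    # Current-domain contribution: 0.3 fA/rtHz flowing in a 1 Mohm source
    # impedance adds only 0.3 nV/rtHz, negligible against the thermal floor.
    i_density, rs = 0.3e-15, 1e6
    print(f"{i_density*rs*1e9:.1f} nV/rtHz")                      # 0.3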
Besides, a significant benefit resulting from chopper modulation is the improved CMRR and PSRR between two buffer channels, which can be quite useful to enhance the noise immunity against surrounding interference. The subsequent differential amplifier can filter out the accompanying chopper spikes and ripples at the expense of an extra amount of power. The active electrode with choppered buffer is very suitable for use in high-quality bio-recording systems.

FIG. 8 shows an embodiment of a sensor system based upon two active electrodes according to the invention. The sensor system shown includes a front-end module 84 connected to a back-end module 86 via a set of wires 80. In the illustrated embodiment, the front-end module 84 includes a pair of active electrodes 43 with choppered buffers as described with reference to FIG. 1. The choppered buffer is based upon the integrated amplifier 10 having a gain, Av, equal to one, and two mixers 11 and 12 applying the same chopper modulation signal, m(t). Furthermore, the output terminal 47 of the choppered buffer is connected to the shield 48 enclosing the integrated circuit. Preferably, the active electrodes operate in differential mode—which means that one of the electrodes acts as reference. The back-end module 86 has a choppered instrumentation amplifier based upon an integrated amplifier 82 having a gain for amplifying the bio-potential signal from the active electrodes, and two mixers 81 and 83 applying the same chopper modulation signal, n(t). The chopper modulation signal, n(t), is applied in order to avoid amplifying the flicker noise in the integrated amplifier 82. The chopper clock signal n(t) is preferably a square-wave signal that contains odd harmonics at fch, 3 fch, 5 fch, and, as most of the energy of the chopper ripple is located at the first harmonic, fch, the higher harmonics may be eliminated by applying a Chopper Spike Filter (CSF) 85 providing a low-pass or band-pass filtering effect. The Chopper Spike Filter 85 includes a sample-and-hold circuit provided by a switch and a capacitor, where the switch is driven by sampling pulses. The Chopper Spike Filter 85 removes glitches caused by the chopper switches. Two branches fed from the output of the choppered instrumentation amplifier 81-83, but with reverse polarity, have been included in order to generate a fully differential output, which is fed to a Programmable Gain Amplifier (PGA) 87 and an Analog-to-Digital Converter (ADC) 88, from where the signal is supplied to a not-shown microcontroller for processing. The choppered instrumentation amplifier 82 may in one embodiment be provided in the front-end module 84 and thereby included within the active shielding. Then the number of thin wires 80 connecting the front-end module 84 to the back-end module 86 may be reduced from four to two (shielded) wires. These wires carry supply voltage, ground, clock and signal, and may in a specific embodiment have a length of 10 mm.

FIG. 9 shows an ear EEG device 115 according to one aspect of the invention. The ear EEG device 115 can be worn inside the ear of a person to be monitored, e.g. for detecting hypoglycemia, e.g. like a per se known In-The-Canal (ITC) hearing aid. Furthermore, the device will allow healthcare personnel to remotely monitor or record EEGs for several days at a time. Healthcare personnel would then be allowed to monitor patients who have regularly recurring problems like seizures or micro-sleep. The ear EEG device 115 will not interfere with normal life, because the ear EEG device 115 has an acoustic vent 116 so the wearer will be able to hear.
After a while, the wearer forgets that he or she is wearing the ear EEG device 115. The ear EEG device 115 is on its outer surface provided with two active electrodes 117 according to the invention. Internally, the ear EEG device 115 contains an electronic module 118. The ear EEG device 115 is formed to fit into the external auditory canal 111 of the wearer, and together with the tympanic membrane 110 defines a cavity in the external auditory canal 111, and the cavity is opened by means of the acoustic vent 116 extending through the entire length of the ear EEG device 115. Preferably the ear EEG device 115 does not extend beyond the pinna 112. The electronic module 118 is shown schematically in enlarged view in the dotted box 118. The electronic module 118 includes a power supply 120 based upon a standard hearing aid battery for powering the electronics.

The two electrodes 117 provided on the surface of the ear EEG device 115 pick up a potential and deliver the data via a module 125, operating as electrode front-end and Analog-to-Digital Converter (ADC), to a digital signal processor 124. Details of the electrode front-end and ADC module 125 have been explained with reference to FIG. 8. The digital signal processor 124 receives the amplified and digitized signal for processing. According to one embodiment, the digital signal processor 124 analyses the EEG signal picked up for detecting hypoglycemia by monitoring the brain wave frequency, and if the brain wave frequency falls outside a predefined interval, this may indicate that a medical emergency may arise. Hypoglycemia is a medical emergency that involves an abnormally diminished content of glucose in the blood. Upon detection of abnormal brain wave activity, the digital signal processor 124 communicates these findings to a device operating controller 122. The device operating controller 122 is responsible for several operations and has an audio front-end module 123 including a microphone and a speaker. With the microphone, the device operating controller 122 is able to pick up audio samples and classify the current sound environment. Furthermore, the device operating controller 122 may have access to real-time clock information—either from an internal clock module or from a personal communication device (e.g. a smartphone) accessible via a radio module 121. The personal communication device and the radio module 121 may establish a wireless communication link by means of a short-range communication standard, such as the Bluetooth™ Low Energy standard. The device operating controller 122 adjusts the predefined interval for normal brain wave activity in dependence on the real-time clock information and the sound environment classification. With the speaker, the device operating controller 122 is able to alert the wearer of the ear EEG device 115 that a medical emergency may arise and that precautionary actions have to be taken.

The number of electrodes has so far been identified as a pair of active electrodes operating in differential mode. However, two or more active electrodes may act as sensing electrodes for measuring the electric potential difference relative to an active electrode acting as a common reference electrode. The electrodes will then operate in a unipolar lead mode. The ear EEG device 115 may in a further embodiment operate as a hearing aid if the processor is provided with a gain for alleviating a hearing loss of the wearer. The ear EEG device 115 may advantageously be integrated into an In-The-Canal (ITC) hearing aid, a Receiver-In-Canal (RIC) hearing aid or another type of hearing aid.
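By way of illustration only, the alerting logic described above—comparing the monitored brain wave frequency against a predefined interval that the device operating controller adjusts for the time of day and the classified sound environment—may be sketched as follows; the band limits and environment labels are invented placeholders, not values from this description:

    def brain_wave_interval(hour_of_day, environment):
        low_hz, high_hz = 8.0, 12.0        # example daytime band (assumed)
        if environment == "sleep" or hour_of_day < 6:
            low_hz, high_hz = 1.0, 8.0     # slower rhythms expected in sleep
        return low_hz, high_hz

    def check_for_emergency(dominant_freq_hz, hour_of_day, environment, alert):
        low_hz, high_hz = brain_wave_interval(hour_of_day, environment)
        if not (low_hz <= dominant_freq_hz <= high_hz):
            # Frequency outside the adjusted interval: alert the wearer
            # via the speaker so precautionary actions can be taken.
            alert("brain wave frequency outside the expected interval")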
| 28,077 |

11857329 | DETAILED DESCRIPTION

This disclosure relates to various improvements in one or more features, implementations, and design configurations of wearable cardiac monitoring and/or treatment devices over conventional devices. Patients prescribed with such life critical devices need to be able to wear them continuously through daily activities to ensure near constant protection against life-threatening cardiac arrhythmia conditions over extended periods of time. Accordingly, the devices herein provide improved ergonomics and physiological benefits that promote better voluntary compliance with device use guidelines than conventional devices. One set of examples herein is based on a wearable defibrillator band or strap that is worn unobtrusively and comfortably under the patient's undergarments. Another set of examples features at least two separate wearable portions that can also be worn unobtrusively and comfortably. A first wearable portion includes ECG sensing electrodes and/or other physiological sensors. Additionally, a second wearable portion is optionally separable from the first wearable portion and includes treatment electrodes.

Briefly, the wearable defibrillator band is worn about a thoracic region of a patient, in particular, within a T1 thoracic vertebra region and a T12 thoracic vertebra region. The band includes certain lightweight elements such as electrocardiogram (ECG) sensors and treatment electrodes in close proximity to or in direct contact with the patient's skin, as well as associated circuitry necessary for the device to acquire and process the ECG signals. To secure the sensors and electrodes in close proximity to or in direct contact with the patient's skin, the band includes a compression portion that immobilizes the band relative to the patient's skin as the patient moves and goes about a daily routine. The circuitry in the band is electrically coupled to a controller housed within an ingress-protected housing, which includes heavier energy storage elements such as capacitors and batteries. Because no portion of the band traverses the patient's limbs or shoulders, the patient is free to move, bend, twist and lift his or her arms and/or shoulders without imparting torque on the device 100. This immobilizes the band relative to the patient's skin and prevents or eliminates signal noise associated with sensors shifting against the skin, when compared to wearable devices that run over a patient's shoulder or arm. The size and position of the band also provide a discreet and comfortable device covering only a relatively small portion of the surface area of a patient's entire thoracic region and accommodating a plurality of body types. A relatively small portion can be, for example, 25% or less (e.g., 20%, 15%, 10%, 5%, or less than 5%) of the surface area of the thoracic region 105. Covering only a relatively small portion of the thoracic region further improves comfort and encourages patient compliance because the patient will feel little or no discomfort and may forget the band is being worn. These features, as well as others described herein, thus provide certain advantages over conventional wearable defibrillators, at least in terms of comfort, patient compliance, and minimizing false arrhythmia alerts.

Turning to the embodiment with two separable wearable portions, disclosed herein is a first wearable portion that includes an elongated strap configured to encircle a thoracic region of a patient and exert a radial compression force to secure the strap on the patient.
The strap includes the ECG sensing electrodes as well as one or more receiving portions to receive additional components, including treatment electrodes and additional sensors (e.g., non-ECG physiological sensors or motion sensors). The second wearable portion is configured to be worn over at least one shoulder of the patient and includes a wearable substrate on which one or more treatment electrodes is disposed. In examples, the second wearable portion includes treatment electrodes, and the ECG sensing electrodes in the first wearable portion as well as the treatment electrodes in the second wearable portion are together electrically coupled to a controller housed within an ingress-protected housing including the capacitors and batteries. In certain implementations, the second wearable portion is implemented to fully support the device capacitors and batteries, e.g., wherein such capacitors and batteries are evenly weight-distributed within the second wearable portion. The second wearable portion can be worn optionally and for a shorter duration than the first wearable portion, such that a patient can avoid wearing treatment portions of the device during uneventful monitoring periods. The second wearable portion can be worn, for example, only when a patient is deemed at potential risk for a sudden cardiac event occurring within some period of time (e.g., 1 day, 1 week, 2 weeks). The treatment electrodes of the second wearable portion are configured to couple to wiring of the first wearable portion and/or a receiving port disposed on the housing that is in electrical communication with a processor of the controller.

Wearable medical devices as disclosed herein include cardiac monitoring and/or treatment devices that monitor electrocardiogram (ECG) signals and, in certain examples, other physiological signals of patients wearing such devices. For example, the medical device can be used as a cardiac monitor in certain cardiac monitoring applications, including heart failure and arrhythmia monitoring applications. In some implementations, the medical device can be configured to monitor other physiological parameters as an alternative or in addition to ECG signals and/or metrics. In addition to or instead of cardiac monitoring, such devices may also monitor respiratory parameters (e.g., to monitor congestion, lung fluid status, apnea, etc.), patient activity (e.g., posture, gait, sleep conditions, etc.) and other physiological conditions. In some implementations, the medical device can be configured to include one or more treatment components interoperable with and, in embodiments, selectively connected to one or more monitoring components.

In some implementations, a patient-worn cardiac monitoring and treatment device detects one or more treatable arrhythmias based on physiological signals from a patient. The treatable arrhythmias include those that may be treated by defibrillation pulses, such as ventricular fibrillation (VF) and shockable ventricular tachycardia (VT), or by pacing pulses, such as bradycardia, tachycardia, and asystole. A wearable medical device as disclosed herein monitors a patient's physiological conditions, e.g., cardiac signals, respiratory parameters, and patient activity, and delivers potentially life-saving treatment to the patient. The medical device can include a plurality of sensing electrodes that are disposed at various locations on the patient's body and configured to monitor the cardiac signals of the patient, such as electrocardiogram (ECG) signals.
In some implementations, the device can also be configured to allow a patient to report his/her symptoms, including one or more skipped beat(s), shortness of breath, light headedness, racing heart, fatigue, fainting, and chest discomfort. The device determines an appropriate treatment for the patient based on the detected cardiac signals and/or other physiological parameters prior to delivering a therapy to the patient. The device can then cause one or more therapeutic shocks, for example, defibrillating and/or pacing shocks, to be delivered to the body of the patient. The wearable medical device includes a plurality of treatment electrodes disposed on the patient's body and configured to deliver the therapeutic shocks.

As described in U.S. Pat. No. 8,983,597, titled "MEDICAL MONITORING AND TREATMENT DEVICE WITH EXTERNAL PACING," issued on Mar. 17, 2015 (hereinafter the "'597 patent"), which is hereby incorporated herein by reference in its entirety, an example patient-worn cardiac monitoring and treatment device can be, for example, an ambulatory medical device that is capable of and designed for moving with the patient as the patient goes about his or her daily routine. For example, as shown in FIG. 1, the ambulatory medical device 10 can be a wearable cardioverter defibrillator (WCD) and can include one or more of the following: a garment 11, one or more physiological sensors 12 (e.g., ECG electrodes, heart rate sensors, vibrational sensors, and/or other physiological sensors), one or more treatment electrodes 14a and 14b (collectively referred to herein as treatment electrodes 14), a medical device controller 20, a connection pod 30, a patient interface pod 40, a belt 50 about the patient's torso to support one or more components, or any combination of these. In some examples, at least some of the components of the medical device 10 can be configured to be affixed to the garment 11 (or in some examples, permanently integrated into the garment 11), which can be worn about the patient's torso 5.

The medical device controller 20 can be operatively coupled to the physiological sensors 12, which can be affixed to the garment 11, e.g., assembled into the garment 11 or removably attached to the garment 11, e.g., using hook and loop fasteners. In some implementations, the physiological sensors 12 can be permanently integrated into the garment 11. The medical device controller 20 can be operatively coupled to the treatment electrodes 14. For example, the treatment electrodes 14 can also be assembled into the garment 11, or, in some implementations, the treatment electrodes 14 can be permanently integrated into the garment 11.

In embodiments according to this disclosure, such as those of FIGS. 2A-6B and 9A-14, one or more portions of the garment 11 of the device 10 of FIG. 1 can be eliminated or distributed about separately donned wearable portions. In embodiments, permanently or temporarily eliminating one or more portions of the garment 11 results in a device configured with relatively less surface area. Such a wearable device can include one or more of, for example, a belt, a harness, a bandeau, a sash, a vest, a yoke, and/or a pinnie. In implementations, the device can be fitted to the body as a lightweight stretchable support garment. Systems and techniques are disclosed herein to improve the ergonomics of the one or more wearable portions of such a wearable medical device. Patients are encouraged to comply with the device use guidelines, including wearing the device at all times, including while showering or sleeping.
To improve patient compliance with these guidelines, the devices described herein include one or more wearable portions that are lightweight, comfortable, and discreet so that they may be worn under the patient's clothing. In some implementations described herein, the devices include various features that promote comfort and efficacy while continuing to protect the patient from adverse cardiac events. In implementations, the devices are fitted to nest with the contours of a patient's body, including, for example, the shoulder-neck region and/or the thoracic region. In implementations described herein, the devices include one or more wearable portions configured to be worn continuously and/or selectively, each of the one or more wearable portions configured to support one or more monitoring and/or treatment components. Based on an analysis of monitored signals and output from a predictive algorithm configured to determine the likelihood of a cardiac event, the device can instruct the patient on when to add additional monitoring and/or treatment components to the one or more wearable portions, and/or when to add an additional one or more wearable portions including one or more monitoring and/or treatment components. In modular implementations of the wearable medical device including two or more interoperable wearable portions, each portion can include one or more of the aforementioned monitoring and treatment components.

In an example scenario, a patient may be prescribed the wearable medical device following a medical appointment. For example, such a wearable monitoring and/or treatment device can be prescribed for patients that meet certain criteria. Examples may include one or more of the following criteria: (1) Primary prevention (ejection fraction (EF)≤35% and Myocardial Infarction (MI), nonischemic cardiomyopathy (NICM), or other dilated cardiomyopathy (DCM)), including after recent MI (e.g., typically worn for about a 40-day ICD waiting period), before and after coronary artery bypass grafting (CABG) or percutaneous transluminal coronary angioplasty (PTCA) (e.g., typically worn for about a 90-day ICD waiting period), while listed for cardiac transplant, when recently diagnosed with nonischemic cardiomyopathy (e.g., typically worn for about a 3 to 9 month ICD waiting period), when diagnosed with New York Heart Association (NYHA) class IV heart failure, and when diagnosed with a terminal disease with a life expectancy of less than 1 year; (2) ICD indications when the patient's condition delays or prohibits ICD implantation; and (3) ICD explantation. Wearing the device protects the patient from life-threatening arrhythmias, while also enabling the collection of diagnostic information for additional, potentially more invasive procedures.

The example devices described herein are prescribed to be worn continuously and typically for a prescribed duration of time. For example, the prescribed duration can be a duration for which a patient is instructed by a caregiver to wear the device in compliance with device use instructions. The prescribed duration may be for a short period of time until a follow-up medical appointment (e.g., 1 hour to about 24 hours, 1 day to about 14 days, or 14 days to about one month), or a longer period of time (e.g., 1 month to about 3 months) during which diagnostic information about the patient is being collected even as the patient is being protected against cardiac arrhythmias.
The prescribed use can be uninterrupted until a physician or other caregiver provides a specific prescription to the patient to stop using the wearable medical device. For example, the wearable medical device can be prescribed for use by a patient for a period of at least one week. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least 30 days. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least one month. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least two months. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least three months. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least six months. In an example, the wearable medical device can be prescribed for use by a patient for an extended period of at least one year. Because these devices require continuous operation and wear by the patients to whom they are prescribed, advantages of the implementations herein include use of comfortable, non-irritating, biocompatible construction materials, and features designed to enhance patient compliance. Such compliance-inducing design features include, for example, device ergonomics and inconspicuous appearance when worn under outer garments, among others. In some implementations, the device includes monitoring and treatment components disposed in or on a cross-body, shoulder-to-hip sash or in or on a monolithic band configured to encircle a thoracic region. The device can be held in compression against the thoracic region so as to minimize or eliminate sensor signal noise and other artifacts. In implementations, the device includes a first wearable portion configured to be worn about the thoracic region for monitoring the patient and one or more ports disposed on the first wearable portion and coupled to a data bus. The one or more ports are configured to receive additional monitoring and/or treatment components for selectively adding functionality to the device. In implementations, the device includes a first wearable portion configured to be worn about the thoracic region for monitoring the patient, and a second, later-added portion. The second wearable portion can be configured to connect to the monitoring components and provide therapeutic treatment to the patient upon detection of a treatable condition. Splitting the components over two or more garments ensures that larger, heavier, and/or infrequently used components, such as defibrillation treatment electrodes, are worn by the patient only when necessary. This distribution of the components of the device over two or more wearable portions lessens patient discomfort throughout a prescribed duration of wear and encourages patient compliance with caregiver instructions. The devices described here can be prescribed to be worn continuously and for long durations of time, often over the course of several weeks or months. Substantially continuous or nearly continuous use as described herein may nonetheless qualify as continuous use. In some implementations, the patient may remove the wearable medical device for a short portion of the day (e.g., for half an hour while bathing). At least a monitoring portion of the wearable medical device can be continuously or nearly continuously worn by the patient.
Continuous use can include continuously monitoring the patient while the patient is wearing the device for cardiac-related information (e.g., electrocardiogram (ECG) information, including arrhythmia information, cardiac vibrations, etc.) and/or non-cardiac information (e.g., blood oxygen, the patient's temperature, glucose levels, tissue fluid levels, and/or pulmonary vibrations). For example, the wearable medical device can carry out its continuous monitoring and/or recording in periodic or aperiodic time intervals or times (e.g., every few minutes, every few hours, once a day, once a week, or other interval set by a technician or prescribed by a caregiver). Alternatively or additionally, the monitoring and/or recording during intervals or times can be triggered by a user action or another event. As noted previously, the wearable medical device can be configured to monitor other physiologic parameters of the patient in addition to cardiac related parameters. For example, the wearable medical device can be configured to monitor, for example, pulmonary vibrations (e.g., using microphones and/or accelerometers), breath vibrations, sleep related parameters (e.g., snoring, sleep apnea), and tissue fluids (e.g., using radio-frequency transmitters and sensors), among others. FIGS.2A-Billustrate an example cardiac monitoring and treatment device100that is external, ambulatory, and wearable by a patient. The device100is an external or non-invasive medical device, which, for example, is located external to the body of the patient and configured to provide transcutaneous therapy to the body. The device100is an ambulatory medical device, which, for example, is capable of and designed for moving with the patient as the patient goes about his or her daily routine. The device100can include a band110configured to be worn about a thoracic region105of a patient within a T1 thoracic vertebra region and a T12 thoracic vertebra region, as depicted inFIGS.3and4. For example, the device100can include a band110configured to be worn within a T5 thoracic vertebra region and a T11 thoracic vertebra region. For example, the device100can include a band110configured to be worn within a T8 thoracic vertebra region and a T10 thoracic vertebra region. As shown inFIG.2B, the band110can have a vertical span V1of between about 1 and about 15 centimeters along at least 50 percent of a length L1of the band110. For example, in implementations, the vertical span V1is between 2 and 12 centimeters along at least 50 percent of the length L1. For example, in implementations, the vertical span V1is between 3 and 8 centimeters along at least 50 percent of the length L1. In implementations, the band110includes a compression portion disposed in the band110. In implementations, the band110exerts compression forces against the skin of the patient by one or more of manufacturing all or a portion of the band110from a compression fabric, providing one or more tensioning mechanisms in and/or on the band110, and providing a cinching closure mechanism for securing and compressing the band110about the thoracic region105. The compression portion is configured to immobilize the band110relative to a skin surface of the thoracic region105of the patient by exerting one or more compression forces against the thoracic region. In implementations, the band is configured to exert the one or more compression forces in a range from 0.025 psi to 0.75 psi against the thoracic region105.
For example, the one or more compression forces can be in a range from 0.05 psi to 0.70 psi, 0.075 psi to 0.675 psi, or 0.1 psi to 0.65 psi. Compression forces of the medical device can be determined, for example, using one or more pressure sensors distributed about the band110and disposed between the band110and the thoracic region105. The one or more pressure sensors can be, for example, one or more force sensitive resistors, one or more Polydimethylsiloxane (PDMS)-based flexible resistive strain sensors, one or more capacitive pressure sensors, and/or a tactile array of sensors such as those sold by PPS of Los Angeles, CA. The one or more pressure sensors can be, for example, ultra-thin (e.g., 0.1 mm or less), flexible pressure sensors. In implementations, the ultra-thin, flexible pressure sensors can be configured to provide pressure mapping using a system such as, for example, a TEKSCAN measurement and mapping system, including the I-SCAN system by Tekscan, Inc. of South Boston, Mass. In other implementations, the compression forces of the medical device can be modeled using a fabric-based analytical module employing tensile data. In other implementations, the compression forces can be measured using a mechanical measurement system such as the Hohenstein Measurement System, for example, the HOSYCAN, manufactured by Hohenstein, Bonnigheim, Germany. In implementations, compliance with one or more compression forces of embodiments described herein can be determined in accordance with the following test fixtures and conditions. The device100can be mounted on a mannequin such as, for example, one manufactured by Alvanon. In an example, the mannequin has thoracic circumferential dimensions ranging from 66 cm to 142 cm. In some examples, the garment may be fitted on a patient such that the garment extends to approximately 1″ below the underbust. One or more of the exemplary sensors previously described can be inserted between the mounted device100and the mannequin (or patient) at a plurality of arbitrary locations, for example, 5 locations spaced apart along the circumference of the band110. In some examples, the locations may be chosen to be at both anterior and posterior positions about the thoracic region of the mannequin (or patient). Compression forces can then be measured and individually compared to the one or more compression ranges described herein. Alternatively, or in addition, the one or more measured compression forces can be averaged and the average force compared to the one or more compression ranges described herein. The test can be conducted under temperature and humidity conditions of 0-60 degrees Celsius and 10-90% humidity. Further, the test can be conducted in a wet environment (e.g., the device mounted on the mannequin or patient is exposed to water) to simulate bathing and/or showering conditions. As shown inFIG.2A, the device100includes a plurality of electrodes and associated circuitry disposed about the band110. The plurality of electrodes can include at least one pair of sensing electrodes112disposed about the band110and configured to be in electrical contact with the patient. The sensing electrodes112can be configured to detect one or more cardiac signals such as ECG signals. Example ECG sensing electrodes112include a metal electrode with an oxide coating such as tantalum pentoxide electrodes, as described in, for example, U.S. Pat. No.
6,253,099 entitled “Cardiac Monitoring Electrode Apparatus and Method,” the content of which is incorporated herein by reference. The device100can include an ECG acquisition circuit in communication with the at least one pair of ECG sensing electrodes112and configured to provide ECG information for the patient based on the sensed ECG signal. In implementations, the at least one pair of ECG sensing electrodes112can include a driven ground electrode, or right leg drive electrode, configured to ground the patient and reduce noise in the sensed ECG signal. The plurality of electrodes can include at least one pair of treatment electrodes114aand114b(collectively referred to herein as treatment electrodes114) and an associated treatment delivery circuit configured to cause delivery of the electrotherapy to the patient. The at least one pair of treatment electrodes114can be configured to deliver an electrotherapy to the patient. For example, one or more of the at least one pair of treatment electrodes114can be configured to deliver one or more therapeutic defibrillating shocks to the body (e.g., the thoracic region105) of the patient when the medical device100determines that such treatment is warranted based on the signals detected by the sensing electrodes112and processed by the medical device controller120. Example treatment electrodes114include, for example, conductive metal electrodes such as stainless steel electrodes that include, in certain implementations, one or more conductive gel deployment devices configured to deliver conductive gel to the metal electrode prior to delivery of a therapeutic shock. In implementations, a first one of the at least one pair of treatment electrodes114ais configured to be located within an anterior area of the thoracic region105and a second one of the at least one pair of treatment electrodes114bis configured to be located within a posterior area of the thoracic region105of the patient. In some implementations, the anterior area can include a side area of the thoracic region. In some examples, at least some of the plurality of electrodes and associated circuitry of the device100can be configured to be selectively affixed or attached to the band110, which can be worn about the patient's thoracic region105. In some examples, at least some of the plurality of electrodes and associated circuitry of the device100can be configured to be permanently secured into the band110. In implementations, the plurality of electrodes are manufactured as integral components of the band110. For example, the at least one pair of treatment electrodes114and/or the at least one pair of ECG sensing electrodes can be formed of the warp and weft of a fabric forming at least a layer of the band110. In implementations, the treatment electrodes114and the ECG sensing electrodes112are formed from conductive fibers that are interwoven with non-conductive fibers of the fabric. Additional implementations of sensing electrode arrangements and treatment electrode arrangements on a patient-worn medical device are provided herein in subsequent sections. In implementations, the device100can include one or more sensor ports115a-c(collectively referred to as115) for receiving one or more physiological sensors separate from the at least one pair of ECG sensing electrodes.
The one or more physiological sensors can be, for example, sensors for detecting one or more of pulmonary vibrations (e.g., using microphones and/or accelerometers), breath vibrations, sleep related parameters (e.g., snoring, sleep apnea), and tissue fluids (e.g., using radio-frequency transmitters and sensors). The additional sensor can be, for example, one or more physiological sensors including a pressure sensor for sensing compression forces of the garment, SpO2 sensors, blood pressure sensors, bioimpedance sensors, humidity sensors, temperature sensors, and photoplethysmography sensors. In some examples, the sensor ports115a-ccan also be configured to receive one or more motion and/or position sensors. For example, such motion sensors can include accelerometers for monitoring the movement of the patient's torso in x-, y- and z-axes to determine a movement of the patient, gait, and/or whether the patient is upright, standing, sitting, lying down, and/or elevated in bed with pillows. In certain implementations, one or more gyroscopes may also be provided to monitor an orientation of the patient's torso in space to provide information on, e.g., whether the patient is lying face down or face up, or a direction in which the patient is facing. In implementations, the device100includes a controller120including an ingress-protected housing, and a processor disposed within the ingress-protected housing. In implementations, as shown inFIG.7, the controller120can include a processor218, a therapy delivery circuit1130including a polarity switching component such as an H-bridge1128, a data storage1207, a network interface1206, a user interface1208, at least one battery1140, a sensor interface1202that includes, for example, an ECG data acquisition and conditioning circuit, an alarm manager1214, one or more capacitors1135, and a Sudden Cardiac Arrhythmia (SCA) Risk Analysis Assessor219. The processor218is configured to analyze the ECG information of the patient from the ECG acquisition circuit, detect one or more treatable arrhythmias based on the ECG information, and cause the treatment delivery circuit to deliver the electrotherapy to the patient upon detecting the one or more treatable arrhythmias. The medical device controller120can be operatively coupled to the sensing electrodes112, which can be affixed to the band110. In embodiments, the sensing electrodes112are assembled into the band110or removably attached to the band110, using, for example, hook and loop fasteners, thermoform press fit receptacles, snaps, and magnets, among other restraints. In some implementations, as described previously, the sensing electrodes112can be a permanent portion of the band110. The medical device controller120also can be operatively coupled to the treatment electrodes114. For example, the treatment electrodes114can also be assembled into the band110, or, as described previously, in some implementations, the treatment electrodes114can be a permanent portion of the band110. Optionally, the device100can include a connection pod130in wired connection with one or more of the plurality of electrodes and associated circuitry. In some examples, the connection pod130includes at least one of the ECG acquisition circuit and a signal processor configured to amplify, filter, and digitize the cardiac signals prior to transmitting the cardiac signals to the medical device controller120.
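To make the detect-then-treat flow described above concrete, the following is a minimal sketch, in Python, of the kind of logic the processor could apply. It is illustrative only: the EcgWindow structure, the rate thresholds, the classify routine, and the deliver_shock callback are assumptions made for this example, not the actual analysis algorithm or controller firmware.

```python
# Minimal sketch of the monitor-analyze-treat loop (illustrative only;
# all names and thresholds are assumptions, not the device's firmware).
from dataclasses import dataclass
from typing import Callable, Sequence

VT_RATE_BPM = 150   # assumed ventricular tachycardia rate threshold
VF_RATE_BPM = 200   # assumed ventricular fibrillation rate threshold

@dataclass
class EcgWindow:
    heart_rate_bpm: float      # rate estimated from the sensed ECG signal
    rhythm_is_organized: bool  # morphology flag from upstream analysis

def classify(window: EcgWindow) -> str:
    """Grossly simplified stand-in for the processor's arrhythmia analysis."""
    if window.heart_rate_bpm >= VF_RATE_BPM and not window.rhythm_is_organized:
        return "VF"        # treatable: defibrillation indicated
    if window.heart_rate_bpm >= VT_RATE_BPM:
        return "VT"        # treatable: shock indicated
    return "normal"

def monitor_loop(windows: Sequence[EcgWindow],
                 deliver_shock: Callable[[str], None]) -> None:
    """On a treatable rhythm, ask the therapy delivery circuit
    (represented here by the deliver_shock callback) to act."""
    for window in windows:
        rhythm = classify(window)
        if rhythm in ("VF", "VT"):
            deliver_shock(rhythm)  # capacitor discharge via H-bridge in hardware
            break

# Example: a fast, disorganized window triggers a simulated shock.
monitor_loop([EcgWindow(72, True), EcgWindow(210, False)],
             lambda rhythm: print(f"therapy delivered for {rhythm}"))
```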
In implementations, the device100can include at least one ECG sensing electrode112configured to be coupled to the upper portion of the thoracic region105, above the band110, the at least one ECG sensing electrode112being in wired communication with the ECG acquisition circuitry and at least one of the connection pod and the controller120. In implementations, the device includes conductive wiring140configured to communicatively couple the controller to the plurality of electrodes and associated circuitry disposed about the band. In implementations, the conductive wiring140can be woven into the warp and weft of the fabric. In implementations, the conductive wiring140can be integrated into the fabric, disposed between layers of the band110. In implementations, the conductive wiring140can include one or more conductive threads integrated into the fabric of the band110. In examples, the one or more conductive threads can be integrated in a zigzag or other doubled-back pattern so as to straighten as the band110stretches. The zigzag or doubled-back pattern therefore accommodates stretching and patient movement while keeping the one or more conductive threads from contacting the skin of the patient. Integrating the conductive wiring140into the band110reduces and/or eliminates snagging of the wire or thread on external objects. In other examples, the conductive thread can be routed on an exterior surface of the band110so as to avoid contacting the skin of the patient and therefore avoid irritation associated with such potential contact. In implementations, the conductive wiring140includes two or more conductive wires bundled within an insulating outer sheath. In implementations, the conductive wiring140can be routed along the band110and held securely to the band110by one or more loops of fabric, closable retention tabs, eyelets, and/or other retainers so that the conductive wiring140does not snag on or bulge beneath a patient's clothing worn over the band110. In implementations, the conductive wiring140extends between the controller120and the plurality of electrodes and associated circuitry and the one or more sensor ports115. The one or more sensor ports115can include thereon a connector for receiving a complementary mating portion of one or more additional sensors selectively disposed on the band110. The connector of the one or more sensor ports115can be in wired communication with the conductive wiring140such that upon receiving a sensor therein, a sensor port115functions as a conduit for communicating information between the sensor and the processor218of the controller120. The ingress-protected housing of the controller120protects the components thereunder (e.g., the processor218, the therapy delivery circuit1130including a polarity switching component such as an H-bridge1128, a data storage1207, a network interface1206, a user interface1208, at least one battery1140, the sensor interface1202, the alarm manager1214, the one or more capacitors1135, and the Sudden Cardiac Arrhythmia (SCA) Risk Analysis Assessor219) from external environmental impact, for example, damage associated with solid particle ingress, dust ingress, and/or moisture, water vapor, or liquid ingress. In implementations, for example, the ingress-protected housing can be a two-piece housing having two interlocking shell portions configured to be mated in a sealed press fit.
For example, a compressible grommet, o-ring, or silicone seal can be inserted between and/or about the mating surfaces such that ingress into the interlocked shell portions is prevented. Any additional openings can be similarly sealed to prevent ingress, such as any openings comprising user input buttons or electronics ports for mating with wired components. In some examples, ports for receiving wire connectors therein can be sealed to the housing of the controller120with an epoxy to prevent ingress. Preventing such ingress protects the electronic components of the device100from short-circuiting or corrosion of moisture-sensitive electronics, for example, when a patient wears the device while showering. In implementations, the ingress-protected housing of the controller120includes at least one ingress-protected connector port121configured to receive at least one connector141of the conductive wiring140. The at least one ingress-protected connector port can have an IP67 rating such that the device can be connected to the controller120and operable when a patient is showering or bathing, for example. Example implementations of water-resistant housings of the controller120protect against liquid ingress in accordance with one or more scenarios as set forth in Table 1:

TABLE 1
Protection against: Effective against (e.g., shall not impact normal operation of the medical device as described herein)
Dripping water: Falling drops of dripping water on the medical device housing, e.g., water dripping on the housing at a rate of 1 mm per minute for a period of around 10 minutes.
Spraying water: Spray of water falling on the medical device housing at any angle up to 60 degrees from vertical.
Splashing of water: Water splashing against the housing from any direction.
Water jets: Water projected by a nozzle (e.g., a nozzle of 6.3 mm diameter) against the housing from any direction.
Powerful water jets: Water projected in powerful jets (e.g., a nozzle of 12.5 mm diameter spraying water at a pressure of 100 kPa at a distance of 3 m) against the housing from any direction.
Immersion, up to 1 m depth: The housing is immersed in water at a depth of up to 1 meter.
Immersion, 1 m or more depth: The housing is immersed in water at a depth of 1 meter or more.
Powerful high temperature water jets: The housing is sprayed with a high pressure (e.g., 8-10 MPa), high temperature (e.g., 80 degrees Celsius) spray at close range.

In some implementations, the ingress-protected housing of the controller120is water-resistant and has a predetermined ingress protection rating complying with one or more of the rating levels set forth in IEC standard 60529. The liquid Ingress Protection rating can be any one or more of the levels (e.g., levels 3 to 9) for which rating compliance tests are specified in the standard. For example, to have a liquid ingress protection rating level of six, the ingress-protected housing of the controller120shall protect against ingress of water provided by a powerful water jet. The powerful water jet test requires that the housing of the controller120is sprayed from all practicable directions with a stream of water from a test nozzle having a 12.5 mm diameter. Water sprays for 1 minute per square meter for a minimum of three minutes at a volume of 100 liters per minute (+/−5 percent) so that a core of the stream of water is a circle of approximately 120 mm in diameter at a distance of 2.5 meters from the nozzle.
For example, to have a rating level of 7, ingress of water shall not be possible when the housing of the controller120is completely immersed in water at a depth between 0.15 m and 1 m so that the lowest point of the housing of the controller120with a height less than 850 mm is located 1000 mm below the surface of the water and the highest point of the housing of the controller120with a height less than 850 mm is located 150 mm below the surface of the water. The controller120is immersed for a duration of 30 minutes, and the water temperature does not differ from that of the housing of the controller120by more than 5 K. Table 2 provides the rating levels and tests for liquid Ingress Protection in accordance with IEC standard 60529:

TABLE 2
Rating level 0: Non-protected. No definition or test conditions apply.
Rating level 1: Protected against vertically falling water drops. Vertically falling drops shall have no harmful effects. Test conditions: see IEC 60529 section 14.2.1.
Rating level 2: Protected against vertically falling water drops when housing tilted up to 15 degrees. Vertically falling drops shall have no harmful effects when the housing is tilted at any angle up to 15 degrees on either side of the vertical. Test conditions: see IEC 60529 section 14.2.2.
Rating level 3: Protected against spraying water. Water sprayed at an angle up to 60 degrees on either side of the vertical shall have no harmful effects. Test conditions: see IEC 60529 section 14.2.3, including, for example, spraying water on the housing at 60 degrees from vertical at a water flow rate of 10 liters/min for at least 5 minutes.
Rating level 4: Protected against splashing water. Water splashed against the housing from any direction shall have no harmful effects. Test conditions: see IEC 60529 section 14.2.4, including, for example, spraying water on the housing at 180 degrees from vertical at a water flow rate of 10 liters/min for at least 5 minutes.
Rating level 5: Protected against water jets. Water projected in jets against the housing from any direction shall have no harmful effects. Test conditions: see IEC 60529 section 14.2.5, including, for example, spraying water from a 6.3 mm diameter nozzle at a distance of 2.5-3 m from the housing at a water flow rate of 12.5 liters/min for at least 3 minutes.
Rating level 6: Protected against powerful water jets. Water projected in powerful jets against the housing from any direction shall have no harmful effects. Test conditions: see IEC 60529 section 14.2.6, including, for example, spraying water from a 12.5 mm diameter nozzle at a distance of 2.5-3 m from the housing at a water flow rate of 100 liters/min for at least 3 minutes.
Rating level 7: Protected against the effects of temporary immersion in water. Ingress of water in quantities causing harmful effects shall not be possible when the housing is temporarily immersed in water under standardized conditions of pressure and time. Test conditions: see IEC 60529 section 14.2.7, including, for example, immersion for 30 min in a water tank such that the bottom of the housing is 1 m below the surface of the water and the top of the housing is 0.15 m below the surface of the water.
Rating level 8: Protected against the effects of continuous immersion in water. Ingress of water in quantities causing harmful effects shall not be possible when the housing is continuously immersed in water under conditions which shall be agreed between manufacturer and user but which are more severe than for numeral 7. Test conditions: see IEC 60529 section 14.2.8, including, for example, immersion in a water tank such that the bottom of the housing is greater than 1 m below the surface of the water and the top of the housing is greater than 0.15 m below the surface of the water.
Rating level 9: Protected against high pressure and temperature water jets. Water projected at high pressure and high temperature against the housing from any direction shall not have harmful effects. Test conditions: see IEC 60529 section 14.2.9, including, for example, spraying water on the housing from all practical directions from a fan jet nozzle at a distance of 175 +/− 25 mm from the housing and spraying water at a flow rate of 15 liters/min for at least 3 min.
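The reconstructed Table 2 lends itself to a simple data-driven lookup when reasoning about which bench tests a claimed rating implies. The sketch below is illustrative only: the dictionary is a condensed transcription of the table's example test conditions, and the helper name is an assumption, not part of the standard or of any real library.

```python
# Condensed transcription of Table 2's example liquid ingress tests
# (illustrative only; consult IEC 60529 itself for normative conditions).
IEC_60529_LIQUID_TESTS = {
    3: "spray at 60 degrees from vertical, 10 L/min, >= 5 min",
    4: "spray at 180 degrees from vertical, 10 L/min, >= 5 min",
    5: "6.3 mm nozzle, 12.5 L/min, 2.5-3 m, >= 3 min",
    6: "12.5 mm nozzle, 100 L/min, 2.5-3 m, >= 3 min",
    7: "immersion 30 min, bottom 1 m and top 0.15 m below surface",
    8: "continuous immersion, more severe than level 7, by agreement",
    9: "fan jet nozzle at 175 +/- 25 mm, 15 L/min, >= 3 min",
}

def required_tests(rating_levels):
    """Return the example test condition for each claimed rating level,
    e.g., for a controller housing multiple coded as levels 5 and 7."""
    return {level: IEC_60529_LIQUID_TESTS[level] for level in rating_levels}

print(required_tests([5, 7]))
```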
For example, the housing of the controller120can be constructed to be water-resistant and tested as such in accordance with the IEC 60529 standard for Ingress Protection. For instance, the controller120of the device100may be configured to have a rating of level 7, protecting against immersion in water at depths of up to one meter for thirty minutes. This enables a patient to wear the device100in the bathtub or shower for uninterrupted, continuous use. In implementations, the controller120of the device100may be multiple coded, carrying two or more rating levels. For example, the controller120of the device100can maintain a liquid Ingress Protection level of 7, protecting against temporary immersion, and a liquid Ingress Protection level of 5, protecting against water jets. As described previously, the housing of the controller120shields one or more of the contents within the controller from environmental impact. These contents can include one or more of the treatment delivery circuit, an ECG acquisition and conditioning circuit, the processor, at least one capacitor, and at least one power source (e.g., a battery). The controller120covers and/or surrounds the hardware components therein, protecting them from wear and tear and protecting the patient from contacting high voltage components. The controller120protects the components from liquid ingress while the patient is showering, for example. In examples, the housing of the controller120can comprise or consist of at least one of neoprene, thermoformed plastic, or injection molded rubber or plastic, such as silicone or other biocompatible synthetic rubber. Additionally, the band110can be water vapor-permeable and substantially liquid-impermeable or waterproof. The band110may comprise or consist of an elastic polyurethane fiber that provides stretch and recovery. For example, the band110may comprise or consist of at least one of neoprene, spandex, nylon-spandex, nylon-LYCRA, ROICA, LINEL, INVIYA, ELASPAN, ACEPORA, and ESPA. In implementations, a portion of the band110comprises a water resistant and/or waterproof fabric covering and/or encapsulating electronic components including, for example, the sensing electrodes112, the treatment electrodes114, and the conductive wiring140, and a portion of the band comprises a water permeable, breathable fabric having a relatively higher moisture vapor transmission rate than the water resistant and/or waterproof portions. In examples, the band110can comprise or consist of a fabric having a biocompatible surface treatment rendering the fabric water resistant and/or waterproof. For example, the fabric can be enhanced by dipping in a bath of fluorocarbon, such as Teflon or fluorinated-decyl polyhedral oligomeric silsesquioxane (F-POSS). Additionally or alternatively, the band110can comprise or consist of a fabric including anti-bacterial and/or anti-microbial yarns. For example, these yarns can include a base material of at least one of nylon, polytetrafluoroethylene, and polyester. These yarns can be, for example, one or more of an antibacterial silver coated yarn, antibacterial DRALON yarn, DRYTEX ANTIBACTERIAL yarn, NILIT BREEZE, and NILIT BODYFRESH. In implementations, the outer surface of the band110can comprise one or more patches of an electrostatically dissipative material such as a conductor-filled or conductive plastic in order to prevent static cling of a patient's clothing.
Alternatively, in embodiments, the band110comprises a static dissipative coating such as LICRON CRYSTAL ESD Safe Coating (TECHSPRAY, Kennesaw, GA), a clear electrostatic dissipative urethane coating. Returning toFIGS.2A-B, the band110can be sized to fit about the thoracic region105of the patient by matching the length L1of the band110to one or more circumferential measurements of the thoracic region105during an initial fitting. For example, in an initial fitting, a caregiver, physician, or patient service representative (PSR) can measure the circumference of the thoracic region105of the patient at one or more locations disposed about the thoracic region105between about the T1 thoracic region and the T12 thoracic region, and select a band110having a length L1within a range of 2-25% longer than the largest measured circumference. Having the band110be longer than the largest measured circumference of the thoracic region105provides the patient with the comfort advantage of being able to loosen and tighten the band110to accommodate fluctuations in body mass throughout the prescribed duration of wear. In embodiments of the device100having a fastener configured to secure the band110about the thoracic region105, the patient can loosen or reposition the band at one or more positions along the thoracic region105between about the T1 thoracic region and the T12 thoracic region. Additionally or alternatively, the band110can have proportions and dimensions derived from patient-specific thoracic 3D scan dimensions. From a 3-dimensional scan of the thoracic region105of the patient, a band can be sized to fit the proportions, dimensions, and shape of the thoracic region105. In implementations, for example, various body size measurements and/or contoured mappings may be obtained from the patient, and one or more portions of the band110can be formed of a plastic or polymer to have contours accommodating one or more portions of the thoracic region in a nested fit. For example, one or more portions of the band may be 3D printed from any suitable thermoplastic (e.g., ABS plastic) or any elastomeric and/or flexible 3D printable material. For example, the band110may include at least two curved rigid or semi-rigid portions109a,109bfor engaging the patient's sides, under the arms. The at least two curved portions add rigid structure that assists with preventing the band110from shifting or rotating about the thoracic region. This stability provides consistency of sensor signal readings and prevents noise associated with sensor movement. Stability of the device is also provided by the at least one compression portion. The compressive forces of the band110prevent movement of the band110relative to the skin surface of the thoracic region105and reduce or eliminate noise artifacts associated with sensors moving relative to the surface of the skin of the thoracic region105. In one implementation, the band110includes joinable ends145a,145b, and the compression portion comprises an adjustable fastener147for securing the band about the thoracic region105of the patient within the range of compression forces. The range of compression forces secures the band110from movement without the patient developing soreness or compression ulcers during the continuous period of wear. In implementations, the fastener can include a ratchet, a belt buckle, hook and loop fasteners, snaps, buttons, eyelets, and any other mechanism for closing the band110.
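The fitting rule described above reduces to simple arithmetic; a minimal sketch follows, assuming hypothetical function and variable names. It selects a fastened-band length within the 2-25%-longer window relative to the largest circumference measured between about the T1 and T12 regions.

```python
# Sketch of the fastened-band sizing rule (names are illustrative).
def band_length_range_cm(circumferences_cm):
    """Return the (min, max) acceptable band length L1 in centimeters:
    2% to 25% longer than the largest measured circumference."""
    largest = max(circumferences_cm)
    return largest * 1.02, largest * 1.25

# Example: circumference measured at three thoracic locations.
low, high = band_length_range_cm([92.0, 95.5, 94.0])
print(f"select a band with L1 between {low:.1f} cm and {high:.1f} cm")
```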
In implementations, the band110comprises at least one visible indicator149of band tension disposed on a surface of the band110. For example, the visible indicator149can be a color changing indicator incorporated in the band110indicating whether the band110is too loose, overtightened, or compressed within the range of compressive forces. As the band110stretches, the material forming the visible indicator149, for example, can change color between blue, indicating over-tensioning or under-tensioning, and yellow or green, indicating proper tensioning for simultaneously enabling sensor readings and patient comfort. In one implementation, the visible indicator149can comprise one or more stretchable, multilayer smart fibers disposed in or on the band110. The one or more smart fibers change color from red, to orange, to yellow, to green, and to blue as strain on the fiber increases. Providing a visible indication directly on the band110enables a patient to adjust or reapply the band110so that the at least one pair of ECG sensing electrodes112and the at least one pair of treatment electrodes114are properly positioned and immobilized on the thoracic region105and so that the band is not overtightened to the point of applying compressive forces to the thoracic region105that cause patient discomfort. In other implementations, the band can include a mechanical strain gauge in or on the band110. The mechanical strain gauge can be communicatively coupled to the conductive wiring140such that the controller120provides an audible and/or visible indication of whether the band is over-tightened, too loose, or within the range of compression forces enabling effective use and wear comfort. In implementations, the band comprises an unbroken loop and the compression portion comprises a stretchable fabric defining the band110. The band110can be configured to stretch over the shoulders or hips of the patient and contract when positioned about the thoracic region105. In implementations, the stretchable fabric comprises at least one of nylon, LYCRA, spandex, and neoprene. During an initial fitting, the physician, caregiver, or PSR can select a band110sized to fit the patient. For example, the physician, caregiver, or PSR can measure a circumference about one or more locations on the thoracic region105. The physician, caregiver, or PSR can select a band having a circumference within about 75% to about 95% of the measurement of the one or more locations about the thoracic region105. In some implementations, the compression portion comprises an elasticized thread disposed in the band110. The compression portion can comprise an elasticized panel disposed in the band, the elasticized panel comprising a portion of the band110spanning less than the total length of the band110. For example, the band110can include one or more mechanically joined sections forming a continuous length or unbroken loop. One or more of the sections can comprise a stretchable fabric and/or elasticized thread interspersed with non-stretchable or relatively less stretchable portions. In other embodiments, the band110can include a compression portion comprising an adjustable tension element, such as one or more cables disposed in the band110and configured to be pulled taut and held in tension by one or more pull stops.
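The indicator logic above, whether realized as a color-changing fiber or a strain gauge read by the controller, amounts to comparing an estimated compression pressure against the disclosed range. The following is a minimal sketch under that assumption; the conversion from strain to pressure is omitted, and the function and constant names are illustrative.

```python
# Sketch of tension-indicator logic against the disclosed compression range.
COMPRESSION_MIN_PSI = 0.025
COMPRESSION_MAX_PSI = 0.75

def tension_state(compression_psi: float) -> str:
    """Map an estimated band compression onto the signaled states."""
    if compression_psi < COMPRESSION_MIN_PSI:
        return "too loose: tighten the band"
    if compression_psi > COMPRESSION_MAX_PSI:
        return "overtightened: loosen the band"
    return "properly tensioned"

for reading_psi in (0.01, 0.30, 0.90):
    print(reading_psi, "->", tension_state(reading_psi))
```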
In all embodiments, the band110can include one or more visible or mechanical tension indicators configured to provide a notification of the band110exerting compression forces against the thoracic region105in a range from about 0.025 psi to 0.75 psi. As described herein, the band110is immobilized by compression forces and configured to minimize shifting as the patient moves and goes about a daily routine. Because no portion of the band110traverses the patient's limbs or shoulders, the patient is free to move, bend, twist, and lift his or her arms and/or shoulders without imparting torque on the device100. This immobilizes the band110relative to the skin surface of the thoracic region and prevents or eliminates signal noise associated with sensors shifting against the skin when compared to wearable devices that run over a patient's shoulder or arm. The size and position of the band110also provide a discreet and comfortable device100covering only a relatively small portion of the surface area of the entire thoracic region105and accommodating a plurality of body types. A relatively small portion can be, for example, 25% or less (e.g., 20%, 15%, 10%, 5%, or less than 5%) of the surface area of the thoracic region105. Covering only a relatively small portion of the thoracic region105further improves comfort and encourages patient compliance because the patient will feel little or no discomfort and may forget the device100is being worn. In implementations, the band comprises a breathable, skin-facing layer including at least one of a compression padding, a silicone tread, and one or more textured surface contours. The breathable material and compression padding enable patient comfort throughout the duration of wear, and the silicone tread and/or one or more surface contours assist with immobilizing the band relative to the skin surface of the thoracic region. Implementations of the device100in accordance with the present disclosure may exhibit a moisture vapor transmission rate (MVTR) of, for example, between about 600 g/m2/day and about 1,400 g/m2/day when worn by a subject in an environment at room temperature (e.g., about 25° C.) and at a relative humidity of, for example, about 70%. In implementations, the device100has a water vapor permeability of 100 g/m2/24 hours, as measured by vapor transmission standards such as ASTM E-96-80 (Version E96/E96M-13), using either the “in contact with water vapor” (“dry”) or “in contact with liquid” (“wet”) methods. Such test methods are described in U.S. Pat. No. 9,867,976, titled “LONG-TERM WEAR ELECTRODE,” issued on Jan. 16, 2018 (hereinafter the “'976 Patent”), the disclosure of which is incorporated by reference herein in its entirety. In implementations, the band110comprises one or more moisture wicking fabrics for assisting with moving moisture away from the skin of the thoracic region105and improving patient comfort throughout the prescribed duration of wear. In implementations, the device includes an adhesive configured to immobilize the band110relative to the thoracic region of the patient. In implementations, the adhesive is configured to be a removable and/or replaceable adhesive patch for preventing the band110from shifting, rotating, or slipping relative to the skin of the thoracic region. In implementations comprising an adjustment and/or tightening mechanism, once the patient is wearing the band110and has adjusted it, the patient can insert one or more adhesive patches between the band110and the skin.
In implementations, the patient can swap out one or more adhesive patches with one or more new adhesive patches in the same or a different location between the band110and the skin of the thoracic region105. For example, the patient may swap out the one or more adhesive patches on a daily schedule or may use the adhesive patches selectively during periods of high activity, such as while exercising. The adhesives can include biocompatible adhesives, such as pressure-sensitive adhesives having tack, adhesion, and cohesion properties suitable for use with a medical device applied to skin for short-term and long-term durations. These pressure sensitive adhesives can include polymers such as acrylics, rubbers, silicones, and polyurethanes having a high initial tack for adhering to skin. These pressure sensitive adhesives also maintain adhesion during showering or while a patient is perspiring. The adhesives also enable removal without leaving behind uncomfortable residue. For example, such an adhesive can be a rubber blended with a tackifier. In implementations, the adhesive comprises one or more water vapor permeable adhesive patches. The adhesive can be a conductive patch disposed between the plurality of electrodes and the skin of the thoracic region105, in some implementations. For example, as described in the '976 patent, a water vapor-permeable conductive adhesive patch can comprise a flexible, water vapor-permeable, conductive adhesive material selected from the group consisting of an electro-spun polyurethane adhesive, a polymerized microemulsion pressure sensitive adhesive, an organic conductive polymer, an organic semi-conductive polymer, an organic conductive compound, a semi-conductive compound, and combinations thereof. In an example, a thickness of the flexible, water vapor-permeable, conductive adhesive material can be between 0.25 and 100 mils. In another example, the water vapor-permeable, conductive adhesive material can comprise conductive particles. In implementations, the conductive particles may be microscopic or nano-scale particles or fibers of materials, including, but not limited to, one or more of carbon black, silver, nickel, graphene, graphite, carbon nanotubes, and/or other conductive biocompatible metals such as aluminum, copper, gold, and/or platinum. The device100herein includes low skin-irritation fabrics and/or adhesives. In embodiments, the device100may be worn continuously by a patient for a long-term duration (e.g., a duration of at least one week, at least 30 days, at least one month, at least two months, at least three months, at least six months, or at least one year) without the patient experiencing significant skin irritation.
For example, a measure of skin irritation can be based on a skin irritation grading of one or more as set forth in Table C.1 of Annex C of American National Standard ANSI/AAMI/ISO 10993-10:2010, reproduced here in its entirety:

TABLE C.1
Human skin irritation test, grading scale
Description of response: Grading
No reaction: 0
Weakly positive reaction (usually characterized by mild erythema and/or dryness across most of the treatment site): 1
Moderately positive reaction (usually distinct erythema or dryness, possibly spreading beyond the treatment site): 2
Strongly positive reaction (strong and often spreading erythema with edema and/or eschar formation): 3

The skin irritation grading of one represents a weakly positive reaction usually characterized by mild erythema and/or dryness across most of the treatment site. In one implementation, a measure of skin irritation can be determined by testing on human subjects in accordance with the method set forth in American National Standard ANSI/AAMI/ISO 10993-10:2010, by applying sample patches of the adhesive and/or fabric to treatment sites for up to four hours, and, in the absence of skin irritation, subsequently applying sample patches to treatment sites for up to 24 hours. The treatment sites are examined for signs of skin irritation, and the responses are scored immediately after patch removal and at time intervals of (1±0.1) h to (2±1) h, (24±2) h, (48±2) h, and (72±2) h after patch removal. In another implementation, a patient may wear the device100as instructed for a duration of (24±2) hours, and if the patient's skin shows no reaction at the end of this duration, the device100is rated as a skin irritation grading of zero. Treatment is caused to be provided by the treatment delivery circuit in communication with the at least one pair of treatment electrodes114. In implementations, as shown inFIGS.2A and2B, the band110further comprises at least one of an anterior appendage150and a posterior appendage155, and at least one of the plurality of electrodes is disposed on the at least one of the anterior appendage and the posterior appendage. In implementations, one treatment electrode114aof the at least one pair of treatment electrodes114is disposed on the anterior appendage150and one treatment electrode114bof the at least one pair of treatment electrodes114is disposed on the posterior appendage155. In implementations, each of the at least one of the anterior appendage and the posterior appendage is a flap extending vertically along the thoracic region from a circumferential top edge160or a circumferential bottom edge165of the band110. In implementations, the anterior appendage and the posterior appendage cumulatively occupy 50 percent or less of the length of the band110so as to minimize the surface area of the thoracic region105covered by the device100while providing an effective placement of the at least one pair of treatment electrodes114. By positioning the at least one pair of treatment electrodes114on either side of the patient's heart, the device100can deliver effective treatment along a vector through the heart, restoring a normal rhythm upon detection of a cardiac arrhythmia requiring treatment. As depicted inFIGS.2A-B, the anterior and posterior appendages150,155rise from a top circumferential edge160of the band110.
In such implementations, an average vertical rise V2, V3from a bottom edge165of the band110to a top edge170,175of each of the at least one of the anterior appendage150and the posterior appendage155is greater than the average vertical rise V1of the band. In implementations, at least one of the anterior appendage150and the posterior appendage155includes disposed thereon at least one ECG sensing electrode112. In implementations, such as that ofFIG.5, the device can include one or more appendages111a,111bmechanically attached to the band110, the one or more appendages111a,111bconfigured to be continuously worn about the thoracic region105of the patient. In addition to supporting one or more additional ECG sensing electrodes112thereon, the one or more appendages111a,111bare configured to receive one or more selectively added treatment electrodes114, shown in dashed lines to indicate that they are optionally added to the device100. In implementations, the device100can be configured to monitor a patient's ECG signal, analyze the signal, predict a future event based on the analysis, and provide an instruction to the patient or caregiver to add the optional treatment electrodes to the one or more appendages111a,111band the band110. In other implementations, such as that ofFIG.6A, a cardiac monitoring and treatment system200includes a first wearable portion205and a second, separately worn portion215including one or more treatment electrodes214a,214b(collectively referred to as214) disposed on the second wearable portion215. The treatment electrodes214are disposed in the second wearable portion215such that a treatment vector formed between the treatment electrodes214is aligned through the patient's heart when the second wearable portion215is worn. In implementations, a cardiac monitoring and treatment system can include a controller220comprising at least one processor, a first wearable portion205, and a second wearable portion215. In implementations, as shown inFIG.7, the controller220can include the at least one processor218, a therapy delivery circuit1130including a polarity switching component such as an H-bridge1128, a data storage1207, a network interface1206, a user interface1208, at least one battery1140, a sensor interface1202that includes, for example, an ECG data acquisition and conditioning circuit, an alarm manager1214, one or more capacitors1135, and a Sudden Cardiac Arrhythmia (SCA) Risk Analysis Assessor219. In implementations, the first wearable portion205includes an elongated strap210, similar to the band110ofFIGS.2A-B, configured to encircle a thoracic region105of a patient. The elongated strap210is configured to be immobilized relative to a skin surface of the thoracic region105of the patient by exerting one or more compression forces against the thoracic region. For example, the compression force can be in a range from 0.025 psi to 0.75 psi, 0.05 psi to 0.70 psi, 0.075 psi to 0.675 psi, or 0.1 psi to about 0.65 psi. In implementations, the strap210exerts compression forces against the skin of the patient by one or more of manufacturing all or a portion of the strap210from a compression fabric, providing one or more tensioning mechanisms in and/or on the strap210, and providing a cinching closure mechanism for securing and compressing the strap210about the thoracic region105. Compression forces of the medical device can be determined, for example, using one or more pressure sensors and systems as described above with regard to the band110ofFIG.2A.
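As a worked illustration of the compliance test referenced above (pressure sensors at several points between the strap and the skin, with each reading and their average compared against the target range), consider the following sketch. The acceptance policy of checking both individual readings and their average follows the test description; the function and parameter names are illustrative assumptions.

```python
# Sketch of the compression compliance check (illustrative names).
from statistics import mean

def compression_compliant(readings_psi, lo_psi=0.025, hi_psi=0.75):
    """True if every reading and the average fall within [lo_psi, hi_psi]."""
    individual_ok = all(lo_psi <= r <= hi_psi for r in readings_psi)
    average_ok = lo_psi <= mean(readings_psi) <= hi_psi
    return individual_ok and average_ok

# Example: five measurement points at anterior and posterior positions.
print(compression_compliant([0.12, 0.34, 0.28, 0.51, 0.22]))  # True
print(compression_compliant([0.01, 0.34, 0.28, 0.51, 0.22]))  # False
```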
The first wearable portion205includes a plurality of ECG sensing electrodes212disposed about the elongated strap210. The plurality of ECG sensing electrodes is configured to sense an ECG signal of the patient. The plurality of ECG sensing electrodes212can be disposed about the elongated strap210and configured to be in electrical contact with the patient. In implementations, the plurality of ECG sensing electrodes can include a driven ground electrode, or right leg drive electrode, configured to ground the patient and reduce noise in the sensed ECG signal. In embodiments, the plurality of ECG sensing electrodes212are configured to be assembled into the elongated strap210or removably attached to the elongated strap, using, for example, hook and loop fasteners, thermoform press fit receptacles, snaps, and magnets, among other restraints. An example ECG sensing electrode212includes a metal electrode with an oxide coating such as tantalum pentoxide electrodes, as described in, for example, U.S. Pat. No. 6,253,099 entitled “Cardiac Monitoring Electrode Apparatus and Method,” the content of which is incorporated herein by reference. In some implementations, as described previously, the plurality of ECG sensing electrodes212can be a permanent portion of the elongated strap210. For example, the plurality of ECG sensing electrodes212can be formed of the warp and weft of a fabric forming at least a layer of the elongated strap210. In implementations, the plurality of ECG sensing electrodes212are formed from conductive fibers that are interwoven with non-conductive fibers of the fabric. In some implementations, the plurality of ECG sensing electrodes212are metallic plates (e.g., stainless steel) or substrates that are formed as permanent portions of the elongated strap210. A metallic plate or substrate can be adhered to the elongated strap210, for example, by a polyurethane adhesive or a polymer dispersion adhesive such as a polyvinyl acetate (PVAc) based adhesive, or other such adhesive. In examples, the plurality of ECG sensing electrodes212are a plurality of dry ECG sensing electrodes. In examples, the plurality of ECG sensing electrodes212are flexible, dry surface electrodes such as, for example, conductive polymer-coated nano-particle loaded polysiloxane electrodes mounted to the elongated strap210. In some examples, the plurality of ECG sensing electrodes212are flexible, dry surface electrodes such as, for example, silver-coated conductive polymer foam soft electrodes mounted to the elongated strap210. In examples, the plurality of ECG sensing electrodes212are screen printed onto the elongated strap210with a metallic ink, such as a silver-based ink. In implementations, each of the plurality of ECG sensing electrodes212has a conductive surface adapted for placement adjacent the patient's skin. In implementations, the first wearable portion205includes one or more receiving ports213configured to receive one or more additional components including at least one of a treatment electrode214and an additional sensor. The additional sensor can be, for example, one or more physiological sensors for detecting one or more of pulmonary vibrations (e.g., using microphones and/or accelerometers), breath vibrations, sleep related parameters (e.g., snoring, sleep apnea), and tissue fluids (e.g., using radio-frequency transmitters and sensors).
The additional sensor can be, for example, one or more physiological sensors including a pressure sensor for sensing compression forces of the garment, SpO2 sensors, blood pressure sensors, bioimpedance sensors, humidity sensors, temperature sensors, and photoplethysmography sensors. In implementations, the one or more receiving ports213enable the one or more additional components to be assembled into the elongated strap210or removably attached to the elongated strap210, using, for example, hook and loop fasteners, thermoform press fit receptacles, snaps, and magnets, among other restraints and/or mating features. In some examples, the ports213can also be configured to receive one or more motion and/or position sensors. For example, such motion sensors can include accelerometers for monitoring the movement of the patient's torso in x-, y- and z-axes to determine a movement of the patient, gait, and/or whether the patient is upright, standing, sitting, lying down, and/or elevated in bed with pillows. In certain implementations, one or more gyroscopes may also be provided to monitor an orientation of the patient's torso in space to provide information on, e.g., whether the patient is lying face down or face up, or a direction in which the patient is facing. In implementations, the first wearable portion205includes a plurality of conductive wires240configured to couple the plurality of ECG sensing electrodes212and the one or more receiving ports213with the controller220. In implementations, the plurality of conductive wires240extends between the controller220and the plurality of ECG sensing electrodes212and the one or more receiving ports213. The one or more receiving ports213can include thereon a connector for receiving a complementary mating portion of one or more additional sensors selectively disposed on the elongated strap210. The connector of the one or more ports213can be in wired communication with the plurality of conductive wires240such that upon receiving a sensor therein, the one or more receiving ports213each function as a conduit for communicating information between the additional sensor and the controller220. In implementations, the elongated strap210comprises a fabric. The elongated strap210may comprise or consist of an elastic polyurethane fiber that provides stretch and recovery. For example, the elongated strap210may comprise or consist of at least one of neoprene, spandex, nylon-spandex, nylon-LYCRA, ROICA, LINEL, INVIYA, ELASPAN, ACEPORA, and ESPA. In examples, the elongated strap210can comprise or consist of a fabric having a biocompatible surface treatment rendering the fabric water resistant and/or waterproof. In implementations, a portion of the elongated strap210comprises a water resistant and/or waterproof fabric covering and/or encapsulating electronic components including, for example, the sensing electrodes212, the treatment electrodes214, and the plurality of conductive wires240, and a portion of the elongated strap210comprises a water permeable, breathable fabric having a relatively higher moisture vapor transmission rate than the water resistant and/or waterproof portions. In implementations, a plurality of conductive wires240can be woven into the warp and weft of the fabric. In implementations, the plurality of conductive wires240can be integrated into the fabric, disposed between layers of the elongated strap210. In implementations, the elongated strap210can include the plurality of conductive wires240integrated into the fabric of the elongated strap210.
In implementations, the plurality of conductive wires240can comprise or consist of conductive thread. In examples, the plurality of conductive wires240can be integrated in a zigzag or other doubled back pattern so as to straighten as the elongated strap210stretches. The zigzag or doubled-back pattern therefore accommodates stretching and patient movement while keeping the plurality of conductive wires240from contacting the skin of the patient. Integrating the plurality of conductive wires240into the elongated strap210reduces and/or eliminates snagging the wire or thread on an external object. In other examples, the plurality of conductive wires240can be routed on an exterior surface of the elongated strap210so as to avoid contacting the skin of the patient and therefore avoid irritation associated with such potential contact. In implementations, the plurality of conductive wires240includes two or more conductive wires bundled within an insulating outer sheath. In implementations, the plurality of conductive wires240can be routed along the elongated strap and held securely to the elongated strap210by one or more loops of fabric, closable retention tabs, eyelets and/or other retainers so that the plurality of conductive wires240do not snag on or bulge beneath a patient's clothing worn over the elongated strap210. In implementations of the system200, the second wearable portion215is separate from the first wearable portion205. The second wearable portion215is configured to be worn over at least one shoulder of the patient. In implementations, the second wearable portion215includes a wearable substrate216, one or more treatment electrodes214disposed on the wearable substrate216, and at least one conductive wire242configured to releasably connect the one or more treatment electrodes214to the controller220. The one or more treatment electrodes214include an anterior treatment electrode214aand a posterior treatment electrode214b. Each of the one or more treatment electrodes214comprises a corresponding conductive surface configured to contact the patient's skin at an anterior area and a posterior area of the thoracic region105of the patient. The one or more treatment electrodes214are configured to be assembled into the wearable substrate216or removably attached to the wearable substrate, using, for example, pockets formed in or on the wearable substrate, hook and loop fasteners, thermoform press fit receptacles, snaps, and magnets, among other restraints. In some implementations, the one or more treatment electrodes214can be a permanent portion of the wearable substrate216. In implementations, the wearable substrate216comprises or consists of fabric. The fabric may comprise or consist of an elastic polyurethane fiber that provides stretch and recovery. For example, the fabric may comprise or consist of at least one of neoprene, spandex, nylon-spandex, nylon-LYCRA, ROICA, LINEL, INVIYA, ELASPAN, ACEPORA, and ESPA. In implementations, the one or more treatment electrodes214can be formed of the warp and weft of a fabric forming at least a layer of the wearable substrate216. In implementations, the one or more treatment electrodes214are formed from conductive fibers that are interwoven with non-conductive fibers of the fabric. In some implementations, the one or more treatment electrodes214are metallic plates (e.g., stainless steel) or substrates that are formed as permanent portions of the wearable substrate216.
A metallic plate or substrate can be adhered to the wearable substrate, for example, by a polyurethane adhesive or a polymer dispersion adhesive such as a polyvinyl acetate (PVAc) based adhesive, or other such adhesive. In examples, the one or more treatment electrodes214are screen printed onto the wearable substrate216with a metallic ink, such as a silver-based ink. As previously described, the example devices and systems described herein are prescribed to be worn continuously and typically for a prescribed duration of time. For example, the prescribed duration can be a duration for which a patient is instructed by a caregiver to wear the device in compliance with device use instructions. As noted above, the prescribed duration may be for a short period of time until a follow up medical appointment (e.g., 1 hour to about 24 hours, 1 day to about 14 days, or 14 days to about one month), or a longer period of time (e.g., 1 month to about 3 months) during which diagnostics information about the patient is being collected even as the patient is being protected against cardiac arrhythmias. The prescribed use can be uninterrupted until a physician or other caregiver provides a specific prescription to the patient to stop using the wearable medical device. For example, the wearable medical device can be prescribed for use by a patient for a period of at least one week. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least 30 days. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least one month. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least two months. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least three months. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least six months. In an example, the wearable medical device can be prescribed for use by a patient for an extended period of at least one year. Continuous use can include continuously monitoring the patient while the patient is wearing the device for cardiac-related information (e.g., electrocardiogram (ECG) information, including arrhythmia information, cardiac vibrations, etc.) and/or non-cardiac information (e.g., blood oxygen, the patient's temperature, glucose levels, tissue fluid levels, and/or pulmonary vibrations). For example, the wearable medical device can carry out its continuous monitoring and/or recording in periodic or aperiodic time intervals or times (e.g., every few minutes, every few hours, once a day, once a week, or other interval set by a technician or prescribed by a caregiver), as illustrated in the sketch following this passage. Alternatively or additionally, the monitoring and/or recording during intervals or times can be triggered by a user action or another event. The user can be any one of the patient, remote or local physician, remote or local caregiver, or a remote or local technician, for example. Because these devices require continuous operation and wear by patients to whom they are prescribed, advantages of the implementations herein include use of comfortable, non-irritating construction materials and features designed to enhance patient compliance.
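The following is a minimal sketch of such an interval-and-event recording schedule; the five-minute interval and the function names are illustrative assumptions only, not values taken from this disclosure.

```python
import time

class MonitoringScheduler:
    """Sketch: run a recording callback at a set interval or on an event."""

    def __init__(self, record_fn, interval_s=300.0):  # assumed: every 5 min
        self.record_fn = record_fn
        self.interval_s = interval_s
        self._last = float("-inf")

    def tick(self):
        """Call from the device main loop; records when the interval is due."""
        now = time.monotonic()
        if now - self._last >= self.interval_s:
            self.record_fn()
            self._last = now

    def trigger(self):
        """Event-driven recording, e.g., prompted by a user action."""
        self.record_fn()
        self._last = time.monotonic()
```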
Compliance-inducing design features include, for example, device ergonomics, weight of the components and/or distribution of the weight about the device or portions of the device, and inconspicuous appearance when worn under outer garments (e.g., patient clothing), among others. In some implementations described herein, the devices include various features that promote comfort while continuing to protect the patient from adverse cardiac events. These features can be tailored in accordance with patient comfort preference and body morphology. Segregating functionality between a first wearable portion205and a second wearable portion215of the system200ofFIG.6Aprovides advantages of increased patient comfort and device modularity while mitigating motion artifacts associated with the plurality of ECG sensing electrodes212shifting or sliding against the skin of the patient during a continuous duration of wear. Because the first wearable portion205includes sensors for monitoring one or more physiological conditions of the patient, e.g. the plurality of ECG sensing electrodes212and the additional sensor, the first wearable portion205is configured to be worn continuously or nearly continuously for a prescribed duration of wear. As previously described, the elongated strap210encircles the thoracic region105of the patient. As shown inFIG.6B, in implementations, the elongated strap210has a vertical span V4from a bottom circumferential edge265to a top circumferential edge260in a range of 1 to about 15 centimeters. For example, in implementations, the vertical span V4is between 2 to 12 centimeters. For example, in implementations, the vertical span V4is between 3 to 8 centimeters. The elongated strap210exerts a radial compression force in a range of 0.025 psi to 0.75 psi to the thoracic region105of the patient. In implementations, the second wearable portion215comprises a compression force relatively lower than the compression force of the elongated strap210. In implementations, the second wearable portion215comprises one or more compression forces relatively lower than the compression force of the elongated strap210. In implementations, the second wearable portion215comprises an average compression force relatively lower than the compression force of the elongated strap210. Compression forces of the medical device can be determined, for example, using one or more pressure sensors and systems as described above with regard to the band110ofFIG.2A. In implementations, the second wearable portion215is configured to be worn for a cumulative duration less than or equal to the duration of wear of the first wearable portion205as will be described subsequently. Because the elongated strap210has a vertical span V4in a range of 1 to about 15 centimeters (e.g., 1 cm, 2 cm, 3 cm, 4 cm, 5 cm, 6 cm, 7 cm, 8 cm, 9 cm, 10 cm, 11 cm, 12 cm, 13 cm, 14 cm, 15 cm) and is configured to be worn about the thoracic region105at a position or a range of positions between around about the T1 thoracic region to about the T12 thoracic region, the system200accommodates a variety of body sizes and morphologies by avoiding anatomically diverse regions of the human body. Similarly to the embodiment shown inFIGS.3A-B, the elongated strap210is configured to be worn, for example, about the thoracic region at a position that can avoid a chest area and any protruding stomach area. In implementations, the strap210is configured to be worn within a T5 thoracic vertebra region and a T11 thoracic vertebra region.
In implementations, the strap210is configured to be worn within a T8 thoracic vertebra region and a T10 thoracic vertebra region. By securing the elongated strap210on the thoracic region105in this range, the system200immobilizes the ECG sensing electrodes212and any additional sensor against the skin of the patient in a relatively smooth sensor surface to skin surface arrangement. This ensures complete sensor contact with the skin of the patient while reducing or eliminating motion artifacts regardless of patient gender or body type. This placement of the elongated strap210also avoids interference with a patient's arms and prevents movement of the elongated strap210as the patient goes about a daily routine, moving, shifting, bending, twisting, lifting arms, etc. The elongated strap210is discreetly and comfortably secured without covering a substantial portion of the patient's thoracic region105. A substantial portion can be for example, 25%, 30%, 35%, 40%, 45%, 50% or more than 50% of the thoracic region105. The second wearable portion215can be worn for a portion of the prescribed cumulative duration of wear as will be described subsequently with regard toFIGS.8A and8B. Because the second wearable portion215comprises one or more treatment electrodes214, the compression forces of the second portion need not be as great as those of the first wearable portion205having thereon or therein sensing electrodes sensitive to motion artifacts. In implementations, the first wearable portion205can be worn independently of the second wearable portion215and can be configured to provide monitoring functionality. In implementations, the first wearable portion205includes one or more receiving ports213configured to receive an additional sensor. Because the first wearable portion205is a compression device, the first wearable portion205supports monitoring sensors and/or sensing devices without the need for potentially irritating skin adhesives. Such adhesives are generally used to apply independently worn sensors and/or devices, but the first wearable portion205is immobilized by compression forces, reducing or eliminating a need for adhesives. As previously described, in implementations, the system200is a cardiac monitoring and treatment system and the first wearable portion205comprises a plurality of ECG sensing electrodes212. In implementations, the system200includes an ECG acquisition circuit in communication with the plurality of ECG sensing electrodes212and the at least one processor218of the controller220. The ECG acquisition circuit is configured to provide ECG information for the patient based on the sensed ECG signal. In one implementation, the ECG acquisition circuit is collocated with the plurality of ECG sensing electrodes212. In one implementation, the ECG acquisition circuit is located on the device controller220. In implementations, the system200includes a connection pod230in wired connection with one or more of the plurality of ECG sensing electrodes212and the ECG acquisition circuitry. In some examples, the connection pod230includes at least one of the ECG acquisition circuitry and a signal processor configured to amplify, filter, and digitize the cardiac signals prior to transmitting the cardiac signals to the controller220. 
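As one non-limiting illustration of the amplify, filter, and digitize chain attributed to the connection pod230above, the sketch below uses standard DSP primitives; the sampling rate, gain, passband, and ADC resolution are assumptions for illustration, not values from this disclosure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed ECG sampling rate in Hz

def condition_ecg(raw, gain=100.0):
    """Amplify, bandpass-filter (assumed 0.5-40 Hz), and quantize a segment."""
    amplified = np.asarray(raw, dtype=float) * gain
    b, a = butter(2, [0.5, 40.0], btype="band", fs=FS)
    filtered = filtfilt(b, a, amplified)  # zero-phase filtering
    # 12-bit quantization standing in for the analog-to-digital step
    lsb = (filtered.max() - filtered.min() + 1e-12) / 4095.0
    return np.round(filtered / lsb) * lsb
```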
In implementations, the system can include at least one ECG sensing electrode212configured to be adhesively attached to an upper portion of the thoracic region105, above the elongated strap, the at least one ECG sensing electrode212being in wired communication with at least one of the connection pod and the controller220. As previously described, the second wearable portion215is configured to be worn for a cumulative duration less than or equal to the duration of wear of the first wearable portion205. In implementations of the system200, the at least one processor218is configured to predict a likelihood of a cardiac event based on an analysis of the ECG information and provide a notification to the patient to wear the second wearable portion215upon detecting the impending cardiac event. In implementations, the controller220includes a set of instructions comprising a Sudden Cardiac Arrhythmia (SCA) risk analysis assessor219. The SCA risk analysis assessor219provides a set of instructions to the processor for computing an SCA Risk Score and analyzing whether the likelihood of an SCA occurring is high or not. Because the SCA risk analysis assessor219is predictive, the at least one processor218can determine, for example, a high likelihood of an SCA occurring in the next two weeks and prompt the controller220to provide an instruction and/or an alert to the patient to wear the second wearable portion215comprising the one or more treatment electrodes214. As shown inFIGS.8A and8B, in implementations, the at least one processor218receives S805the patient ECG signal from the plurality of ECG sensing electrodes212and computes S810a sudden cardiac arrhythmia (SCA) risk score (S) in method800. In implementations, the SCA risk score is computed based on at least one of ECG metrics passed from an ECG analyzer and patient demographic and clinical data. The ECG metrics can include, for example, one or more of the ECG metrics of table 4.

TABLE 4

Heart Rate
  HRavg      Average heart rate
  HRmin      Minimum heart rate
  HRmax      Maximum heart rate

Heart Rate Variability
  NNavg      Average normal-to-normal interval in seconds
  NNmin      Minimum normal-to-normal interval in seconds
  NNmax      Maximum normal-to-normal interval in seconds
  NNsd       Standard deviation of normal-to-normal intervals in seconds
  RMS        Square root of the mean squared difference of successive normal-to-normal intervals measured in seconds
  NN50       Number of successive normal-to-normal intervals greater than 50 ms per minute
  pNN50      Percentage of normal-to-normal intervals greater than 50 ms per minute

QRS Duration
  QRSmed     Median QRS duration
  QRSsd      Standard deviation of QRS duration

PVCs
  PVCcount   Number of PVCs
  nsvtCount  Number of consecutive heartbeat sequences of PVCs

The patient demographic and clinical data include one or more of the metrics of table 5.

TABLE 5

Demographic and clinical metrics
  Age
  Gender
  Explant of implantable cardioverter defibrillator (ICD)
  Coronary artery bypass graft (CABG)
  Congestive heart failure (CHF)
  Hypertrophic cardiomyopathy (HCM)
  Myocardial infarction (MI)
  Ventricular tachycardia/ventricular fibrillation (VT/VF)

The computed SCA Risk Score (S) is then compared S815against a user-defined risk score threshold (T). If S is less than T, the at least one processor218continues to receive S805patient ECG signals for analysis. If S is greater than T, the processor prompts S820a notification to wear the second wearable portion215.
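For illustration, the sketch below computes a few of the Table 4 heart-rate-variability metrics from normal-to-normal (NN) intervals and applies the S-versus-T comparison of method800. The metric formulations shown are standard ones, and the scoring itself is left abstract, since the actual computation of the assessor219is not specified here.

```python
import numpy as np

def hrv_metrics(nn_s):
    """Subset of Table 4 metrics from NN intervals given in seconds."""
    nn = np.asarray(nn_s, dtype=float)
    d = np.diff(nn)  # successive NN differences
    return {
        "NNavg": nn.mean(),
        "NNsd": nn.std(ddof=1),
        "RMS": np.sqrt(np.mean(d ** 2)),              # RMSSD-style measure
        "pNN50": 100.0 * np.mean(np.abs(d) > 0.050),  # differences > 50 ms
    }

def step_s815(score_s, threshold_t):
    """S815: prompt the wear notification (S820) only when S exceeds T."""
    return score_s > threshold_t
```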
In implementations, computing the SCA Risk Score (S) associated with estimating a risk of a potential cardiac arrhythmia event for the patient includes applying the sets of ECG metrics and patient demographic and clinical data to one or more machine learning classifier models. In some implementations, a machine learning classifier can be trained on a large population, for example, a population that can range from several thousand to tens of thousands of patient records comprising electrophysiology, demographic and medical history information. The machine learning tool can include but is not limited to classification and regression tree decision models, such as random forest and gradient boosting (e.g., implemented using R or any other statistical/mathematical programming language). Any other classification-based machine learning tool can be used, including neural networks and support vector machines. Because the machine learning tool may be computationally intensive, some or all of the processing for the machine learning tool may be performed on a server that is separate from the medical device. Examples of risk prediction methods and classifiers are described in, for example, U.S. Publication No. US 2016/0135706 entitled “Medical Premonitory Event Estimation,” the entire content of which is incorporated herein by reference. In implementations, the system200includes an output device, such as the output device1216of the implementation of the controller220ofFIG.7, and the notification to wear the second wearable portion215is provided via the output device1216. In implementations, the output device1216is a display and/or speaker of the controller220configured to provide a visible and/or audible alarm. In implementations, the controller220includes a speaker for providing an alarm sound and/or spoken instructions alerting the patient to wear the second wearable portion215. The alarm sound can be distinct from an alarm sound indicating imminent treatment and in implementations is provided with increasing volume or frequency depending on the urgency of the predicted SCA. If the at least one processor218determines the SCA is likely to occur within two weeks but not imminently, the alarm may be softer and repeated less frequently than for a more urgently impending event. For example, if the at least one processor218determines the SCA is likely to occur within a week, the notification can include a series of alerts provided at 1 minute increments at a first decibel level. If the processor determines the SCA is likely to occur beyond one week but within two weeks, the series of alerts are provided at 10 minute increments at a second decibel level that is equivalent to or quieter than the first decibel level. A simplified sketch of this escalation appears following this passage. The notification can comprise an instruction to connect the at least one conductive wire242of the second wearable portion215to the controller220. In implementations, the at least one processor is configured to initiate delivery of a therapeutic shock via the one or more treatment electrodes214. Accordingly, the one or more treatment electrodes214need to be operatively connected to the controller220. In implementations, the at least one processor218can be configured to detect successful connection of the at least one conductive wire242. In implementations, the at least one processor218provides, via the output device, an indication of successful connection of the at least one conductive wire242of the second wearable portion215to the controller220.
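The escalation rule in the preceding example can be sketched as follows; the decibel values are assumptions chosen only to satisfy the "equivalent to or quieter" relationship described above, and the beyond-two-weeks branch is an illustrative extrapolation.

```python
def alert_plan(days_to_predicted_sca):
    """Map a predicted time-to-event to alert cadence and loudness (sketch)."""
    if days_to_predicted_sca <= 7:
        return {"interval_min": 1, "level_db": 70}   # first decibel level
    if days_to_predicted_sca <= 14:
        return {"interval_min": 10, "level_db": 65}  # second level <= first
    return {"interval_min": 60, "level_db": 60}      # assumed gentle reminder
```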
If, however, connection of the at least one conductive wire242is not detected within a threshold period of time, the at least one processor218provides an audible and/or visible alert to the patient. In implementations, this process repeats until the one or more treatment electrodes214of the second wearable portion215are operably connected to the controller220and available for providing a therapeutic shock to the patient as initiated by the at least one processor218. In implementations, the system200is configured for use with a remote server, and one or more functions of the at least one processor218are performed by the remote server. Additionally, one or more of the ECG metrics, patient demographic and clinical data, and threshold values can be stored on a remote database in communication with and accessible by the remote server. For example, a processor of the remote server can execute the instructions of the SCA Risk Analysis Assessor and provide the output to the at least one processor218of the controller220. Alternatively or additionally, a processor of the remote server can provide the output to a computing device of a physician or caregiver. The computing device can provide an audible and/or visible notification to the physician or caregiver to instruct the patient on wearing the second wearable portion215. In implementations, splitting computation between the controller220and a remote server300assists with reducing the overall size and simplifying the construction of the controller220. As shown inFIG.9A, the controller220can further be reduced in size as compared to the controller ofFIGS.6A and9B, by distributing controller components throughout the first and second wearable portions205,215. As described with regard toFIG.7and as will be described in greater detail subsequently, implementations of a wearable cardiac monitoring and treatment system200include the controller220comprising one or more of the following components: a therapy delivery circuit1130including a polarity switching component such as an H-bridge1228, a data storage1207, a network interface1206, a user interface1208, at least one battery1140, a sensor interface1202that includes, for example, an ECG data acquisition and conditioning circuit, an alarm manager1214, the at least one processor218, and one or more capacitors1135. As shown inFIG.9A, in implementations, the high voltage components such as the one or more capacitors1135and therapy delivery circuit1130can be redistributed to the second wearable portion215. For example, the one or more treatment electrodes214can include one or more of these high voltage components. Redistributing these bulkier and heavier components to the second wearable portion215reduces the overall size and weight of the continuously worn controller220. Because the controller220is worn continuously or substantially continuously throughout the duration of wear of the system200and because the second wearable portion215is worn only when an SCA Risk Assessment Score (S) exceeds a threshold (T), the heavier portions are only worn when necessary. This further assists with overall patient comfort and encourages patient compliance with wearing the first wearable portion205for the prescribed duration of wear. In implementations, as shown in the timeline ofFIG.8B, the cumulative duration of wear850of the first wearable portion205is equal to or greater than the cumulative duration of wear852a,852bof the second wearable portion215because the second wearable portion215is worn only when the patient is prompted by the system200.
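Returning to the connection-detection behavior described at the start of this passage, one plausible sketch of the alert-until-connected loop is shown below; the callables and timing values are hypothetical stand-ins for the device's circuits.

```python
import time

def await_treatment_connection(wire_connected, alert, confirm,
                               timeout_s=60.0, poll_s=1.0):
    """Re-alert each timeout window until the conductive wire is detected."""
    deadline = time.monotonic() + timeout_s
    while not wire_connected():
        if time.monotonic() >= deadline:
            alert()  # audible and/or visible alert to the patient
            deadline = time.monotonic() + timeout_s
        time.sleep(poll_s)
    confirm()  # indication of successful connection via the output device
```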
Although a physician may prescribe the system200for a duration of wear850, only the first wearable portion205, the monitoring portion, need be worn continuously throughout that prescribed duration of wear850. As shown on the example timeline ofFIG.8B, the patient is not wearing the second wearable portion during an initial span851beginning at the start of the duration of wear850and lasting until S>T, about a week and a half past the one month mark. The second wearable portion215is worn when an SCA Risk Score (S) exceeds a threshold (T), but because the monitoring is continuous, the at least one processor218may detect an improvement in the patient's condition. In implementations, if the SCA Risk Score (S) improves during the prescribed duration of wear of the first wearable portion205so that S is less than T, the at least one processor218can notify the patient to remove the second wearable portion215. For example, in the timeline ofFIG.8B, the SCA risk analysis assessor outputs S<T about one and a half weeks past the second month mark. In implementations, the at least one processor218calculates a wait period853of about another 1-2 weeks and continues computing the SCA Risk Score (S) for the duration of the wait period853to ensure the patient's condition is stable. At the end of the wait period, the at least one processor218can provide a notification that the patient may remove the second wearable portion215during a second period854of not wearing the second wearable portion215because S has remained less than T. In implementations, S must remain less than T without fluctuation and within a user-defined tolerance range (e.g., 5% or more below the threshold T) during the wait period853in order for the at least one processor218to provide a notification to remove the second wearable portion215, as in the sketch following this passage. In this example, the at least one processor218computes an SCA Risk Score S greater than T at month 4 and again prompts a notification to wear the second wearable portion215. Because monitoring and analysis is continuous throughout the duration of wear850, the SCA Risk Score S may remain greater than T until the end of the prescribed duration of wear850, at which time the physician or caregiver may re-evaluate treatment options for the patient. The system200therefore continuously monitors the patient's physiological condition and protects the patient from harm while also accounting for patient comfort by avoiding unnecessary wear of the second wearable portion215. Patient comfort is also achieved by customizing one or more features of the elongated strap210for each patient's preferences and body morphology. Returning toFIGS.6A-B, the elongated strap210can be sized to fit about the thoracic region105of the patient by matching the length of the elongated strap210to one or more circumferential measurements of the thoracic region105during an initial fitting. For example, in an initial fitting, a caregiver, physician or patient service representative (PSR) can measure the circumference of the thoracic region105of the patient at one or more locations disposed about the thoracic region105between about the T1 thoracic region and the T12 thoracic region, and select an elongated strap210having a length L2within a range of 2-25% longer than the largest measured circumference. For example, the strap210can be configured to be worn within a T5 thoracic vertebra region and a T11 thoracic vertebra region.
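The wait-period rule above can be sketched as follows, assuming daily scoring; the 5% margin and two-week window mirror the example in the text, while the function name and the daily cadence are assumptions made for illustration.

```python
def may_remove_second_portion(daily_scores, threshold_t,
                              tolerance=0.05, wait_days=14):
    """True only if every score in the wait window stays >= 5% below T."""
    window = daily_scores[-wait_days:]
    if len(window) < wait_days:
        return False  # wait period not yet complete; keep monitoring
    return all(s < threshold_t * (1.0 - tolerance) for s in window)
```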
As a further example, the strap210can be configured to be worn within a T8 thoracic vertebra region and a T10 thoracic vertebra region. Having the elongated strap210be longer than the largest measured circumference of the thoracic region105can provide comfort advantages of loosening and tightening the elongated strap210to accommodate fluctuations in body mass throughout the prescribed duration of wear. Additionally, in embodiments of the system200having a fastener247configured to secure the elongated strap210about the thoracic region105, the patient can loosen or reposition the elongated strap210around one or more positions along the thoracic region105between about the T1 thoracic region and T12 thoracic region. In implementations, at least one fastener247is disposed on a first end245aof the elongated strap210for adjoining a second end245bof the elongated strap210in secured attachment about the thoracic region105of the patient. In implementations, the fastener247is an adjustable latching mechanism configured to secure and tighten the elongated strap210about the thoracic region105of the patient. Additionally or alternatively, the elongated strap210or other support elements of preceding and subsequently described implementations, such as appendages111ofFIG.2A and211ofFIGS.9A-B, and sash410ofFIG.14, can have proportions and dimensions derived from patient-specific thoracic 3D scan dimensions so as to provide conformally fitted support elements shaped to fit the particular patient's exact body shape, thereby providing a much higher degree of comfort than an off-the-shelf garment. The 3D scan dimensions may be generated from a three dimensional imaging system such as a 3D surface imaging technology with anatomical integrity, for instance the 3dMDthorax System by 3dMD LLC, Atlanta GA. The three-dimensional imaging system can comprise one or more of a digital camera, RGB camera, digital video camera, red-green-blue sensor, and/or depth sensor for capturing visual information and static or video images of the patient. In some examples, the three-dimensional imaging system can comprise both optical and depth sensing components as with the Kinect motion sensing input device by Microsoft, or the Apple TrueDepth 3D sensing system which may include an infrared camera, flood illuminator, proximity sensor, ambient light sensor, speaker, microphone, 7-megapixel traditional camera, and dot projector (which projects up to 30,000 points on an object during a scan). The patient-specific thoracic 3D scan dimensions can be input into custom-tailoring software such as ACCUMARK MADE-TO-MEASURE and ACCUMARK 3D by Gerber Technology of Tolland, CT, or EFI Optitex 2D and 3D integrated pattern design software by EFI Optitex of New York, NY. The dimensions as well as three-dimensional surfaces can also be input into a 3D printer such as the FORMLABS FORM3L 3D printer (by Formlabs of Somerville, MA) using the FORMLABS elastic resin to generate strap or other support elements that conform to the patient's body shape. The elastic resin comprises a Shore durometer of between about 40A and 80A (e.g., 40A, 45A, 50A, 55A, 60A, 65A, 70A, 75A, 80A). From a 3-dimensional scan of the thoracic region105of the patient, an elongated strap210or other support element can be sized to fit proportions and dimensions of the thoracic region105in a nested fit that conforms to the specific patient's body shape.
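A worked example of the 2-25% fitting rule described above follows; the 10% margin chosen here is arbitrary within the stated range, and the function name is hypothetical.

```python
def select_strap_length(circumferences_cm, margin=0.10):
    """Length L2 a given margin (2-25%) longer than the largest girth."""
    if not 0.02 <= margin <= 0.25:
        raise ValueError("margin must lie in the 2-25% fitting range")
    return max(circumferences_cm) * (1.0 + margin)

# e.g., measurements of 92, 95, and 97 cm yield a 106.7 cm strap at 10%
print(round(select_strap_length([92.0, 95.0, 97.0]), 1))
```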
In implementations, for example, various body size measurements and/or 3D images may be obtained of at least a portion of the patient's body, and one or more portions of the elongated strap210or other support element can be formed of a plastic, polymer, or woven fabric to have contours accommodating one or more portions of the thoracic region, or other anatomical region such as arms, neck, etc., conforming to the specific patient's body shape. For example, one or more portions of the elongated strap210or other support element may be 3D printed from, for example, any suitable thermoplastic (e.g., ABS plastic) or any elastomeric and/or flexible 3D printable material. For example, the elongated strap210may include at least two curved rigid or semi-rigid portions for engaging the patient's sides, under the arms. The at least two curved portions add rigid structure that assists with preventing the elongated strap210from shifting or rotating about the thoracic region. This stability provides consistency of sensor signal readings and prevents noise associated with sensor movement. As described previously with regard to the embodiments ofFIGS.2A-B, in implementations, the elongated strap210comprises at least one visible indicator249of elongated strap210tension disposed on a surface of the elongated strap210. For example, the visible indicator249can be a color changing indicator incorporated in the elongated strap210indicating whether the elongated strap210is too loose, overtightened, or compressed within the range of compressive forces. As the elongated strap210stretches, the material forming the visible indicator249, for example, can change color between blue, indicating over-tensioning or under-tensioning, and yellow or green, indicating proper tensioning for simultaneously enabling sensor readings and patient comfort. In one implementation, the visible indicator249can comprise one or more stretchable, multilayer smart fibers disposed in or on the elongated strap210. The one or more smart fibers change color from red, to orange, to yellow, to green and to blue as strain on the fiber increases. Providing a visible indication directly on the elongated strap210enables a patient to adjust or reapply the strap210so that the plurality of ECG sensing electrodes212and the one or more treatment electrodes214are properly positioned and immobilized on the thoracic region105and so that the strap210is not overtightened, applying compressive forces to the thoracic region105at a level causing patient discomfort. In other implementations, the elongated strap210can include a mechanical strain gauge in or on the elongated strap210. The mechanical strain gauge can be communicatively coupled to the plurality of conductive wires240such that the controller220provides an audible and/or visible indication of whether the elongated strap210is over-tightened, too loose, or within the range of compression forces enabling effective use and wear comfort. In implementations, the elongated strap210comprises an unbroken loop comprising a stretchable fabric. The elongated strap210can be configured to stretch over the shoulders or hips of the patient and contract when positioned about the thoracic region105. In implementations, the stretchable fabric comprises at least one of nylon, LYCRA, spandex, and neoprene. During an initial fitting, the physician, caregiver, or PSR can select an elongated strap210sized to fit the patient.
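The color progression of the smart fibers described above, and its interpretation, can be sketched as follows. The monotonic strain-to-color mapping is one reading of the description, and the strain breakpoints are assumptions, since the disclosure gives no numeric strain values.

```python
def fiber_color(strain):
    """Red -> orange -> yellow -> green -> blue as fiber strain increases."""
    for limit, color in [(0.05, "red"), (0.10, "orange"),
                         (0.15, "yellow"), (0.25, "green")]:
        if strain < limit:
            return color
    return "blue"

def tension_ok(color):
    """Yellow or green indicates proper tensioning; other colors flag retying."""
    return color in ("yellow", "green")
```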
In such a fitting, for example, the physician, caregiver, or PSR can measure a circumference about one or more locations of the thoracic region105. The physician, caregiver, or PSR can select an elongated strap210having a circumference within about 75% to about 95% of the measurement of the one or more locations about the thoracic region105. In some implementations, the elongated strap210comprises an elasticized thread. In some implementations, the elongated strap210comprises an elasticized panel disposed in the elongated strap210, the elasticized panel comprising a portion of the elongated strap210spanning less than a total length of the elongated strap210. For example, the elongated strap210can include one or more mechanically joined sections forming a continuous length L2or unbroken loop. One or more of the sections can comprise a stretchable fabric and/or elasticized thread interspersed with non-stretchable or relatively less stretchable portions. In other embodiments, the elongated strap210can include an adjustable tension element, such as one or more cords disposed in the elongated strap210and configured to be tensioned and held in tension by one or more pull stops. In all embodiments, the elongated strap210can include one or more visible or mechanical tension indicators configured to provide a notification of the elongated strap210exerting compression forces against the thoracic region105in a range from 0.025 psi to 0.75 psi. In implementations, the elongated strap210comprises a breathable, skin-facing layer including at least one of a compression padding, a silicone tread, and one or more textured surface contours. The breathable material and compression padding enable patient comfort throughout the duration of wear and the silicone tread and/or one or more surface contours assist with immobilizing the elongated strap210relative to the skin surface of the thoracic region. Implementations of the elongated strap210in accordance with the present disclosure may exhibit a moisture vapor transmission rate (MVTR) of, for example, between about 600 g/m2/day and about 1,400 g/m2/day when worn by a subject in an environment at room temperature (e.g., about 25° C.) and at a relative humidity of, for example, about 70%. In implementations, the elongated strap210has a water vapor permeability of 100 g/m2/24 hours, as measured by such vapor transmission standards of ASTM E-96-80 (Version E96/E96M-13), using either the “in contact with water vapor” (“dry”) or “in contact with liquid” (“wet”) methods. Such test methods are described in U.S. Pat. No. 9,867,976, titled “LONG-TERM WEAR ELECTRODE,” issued on Jan. 16, 2018 (hereinafter the “'976 Patent”), the disclosure of which is incorporated by reference herein in its entirety. In implementations, the elongated strap210comprises one or more moisture wicking fabrics for assisting with moving moisture away from the skin of the thoracic region105and improving patient comfort throughout the prescribed duration of wear. In implementations, the elongated strap210includes low skin-irritation fabrics and/or adhesives. In embodiments, the elongated strap210may be worn continuously by a patient for a long-term duration (e.g., duration of at least one week, at least 30 days, at least one month, at least two months, at least three months, at least six months, and at least one year) without the patient experiencing significant skin irritation.
For example, a measure of skin irritation can be based on skin irritation grading of one or more as set forth in Table C.1 of Annex C of American National Standard ANSI/AAMI/ISO 10993-10:2010, reproduced above in Table 1. The second wearable portion215similarly can comprise or consist of low skin irritation fabrics. Additionally, the substrate216of second wearable portion215can be lightweight and less compressive than the elongated strap210of the first wearable portion205. In implementations, such as those ofFIGS.6A-Band9A-13B, the second wearable portion215comprises at least one of a shirt, a vest, a bandeau, a pinnie, a butterfly harness, a yoke, and a dickie. The first and second wearable portions205,215are configured to be worn beneath a clothing of the patient. By keeping the substrate216of the second wearable portion215minimal, the system200further minimizes patient discomfort and visibility of the system200when worn beneath outer garments (e.g., the patient's clothing). For example, as shown in the implementation ofFIGS.10A-B, the second wearable portion215can comprise a belt271and suspenders217for supporting one or more treatment electrodes214and one or more additional sensors, such as a p-wave sensor223located on an upper half of the thoracic region105. In implementations, as shown inFIGS.11A-B, the second wearable portion215can be a holster worn about the armpits and supported by the patient's shoulders. In implementations, as shown inFIGS.12A-B, the second wearable portion215can be a butterfly harness worn about the armpits and supported by the patient's shoulders. Additionally or alternatively, in implementations, the first wearable portion205and/or the second wearable portion215can include additional sensors and in implementations, one or both of the first wearable portion205and second wearable portion215can include various structural elements for supporting one or more additional sensors of the system200. For example, the first wearable portion205further can include an appendage211mechanically attached to the elongated strap210. In implementations, the appendage is a flap, similar to the anterior and posterior appendages150,155ofFIGS.2A-B. In implementations, as shown inFIGS.6A and9A, the appendage211is an over-the-shoulder sash. In implementations, as shown inFIG.9B, the appendage211is a pair of over-the-shoulder sashes crossing over the anterior area of the thoracic region105. In implementations, the appendage211is monolithically formed with the elongated strap210and therefore non-separable from the elongated strap210. In implementations, the appendage211is configured to be affixed to the elongated strap210. The appendage211can be affixed to the elongated strap by permanent fasteners, such as, for example, rivets, stitches, heat welds, and adhesives. In other implementations, one or both ends of the appendage211can be affixed to the elongated strap210by releasable fasteners, such as zippers, hook and loop fasteners, buttons, and snaps. The appendage211can be adjustable in length and can comprise a stretchable fabric to hold the appendage211in compression against the thoracic region105. For example, the appendage211can comprise a fabric comprising or consisting of an elastic polyurethane fiber that provides stretch and recovery. For example, the fabric may comprise or consist of at least one of neoprene, spandex, nylon-spandex, nylon-LYCRA, ROICA, LINEL, INVIYA, ELASPAN, ACEPORA, and ESPA.
In implementations, the appendage211can be optionally affixed to the elongated strap to provide additional functionality as prescribed by a physician and/or to provide the patient an opportunity to remove, launder, swap out, and/or replace the appendage211. For example, if the appendage211starts to stretch and loosen, the patient may prefer to remove the appendage211and don a new, more taut replacement. In implementations, the appendage211is configured to be continuously worn about the thoracic region105of the patient and comprises at least one additional ECG sensing electrode212bin communication with the plurality of conductive wires240of the elongated strap210. The at least one additional ECG sensing electrode212bis configured to sense the ECG signal of the patient in conjunction with the plurality of ECG sensing electrodes212of the elongated strap210. As previously described with regard to the device ofFIG.5, an appendage111comprises at least one treatment electrode114bin communication with the at least one processor, the at least one treatment electrode114bconfigured to provide a therapeutic shock. In such an implementation, the at least one treatment electrode114bis in wired communication with the plurality of conductive wires of the band110. Similarly, the appendage211ofFIGS.9A-Bcan include at least one of one or more permanently affixed and/or selectively added additional treatment electrodes, additional ECG sensing electrodes212b, p-wave sensors, and other physiological sensors. In the implementations ofFIGS.9A-9B, for example, the appendage211comprises thereon an additional ECG sensing electrode212bpositioned in an upper anterior region of the thoracic region105, such that the at least one processor218can monitor a standard ECG signal lead. Additionally, in the implementations ofFIGS.9A-9B, the appendage211includes one or more receiving ports213a-bconfigured to receive one or more additional sensors. The additional one or more sensors can be, for example, one or more physiological sensors for detecting one or more of pulmonary vibrations (e.g., using microphones and/or accelerometers), breath vibrations, sleep related parameters (e.g., snoring, sleep apnea), and tissue fluids (e.g., using radio-frequency transmitters and sensors). The one or more additional sensors of the appendage211can be, for example, one or more physiological sensors including a pressure sensor for sensing compression forces of the garment, SpO2 sensors, blood pressure sensors, bioimpedance sensors, humidity sensors, temperature sensors, and photoplethysmography sensors. In some examples, the one or more receiving ports213a-bcan also be configured to receive one or more motion and/or position sensors. For example, the additional one or more sensors can be motion sensors including accelerometers for monitoring the movement of the patient's torso in x-, y- and z-axes to determine a movement of the patient, gait, and/or whether the patient is upright, standing, sitting, lying down, and/or elevated in bed with pillows. In certain implementations, one or more gyroscopes may also be provided to monitor an orientation of the patient's torso in space to provide information on, e.g., whether the patient is lying face down or face up, or a direction in which the patient is facing. In the implementation ofFIGS.10A-B, the one or more additional sensors can be supported by the second wearable portion215.
For example, the suspenders217of the second wearable portion ofFIG.10Ahave disposed thereon an ECG sensor212and a p-wave sensor223located in an upper anterior portion of the thoracic region for optimal positioning for sensor readings. Similarly, the implementations of the second portion215ofFIGS.10B-12Binclude one or more additional sensors, including at least an ECG sensing electrode212. As previously described, the first wearable portion205is continuously worn or substantially continuously worn about the thoracic region105throughout the prescribed duration of wear. In some implementations, such as that ofFIG.13, the monitoring portion of the system200can include a first wearable portion205including an elongated strap210as described previously in embodiments and a second, separate strap206configured to be draped around the upper portion of the thoracic region105. The separate strap206is configured to support one or more additional sensors such as a p-wave sensor223and an additional ECG sensing electrode212configured to detect an ECG signal of the patient in conjunction with the one or more ECG sensing electrodes of the elongated strap210. This second strap206provides optimal placement of the additional sensors for detecting one or more conditions of the patient without the use of potentially irritating adhesives. While the first wearable portion205can provide, in implementations, various combinations of physiological sensors, in other implementations, the device can be a unitary wearable device including all sensing and treatment sensors. Similar to the device100ofFIGS.2A-2B, the cardiac monitoring and treatment device400ofFIG.14includes a continuously worn, cross-body sash410worn over a shoulder of a patient and around an opposite side of the patient. In implementations, the sash410is configured to be worn over a shoulder of a patient, encircling a thoracic region105, extending from over the first shoulder of the patient across an anterior area of the thoracic region105to an opposite lateral side of the thoracic region105under the second shoulder of the patient adjacent to the axilla, and further extending across a posterior area of the thoracic region105from under the second shoulder to over the first shoulder. The device400comprises a plurality of electrodes and associated circuitry disposed about the sash410. The plurality of electrodes can include at least one pair of sensing electrodes412disposed about the sash410and configured to be in electrical contact with the patient. The at least one pair of sensing electrodes412can be configured to detect one or more cardiac signals such as ECG signals. An example ECG sensing electrode412includes a metal electrode with an oxide coating such as tantalum pentoxide electrodes, as described in, for example, U.S. Pat. No. 6,253,099 entitled “Cardiac Monitoring Electrode Apparatus and Method,” the content of which is incorporated herein by reference. The device400can include an ECG acquisition circuit in communication with the at least one pair of ECG sensing electrodes412and configured to provide ECG information for the patient based on the sensed ECG signal. In implementations, the at least one pair of sensing electrodes can include a driven ground electrode, or right leg drive electrode, configured to ground the patient and reduce noise in the sensed ECG signal. The plurality of electrodes can include at least one pair of treatment electrodes414aand414b(collectively referred to herein as treatment electrodes414) coupled to a treatment delivery circuit.
The at least one pair of treatment electrodes414can be configured to deliver an electrotherapy to the patient. For example, one or more of the at least one pair of treatment electrodes414can be configured to deliver one or more therapeutic defibrillating shocks to the body (e.g., the thoracic region105) of the patient when the medical device400determines that such treatment is warranted based on the signals detected by the at least one pair of ECG sensing electrodes412and processed by the medical device controller420. Example treatment electrodes414include, for example, conductive metal electrodes such as stainless steel electrodes that include, in certain implementations, one or more conductive gel deployment devices configured to deliver conductive gel to the metal electrode prior to delivery of a therapeutic shock. In implementations, a first one of the at least one pair of treatment electrodes414ais configured to be located within an anterior area of the thoracic region105and a second one of the at least one pair of treatment electrodes414bis configured to be located within a posterior area of the thoracic region105of the patient. In some implementations, the anterior area can include a side area of the thoracic region105. In some examples, at least some of the plurality of electrodes and associated circuitry of the device400can be configured to be selectively affixed or attached to the sash410which can be worn about the patient's thoracic region105. In some examples, at least some of the plurality of electrodes and associated circuitry of the device400can be configured to be permanently secured into the sash410. In implementations, the plurality of electrodes are manufactured as integral components of the sash410. For example, the at least one pair of treatment electrodes414and/or the at least one pair of ECG sensing electrodes412can be formed of the warp and weft of a fabric forming at least a layer of the sash410. In implementations, the at least one pair of treatment electrodes414and at least one pair of ECG sensing electrodes412are formed from conductive fibers that are interwoven with non-conductive fibers of the fabric. In implementations, the device400includes a controller420including an ingress-protected housing, and a processor disposed within the ingress-protected housing. The processor is configured to analyze the ECG information of the patient from the ECG acquisition circuit and detect one or more treatable arrhythmias based on the ECG information, and cause the treatment delivery circuit to deliver the electrotherapy to the patient on detecting the one or more treatable arrhythmias. The medical device controller420can be operatively coupled to the at least one pair of ECG sensing electrodes412, which can be affixed to the sash410. In embodiments, the at least one pair of ECG sensing electrodes412are assembled into the sash410or removably attached to the garment, using, for example, hook and loop fasteners, thermoform press fit receptacles, snaps, and magnets, among other restraints. In some implementations, as described previously, at least one pair of ECG sensing electrodes412can be a permanent portion of the sash410. The medical device controller420also can be operatively coupled to the at least one pair of treatment electrodes414.
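A simplified sketch of the detect-then-treat sequence attributed to the processor and treatment delivery circuit above follows; the rhythm labels and the callables are hypothetical stand-ins, not the actual detection algorithm of the disclosed device.

```python
def treatment_cycle(ecg_segment, detect_rhythm, deploy_gel, deliver_shock):
    """Deliver electrotherapy only on a treatable arrhythmia (sketch)."""
    rhythm = detect_rhythm(ecg_segment)  # e.g., "VT", "VF", or "normal"
    if rhythm in ("VT", "VF"):           # assumed treatable arrhythmias
        deploy_gel()                     # wet the electrode-skin interface
        deliver_shock()
        return True
    return False
```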
The at least one pair of treatment electrodes414can, for example, be assembled into the sash410, or, as described previously, in some implementations, the at least one pair of treatment electrodes414can be a permanent portion of the sash410. Optionally, the device can include a connection pod430in wired connection with one or more of the plurality of electrodes and associated circuitry. In some examples, the connection pod430includes at least one of the ECG acquisition circuit and a signal processor configured to amplify, filter, and digitize the cardiac signals prior to transmitting the cardiac signals to the medical device controller420. In implementations, the device400can include at least one ECG sensing electrode412configured to be adhesively attached to the upper portion of the thoracic region105, above the sash410, the at least one ECG sensing electrode412being in wired communication with the ECG acquisition circuitry and at least one of the connection pod and the controller420. In implementations, the device includes a conductive wiring440configured to communicatively couple the controller420to the plurality of electrodes and associated circuitry disposed about the sash410. In implementations, the conductive wiring440can be woven into the warp and weft of the fabric. In implementations, the conductive wiring440can be integrated into the fabric, disposed between layers of the sash. In implementations, the conductive wiring440can include one or more conductive threads integrated into the fabric of the sash410. In examples, the one or more conductive threads can be integrated in a zigzag or other doubled back pattern so as to straighten as the sash410stretches. The zigzag or doubled-back pattern therefore accommodates stretching and patient movement while keeping the one or more conductive threads from contacting the skin of the patient. Integrating the conductive wiring440into the sash410reduces and/or eliminates snagging the wire or thread on an external object. In other examples, the conductive thread can be routed on an exterior surface of the sash410so as to avoid contacting the skin of the patient and therefore avoid irritation associated with such potential contact. In implementations, the conductive wiring440includes two or more conductive wires bundled within an insulating outer sheath. In implementations, the conductive wiring440can be routed along the sash410and held securely to the sash410by one or more loops of fabric, closable retention tabs, eyelets and/or other retainers so that the conductive wiring440does not snag on or bulge beneath a patient's clothing worn over the sash410. Similar to the implementation described previously with regard to the device100ofFIGS.2A-B, the ingress-protected housing of the controller420of the device400protects the components thereunder from external environmental impact, for example damage associated with water ingress. Preventing such ingress protects the electronic components of the device400from short-circuiting or corrosion of moisture-sensitive electronics, for example, when a patient wears the device while showering. Such features may also protect from other liquid and solid particle ingress. In implementations, the ingress-protected housing of the controller420includes at least one ingress-protected connector port421configured to receive at least one connector441of the conductive wiring440. The at least one ingress-protected connector port can have an IP67 rating such that the device can be connected to the controller420and operable when a patient is showering or bathing, for example.
Additionally, the sash410can be water vapor-permeable, and substantially liquid-impermeable or waterproof. In implementations, a portion of the sash410comprises a water resistant and/or waterproof fabric covering and/or encapsulating electronic components including, for example, the at least one pair of ECG sensing electrodes412, the at least one pair of treatment electrodes414, and the conductive wiring440, and a portion of the sash410comprises a water permeable, breathable fabric having a relatively higher moisture vapor transmission rate than the water resistant and/or waterproof portions. The sash410can comprise or consist of at least one of neoprene, spandex, nylon-spandex, nylon-LYCRA, ROICA, LINEL, INVIYA, ELASPAN, ACEPORA, and ESPA. In examples, the sash410can comprise or consist of a fabric having a biocompatible surface treatment rendering the fabric water resistant and/or waterproof. For example, the fabric can be enhanced by dipping in a bath of fluorocarbon, such as Teflon or fluorinated-decyl polyhedral oligomeric silsesquioxane (F-POSS). Additionally or alternatively, the sash410can comprise or consist of a fabric including anti-bacterial and/or anti-microbial yarns. For example, these yarns can include a base material of at least one of nylon, polytetrafluoroethylene, and polyester. These yarns can be, for example, one or more of an antibacterial silver-coated yarn, antibacterial DRALON yarn, DRYTEX ANTIBACTERIAL yarn, NILIT BREEZE and NILIT BODYFRESH. In implementations, the outer surface of the sash410can comprise one or more patches of an electrostatically dissipative material such as a conductor-filled or conductive plastic in order to prevent static cling of a patient's clothing. Alternatively, in embodiments, the sash410comprises a static dissipative coating such as LICRON CRYSTAL ESD Safe Coating (TECHSPRAY, Kennesaw, GA), a clear electrostatic dissipative urethane coating. In implementations, the sash410can include one or more sensor ports415a-c(collectively referred to as415) for receiving one or more physiological sensors423separate from the at least one pair of ECG sensing electrodes412. The one or more physiological sensors423can be, for example, sensors for detecting one or more of pulmonary vibrations (e.g., using microphones and/or accelerometers), breath vibrations, sleep related parameters (e.g., snoring, sleep apnea), and tissue fluids (e.g., using radio-frequency transmitters and sensors). The one or more additional sensors can be, for example, one or more physiological sensors including a pressure sensor for sensing compression forces of the garment, SpO2 sensors, blood pressure sensors, bioimpedance sensors, humidity sensors, temperature sensors, and photoplethysmography sensors. In some examples, the one or more sensor ports415can also be configured to receive one or more motion and/or position sensors. For example, such motion sensors can include accelerometers for monitoring the movement of the patient's torso in x-, y- and z-axes to determine a movement of the patient, gait, and/or whether the patient is upright, standing, sitting, lying down, and/or elevated in bed with pillows. In certain implementations, one or more gyroscopes may also be provided to monitor an orientation of the patient's torso in space to provide information on, e.g., whether the patient is lying face down or face up, or a direction in which the patient is facing. Returning toFIG.14, the sash410can be sized to fit about the thoracic region105of the patient.
In implementations, the sash410can have proportions and dimensions derived from patient-specific thoracic 3D scan dimensions so as to be conformally fitted and shaped to fit the particular patient's exact body shape, thereby providing a much higher degree of comfort than an off-the-shelf garment. In implementations, sizing the device to fit the patient comprises determining dimensions of the thoracic region105in an initial fitting. In implementations, the sash is 3D printed to conform to at least one of body proportions, body shape, body posture, and linear surface measurements of the thoracic region of the patient. In implementations, at least a portion of the sash is 3D printed to conform the proportions, dimensions, and shape of the sash to one or more portions and dimensions of the thoracic region105, thereby providing a customized, comfortable fit to the patient and further encouraging patient compliance with wearing the device400throughout the prescribed duration of wear. In implementations, for example, various body size measurements and/or 3D images may be obtained from the patient, and one or more portions of the sash410can be formed of a plastic or polymer to have contours accommodating one or more portions of the thoracic region in a fit that conforms to the specific patient's body shape. A 3D scan can determine, for example, thoracic circumference, lateral width of a patient's chest, contours of the thoracic region, and other relevant physical features of the patient. In implementations, one or more portions of the sash may be 3D printed from, for example, any suitable thermoplastic (e.g., ABS plastic) or any elastomeric and/or flexible 3D printable material. In implementations, a portion of the sash410can be 3D printed to nest with the contours of the patient's shoulder in a comfort fit, like a prosthetic cup sized and shaped to accommodate a limb. The 3D printed shoulder portion remains seated comfortably on the patient's shoulder and assists with preventing the sash410from shifting or rotating. In implementations, the sash410may include at least two curved rigid or semi-rigid portions for engaging the patient's shoulder and side, under the opposite shoulder. The at least two curved portions add rigid structure that assists with preventing the sash410from shifting or rotating about the thoracic region. This stability provides consistency of sensor signal readings and prevents noise associated with sensor movement. As described previously, during an initial fitting, a physician, caregiver, or PSR can perform a 3D scan of the patient's thoracic region using, for example, three-dimensional imaging systems such as cameras and scanners. For example, the imaging system can include a handheld device, such as a handheld digital camera or smart phone, carried by the physician, caregiver, or PSR. In implementations, a 3D imaging system can include a plurality of conventional digital cameras. Although designs differ among vendors, as is known in the art, a camera usually comprises a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) imaging sensor, a lens, a multifunctional video control chip, and a set of discrete components (e.g., capacitors, resistors, and connectors). An image is recorded by the imaging sensor and can be processed by the video control chip.
Captured images can also be processed by, for example, a three-dimensional information and/or image processing module configured to identify anatomical structures, distances, and physical objects contained in the captured images. In some examples, a camera can include one or more of a digital camera, RGB camera, digital video camera, red-green-blue sensor, and/or depth sensor for capturing visual information and static or video images of the patient. The camera can also comprise multiple image capture features for obtaining stereo images of the thoracic region105of the patient. The stereo images can be processed to determine depth information for physical features of the patient's thoracic region105. In other examples, the camera can be a wide angle or fish-eye camera, a three-dimensional camera, a light-field camera, or a similar device for obtaining images. A light-field or three-dimensional camera can refer to an image capture device having an extended depth of field. Advantageously, the extended depth of field means that during image processing, a user can change the focus, point of view, or perceived depth of field of a captured image after the image has been recorded. As such, it has been suggested that an image captured using a light-field or three-dimensional camera contains all information needed to calculate a three-dimensional form of a patient's thoracic region105. See Christian Perwass, et al., "Single Lens 3D-Camera with Extended Depth-of-Field", Raytrix GmbH, Schauenburgerstr. 116, 24116 Kiel, Germany (2012), which describes an implementation of a light-field 3D camera that may be implemented in embodiments of the present disclosure. In implementations, 3D information and/or images from a three-dimensional imaging system or sensor can be processed to produce a three-dimensional representation of the thoracic region105. In some embodiments, the 3D imaging system can be configured to project a grid of markers so as to capture high resolution patient anatomical features. For example, a camera using technology similar to that of the Kinect motion sensing input device provided by Microsoft Corporation may be employed. Such cameras may include a depth sensor employing an infrared laser projector combined with a monochrome CMOS sensor, which allows for 3D video data to be captured under ambient light conditions. It can be appreciated that any suitable 3D imaging system may be used. A 3D representation may be generated by a 3D surface imaging technology with anatomical integrity, for instance the 3dMDthorax System (3dMD LLC, Atlanta, GA). In implementations, a three-dimensional imaging system may be mounted on a tripod facing the patient or handheld by the caregiver, for example an iPhone X provided by Apple Inc., which has a built-in three-dimensional imaging system. In implementations, a 3D imaging system can comprise one or more of a digital camera, RGB camera, digital video camera, red-green-blue sensor, and/or depth sensor for capturing visual information and static or video images of the thoracic region105. In some examples, a 3D imaging system can comprise both optical and depth sensing components, as with the Kinect motion sensing input device by Microsoft, or the Apple TrueDepth 3D sensing system, which may include an infrared camera, flood illuminator, proximity sensor, ambient light sensor, speaker, microphone, 7-megapixel traditional camera, and dot projector (which projects up to 30,000 points on an object during a scan).
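As a non-authoritative sketch of how a dimension such as thoracic circumference might be extracted from such a three-dimensional representation, the following assumes an unordered set of boundary points from one horizontal slice of the scan; the helper name and the angle-sorting approach are illustrative only.

```python
# Hypothetical helper: estimate thoracic circumference from one horizontal
# slice of a 3D scan. `slice_pts` is an (N, 2) array of boundary points in a
# transverse plane; sorting by polar angle approximates the closed perimeter.
import numpy as np

def circumference_from_slice(slice_pts: np.ndarray) -> float:
    center = slice_pts.mean(axis=0)
    rel = slice_pts - center
    order = np.argsort(np.arctan2(rel[:, 1], rel[:, 0]))
    ordered = slice_pts[order]
    closed = np.vstack([ordered, ordered[:1]])   # close the loop
    return float(np.linalg.norm(np.diff(closed, axis=0), axis=1).sum())
```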
The patient-specific thoracic 3D scan dimensions can be input into custom-tailoring software such as ACCUMARK MADE-TO-MEASURE and ACCUMARK 3D by Gerber Technology of Tolland, CT, or EFI Optitex 2D and 3D integrated pattern design software by EFI Optitex of New York, NY. The dimensions as well as three-dimensional surfaces can also be input into a 3D printer such as the FORMLABS FORM3L 3D printer (by Formlabs of Somerville, MA) using the FORMLABS elastic resin to generate strap or other support elements that conform to the patient's body shape. The elastic resin has a Shore durometer of between about 40A and 80A (e.g., 40A, 45A, 50A, 55A, 60A, 65A, 70A, 75A, 80A). In addition or as an alternative to 3D-printing the sash410for a custom, nested fit with the morphology of the patient, the sash410can also provide a compression fit. In implementations, the sash410is configured to exert one or more compression forces against the thoracic region. In implementations, the sash410is configured to exert the one or more compression forces in a range from 0.025 psi to 0.75 psi to the thoracic region105. For example, the one or more compression forces can be in a range from 0.05 psi to 0.70 psi, 0.075 psi to 0.675 psi, or 0.1 psi to 0.65 psi. Compression forces of the medical device can be determined, for example, using one or more pressure sensors and systems as described above with regard to the band110ofFIG.2A. Immobilizing the sash410relative to the skin surface reduces or eliminates sensor signal noise and provides more reliable sensor signals for the processor to analyze the condition of the patient. In implementations, the sash comprises an unbroken loop comprising a stretchable fabric. The sash410can be configured to stretch over the shoulders or hips of the patient and contract when positioned about the thoracic region105. In implementations, the stretchable fabric comprises at least one of nylon, LYCRA, spandex, and neoprene. During an initial fitting, the physician, caregiver, or PSR can select a sash410sized to fit the patient. For example, the physician, caregiver, or PSR can measure a circumference about one or more locations on the thoracic region105. The physician, caregiver, or PSR can select a sash410having a circumference within about 75% to about 95% of the measurement of the one or more locations about the thoracic region105. In implementations, the sash410exerts compression forces against the skin of the patient by one or more of manufacturing all or a portion of the sash410from a compression fabric, providing one or more tensioning mechanisms in and/or on the sash410, and providing a cinching closure mechanism for securing and compressing the sash410about the thoracic region105. In some implementations, the sash410comprises an elasticized thread disposed in the sash410. In some implementations, the sash410comprises an elasticized panel spanning less than a total length of the sash410. For example, the sash410can include one or more mechanically joined sections forming a continuous length or unbroken loop. One or more of the sections can comprise a stretchable fabric and/or elasticized thread interspersed with non-stretchable or relatively less stretchable portions. In other embodiments, the sash410can include an adjustable tension element, such as one or more cables disposed in the sash410and configured to be tensioned and held in tension by one or more pull stops.
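The fitting rule recited above, selecting a sash circumference within about 75% to about 95% of the measured thoracic circumference, reduces to simple arithmetic; the function name below is hypothetical.

```python
# The 75%-95% sizing rule as arithmetic; the function name is hypothetical.
def sash_size_range_cm(thoracic_circumference_cm: float) -> tuple[float, float]:
    return (0.75 * thoracic_circumference_cm, 0.95 * thoracic_circumference_cm)

low, high = sash_size_range_cm(100.0)   # a 100 cm measurement -> 75.0-95.0 cm sash
```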
In all embodiments, the sash410can include one or more visible or mechanical tension indicators configured to provide a notification of the sash410exerting compression forces against the thoracic region105in a range from about 0.025 psi to 0.75 psi. For example, the tension indicators can be configured to provide a notification that the compression forces are in a range from 0.05 psi to 0.70 psi, 0.075 psi to 0.675 psi, or 0.1 psi to 0.65 psi. Compression forces of the medical device can be determined, for example, using one or more pressure sensors and systems as described above with regard to the band110ofFIG.2A. Because the device400can be a sash410configured to be worn about the thoracic region105of the patient, the sash410is immobilized by compression forces and unlikely to shift as the patient moves and goes about a daily routine. The sash410is immobilized relative to the skin surface of the thoracic region105and prevents or eliminates signal noise associated with sensors shifting against the skin. The size and position of the sash410also provide a discreet and comfortable device400covering only a relatively small portion of the surface area of the entire thoracic region105and accommodating a plurality of body types. In implementations, the sash410comprises a breathable, skin-facing layer including at least one of a compression padding, a silicone tread, and one or more textured surface contours. The breathable material and compression padding enable patient comfort throughout the duration of wear, and the silicone tread and/or one or more surface contours assist with immobilizing the sash410relative to the skin surface of the thoracic region105. Implementations of the device400in accordance with the present disclosure may exhibit a moisture vapor transmission rate (MVTR) of, for example, between about 600 g/m2/day and about 1,400 g/m2/day when worn by a subject in an environment at room temperature (e.g., about 25° C.) and at a relative humidity of, for example, about 70%. In implementations, the device400has a water vapor permeability of 100 g/m2/24 hours, as measured by the vapor transmission standards of ASTM E-96-80 (Version E96/E96M-13), using either the "in contact with water vapor" ("dry") or "in contact with liquid" ("wet") methods. Such test methods are described in U.S. Pat. No. 9,867,976, titled "LONG-TERM WEAR ELECTRODE," issued on Jan. 16, 2018 (hereinafter the "'976 Patent"), the disclosure of which is incorporated by reference herein in its entirety. In implementations, the sash410comprises one or more moisture wicking fabrics for assisting with moving moisture away from the skin of the thoracic region105and improving patient comfort throughout the prescribed duration of wear. Similar to implementations of the device ofFIGS.2A-2B, implementations of the device400can optionally include an adhesive configured to secure the sash410to the thoracic region105of the patient such that the sash410is immobile relative to a skin surface of the thoracic region105. In implementations, the adhesive is removable and/or replaceable and has a low skin irritation grading (e.g., a grading of 1) in accordance with the method set forth in American National Standard ANSI/AAMI/ISO 10993-10:2010, previously described. For example, the adhesive can comprise one or more adhesive patches424configured to be disposed between the sash410and the skin of the patient.
The adhesive patches424comprise a pressure-sensitive adhesive having tack, adhesion, and cohesion properties suitable for use with a medical device applied to skin for short-term and long-term durations. These pressure sensitive adhesives can include polymers such as acrylics, rubbers, silicones, and polyurethanes having a high initial tack for adhering to skin. These pressure sensitive adhesives also maintain adhesion during showering or while a patient is perspiring. The adhesives also enable removal without leaving behind uncomfortable residue. For example, such an adhesive can be a rubber blended with a tackifier. In implementations, the adhesive comprises one or more water vapor permeable adhesive patches. Additionally or alternatively, the adhesive can be a conductive patch disposed between the plurality of electrodes and the skin of the thoracic region105, in some implementations. For example, as described in the '976 patent, a water vapor-permeable conductive adhesive patch can comprise a flexible, water vapor-permeable, conductive adhesive material selected from the group consisting of an electro-spun polyurethane adhesive, a polymerized microemulsion pressure sensitive adhesive, an organic conductive polymer, an organic semi-conductive conductive polymer, an organic conductive compound and a semi-conductive conductive compound, and combinations thereof. In an example, a thickness of the flexible, water vapor-permeable, conductive adhesive material can be between 0.25 and 100 mils. In another example, the water vapor-permeable, conductive adhesive material can comprise conductive particles. In implementations, the conductive particles may be microscopic or nano-scale particles or fibers of materials, including but not limited to, one or more of carbon black, silver, nickel, graphene, graphite, carbon nanotubes, and/or other conductive biocompatible metals such as aluminum, copper, gold, and/or platinum. In implementations, in addition to or as an alternative to an adhesive, the sash410can include an auxiliary strap445, shown in dashed line inFIG.14to indicate optional use. In implementations, a patient optionally may attach the auxiliary strap445around the thoracic region. In implementations, the auxiliary strap445can attach to an anterior portion of the sash410with a connector447asuch as a hook and loop fastener, a clip, buttons, or snaps. Similarly, the auxiliary strap445can attach to a posterior portion of the sash410with a connector447bsuch as a hook and loop fastener, a clip, buttons, or snaps. The optionally worn auxiliary strap445is configured to prevent the sash410from shifting and/or rotating. A patient may attach the auxiliary strap445during periods of high activity, such as during exercise, and remove the auxiliary strap while seated or prone, such as while sleeping. As described above, the teachings of the present disclosure can be generally applied to external medical monitoring and/or treatment devices (e.g., devices that are not completely implanted within the patient's body). External medical devices can include, for example, ambulatory medical devices that are capable of and designed for moving with the patient as the patient goes about his or her daily routine.
An example ambulatory medical device can be a wearable medical device such as a wearable cardioverter defibrillator (WCD), a wearable cardiac monitoring device, an in-hospital device such as an in-hospital wearable defibrillator, a short-term wearable cardiac monitoring and/or therapeutic device, and other similar wearable medical devices. A wearable medical cardiac monitoring device is capable of continuous use by the patient. Further, the wearable medical device can be configured as a long-term or extended use medical device. Such devices can be designed to be used by the patient for a long period of time, for example, a period of 24 hours or more, several days, weeks, months, or even years. Accordingly, the long period of use can be uninterrupted until a physician or other caregiver provides a specific prescription to the patient to stop use of the wearable medical device. For example, the wearable medical device can be prescribed for use by a patient for a period of at least one week. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least 30 days. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least one month. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least two months. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least three months. In an example, the wearable medical device can be prescribed for use by a patient for a period of at least six months. In an example, the wearable medical device can be prescribed for use by a patient for a long period of at least one year. In some implementations, the extended use can be uninterrupted until a physician or other caregiver provides specific instruction to the patient to stop use of the wearable medical device. Regardless of the period of wear, the use of the wearable medical device can include continuous or nearly continuous wear by the patient as previously described. For example, the continuous use can include continuous attachment of the wearable medical device to the patient. Continuous use can include continuously monitoring the patient while the patient is wearing the device for cardiac-related information (e.g., electrocardiogram (ECG) information, including arrhythmia information, cardiac vibrations, etc.) and/or non-cardiac information (e.g., blood oxygen, the patient's temperature, glucose levels, tissue fluid levels, and/or pulmonary vibrations). For example, the wearable medical device can carry out its continuous monitoring and/or recording in periodic or aperiodic time intervals or times (e.g., every few minutes, hours, once a day, once a week, or other interval set by a technician or prescribed by a caregiver). Alternatively or additionally, the monitoring and/or recording during intervals or times can be triggered by a user action or another event. As noted above, the wearable medical device can be configured to monitor other physiologic parameters of the patient in addition to cardiac related parameters. For example, the wearable medical device can be configured to monitor pulmonary vibrations (e.g., using microphones and/or accelerometers), breath vibrations, sleep related parameters (e.g., snoring, sleep apnea), and tissue fluids (e.g., using radio-frequency transmitters and sensors), among others.
In implementations, such as that ofFIG.7, the patient-worn arrhythmia monitoring and treatment device100further includes a patient notification output via an output device1216. In response to detecting one or more treatable arrhythmia conditions, the processor218is configured to prompt the patient for a response by issuing the patient notification output, which may be an audible output, tactile output, visual output, or some combination of any and all of these types of notification outputs. In the absence of a response to the notification output from the patient, the processor is configured to cause the therapy delivery circuit1130to deliver the one or more therapeutic pulses to the patient. FIG.15depicts an example of a process1500for determining whether to initiate a therapy sequence and apply a therapeutic pulse to the thoracic region105of a patient. In implementations, the processor218receives S1502a patient ECG signal from the ECG sensing electrodes212and analyzes S1504the ECG signal for an arrhythmia condition. The processor218determines S1506whether the arrhythmia is a life-threatening condition and requires treatment. If the arrhythmia is not life threatening, the processor218can cause a portion of the ECG signal to be stored in memory for later analysis and continue to monitor the patient ECG signal. If the arrhythmia is life threatening, the processor218provides S1508a patient notification output and requests S1510a patient response to the provided notification output. In implementations, the patient responds to an alert by interacting with a user interface (e.g., the user interface1208ofFIG.7), which includes, for example, one or more buttons (e.g., the at least one button122,422of the device100,400, as shown inFIGS.2A and14), touch screen interface buttons with haptic feedback (e.g., touch screen buttons on the user interface1208of the controller220,420), and/or a second at least one response button of a wearable article (e.g., an arm band or wrist worn article comprising at least one of a mechanically-actuatable button, a touch screen interface, and at least one touch screen button on a user interface of the wearable article) or like devices, such as smartphones running user-facing interactive applications. The response may be, for example, pressing one or more buttons in a particular sequence or for a particular duration. The processor218determines S1512whether the patient response was received. If the patient responds to the notification output, the processor218is notified that the patient is conscious and returns to a monitoring mode, thereby delaying delivery of a therapeutic defibrillation or pacing shock. If the patient is unconscious and unable to respond to the provided alert, the processor218initiates S1514the therapy sequence and treats S1516the patient with the delivery of energy to the thoracic region of the patient. In implementations, if a user response button is pressed for longer than a threshold duration (e.g., longer than 5 seconds), the processor218instructs the device to prompt the patient to release the button. If the user response button is not released, the device will return to a state of imminent therapy delivery and will alert the patient to the imminent shock. FIGS.2A-6and9A-14illustrate example cardiac monitoring and treatment devices that are external, ambulatory, and wearable by a patient, and configured to implement one or more configurations described herein.
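A minimal sketch of the decision flow of process1500follows; the callback names and the response window duration are hypothetical stand-ins, and the actual device logic is more involved.

```python
# Hypothetical sketch of the process 1500 decision flow: detect (S1502-S1506),
# notify (S1508), await a response (S1510-S1512), then treat (S1514-S1516).
import time

RESPONSE_WINDOW_S = 25.0   # assumed time allowed for a conscious patient to respond

def monitor_loop(read_ecg, is_life_threatening, notify, response_received,
                 store_segment, deliver_therapy):
    while True:
        ecg = read_ecg()                           # S1502: acquire ECG
        if not is_life_threatening(ecg):           # S1504/S1506: analyze
            store_segment(ecg)                     # log for later analysis
            continue                               # keep monitoring
        notify()                                   # S1508: audible/tactile/visual
        deadline = time.monotonic() + RESPONSE_WINDOW_S
        while time.monotonic() < deadline:         # S1510/S1512: await response
            if response_received():
                break                              # conscious patient: no shock
            time.sleep(0.1)
        else:
            deliver_therapy()                      # S1514/S1516: no response
```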
In examples, the medical device can include physiological sensors configured to detect one or more cardiac signals. Examples of such signals include ECG signals and/or other sensed cardiac physiological signals from the patient. In certain implementations, the physiological sensors can include additional components such as accelerometers, vibrational sensors, and other measuring devices for recording additional parameters. For example, the physiological sensors can also be configured to detect other types of patient physiological parameters and vibrational signals, such as tissue fluid levels, cardio-vibrations, pulmonary-vibrations, respiration-related vibrations of anatomical features in the airway path, patient movement, etc. Example physiological sensors can include ECG sensors including a metal electrode with an oxide coating such as tantalum pentoxide electrodes, as described in, for example, U.S. Pat. No. 6,253,099 entitled "Cardiac Monitoring Electrode Apparatus and Method," the content of which is incorporated herein by reference. In examples, the physiological sensors can include a heart rate sensor for detecting heart beats and monitoring the heart rate of the patient. For instance, such heart rate sensors can include the ECG sensors and associated circuitry described above. In some examples, the heart rate sensors can include a radio frequency based pulse detection sensor or a pulse oximetry sensor worn adjacent an artery of the patient. In implementations, the heart rate sensor can be worn about the wrist of a patient, for example, incorporated on and/or within a watch or a bracelet. In some examples, the heart rate sensor can be integrated within a patch adhesively coupled to the skin of the patient over an artery. In some examples, the treatment electrodes114,214,414can also be configured to include sensors configured to detect ECG signals as well as other physiological signals of the patient. The ECG data acquisition and conditioning circuitry is configured to amplify, filter, and digitize these cardiac signals. One or more of the treatment electrodes114,214,414can be configured to deliver one or more therapeutic defibrillating shocks to the body of the patient when the medical device determines that such treatment is warranted based on the signals detected by the ECG sensing electrodes112,212,412and processed by the processor218. Example treatment electrodes114,214,414can include conductive metal electrodes such as stainless steel electrodes that include, in certain implementations, one or more conductive gel deployment devices configured to deliver conductive gel to the metal electrode prior to delivery of a therapeutic shock. In some implementations, medical devices as described herein can be configured to switch between a therapeutic medical device and a monitoring medical device that is configured to only monitor a patient (e.g., not provide or perform any therapeutic functions). The therapeutic elements can be deactivated (e.g., by means of a physical or a software switch), essentially rendering the therapeutic medical device a monitoring medical device for a specific physiologic purpose or a particular patient. As an example of a software switch, an authorized person can access a protected user interface of the medical device and select a preconfigured option or perform some other user action via the user interface to deactivate the therapeutic elements of the medical device. FIG.7illustrates an example component-level view of the controller220.
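As one hedged illustration of how a heart rate sensor built on the ECG sensors and associated circuitry might derive a rate in software, the following applies a simple threshold-based R-peak detector; clinical-grade detectors are substantially more robust, and the parameters here are assumptions.

```python
# Illustrative heart rate estimate from a sampled ECG via naive R-peak
# detection. The 95th-percentile height threshold and 0.3 s refractory
# spacing (max ~200 bpm) are assumed values for the example.
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ecg: np.ndarray, fs: float) -> float:
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95),
                          distance=int(0.3 * fs))
    if len(peaks) < 2:
        return float("nan")        # not enough beats to estimate a rate
    rr_s = np.diff(peaks) / fs     # R-R intervals in seconds
    return 60.0 / float(np.mean(rr_s))
```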
As shown inFIG.7, the controller220can include a therapy delivery circuit1130including a polarity switching component such as an H-bridge1228, a data storage1207, a network interface1206, a user interface1208, at least one battery1140, a sensor interface1202that includes, for example, an ECG data acquisition and conditioning circuit, an alarm manager1214, at least one processor218, and one or more capacitors1135. A patient monitoring medical device can include components like those described with regard toFIG.7, but does not include the therapy delivery circuit1130. Alternatively, a patient monitoring medical device can include components like those described with regard toFIG.7, but includes a switching mechanism for rendering the therapy delivery circuit1130inoperative. For example, the processor218can prompt the switching mechanism to render the therapy delivery circuit1130inoperative when the second wearable portion215is not connected to the controller220. The therapy delivery circuit1130is coupled to two or more treatment electrodes configured to provide therapy to the patient. For example, the therapy delivery circuit1130includes, or is operably connected to, circuitry components that are configured to generate and provide the therapeutic shock. The circuitry components include, for example, resistors, one or more capacitors, relays and/or switches, an electrical bridge such as an H-bridge1228(e.g., an H-bridge including a plurality of insulated gate bipolar transistors or IGBTs that deliver and truncate a therapy pulse), voltage and/or current measuring components, and other similar circuitry arranged and connected such that the circuitry works in concert with the therapy delivery circuit and under control of one or more processors (e.g., processor218) to provide, for example, one or more pacing or defibrillation therapeutic pulses. Pacing pulses can be used to treat cardiac arrhythmias such as bradycardia (e.g., in some implementations, less than 30 beats per minute) and tachycardia (e.g., in some implementations, more than 150 beats per minute) using, for example, fixed rate pacing, demand pacing, anti-tachycardia pacing, and the like. Defibrillation pulses can be used to treat ventricular tachycardia and/or ventricular fibrillation. In implementations, each of the treatment electrodes114,214,414has a conductive surface adapted for placement adjacent the patient's skin and has an impedance reducing means contained therein or thereon for reducing the impedance between a treatment electrode and the patient's skin. In implementations, each of the treatment electrodes can include a conductive impedance reducing adhesive layer, such as a breathable anisotropic conductive hydrogel disposed between the treatment electrodes and the torso of the patient. In implementations, a patient-worn cardiac monitoring and treatment device may include gel deployment circuitry configured to cause the delivery of conductive gel substantially proximate to a treatment site (e.g., a surface of the patient's skin in contact with the treatment electrode114) prior to delivering therapeutic shocks to the treatment site. As described in U.S. Pat. No. 9,008,801, titled "WEARABLE THERAPEUTIC DEVICE," issued on Apr.
14, 2015 (hereinafter the "'801 Patent"), which is incorporated herein by reference in its entirety, the gel deployment circuitry can be configured to cause the delivery of conductive gel immediately before delivery of the therapeutic shocks to the treatment site, or within a short time interval, for example, within about 1 second, 5 seconds, 10 seconds, 30 seconds, or one minute before delivery of the therapeutic shocks to the treatment site. Such gel deployment circuitry can be coupled to or integrated with each of the treatment electrodes114,214,414. When a treatable cardiac condition is detected and no patient response is received after device prompting, the gel deployment circuitry can be signaled to deploy the conductive gel. In some examples, the gel deployment circuitry can be constructed as one or more separate and independent gel deployment modules. Such modules can be configured to receive removable and/or replaceable gel cartridges (e.g., cartridges that contain one or more conductive gel reservoirs). As such, the gel deployment circuitry can be permanently disposed in the device as part of the therapy delivery systems, while the cartridges can be removable and/or replaceable. In some implementations, the gel deployment modules can be implemented as gel deployment packs and include at least a portion of the gel deployment circuitry along with one or more gel reservoirs within the gel deployment pack. In such implementations, the gel deployment pack, including the one or more gel reservoirs and associated gel deployment circuitry, can be removable and/or replaceable. In some examples, the gel deployment pack, including the one or more gel reservoirs and associated gel deployment circuitry, and the treatment electrode can be integrated into a treatment electrode assembly that can be removed and replaced as a single unit, either after use or if damaged or broken. Continuing with the description of the example medical device ofFIG.7, in implementations, the one or more capacitors1135can be a plurality of capacitors (e.g., two, three, four, or more capacitors) comprising a capacitor bank1402. These capacitors1135can be switched into a series connection during discharge for a defibrillation pulse. For example, four capacitors of approximately 650 μF can be used. In one implementation, the capacitors can have a surge rating of between 200 and 2500 volts and can be charged in approximately 5 to 30 seconds from a battery1140depending on the amount of energy to be delivered to the patient. For example, each defibrillation pulse can deliver between 60 and 400 joules (J) of energy. In some implementations, the defibrillating pulse can be a biphasic truncated exponential waveform, whereby the signal can switch between a positive and a negative portion (e.g., charge directions). An amplitude and a width of the two phases of the energy waveform can be automatically adjusted to deliver a predetermined energy amount. The data storage1207can include one or more of non-transitory computer readable media, such as flash memory, solid state memory, magnetic memory, optical memory, cache memory, combinations thereof, and others. The data storage1207can be configured to store executable instructions and data used for operation of the medical device. In certain implementations, the data storage1207can include executable instructions that, when executed, are configured to cause the processor218to perform one or more functions.
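The example component values recited above (four approximately 650 μF capacitors discharged in series, pulses of 60 to 400 J) imply the following back-of-envelope arithmetic; the computed voltages are consequences of those stated examples, not disclosed operating points.

```python
# Back-of-envelope arithmetic for the stated examples: four ~650 uF capacitors
# switched into series for discharge, delivering 60-400 J per pulse.
C_EACH = 650e-6                       # farads
C_SERIES = C_EACH / 4                 # equal capacitors in series -> 162.5 uF

def stored_energy_j(voltage_v: float) -> float:
    return 0.5 * C_SERIES * voltage_v ** 2

# voltages implied by the stated energy range (E = 1/2 * C * V^2):
v_60j = (2 * 60 / C_SERIES) ** 0.5    # ~859 V for a 60 J pulse
v_400j = (2 * 400 / C_SERIES) ** 0.5  # ~2219 V for a 400 J pulse,
                                      # within the 200-2500 V surge rating
```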
In some examples, the network interface1206can facilitate the communication of information between the medical device and one or more other devices or entities over a communications network. For example, the network interface1206can be configured to communicate with a remote computing device such as a remote server or other similar computing device. The network interface1206can include communications circuitry for transmitting data in accordance with a BLUETOOTH wireless standard for exchanging such data over short distances to an intermediary device(s) (e.g., a base station, a "hotspot" device, a smartphone, a tablet, a portable computing device, and/or other devices in proximity of the wearable medical device100). The intermediary device(s) may in turn communicate the data to a remote server over a broadband cellular network communications link. The communications link may implement broadband cellular technology (e.g., 2.5G, 2.75G, 3G, 4G, 5G cellular standards) and/or Long-Term Evolution (LTE) technology or GSM/EDGE and UMTS/HSPA technologies for high-speed wireless communication. In some implementations, the intermediary device(s) may communicate with a remote server over a WI-FI communications link based on the IEEE 802.11 standard. In certain implementations, the user interface1208can include one or more physical interface devices such as input devices, output devices, and combination input/output devices and a software stack configured to drive operation of the devices. These user interface elements may render visual, audio, and/or tactile content. Thus, the user interface1208may receive input or provide output, thereby enabling a user to interact with the medical device. In some implementations, the user interface1208can be implemented as a wearable article or as a hand-held user interface device (for example, wearable articles including the patient interface pod40ofFIG.1and the wrist and arm worn remote devices). For instance, a hand-held user interface device can be a smartphone or other portable device configured to communicate with the processor218via the network interface1206. In an implementation, the hand-held user interface device may also be the intermediary device for facilitating the transfer of information from the device to a remote server. As described, the medical device can also include at least one battery1140configured to provide power to one or more components, such as the one or more capacitors1135. The battery1140can include a rechargeable multi-cell battery pack. In one example implementation, the battery1140can include three or more 2200 mAh lithium ion cells that provide electrical power to the other device components. For example, the battery1140can provide its power output in a range of between 20 mA and 1000 mA (e.g., 40 mA) and can support 24 hours, 48 hours, 72 hours, or more, of runtime between charges. As previously described in detail, in certain implementations, the battery capacity, runtime, and type (e.g., lithium ion, nickel-cadmium, or nickel-metal hydride) can be changed to best fit the specific application of the medical device. The sensor interface1202can be coupled to one or more sensors configured to monitor one or more physiological parameters of the patient. As shown inFIG.7, the sensors can be coupled to the medical device controller (e.g., processor218) via a wired or wireless connection.
The sensors can include one or more sensing electrodes (e.g., ECG sensing electrode212), vibrations sensors1224, and tissue fluid monitors1226(e.g., based on ultra-wide band radiofrequency devices). For example, the sensor interface1202can include ECG circuitry (such as ECG acquisition and conditioning circuitry) and/or accelerometer circuitry, which are each configured to receive and condition the respective sensor signals. The sensing electrodes can monitor, for example, a patient's ECG information. For example, the sensing electrodes ofFIG.7can be ECG sensing electrodes212and can include conductive electrodes with stored gel deployment (e.g., metallic electrodes with stored conductive gel configured to be dispersed in the electrode-skin interface when needed), conductive electrodes with a conductive adhesive layer, or dry electrodes (e.g., a metallic substrate with an oxide layer in direct contact with the patient's skin). The sensing electrodes can be configured to measure the patient's ECG signals. The sensing electrodes can transmit information descriptive of the ECG signals to the sensor interface1202for subsequent analysis. The vibrations sensors1224can detect a patient's cardiac or pulmonary (cardiopulmonary) vibration information. For example, the cardiopulmonary vibrations sensors1224can be configured to detect cardio-vibrational biomarkers in a cardio-vibrational signal, including any one or all of S1, S2, S3, and S4 cardio-vibrational biomarkers. From these cardio-vibrational biomarkers, certain electromechanical metrics can be calculated, including any one or more of electromechanical activation time (EMAT), percentage of EMAT (% EMAT), systolic dysfunction index (SDI), left ventricular diastolic perfusion time (LDPT), and left ventricular systolic time (LVST). The cardiopulmonary vibrations sensors1224may also be configured to detect heart wall motion, for example, by placement of the cardiopulmonary vibrations sensor1224in the region of the apical beat. The vibrations sensors1224can include an acoustic sensor configured to detect vibrations from a subject's cardiac or pulmonary (cardiopulmonary) system and provide an output signal responsive to the detected vibrations of the targeted organ. For instance, in some implementations, the vibrations sensors1224are able to detect vibrations generated in the trachea or lungs due to the flow of air during breathing. The vibrations sensors1224can also include a multi-channel accelerometer, for example, a three channel accelerometer configured to sense movement in each of three orthogonal axes such that patient movement/body position can be detected. The vibrations sensors1224can transmit information descriptive of the cardiopulmonary vibrations information or patient position/movement to the sensor interface1202for subsequent analysis. The tissue fluid monitors1226can use radio frequency (RF) based techniques to assess changes of accumulated fluid levels over time. For example, the tissue fluid monitors1226can be configured to measure fluid content in the lungs (e.g., time-varying changes and absolute levels), for diagnosis and follow-up of pulmonary edema or lung congestion in heart failure patients. The tissue fluid monitors1226can include one or more antennas configured to direct RF waves through a patient's tissue and measure output RF signals in response to the waves that have passed through the tissue. In certain implementations, the output RF signals include parameters indicative of a fluid level in the patient's tissue. 
The tissue fluid monitors1226can transmit information descriptive of the tissue fluid levels to the sensor interface1202for subsequent analysis. The sensor interface1202can be coupled to any one or combination of sensing electrodes/other sensors to receive other patient data indicative of patient parameters. Once data from the sensors has been received by the sensor interface1202, the data can be directed by the processor218to an appropriate component within the medical device. For example, if cardiac data is collected by the cardiopulmonary vibrations sensor1224and transmitted to the sensor interface1202, the sensor interface1202can transmit the data to the processor218which, in turn, relays the data to a cardiac event detector. The cardiac event data can also be stored on the data storage1207. An alarm manager1214can be configured to manage alarm profiles and notify one or more intended recipients of events specified within the alarm profiles as being of interest to the intended recipients. These intended recipients can include external entities such as users (e.g., patients, physicians, other caregivers, patient care representatives, and other authorized monitoring personnel) as well as computer systems (e.g., monitoring systems or emergency response systems). The alarm manager1214can be implemented using hardware or a combination of hardware and software. For instance, in some examples, the alarm manager1214can be implemented as a software component that is stored within the data storage1207and executed by the processor218. In this example, the instructions included in the alarm manager1214can cause the processor218to configure alarm profiles and notify intended recipients according to the configured alarm profiles. In some examples, alarm manager1214can be an application-specific integrated circuit (ASIC) that is coupled to the processor218and configured to manage alarm profiles and notify intended recipients using alarms specified within the alarm profiles. Thus, examples of alarm manager1214are not limited to a particular hardware or software implementation. In some implementations, the processor218includes one or more processors (or one or more processor cores) that each are configured to perform a series of instructions that result in manipulated data and/or control the operation of the other components of the medical device. In some implementations, when executing a specific process (e.g., cardiac monitoring), the processor218can be configured to make specific logic-based determinations based on input data received, and be further configured to provide one or more outputs that can be used to control or otherwise inform subsequent processing to be carried out by the processor218and/or other processors or circuitry with which processor218is communicatively coupled. Thus, the processor218reacts to a specific input stimulus in a specific way and generates a corresponding output based on that input stimulus. In some example cases, the processor218can proceed through a sequence of logical transitions in which various internal register states and/or other bit cell states internal or external to the processor218can be set to logic high or logic low. The processor218can be configured to execute a function stored in software. For example, such software can be stored in a data store coupled to the processor218and configured to cause the processor218to proceed through a sequence of various logic decisions that result in the function being executed. 
The various components that are described herein as being executable by the processor218can be implemented in various forms of specialized hardware, software, or a combination thereof. For example, the processor can be a digital signal processor (DSP) such as a 24-bit DSP processor. The processor218can be a multi-core processor, e.g., a processor having two or more processing cores. The processor can be an Advanced RISC Machine (ARM) processor such as a 32-bit ARM processor or a 64-bit ARM processor. The processor can execute an embedded operating system and include services provided by the operating system that can be used for file system manipulation, display and audio generation, basic networking, firewalling, data encryption, and communications. In implementations, the therapy delivery circuit1130includes, or is operably connected to, circuitry components that are configured to generate and provide the therapeutic shock. As described previously, the circuitry components include, for example, resistors, one or more capacitors1135, relays and/or switches, an electrical bridge such as an H-bridge1228(e.g., an H-bridge circuit including a plurality of switches, such as insulated gate bipolar transistors (IGBTs), silicon carbide field effect transistors (SiC FETs), metal-oxide semiconductor field effect transistors (MOSFETs), silicon-controlled rectifiers (SCRs), or other high current switching devices), voltage and/or current measuring components, and other similar circuitry components arranged and connected such that the circuitry components work in concert with the therapy delivery circuit1130and under control of one or more processors (e.g., processor218) to provide, for example, one or more pacing or defibrillation therapeutic pulses. In implementations, the device further includes a source of electrical energy, for example, the one or more capacitors1135, that stores and provides energy to the therapy delivery circuit1130. The one or more therapeutic pulses are defibrillation pulses of electrical energy, and the one or more treatable arrhythmias include ventricular fibrillation and ventricular tachycardia. In implementations, the one or more therapeutic pulses are biphasic exponential pulses. Such therapeutic pulses can be generated by charging the one or more capacitors1135and discharging the energy stored in the one or more capacitors1135into the patient. For example, the therapy delivery circuit1130can include one or more power converters for controlling the charging and discharging of the one or more capacitors1135. In some implementations, the discharge of energy from the one or more capacitors1135can be controlled by, for example, an H-bridge that controls the discharge of energy into the body of the patient, like the H-bridge circuit described in U.S. Pat. No. 6,280,461, titled "PATIENT-WORN ENERGY DELIVERY APPARATUS," issued on Aug. 28, 2001, and U.S. Pat. No. 8,909,335, titled "METHOD AND APPARATUS FOR APPLYING A RECTILINEAR BIPHASIC POWER WAVEFORM TO A LOAD," issued on Dec. 9, 2014, each of which is hereby incorporated herein by reference in its entirety. As shown in the embodiment ofFIG.16, the H-bridge1228is electrically coupled to a capacitor bank1402including four capacitors1135a-dthat are charged in parallel in a preparation phase1227aand discharged in series in a treatment phase1227b. In some implementations, the capacitor bank1402can include more or fewer than four capacitors1135.
During the treatment phase1227b, the H-bridge1228applies a therapeutic pulse that causes current to flow through the torso5of a patient101in desired directions for desired durations. The H-bridge1228includes H-bridge switches1229a-dthat are opened and closed selectively by switching transistors such as insulated gate bipolar transistors (IGBTs), silicon carbide field effect transistors (SiC FETs), metal-oxide semiconductor field effect transistors (MOSFETs), silicon-controlled rectifiers (SCRs), or other high current switching devices. Switching a pair of transistors to a closed position, for example switches1229aand1229c, enables current to flow in a first direction for a first pulse segment P1. Opening switches1229aand1229cand closing switches1229band1229denables current to flow through the torso5of the patient101in a second pulse segment P2directionally opposite the flow of the first pulse segment P1. Although the subject matter contained herein has been described in detail for the purpose of illustration, it is to be understood that such detail is solely for that purpose and that the present disclosure is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment. Other examples are within the scope and spirit of the description and claims. Additionally, certain functions described above can be implemented using software, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions can also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
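Purely as a toy model of theFIG.16arrangement described above, the following sketch captures the parallel-charge/series-discharge capacitance change and the polarity reversal between pulse segments P1 and P2; the switch labels follow the figure, and all other values are placeholders rather than disclosed operating parameters.

```python
# Toy model of the FIG.16 arrangement: the capacitor bank 1402 is charged in
# parallel in the preparation phase 1227a and discharged in series in the
# treatment phase 1227b, while the H-bridge reverses polarity between P1/P2.
C_EACH = 650e-6   # farads, example value from the text
N_CAPS = 4

def bank_capacitance(phase: str) -> float:
    # parallel while charging (N*C); series while discharging (C/N)
    return C_EACH * N_CAPS if phase == "preparation" else C_EACH / N_CAPS

def biphasic_segments():
    # closing one diagonal switch pair drives P1; the opposite pair reverses
    # the current direction through the torso for P2
    yield {"segment": "P1", "closed": ("1229a", "1229c"), "polarity": +1}
    yield {"segment": "P2", "closed": ("1229b", "1229d"), "polarity": -1}

print(bank_capacitance("preparation"))   # 2.6e-03 F while charging
print(bank_capacitance("treatment"))     # 1.625e-04 F while discharging
for seg in biphasic_segments():
    print(seg["segment"], seg["polarity"], seg["closed"])
```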
11857330

DETAILED DESCRIPTION Overview Certain EEG monitoring systems can include complicated multi-component medical device systems, which require technical skill for set-up and coordination. When such systems are used outside of a large or research hospital with special expertise, set-up and coordination can be difficult and prone to user error. EEG monitoring systems which use multiple sensor components also require time synchronization across individual devices in order to combine sensor data. When the devices are not wired together, achieving time synchronization of sensor data across multiple sensor devices can be difficult. EEG monitoring systems may also be used for long-term use, either at home or in a hospital of any given size or specialty including, for example, small general hospitals in rural areas. Long-term EEG recording involves a high level of complexity in set-up and coordination but needs to be seamless and simple for day-to-day use. EEG monitoring systems and methods have been described in U.S. Pat. No. 11,020,035 and in U.S. Patent Publication No. 2021/0307672, each of which is incorporated by reference in its entirety. Described herein are improved systems, kits, and methods for EEG monitoring. Wearable Sensor and EEG Monitoring Kit FIG.1Ais a perspective top view and bottom view illustration of an EEG recording wearable sensor101, which can be used as a seizure monitoring tool. As shown inFIG.1A, the wearable sensor101is self-contained in a housing102. The housing102may be formed of a plastic, polymer, composite, or the like that is water-resistant, waterproof, or the like. The housing102can contain all of the electronics for recording EEG from at least two electrodes104,105. The electrodes104,105are on the bottom, or scalp facing, side shown on the right side ofFIG.1A. Electrodes104,105may be formed of any suitable material. For example, electrodes104,105may comprise gold, silver, silver-silver chloride, carbon, combinations of the foregoing, or the like. One of the electrodes104,105can be a reference electrode and the other can be a measurement (or measuring) electrode. As noted above, the entire wearable sensor101may be self-contained in a watertight housing102. The wearable sensor101can be designed to be a self-contained EEG machine that is one-time limited use per user and disposable. The wearable sensor101can include more than two electrodes. In some cases, the wearable sensor101includes three electrodes. In some implementations, the wearable sensor101includes four electrodes. Additional electrodes (such as a third and/or fourth electrode) may be formed of any suitable material, for example gold, silver, silver-silver chloride, carbon, combinations of the foregoing, or the like. The wearable sensor101has two electrodes104,105and can be used alone or in combination with other wearable sensors101(such as three other wearable sensors101) as a discrete tool to monitor seizures (and in some cases count seizures). It may be desirable, but not necessary, that the user have had a previous diagnosis of a seizure disorder using a traditional wired EEG based on the 10-20 montage. This diagnosis provides clinical guidance as to the most optimal location to place the wearable sensor101for recording electrographic seizure activity in an individual user. In some cases, the spacing of the electrodes104,105supports a bipolar derivation to form a single channel of EEG data.
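A bipolar derivation of the kind mentioned above amounts to subtracting the reference electrode potential from the measurement electrode potential, as in the minimal sketch below (array names are assumed):

```python
# Minimal sketch: a bipolar EEG channel is the difference between the
# measurement and reference electrode potentials, so interference common to
# both contacts cancels out of the recorded channel.
import numpy as np

def bipolar_channel(measurement_uv: np.ndarray, reference_uv: np.ndarray) -> np.ndarray:
    return measurement_uv - reference_uv
```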
FIG.1Bis a perspective top view illustration of an EEG recording wearable sensor101with a housing102that has an extended, rounded shape. Such a shape can be referred to as a jellybean shape, and may facilitate accurate placement on a patient in a correct orientation as well as promote patient comfort and prolonged wear. In some cases, the EEG recording wearable sensor101is shaped to fit behind the ear. The EEG recording wearable sensor101can be shaped to fit along the hairline. The EEG recording wearable sensor101can be shaped to fit along a scalp. For example, as shown inFIG.1B, the EEG recording wearable sensor101has an extended rounded shape which is configured to fit around or complement a hairline of a user, such that the extended, rounded shape of the housing102facilitates unobtrusive wear of the sensor on the scalp of the user while facilitating collection of the EEG signals. In some implementations, the housing102includes a narrow portion configured to curve around the hairline of a user.FIG.1Cprovides a cross-sectional view, andFIG.1Dprovides a perspective view of the EEG recording wearable sensor101ofFIG.1B.FIGS.1C-1Dillustrate that the housing102includes a narrow portion110. The side of the housing102with the narrow portion110can be positioned closer to the patient's ear (seeFIG.2C), which can facilitate unobtrusive wear and collection of the EEG signals. The narrow portion110can be thinner than other parts of the housing102. The housing102can become thicker (or widen) from the end that includes the narrow portion110to the opposite end111. Such varying thickness of the housing102can facilitate unobtrusive wear. Thickness of the housing102in the widest portion can be about 10.0 mm, 9.5 mm, 9.0 mm, 8.5 mm, 8.0 mm, 7.5 mm, 7.0 mm, 6.5 mm, 6.0 mm, 5.5 mm, 5.0 mm, 4.5 mm, 4.0 mm, or within a range constructed from any of the aforementioned values. In some implementations, the EEG recording wearable sensor101is shaped to mimic the look of hearing aids. The EEG recording wearable sensor101can include an antenna. The external design (jellybean shape) of the EEG recording wearable sensor101can influence the internal shape, requiring unique design and tuning of the antenna. In some cases, the EEG recording wearable sensor101includes a power source supported by the housing and configured to provide power to the electronic circuitry. In some cases, the EEG recording wearable sensor101includes a rechargeable battery. The EEG recording wearable sensor101can include electrodes. The EEG recording wearable sensor101can include at least two electrodes positioned on an exterior surface of the housing and configured to detect EEG signals indicative of a brain activity of the user when the housing is positioned on a scalp of the user. The electrodes may be disposed within the housing102of the EEG recording wearable sensor101. Unlike traditional wired EEG systems employing the 10-20 montage, the EEG recording wearable sensor101can allow a much smaller spacing between the measurement and reference electrodes, which may not only make the housing102more compact, but also improve signal quality. The distance between the electrodes can be configured to allow for less noisy EEG signal capture, thus improving signal quality. The distance between the electrodes can be reduced, particularly when compared to traditional wired EEG systems employing the 10-20 montage.
The distance between electrodes can be no more than about 25 mm center to center, no more than about 20 mm center to center, no more than about 18 mm center to center, no more than about 15 mm center to center, no more than about 10 mm center to center, or within a range constructed from any of the aforementioned values. The housing102can be configured so that the electrodes are disposed at a distance that allows better EEG signal capture. The EEG recording wearable sensor101includes electronic circuitry that may be supported by the housing102. The electronic circuitry can be configured to process the EEG signals detected by the at least two electrodes. In some implementations, the electronic circuitry is configured to wirelessly communicate the processed EEG signal to a remote computing device. The remote computing device can be a portable computing device as described herein. An extended, rounded shape for an EEG recording wearable sensor101may allow an EEG recording wearable sensor101to provide: (a) proper electrode pair spacing to allow EEG signal capture; (b) an enclosed housing102large enough to contain a full electronics package, including an antenna and a battery that supports frequent communication (such as Bluetooth or Bluetooth low energy (BLE)); and/or (c) a housing102design that complements the curvature around a scalp and/or a hairline and/or behind the ears. In some cases, the surface area of the housing102is about 8.5 cm2, 8.0 cm2, 7.5 cm2, 7.0 cm2, 6.5 cm2, 6.0 cm2, 5.5 cm2, 5.0 cm2, 4.5 cm2, or within a range constructed from any of the aforementioned values. The surface area of the jellybean shaped housing102illustrated inFIG.1Bcan be about 20 cm2, 19.5 cm2, 19.0 cm2, 18.5 cm2, 18.0 cm2, 17.5 cm2, 17.0 cm2, 16.5 cm2, 16.0 cm2, 15.5 cm2, 15.0 cm2, 14.5 cm2, 14.0 cm2, 13.5 cm2, 13.0 cm2, 12.5 cm2, 12.0 cm2, 11.5 cm2, 11.0 cm2, 10.5 cm2, 10.0 cm2, 9.5 cm2, 9.0 cm2, 8.5 cm2, 8.0 cm2, 7.5 cm2, 7.0 cm2, 6.5 cm2, 6.0 cm2, 5.5 cm2, 5.0 cm2, 4.5 cm2or less, or within a range constructed from any of the aforementioned values. The volume of the jellybean shaped housing102illustrated inFIG.1Bcan be about 8.0 cm3, 7.5 cm3, 7.0 cm3, 6.5 cm3, 6.0 cm3, 5.0 cm3, 4.5 cm3, 4.0 cm3, 3.5 cm3, 3.0 cm3, 2.5 cm3, 2.0 cm3or less, or within a range constructed from any of the aforementioned values. The wearable sensor101can be placed anywhere on the scalp of a patient to record EEG (such as behind the ear). The wearable sensor101may be packaged such that removal from the package activates the circuitry. Implementations of the wearable sensor101can be placed anywhere on the scalp, in the same manner as a conventional wired EEG electrode. The wearable sensor101can self-adhere to the scalp either through a conductive adhesive, an adhesive with a conductive hydrogel, and/or through mechanical means such as intradermal fixation with a shape-memory metal, or the like. Once attached to the scalp (for instance, with an attachment as described below), some implementations enable the wearable sensor101to perform as a seizure detection device (alone or in combination with one or more other wearable sensors101, such as three other wearable sensors101). The wearable sensor101can record EEG continuously, uninterrupted, for up to seven days. In some implementations, each EEG recording wearable sensor101is configured to detect EEG signals independent of the other sensors.
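For a rough sense of scale, seven days of continuous single-channel recording implies the following data volume under assumed acquisition parameters (a 250 Hz sampling rate and 16-bit samples, neither of which is disclosed):

```python
# Rough data volume for seven days of continuous single-channel EEG under
# assumed acquisition parameters; both values are assumptions, not disclosed.
fs_hz = 250                 # assumed sampling rate
bytes_per_sample = 2        # assumed 16-bit samples
seconds = 7 * 24 * 3600     # seven days of continuous recording
storage_mb = fs_hz * bytes_per_sample * seconds / 1e6   # ~302 MB
```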
Following a recording session, the wearable sensor101may be placed in the mail and returned to a service that reads the EEG to identify epileptiform activity according to ACNS guidelines. In some cases, data may be retrieved from the wearable sensor101via an I/O data retrieval port (not shown) and uploaded or otherwise sent to a service for reading the EEG data. The I/O data retrieval port may operate with any suitable I/O protocol, such as USB protocol, Bluetooth protocol, or the like. Epileptiform activity such as seizures and interictal spikes may be identified in a report along with EEG recording attributes and made available to physicians through a user's electronic medical records, or the like. The wearable sensor101may employ capacitive coupling as a means to spot-check signal quality. A handheld, or other device, can be brought near the wearable sensor101to capacitively couple with the device as a means to interrogate the EEG or impedance signal in real time. The wearable sensor101may be used to alert to seizures in real time, or near real time. The wearable sensor101may continuously transmit to a base station (not shown) that runs seizure detection algorithm(s) in real time. The base station may sound an alarm if a seizure is detected either at the base station itself, or through communication to other devices (not shown) capable of providing a visual and/or audio and/or tactile alarm. The base station may also keep a record of the EEG for later review by an epileptologist. These EEG recordings may also be archived in electronic medical records, or otherwise stored. The wearable sensor101could be used to record ultra-low frequency events from the scalp such as cortical spreading depressions. Amplifier circuitry (not shown) may be appropriate for recording DC signals. Alternatively, the amplifier circuitry may be appropriate for recording both DC and AC signals. The wearable sensor101may be used after a suspected stroke event as a means to monitor for the presence or absence of cortical spreading depressions and/or seizures or other epileptiform activity. The wearable sensor101may be placed on the scalp of a patient by any type of health care provider such as an emergency medical technician, medical doctor, nurse, or the like. In some implementations, the wearable sensor101may employ capacitive coupling to monitor for cortical spreading depressions in real time. The spreading depressions could be analyzed over time and displayed as a visualization of the EEG. The wearable sensor101may store these EEG recordings (e.g., in storage) for later retrieval. These EEG recordings could also be archived in electronic medical records, or the like.

FIG.2Adepicts an attachment200being peeled off a backing201to reveal an adhesive side. The attachment200can be referred to as a sticker or adhesive. The backing201may be made of paper, plastic, or any other suitable material.FIG.2Bdepicts the attachment200placed onto the wearable sensor101, aligned over the electrodes104,105. In some cases, the attachment includes a first side shaped to substantially match the extended, rounded shape and configured to be attached to the exterior surface of the housing102of the wearable sensor101. In some implementations, the attachment includes a second side configured to removably position the wearable sensor101on the scalp of a user.
A layered attachment200may be utilized, which is provided to a user that may remove a layer (the backing201) to expose an adhesive containing the hydrogel in wells aligned with the positioning of the electrodes (such as electrodes104,105). The attachment may then be placed on the sensor (such as sensor101) and thereafter on the user's skin to adhere the sensor to the user's skin. Even though the attachment200may be illustrated as having a rectangular shape, in any of the implementations disclosed herein, the attachment200can have a jellybean shape that matches the shape of the housing102illustrated inFIG.1B.

FIG.2Cillustrates a sensor101placed onto a patient's scalp. The sensor101is reversibly attached to the scalp with the attachment200. The sensor101is located at an appropriate place on the user, for example, on the scalp below the hairline, in order to sense and record EEG data. The EEG data may be analyzed on-board, for example, via application of an analysis or machine learning model stored in the sensor101, or may be analyzed by a local device or remote device or a combination of the foregoing. By way of example, the sensor101may communicate using a wired or wireless protocol, for example, secure Bluetooth Low Energy (BLE), to a local device using a personal area network (PAN), such as communicating data to a smartphone or a tablet. Similarly, the sensor101may communicate with a remote device using a wide area network (WAN), such as communicating EEG data to a remote server or cloud server over the Internet, with or without communicating via an intermediary device such as a local device. The hydrogel is conductive and also provides enough adhesion to the scalp for effective recording of EEG for long wear times. Alternatively, the wearable sensor101may be adhered with a combination conductive hydrogel with an adhesive construct. After use, the attachment200can simply be peeled off the wearable sensor101and thrown away. Prior to the next use (for example, after a wear period), a new attachment200can be applied to the wearable sensor101. Consistent EEG signal data from person to person is made possible by using a one-piece converted conductive hydrogel and adhesive construct200. The attachment200enables reversible adhesion of the wearable sensor101to the scalp. The design of the attachment200also reduces both water infiltration and water evaporation from the hydrogel during long wear times. In some cases, the attachment200is made by laminating a number of adhesive and non-adhesive layers with wells filled with a hydrogel and sandwiched between release liners. In some implementations, the attachment200is further packaged individually in air-tight and water-tight pouches.

FIG.3Aillustrates an exploded view of an attachment200. In the example ofFIG.3A, attachment200includes a clear PET (polyethylene terephthalate) liner301, a hydrogel302, a hydrogel303, a transfer adhesive304, and a paper backing201. The attachment200may include a first side shaped to substantially match the extended, rounded shape and configured to be attached to the exterior surface of the housing102of a wearable sensor101(sensor side). In some cases, the first side of the attachment200is configured to be attached to a bottom surface of a wearable sensor101. The attachment200may include a second side configured to removably position the wearable sensor101on the scalp of the user (skin side).
In some implementations, the clear PET liner301is configured to be removed before the attachment200is placed on the scalp of a user. The hydrogel302,303can facilitate repositioning the wearable sensor101on the scalp of the user.

FIG.3Billustrates an exploded view of an attachment200. In the example ofFIG.3B, attachment200includes layers1101-1106. The first layer1101can include a top liner which may be composed of thermoplastic resin. The thermoplastic resin may be polyethylene terephthalate (PET). In some implementations, second layer1102comprises a cured hydrogel. The third layer1103can include a transfer adhesive. In some implementations, fourth layer1104comprises a non-woven fabric. The non-woven fabric may be scrim-spun lace non-woven polyester. The fifth layer1105can include an adhesive. The adhesive may be a thick double-sided adhesive foam. The sixth layer1106can include a bottom liner which may be composed of thermoplastic resin. The thermoplastic resin may be PET. In some cases, two or more of first layer1101, second layer1102, third layer1103, fourth layer1104, fifth layer1105, and sixth layer1106are laminated to one another such that second layer1102is disposed between first layer1101and third layer1103. In some implementations, first layer1101is removable. The sixth layer1106can be removable. Third layer1103and fifth layer1105can have apertures formed therein. The apertures may align with electrodes of a sensor. One or more of third layer1103, fourth layer1104, and fifth layer1105can include a cured hydrogel. The hydrogel can be intermingled with the non-woven fabric of fourth layer1104. The hydrogel can be transitioned from a liquid, semi-liquid, or gel form to a solid or semi-solid form using a crosslinking process. The crosslinking process can be triggered by application of one or more of ultraviolet (UV) light and an electron beam.

Provided herein are methods for preparing an attachment200. In some implementations, the method includes providing two or more layers, at least one of the two or more layers including an aperture. Providing two or more layers can include providing a fabric layer. The fabric can be non-woven. The method can further include stacking the two or more layers. The method can include providing hydrogel to the apertures. Providing hydrogel can include pouring the hydrogel into the apertures. The method can further include fixing the hydrogel. Fixing the hydrogel can include curing the hydrogel via UV light or electron beam exposure.

FIG.3Cillustrates an exploded view of an attachment200. In the example ofFIG.3C, attachment200includes a clear PET liner301, hydrocolloid material305, a hydrogel303, a double-coated tape306, and a paper backing201. The attachment200may include a first side shaped to substantially match the extended, rounded shape and configured to be attached to the exterior surface of the housing102of a wearable sensor101(sensor side). The first side of the attachment200can be configured to be attached to a bottom surface of a wearable sensor101. The attachment200may include a second side configured to removably position the wearable sensor101on the scalp of the user (skin side). In some cases, the clear PET liner301is configured to be removed before the attachment200is placed on the scalp of a user. The hydrocolloid material305can facilitate repositioning the wearable sensor101on the scalp of the user.
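For illustration, the six-layer construct ofFIG.3Bdescribed above can be represented as a simple data model. This is a minimal sketch, assuming hypothetical class and field names; only the layer numerals, materials, and which layers carry the electrode-aligned apertures come from the description above.

```python
from dataclasses import dataclass


# Hypothetical sketch of the six-layer construct of FIG. 3B; layer numerals,
# materials, and the has_aperture flags follow the description above, and
# everything else is illustrative only.
@dataclass
class Layer:
    name: str
    material: str
    has_aperture: bool  # apertures in layers 1103 and 1105 align with electrodes


ATTACHMENT_STACK = [
    Layer("1101 top liner", "PET", False),
    Layer("1102 cured hydrogel", "hydrogel", False),
    Layer("1103 transfer adhesive", "adhesive", True),
    Layer("1104 non-woven fabric", "scrim-spun lace polyester", False),
    Layer("1105 adhesive", "double-sided adhesive foam", True),
    Layer("1106 bottom liner", "PET", False),
]


def aperture_layers(stack: list[Layer]) -> list[str]:
    """List the layers whose apertures receive the poured hydrogel."""
    return [layer.name for layer in stack if layer.has_aperture]


print(aperture_layers(ATTACHMENT_STACK))  # ['1103 transfer adhesive', '1105 adhesive']
```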
FIG.4is a front perspective view of a charger400.FIG.4illustrates a charger400in a closed configuration (left image) and in an open configuration (right image). The system for monitoring brain activity can include a charger400comprising a charger housing401. The charger housing401can be configured to receive and simultaneously charge power sources of at least two wearable sensors101. For example, the charger400may receive and charge power sources for two wearable sensors101, four wearable sensors101, or more at the same time. In some implementations, the charger400includes multiple charging stations for wearable sensors101. The wearable sensor101may be worn continuously for a period of days before it needs to be removed, such as for charging an on-board power source such as a rechargeable battery. To enable continued monitoring, the user may have two (or more) sets of wearable sensors101and will use one (or more) while the other(s) is being recharged. Such an arrangement will allow for continuous EEG data capturing and monitoring.

FIG.5illustrates a kit or system500for monitoring brain activity. In some cases, the kit or system500disclosed herein includes a plurality of sensors101. For example, the kit or system500may include 2 sensors, 3 sensors, 4 sensors, 5 sensors, 6 sensors, 7 sensors, 8 sensors, 9 sensors, or 10 sensors, and so on. The kit or system500may include two sets of sensors, with a first set for use while a second set is charging. After the first set is used, the first set may be charged while the second set is used. The kit or system500disclosed herein can include a plurality of attachments200. For example, the kit or system500includes 2 attachments, 3 attachments, 4 attachments, 5 attachments, 6 attachments, 7 attachments, 8 attachments, 9 attachments, 10 attachments, 11 attachments, 12 attachments, 13 attachments, 14 attachments, 15 attachments, 16 attachments, 17 attachments, 18 attachments, 19 attachments, or 20 attachments, and so forth. The number of attachments in the plurality of attachments may be greater than a number of wearable sensors included in the plurality of wearable sensors. The number of attachments200in the plurality of attachments200can correspond to the number of wearable sensors101in the plurality of wearable sensors101multiplied by a number of days during which the plurality of wearable sensors are configured to record the brain activity of the user. For example, if there are four wearable sensors101configured to record the brain activity of the user for 7 days, the kit or system would include at least 28 attachments200. For example, if there are four wearable sensors101configured to record the brain activity of the user for 3 days, the kit or system would include at least 12 attachments200. The kit or system can include additional attachments200beyond the number of wearable sensors101in the plurality of wearable sensors101multiplied by the number of days during which the plurality of wearable sensors101are configured to record the brain activity of the user.

Disclosed herein are methods for monitoring brain activity. The methods can include detaching at least one wearable sensor101of a plurality of wearable sensors101configured to record a brain activity of a user. In some cases, each wearable sensor101includes a housing102having an extended, rounded shape. Each wearable sensor101can include at least two electrodes104,105positioned on an exterior surface of the housing102and configured to detect EEG signals indicative of the brain activity of the user.
The methods can further include replacing a first attachment200of a plurality of attachments200with a second attachment200of the plurality of attachments200. The first and second attachments200can include a first side shaped to substantially match the extended, rounded shape of the housing102. The first side can be configured to be attached to the exterior surface of the housing102of the at least one wearable sensor101. The first and second attachments200can include a second side configured to removably position the at least one wearable sensor101on a scalp of the user. In some cases, the number of attachments200in the plurality of attachments200is greater than a number of wearable sensors101in the plurality of wearable sensors101. The method can further include reattaching the at least one wearable sensor101to the scalp of the user by adhering the second side of the second attachment200to the scalp of the user. The method may further include resuming recording of EEG signals indicative of the brain activity of the user.

EEG System Setup and Provisioning

The systems and methods provided herein can include software to assist a user in setting up the system. The user may be a healthcare provider or a patient.

FIG.6is an illustration of an EEG monitoring system600. The system ofFIG.6includes a plurality of wearable sensors601configured to record a brain activity of a patient. Each wearable sensor601can include at least two electrodes configured to detect signals indicative of the brain activity of the user when the wearable sensor is positioned on a scalp of the user. Each wearable sensor601can further include an electronic circuitry configured to, based on the signals detected by the at least two electrodes, determine data associated with the brain activity of the user and wirelessly transmit the data associated with the brain activity of the user to one or more portable computing devices602. In some cases, the system further includes a non-transitory computer readable medium storing instructions that, when executed by at least one processor of the one or more portable computing devices602, cause the at least one processor to facilitate activation of the plurality of wearable sensors601; instruct the user to position the plurality of wearable sensors601on the scalp of the user using a plurality of attachments configured to removably attach the plurality of wearable sensors601to the scalp of the user; and record the data associated with the brain activity of the user transmitted by the plurality of wearable sensors601. The portable computing device602can include communication functionality, such as wireless communication functionality. The portable computing device602can be configured for being worn by the user. The portable computing device602can include a smartwatch, which may have a display. The portable computing device602can include a smart band, smart jewelry, or the like, which may not have a display. The portable computing device602can include a tablet or another computing device, such as a medical-grade tablet. Such a portable computing device602may include a display that is larger than the display of a smartwatch. The portable computing device602may connect to a remote server or cloud server through connection with a phone application, or may connect to a remote server or cloud server directly (for example, the portable computing device602may include a cellular communication chip that enables wireless communication with a remote server or cloud server).
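As a brief aside, the kit-sizing relationship described above in connection with FIG.5(one attachment per wearable sensor per day of recording, optionally plus spares) reduces to simple arithmetic. The sketch below is illustrative only; the function name and the spares parameter are assumptions, while the worked examples come directly from the description above.

```python
def minimum_attachments(num_sensors: int, wear_days: int, spares: int = 0) -> int:
    """Minimum attachment count: one attachment per sensor per day of wear,
    plus any additional spare attachments included in the kit."""
    return num_sensors * wear_days + spares


# Worked examples from the description above:
assert minimum_attachments(4, 7) == 28   # four sensors, seven days
assert minimum_attachments(4, 3) == 12   # four sensors, three days
```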
Provided herein are systems for monitoring brain activity. In some implementations, the systems include a plurality of wearable sensors601configured to detect EEG signals indicative of a brain activity of a patient. Each wearable sensor of the plurality of wearable sensors601can include at least two electrodes configured to monitor the EEG signals when the wearable sensor is positioned on a scalp of the patient. Each wearable sensor of the plurality of wearable sensors601can include an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes. In some cases, systems described herein further include a non-transitory computer readable medium storing executable instructions which may be executed by at least one processor of a portable computing device602.

FIGS.7A-7Lprovide example processes and screens (or modals) for wearable sensor601set-up, activation, placement, and verification. These can be implemented by or executed on a portable computing device602, such as at least one processor of the portable computing device. While some illustrations may depict a wearable sensor601with a certain shape or configuration, this is meant as an illustrative example and not by way of limitation. For example, the processes and screens described herein may be used to guide a user through set-up, activation, placement, and verification of wearable sensors601having a variety of configurations, such as wearable sensors601having an extended, rounded shape as described herein. In some cases, through each of the screens, the system displays the next screen in response to user input; for example, a user may press a button (such as a button displayed on a touch screen or a physical button on a portable computing device602) to go to the next screen showing the next instructions.

FIG.7Aillustrates a flow diagram of a process for set-up of an EEG recording session. The process may include guiding a user through steps which may include starting a new session610, inputting basic settings620, inputting advanced settings630, and inputting patient information640. The set-up process can continue to sensor identification662or can be restarted650. During set-up, a series of screens may be displayed on the portable computing device602. In some cases, a start new session screen610is displayed. A user may interact with a screen, such as pressing a start button or pressing a settings button, to start set-up of a new session or to open a settings menu. The system can verify whether IT contact information has already been entered, and if it has already been entered, the system bypasses the start new session screen610and automatically transitions to basic settings screen620. When basic settings screen620is displayed, a user may enter basic settings such as hospital IT contact information. In some implementations, the system stores the user input. When basic settings screen620is displayed, the system may verify that communication (such as, Bluetooth) is enabled on the portable computing device602the user is using to perform session set-up. If communication is not enabled, the system may request the operating system of the portable computing device602to enable communication or may prompt the user with a native modal informing the user that the app has requested communication be turned on. In response to user input, the system can then display a start new session screen610. User input to proceed to a start new session screen610may be allowed only if IT contact information has been entered.
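The screen-gating behavior just described (bypassing the start screen when IT contact information already exists, and allowing the user to proceed only once that information has been entered) can be sketched as follows. This is a minimal sketch under stated assumptions; the class, method, and field names are hypothetical and do not reflect the application's actual API.

```python
# Hypothetical sketch of the set-up gating described above: the start screen
# is bypassed when IT contact information already exists, and proceeding
# requires that information to have been entered. Names are illustrative
# assumptions, not the application's actual API.
class SessionSetup:
    def __init__(self) -> None:
        self.it_contact: str | None = None
        self.bluetooth_enabled: bool = False

    def first_screen(self) -> str:
        # Bypass the start screen if IT contact info was already entered.
        return "basic_settings" if self.it_contact else "start_new_session"

    def may_proceed_to_start(self) -> bool:
        # User input to proceed is allowed only once IT contact info exists.
        return self.it_contact is not None


setup = SessionSetup()
print(setup.first_screen())           # 'start_new_session'
setup.it_contact = "it@hospital.example"
print(setup.first_screen())           # 'basic_settings'
print(setup.may_proceed_to_start())   # True
```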
When a user enters a password, advanced settings screen630can be displayed. When advanced settings screen630is displayed, the system may receive and store user input related to advanced settings. Advanced settings may include server URL and server path. Advanced settings may also include toggling on/off a kiosk mode, allowing patient barcode scan, and allowing device barcode scan. Advanced settings may also include a manual entry for a patient barcode and/or a device barcode. In some cases, in response to user input, the system then displays a start new session screen610. User input to proceed to a start screen may be allowed only if IT contact information has been entered. When patient information screen640is displayed, the system may receive and store user inputs related to patient information. A patient barcode may be scanned with a camera of the portable computing device602or a patient barcode may be entered manually. Patient information may also include a patient's first and last name. In response to user input, the system can then display a sensor identification screen660. User input to proceed to a sensor identification screen660may be allowed only if patient information has been entered. In response to user input, the process can then display a restart session modal650. The screen may ask a user to confirm whether the user wants to restart the session set-up. In response to a user input confirming restart, the system can clear all patient and sensor data already saved in a memory, and the process may be restarted at610. In response to user input canceling restart, the restart session modal650can be dismissed. In step662, a user has completed the session set-up process and begins sensor identification, as further described inFIG.7B.

The executable instructions can cause the at least one processor to, prior to providing instructions to position the wearable sensor in the location on the scalp of the user, provide instructions to scan or enter the identification information for the wearable sensor601.FIG.7Billustrates an example sensor identification screen660. The sensor identification screen660may include a display of a scan captured with a camera to enable a user to capture a barcode via the camera of a portable computing device602. The system may automatically look for barcode designs in the image captured by the camera. The system may store information related to sensor identification. The sensor identification screen660may include a display indicating whether a sensor ID corresponding to each wearable sensor601of the plurality of wearable sensors601has been entered. In the example of sensor identification screen660, a circle corresponding to each wearable sensor of the plurality of wearable sensors601is filled in once sensor identification information for that wearable sensor601has been entered. In response to a user input (for example, selecting an already-entered sensor), the system may overwrite sensor identification information relating to an already-entered wearable sensor601with new sensor identification information. The system may receive and store sensor identification information manually entered by a user. In response to user input, the system can then display a restart session modal650. The system can automatically display the next screen once sensor identification information for each wearable sensor601of the plurality of wearable sensors601has been entered/scanned.

FIG.7Cillustrates an example sensor initialization screen.
The screen may guide a user to remove the contents of a pouch, such as one or more wearable sensors601, one or more adhesives, and one or more alcohol wipes, in preparation for placement of the one or more wearable sensors601. The screen may also guide a user to activate one or more wearable sensors601. A user may activate a wearable sensor601by pressing a button on top of the wearable sensor601. The screen may include a display indicating whether each wearable sensor601of the plurality of wearable sensors601has been activated. In the example ofFIG.7C, a circle corresponding to each wearable sensor601of the plurality of wearable sensors601is filled in once that wearable sensor601has been activated. In response to a user input (for example, selecting an already-activated wearable sensor601), the system may overwrite sensor activation information relating to an already-activated wearable sensor601with new sensor activation information. In some cases, the system starts a wireless scan (such as via Bluetooth) for the wearable sensors601. The wireless scan may continue until recording is started or the sensor set-up session is ended. When a wearable sensor601is detected by a wireless scan, a local state can be updated with sensor information if the detected wearable sensor601is among the wearable sensors601previously identified (such as in the process described in connection withFIG.7B). The system can wirelessly connect (such as via Bluetooth) with each activated wearable sensor601, verify connection capability, and verify that the wearable sensor601is working. After each of the plurality of wearable sensors601connects to the system, the system can wirelessly command the plurality of wearable sensors601to enter a synchronization state, and then synchronization information (such as a sync-time-set advertisement) is wirelessly sent to the plurality of wearable sensors601, as further described in connection withFIG.10A or10B. Wireless connection can then be re-established with each wearable sensor601of the plurality of sensors601. The system can command each wearable sensor601of the plurality of sensors601to enter a sleep state. In the sleep state, a sensor601may not monitor EEG signals or transmit data. Once each wearable sensor601of the plurality of sensors601is activated, a user may enter an input to advance to the next step of the set-up, such asFIG.7E. In response to user input, the system can then display a restart session modal650. If a set amount of time (such as one minute, two minutes, etc.) passes from the beginning of the sensor activation process or from a sensor connection without a new wearable sensor601connecting, the system may display a connection time-out modal, such as the connection time-out modal ofFIG.7D.

FIG.7Dis an example connection time-out modal. The connection time-out modal may display errors and trouble-shooting information related to sensor activation. The system may allow a user to snooze (dismiss for a set amount of time) the connection time-out modal, replace one or more wearable sensors601, or end the session. If a user selects to end the session, the system may display an end session modal. If a user selects to replace one or more wearable sensors601, a replace sensor modal may be displayed. If a user selects to snooze, the connection time-out modal may disappear. In some cases, the snooze interval (amount of time) is saved to a memory store. The user may input the snooze interval. The system can display the connection time-out modal again after the snooze interval has elapsed.
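The time-out and snooze behavior described above (show a modal if no new sensor connects within a set window; redisplay a snoozed modal after the stored snooze interval; dismiss it when a sensor connects) can be sketched as a small timer. This is a minimal sketch under stated assumptions; the class and method names are hypothetical illustrations, not the application's actual API.

```python
import time


# Hypothetical sketch of the connection time-out behavior described above.
class ConnectionTimeoutWatcher:
    def __init__(self, timeout_s: float = 60.0) -> None:
        self.timeout_s = timeout_s
        self.last_connect_time = time.monotonic()
        self.snoozed_until: float | None = None

    def sensor_connected(self) -> None:
        """A new connection resets the window and dismisses any modal."""
        self.last_connect_time = time.monotonic()
        self.snoozed_until = None

    def snooze(self, interval_s: float) -> None:
        """Save the user-entered snooze interval; redisplay after it elapses."""
        self.snoozed_until = time.monotonic() + interval_s

    def should_show_modal(self) -> bool:
        now = time.monotonic()
        if self.snoozed_until is not None and now < self.snoozed_until:
            return False  # modal is snoozed
        return now - self.last_connect_time >= self.timeout_s
```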
If a wearable sensor601is activated and connects, the system may automatically dismiss the connection time-out modal.

FIG.7Eis an example sticker (or attachment) placement screen. Providing instructions to position the wearable sensor601in the location on the scalp of the user can include instructing the use of a plurality of attachments configured to removably attach the wearable sensor601to the scalp of the user. For example, a sticker placement screen may guide a user to open an adhesive pouch and remove an adhesive. The sticker placement screen can guide a user to remove a film (also referred to as a liner or backing) from the adhesive. A sticker placement screen may instruct a user to apply the adhesive to the wearable sensor601(such as with the correct alignment over the electrodes). A sticker placement screen may guide a user to remove a second film from the adhesive in preparation for sticker placement. In response to user input, the system can then display a restart session modal650. A user may enter an input to cause the system to advance to the next step of the set-up, such asFIG.7F.

The executable instructions can cause the at least one processor to: provide instructions to position a wearable sensor601of the plurality of wearable sensors601in a location of a plurality of locations on the scalp of the user and activate the wearable sensor601; verify an identification of the wearable sensor601; responsive to verification of the identification of the wearable sensor601, verify an impedance of the wearable sensor601; and responsive to verification of the impedance of the wearable sensor601, provide instructions to position and activate another wearable sensor601of the plurality of wearable sensors601and perform verification of an identification and an impedance of the another wearable sensor601.

FIG.7Fdepicts an example sensor placement and activation screen. Sensors may be placed in four locations on the scalp, such as behind the left ear (LE), behind the right ear (RE), on the left front side of the forehead (LF), and on the right front side of the forehead (RF). A sensor placement and activation screen may guide a user to wipe the location on the user's body with an alcohol swab provided in a kit. For example, the location may be behind an ear or on another location on the scalp. The screen may display a graphic to show the user which location on the user's body to clean with the swab. A sensor placement and activation screen may guide a user to place a wearable sensor601on the patient's body in a location (for instance, the scalp). In some cases, providing instructions to position the wearable sensor601includes displaying instructions on a screen of the portable computing device602. For example, the screen may display a graphic to show the user where to place the wearable sensor601. In some implementations, providing instructions to position the wearable sensor601comprises displaying the location on the screen of the portable computing device602and instructions to activate the wearable sensor601. For example, a sensor placement and activation screen may instruct a user to press a button on the wearable sensor601after placement in the directed location to activate the wearable sensor601. In some cases, wireless connection (such as via Bluetooth) is made with previously connected wearable sensors601.
The system can verify the identity of an activated wearable sensor601to confirm that the activated wearable sensor601is a wearable sensor601that was identified by the user, for example scanned in the process described in connection withFIG.7Bor activated in the process described in connection withFIG.7C. The screen may include a display indicating whether each wearable sensor601of the plurality of wearable sensors601has been activated. In the example ofFIG.7F, a circle corresponding to each wearable sensor of the plurality of wearable sensors601is filled in once that wearable sensor601has been activated. The screen may include a display indicating whether each wearable sensor601of the plurality of wearable sensors601remains wirelessly connected to the portable computing device602, and may display a notification, graphic, and/or alert if an activated wearable sensor601becomes disconnected. In response to a user input, the system can display a restart placement modal, such as the restart placement modal ofFIG.7H. Once a wearable sensor601of the plurality of wearable sensors601is activated, the system may prompt a user to initiate an impedance test of the wearable sensor(s)601. In the example ofFIG.7F, a user may press a test button once the wearable sensor601is placed and activated. In some cases, when a user initiates the impedance test, a command is wirelessly sent (such as via Bluetooth) to a wearable sensor601to run an impedance check. In some implementations, the system checks whether an impedance measurement from a wearable sensor601satisfies a pre-determined threshold, such as 100 kOhm or more or 500 kOhm or more. In some implementations, if the impedance measurement does not satisfy the pre-determined threshold, the system may provide an indication to the user, such as displaying an appropriate screen informing the user of poor electrode contact. In some implementations, the executable instructions further cause the at least one processor to, responsive to not verifying the impedance of the wearable sensor601, repeat one or more of: providing instructions, verifying the identification, and verifying the impedance for the wearable sensor601. In some cases, repeating includes providing instructions to reposition the wearable sensor601and verifying the impedance of the sensor. In some cases, responsive to not verifying the impedance of the wearable sensor601for a second time, the executable instructions cause the processor to restart providing instructions, verifying the identification, and verifying the impedance for the plurality of wearable sensors601. In some implementations, when restarted, the system wipes all saved sensor placement data at all locations and begins providing instructions, verifying the identification, and verifying the impedance for the plurality of wearable sensors601from the first of the plurality of wearable sensors601. If the impedance is verified (for example, the impedance measurement satisfies the pre-determined threshold), the system can return to the placement screen with instructions to place, activate, and test the next wearable sensor601in the sequence of the plurality of sensors601. The executable instructions can further cause the at least one processor to provide an alert in response to detecting that at least two wearable sensors601of the plurality of wearable sensors601have been positioned in the same location on the scalp of the user, or in response to another positioning or activation error.
This can include, responsive to detecting that at least two wearable sensors601of the plurality of wearable sensors601have been positioned in a particular (i.e., incorrect or unidentified) location of the plurality of locations on the scalp of the user, restarting providing instructions, verifying the identification, and verifying the impedance for the plurality of wearable sensors601. Detecting that the at least two wearable sensors601have been positioned in the same location can include detecting that multiple wearable sensors601have been activated substantially simultaneously. This can be performed as follows. The executable instructions can cause the at least one processor to sequentially provide instructions to position and activate, verify an identification, and verify an impedance of each wearable sensor601of the plurality of wearable sensors. One wearable sensor601of the plurality of wearable sensors601can be placed, activated, and tested for impedance for one location, one at a time (in other words, in a sequence). For example, the user is instructed to place a first wearable sensor601in a first location, activate the first wearable sensor601, and initiate an impedance test for the first wearable sensor601; then the user is instructed to place a second wearable sensor601in a second location, activate the second wearable sensor601, and initiate an impedance test for the second wearable sensor601; and so on. When a user initiates an impedance test, if multiple wearable sensors601have been activated before the impedance test, the system can display a multiple sensors activated alert modal, such as the multiple sensors activated alert modal ofFIG.7G. As another example, if multiple wearable sensors601have been activated (for instance, inFIG.7F) before an impedance test is initiated, the system can display the activated alert modal. In some cases, determining whether multiple wearable sensors601have been activated substantially simultaneously can be performed based on determining whether at least two sensors have been activated within a threshold time duration (such as, 10 seconds or less, 30 seconds or less, or the like).

FIG.7Gis an example multiple sensors activated alert modal. In some implementations, the multiple sensors activated alert modal notifies a user that more than one wearable sensor601has been activated for a location and instructs the user to only place and activate one wearable sensor601at a time. The multiple sensors activated alert modal can instruct a user to wait. When a multiple sensors activated alert modal is displayed, the system can connect to the wearable sensors601and command all connected wearable sensors601to enter a sleep state. A user may select to dismiss or snooze the multiple sensors activated alert modal. When a user selects to dismiss or snooze the multiple sensors activated alert modal, the system can command wearable sensors601that are connected and that have not been correctly placed to enter an inactive state. When a user selects to dismiss or snooze the multiple sensors activated alert modal, the system once again may display a sensor placement and activation screen.

FIG.7Hshows an example restart placement modal. In some cases, the restart placement modal prompts a user to confirm or cancel whether to restart the placement process. When placement is restarted, the system can wipe all saved sensor placement data at all locations. When placement is restarted, the system can command the sensors to enter a sleep state.
When placement is restarted, the system can display the sticker placement screen, such as the sticker placement screen ofFIG.7E. When restart is canceled, the restart placement modal can be dismissed.

FIG.7Ishows an example verify session screen. This screen may be displayed after all the sensors have been placed and activated. The system can display a screen so that a user can verify that each sensor identification matches the sensor identification recorded in the system for each location (for instance, recorded inFIG.7B). In the example ofFIG.7I, the portable computing device602displays a diagram of the head with squares representing sensor placements (such as sensor ID) and sensor state (such as connected). The portable computing device602can display a notification if there is an issue, such as an activation, identification, or impedance issue, with one or more of the wearable sensor601placements. For example, the graphical representation of a sensor with an issue may flash red/blue, such as when an alert modal appeared and was snoozed. In response to user input such as clicking on a representation of a wearable sensor601on the screen, the system can display a sensor modal with sensor information. In response to user input (such as selecting to cancel), the system can display an end session modal. In response to user input, the system can initiate the recording session. After verification has been completed, the EEG recording session may be started. As part of the verification or subsequent to the verification, wearable sensors601may be synchronized, as further described in connection withFIG.10A or10B.FIG.7Jillustrates an example screen that informs the user that synchronization is being performed.

The executable instructions can further cause the at least one processor to, responsive to verification of the identity and impedance of each wearable sensor601of the plurality of wearable sensors601, record the processed EEG signals wirelessly transmitted by the plurality of wearable sensors601.FIG.7Kis an example active recording screen. The system can store the recording started time in the memory. Wireless scans (such as Bluetooth scans) can be stopped. A wireless scan may start again if a wearable sensor601disconnects. Real-time data notifications can be enabled for all wearable sensors601. The portable computing device602may display a notification if there is an issue, such as an activation, identification, or impedance issue, with one or more of the wearable sensor601placements. For example, the graphical representation of a wearable sensor601with an issue may flash red/blue, such as when an alert modal appeared and was snoozed. In response to user input such as clicking on a representation of a wearable sensor601on the screen of a portable computing device602, the system can display a sensor modal with sensor information. The portable computing device602can send a message containing information for session events for recording started and recording ended to a remote server or cloud server over the Internet. A recording started or recording ended message can contain information such as patient information, wearable sensor(s)601information, recording start time, and/or recording end time. The portable computing device602can receive messages containing real-time data/events from the plurality of wearable sensors601and communicate messages containing real-time data/events to a remote server or cloud server over the Internet.
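The session-event messages just described can be illustrated with a small payload builder. This is a minimal sketch, assuming hypothetical field names and a JSON encoding; the description above specifies only that such messages can carry patient information, wearable sensor information, and recording start/end times, not a particular wire format.

```python
import json
from datetime import datetime, timezone


# Hypothetical sketch of a session-event message as described above; the
# field names and JSON encoding are illustrative assumptions.
def session_event_message(event: str, patient: dict, sensors: list[dict],
                          start: datetime, end: datetime | None = None) -> str:
    payload = {
        "event": event,                       # "recording_started" / "recording_ended"
        "patient": patient,                   # patient information
        "sensors": sensors,                   # wearable sensor information
        "recording_start": start.isoformat(),
        "recording_end": end.isoformat() if end else None,
    }
    return json.dumps(payload)


msg = session_event_message(
    "recording_started",
    patient={"id": "patient-barcode-123"},
    sensors=[{"id": "sensor-LE"}, {"id": "sensor-RE"}],
    start=datetime.now(timezone.utc),
)
```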
The portable computing device602can display a notification on the active recording screen if there is an issue, such as an activation, identification, or impedance issue, with one or more of the wearable sensor601placements. In the example ofFIG.7K, the portable computing device602displays a diagram of the head with squares representing wearable sensor601placements (such as sensor ID) and sensor state (such as connected). A user may interact with the display to end the recording session. The portable computing device602can display diagnostic information as well as session ID and portable computing device602ID. In response to user input (such as clicking “end recording”), the system can display an end recording modal. After a predefined amount of time (for example, 48 hours) has elapsed after the beginning of recording, the system can automatically end the recording session and display a finalizing session screen (not shown).

FIG.7Lillustrates a flow diagram of a process for guiding a user through sensor set-up and provisioning. As described herein, the process may include guiding a user through the steps of sensor(s) activation710, sensor(s) placement720, recording data730, and self-diagnostics740. In some cases, sensor(s) activation710includes using the application to guide a user to scan a barcode associated with a wearable sensor601with the camera feature of a portable computing device602. Sensor(s) activation710can include using an application to guide a user to manually enter a barcode associated with a wearable sensor601using a portable computing device602. The application can create a passcode for each sensor based on the scanned or entered barcode. The passcode may be unique and ensure that only the 4 provisioned wearable sensors601set up in a session can communicate with the portable computing device602. The application can guide a user to press and/or hold a button on each wearable sensor601to activate the wearable sensor601. The application can display information related to sensor activation on a display of a portable computing device602. Information related to sensor activation may include whether each wearable sensor601has been activated.FIG.7Fillustrates an example screen associated with sensor(s) activation710and sensor placement720displayed on a portable computing device602. In some cases, sensor(s) placement720includes using the application to guide a user to place wearable sensors601on the scalp of a patient. The application can walk the user through multiple images, one for each wearable sensor601, and show the user the location where the user should place each sensor. Placement of multiple wearable sensors601can follow a pattern, such as left to right. A display may provide a graphical instruction such as that illustrated inFIG.7F. An emergent care screening may be conducted on a patient using four wearable sensors601, two on the forehead and two behind the ears. With this four-sensor arrangement, a desired montage may be created, for example via subtracting the EEG signal from one sensor relative to another to create a 10-channel longitudinal-transverse montage, as described in U.S. Pat. No. 11,020,035 and U.S. Patent Publication No. 2021/0307672, each of which is incorporated by reference in its entirety. The instructions can further cause the processor to activate a wearable sensor601to run an impedance test to ensure the wearable sensor601is attached to the skin and has adequate electrode contact. The impedance test may include pushing a current and measuring a voltage.
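For illustration, the four-sensor montage construction described above (subtracting the EEG signal of one sensor relative to another) can be sketched as channel subtraction over the LE, RE, LF, and RF locations. This is a minimal sketch under stated assumptions: the specific channel pairings, sampling rate, and synthetic signals below are illustrative only, and the actual 10-channel longitudinal-transverse montage is defined in the incorporated references.

```python
import numpy as np


# Hypothetical sketch of montage construction by channel subtraction, per the
# description above. The pairings are illustrative assumptions; the actual
# 10-channel longitudinal-transverse montage is defined in the incorporated
# references.
def derive_montage(signals: dict[str, np.ndarray],
                   pairs: list[tuple[str, str]]) -> dict[str, np.ndarray]:
    """Create derived channels by subtracting one sensor's signal from another's."""
    return {f"{a}-{b}": signals[a] - signals[b] for a, b in pairs}


# Four sensor locations: behind left/right ears and left/right forehead.
t = np.linspace(0.0, 1.0, 256)  # one second at 256 Hz (illustrative)
signals = {loc: np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
           for loc in ("LF", "RF", "LE", "RE")}

# Example pairings (assumed): front-to-back and left-to-right differences.
montage = derive_montage(signals, [("LF", "LE"), ("RF", "RE"),
                                   ("LF", "RF"), ("LE", "RE")])
print(list(montage))  # ['LF-LE', 'RF-RE', 'LF-RF', 'LE-RE']
```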
Recording data730can include recording EEG data. Recording data730may be initiated in response to user input (such as pressing a button) on an application running on a portable computing device602. The application on the portable computing device602can display a screen indicating that recording is in session, such as the recording screen ofFIG.7K. In some cases, recording data730includes determining the state of the wearable sensor601. States may include a waiting state (on but not recording), a recording state, a charging state, a communicating state, etc. State information may include whether and when the internal clock of the sensor has been set, a number of recordings/pages of recordings, and battery charging status (fully charged, partially charged, etc.). In some implementations, the instructions further cause the at least one processor to cause a display of a portable computing device602to display push notifications. The push notifications may be based on state information. The push notifications may be based on a change in state. The instructions can further cause the at least one processor to run self-diagnostics740on the system, including on the plurality of wearable sensors601. Self-diagnostics740can include identifying a problem associated with one or more wearable sensors601, sensor data, and/or communication to or from the wearable sensors601. Self-diagnostics740can include diagnosing a problem. In some cases, the problem includes system issues. System issues may include a battery issue, a wearable sensor601being disconnected, connection time-out, poor electrode contact, cellular signal error, Bluetooth error, portable computing device602Wi-Fi failure, remote server or cloud server issues, or that the recording had not started. The problem may include sensor/data issues. Sensor/data issues may include in-phase cancellation of electrographic activity (due to close spacing of electrodes), muscle artifacts, or saturation.

Provided herein are methods for monitoring brain activity of a patient. The methods include, by at least one processor of a portable computing device602, providing instructions to position a plurality of wearable sensors601in a plurality of locations on the scalp of the user. The plurality of wearable sensors601can be configured to detect EEG signals indicative of a brain activity of the patient, each wearable sensor601including at least two electrodes configured to monitor the EEG signals when the wearable sensor601is positioned on the scalp of the user and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes. The method can further include providing an alert in response to detecting that at least two wearable sensors601have been positioned in a particular location of the plurality of locations on the scalp of the user. The method can further include, in response to verifying that the plurality of wearable sensors601has been correctly positioned in the plurality of locations on the scalp of the user, recording the processed EEG signals wirelessly transmitted by the plurality of wearable sensors601. Advantageously, guiding a user using a portable computing device602can allow EEG setup and monitoring by non-experts, such as clinicians in a local or rural hospital unaccustomed to EEG monitoring.

Handing Off Control

It can be advantageous to hand off control of EEG sensors between portable computing devices.
For example, EEG sensor setup can be performed on a first portable computing device (such as, a tablet) and subsequently transferred to a second computing device (such as, a watch, smart band, smart jewelry, or the like). The second portable computing device can be a wearable computing device without a screen or with a screen that is smaller than that of the first computing device. The first portable computing device can be configured to facilitate activating and positioning the EEG sensors on the scalp of the patient and the second portable computing device can be configured to facilitate monitoring of the brain activity of the patient and detecting one or more disorders. Transfer of control to the second device may advantageously allow EEG monitoring with a smaller and cheaper user-worn computing device. The second portable computing device and one or more EEG sensors can communicate directly or through another computing device, such as a phone. The second portable computing device can communicate with a remote server or cloud server through another computing device or directly (such as, with a cellular communication chip). The first portable computing device can have a prescriptive function to train, prescribe, provision or otherwise determine how the EEG system will be used during ambulatory wear. This could include parental controls, duration and location of wear, and other prescriptive functions. The second portable computing device can have an endemic function and its interaction with the patient differs depending on the prescription and provisioning by the first portable computing device. FIG.8is an illustration of an EEG sensor control transfer environment800. The sensor control transfer environment800includes a plurality of wearable sensors802(which can be similar to the sensors101), a first portable computing device832, and a second portable computing device834, which can be configured to be worn by a patient. In the example ofFIG.8, the plurality of wearable sensors802and the first portable computing device832communicate to facilitate setup810of the plurality of wearable sensors802. The first portable computing device832and the second portable computing device834communicate to facilitate hand-off820of control of the plurality of sensors802from the first portable computing device832to the second portable computing device834. After hand-off820, the second portable computing device834communicates with the plurality of wearable sensors802to receive and send data830. Data830may include sensor data such as EEG measurements or sensor status, and also may include commands to the plurality of sensors802from the second portable computing device834such as to begin recording data. Provided herein are methods for monitoring of brain activity. In some cases, the method includes activating (for example, setup810) a plurality of wearable sensors802configured to detect EEG signals indicative of a brain activity of a patient and positioned in a plurality of locations on a scalp of the patient. Each wearable sensor802can have at least two electrodes configured to monitor the EEG signals when the wearable sensor802is positioned on the scalp of the patient, and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes and wirelessly transmit processed EEG signals to a first portable computing device832. Activating can include following instructions displayed on a display of the first portable computing device832(for example, setup810). 
The method can further include, subsequent to the activation of the plurality of wearable sensors802, transferring control (for example, hand-off820) of the plurality of wearable sensors802to a second portable computing device834. The second portable computing device834can be configured to be worn by the patient to permit the second portable computing device834to wirelessly receive the processed EEG signals (for example, data830). The second portable computing device may not include a display or may include a display that is smaller than the display of the first portable computing device832. The first portable computing device832can be configured to facilitate activating and positioning the plurality of wearable sensors802on the scalp of the patient (for example, setup810), and the second portable computing device834can be configured to facilitate monitoring of the brain activity of the patient (for example, receiving data830) and detecting one or more disorders. Transferring control can cause the first portable computing device832to cease wirelessly receiving the processed EEG signals. The first portable computing device832can be a tablet and the second portable computing device834can be a smartwatch. The method can further include, prior to transferring control to the second portable computing device834, authenticating the second portable computing device834. Authenticating the second portable computing device834can include scanning a QR code of the second portable computing device834. For example, a first portable computing device832may instruct a user to scan a QR code displayed on a second portable computing device834using a camera of the first portable computing device832. A first portable computing device832may instruct a user to manually enter a code associated with a second portable computing device834(such as a code displayed on a second portable computing device834). The method can further include, responsive to an alert displayed on the display of the second portable computing device834, causing the second portable computing device834to display instructions for resolving the alert and following the instructions to resolve the alert.

FIG.9Aillustrates a process for data recording and sensor management, which can be executed by a second portable computing device834. The process is illustrated as a sequence of screens that can be displayed on the second portable computing device834(or otherwise reproduced, for instance, auditorily reproduced by the second portable computing device). Through each of the screens, the system can display the next screen in response to user input; for example, a user may press a button (for instance, a virtual control on the screen or a physical button on the second portable computing device834) to go to the next screen showing the next instructions. The user can be a patient. The second portable computing device834can display a screen916showing that recording of EEG data by the wearable sensors is in session. The second portable computing device834can display an alert, for example an action required screen910. The action required screen910can indicate that one or more actions are required, for example due to one or more detected issues. If multiple actions are required, the action required screen910may display a number representing the number of actions required based on a number of detected alerts. In some cases, auditory, visual, or haptic feedback by the second portable computing device834alerts a user that an action is required.
Auditory, visual, or haptic feedback on one or more wearable sensors802can alert a user that an action is required. The application display on the second portable computing device834can display specific information about one or more detected alerts. Specific information about the alert can be displayed in response to user input (such as a tap) on the action required screen910. For example, the display may indicate that a signal issue is detected (such as a signal issue alert912) or that a sensor is disconnected (such as a sensor disconnected alert914). Signal issue alert912can indicate that poor electrode contact by one or more wearable sensors has been detected. This can be determined based on impedance, as described herein. The number of times that a signal check has been attempted may be tracked (such as, stored in a memory), and an alert can be generated responsive to the number of times reaching a threshold (such as, 1 time, 2 times, 3 times, 4 times, 5 times, or more). Signal issue alert912can indicate which wearable sensor(s)802has a signal issue. Sensor disconnected alert914can indicate that one or more wearable sensor(s)802have stopped wireless communication with the second portable computing device834. The system can wirelessly scan (such as, with Bluetooth) for wearable sensors802. Current sensor state and disconnection count can be tracked (such as, stored to the memory), and an alert can be generated responsive to the number of times reaching a threshold (such as, 1 time, 2 times, 3 times, 4 times, 5 times, or more). In response to user input, the system can display instructions for how to troubleshoot or reconnect one or more wearable sensors802. In response to user input, a screen can be displayed, which may instruct a user to confirm whether or not the wearable sensor802is attached. In response to user input, signal issue alert912or sensor disconnected alert914can be snoozed or dismissed. Dismissing or snoozing signal issue alert912or sensor disconnected alert914may be disabled. After a predefined amount of time has elapsed, signal issue alert912or sensor disconnected alert914can be automatically timed out and dismissed. In response to user input, a replace attachment screen918can be displayed to provide instructions on troubleshooting signal issue alert912and sensor disconnected alert914, as further described herein. The second portable computing device834can pause recording of EEG data when an alert is detected. The instructions can further cause the at least one processor to cause a display of a first portable computing device832or a second portable computing device834to display an instruction corresponding to a self-diagnosed problem. For example, an instruction may include moving a wearable sensor802, replacing an attachment (for example, screen918instructing a user to replace an attachment), charging the battery of a wearable sensor802, charging the battery of a first portable computing device832or a second portable computing device834, rebooting a wearable sensor802or a first or second portable computing device832,834, etc. Once a user completes the instructed steps, the alert can be dismissed and recording of EEG data can be resumed. The second portable computing device834can display the recording in session screen916once an alert has been resolved. Recorded EEG data can be sent to a remote server or cloud device. The remote server or cloud device can combine and/or process the recorded data to determine the presence of one or more physiological conditions.
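The alert-count thresholding described above (tracking signal-check attempts and disconnections per sensor, and raising an alert when a count reaches a threshold) can be sketched as follows. This is a minimal sketch under stated assumptions; the class, method names, and the threshold value are hypothetical illustrations.

```python
from collections import Counter


# Hypothetical sketch of the alert thresholding described above: per-sensor
# signal-check failures and disconnections are counted, and an alert fires
# once a count reaches the configured threshold.
class AlertTracker:
    def __init__(self, threshold: int = 3) -> None:
        self.threshold = threshold
        self.signal_check_failures: Counter[str] = Counter()
        self.disconnections: Counter[str] = Counter()

    def record_signal_check_failure(self, sensor_id: str) -> bool:
        """Count a failed signal check; return True if an alert should fire."""
        self.signal_check_failures[sensor_id] += 1
        return self.signal_check_failures[sensor_id] >= self.threshold

    def record_disconnection(self, sensor_id: str) -> bool:
        """Count a disconnection; return True if an alert should fire."""
        self.disconnections[sensor_id] += 1
        return self.disconnections[sensor_id] >= self.threshold


tracker = AlertTracker(threshold=2)
tracker.record_disconnection("sensor-LE")         # False: below threshold
assert tracker.record_disconnection("sensor-LE")  # True: alert at second event
```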
The instructions can be associated with replacing an attachment configured to removably attach a wearable sensor802of the plurality of wearable sensors802to the scalp of the user.FIG.9Billustrates example screens of a process for instructing a user to replace an attachment. Through each of the screens, the system can display the next screen in response to user input, for example, a user may press a button (as described in connection withFIG.9A) to advance to the next screen showing the next instructions. An alert screen918can instruct a user that an attachment should be replaced. If there has been a sensor failure, the system can display screen922and instruct a user to remove a wearable sensor802from a location on the scalp. The second portable computing device834can display a screen920instructing a user to confirm whether to proceed with attachment replacement. A user may confirm by pressing a button (as described in connection withFIG.9A). In response to user input canceling replacement of the attachment (such as selecting a cancel user interface option) or in response to the passing of a predefined amount of time, screen920may be dismissed. In response to user input confirming proceeding with replacement, the system can display screen922and instruct a user to remove a wearable sensor802from a location on the scalp. The second portable computing device834can command a wearable sensor802to enter a sleep state. Responsive to the instructions to replace the attachment, the user can remove the wearable sensor802, replace the attachment with another attachment, and reposition the wearable sensor802on the scalp of the user. The second portable computing device834can instruct a user to remove a wearable sensor802from a location on a scalp. In screen922, for example, the screen of the second portable computing device834displays a graphical representation of the location of the wearable sensor. The system can use the known sensor location (determined during set-up, as described herein) to set a screen showing a specific location. Information for the sensor location can be stored in the memory. The second portable computing device834can instruct a user to clean the area of the indicated location on the scalp, for example, by displaying a screen924. The second portable computing device834can instruct a user to place a new attachment onto the wearable sensor802, for example, by displaying a screen926. The second portable computing device834can instruct a user to place the wearable sensor802on a location on the scalp, for example, by displaying screen928. Screen928can include instructions to remove a second liner to expose a second adhesive side of an attachment. The system can use a known sensor/location from the memory to set a screen showing a specific location. The second portable computing device834can instruct a user to activate the placed wearable sensor802by pressing a button on the placed wearable sensor802, for example, by displaying screen930. The second portable computing device834can instruct a user to wait, for example, by displaying screen932, while the wearable sensor802is tested for impedance, such as with impedance tests described herein. In some cases, impedance is verified, and the memory is updated with the impedance level. If an impedance test fails on a first or a subsequent try (such as, a second try), a Poor Electrode Contact Alert screen (not shown) may be displayed. If an impedance test fails on yet another subsequent try (such as, a third try), a Sensor Failure—No Retry screen (not shown) may be displayed. A minimal sketch of this escalation follows.
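As a rough illustration of this escalation, the Python sketch below runs the impedance test up to three times, surfacing the retry alert on the first two failures and the no-retry failure screen on the third. The function names, the show placeholder, and the retry limit are assumptions for illustration, not an actual device API.

    def show(screen_name):
        """Placeholder for driving the application display on the worn device."""
        print("Display:", screen_name)
        return screen_name

    MAX_ATTEMPTS = 3   # assumed: first and second failures allow retries, third does not

    def run_impedance_checks(measure_impedance_ohms, limit_ohms):
        """Verify electrode impedance, escalating alerts on repeated failures."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            if measure_impedance_ohms() <= limit_ohms:
                return show("Impedance Verified")      # memory updated with the level
            if attempt < MAX_ATTEMPTS:
                show("Poor Electrode Contact Alert")   # first and second failures
        return show("Sensor Failure - No Retry")       # third failure

For example, run_impedance_checks(lambda: 12_000, limit_ohms=10_000) would display the retry alert twice before reporting the no-retry failure.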
If impedance is verified (impedance test succeeds), the process may be repeated if attachments for additional wearable sensors802need replacement. If attachment replacement completes for one or more wearable sensors802, the second portable computing device834can inform the user that replacement is complete, for example, by displaying screen934.

Attachments may need to be periodically replaced, as described herein. The second portable computing device834can periodically instruct the user to replace one or more attachments by displaying instructions.FIG.9Cis an illustration of a process for guiding a user through replacing one or more attachments. In some cases, through each of the screens, the system displays the next screen in response to user input, for example, a user may press a button (on a screen display or a physical button on the second portable computing device834) to go to the next screen showing the next instructions. In step941, an application running on a second portable computing device834alerts a user to replace one or more attachments. The alert can be provided periodically, such as every 6 hours or less, 12 hours, 18 hours, 24 hours, 30 hours, 36 hours, 42 hours, 48 hours, 54 hours, 60 hours, 66 hours, or 72 hours, 4 days, 5 days, 6 days, or 7 days or more, or the like, or a value therebetween, or a range constructed from any of the aforementioned values or values therebetween. In step943, the user changes (replaces) one or more attachments (including, for example, all attachments). The application may display several screens on a second portable computing device834to guide a user through replacement of an attachment. In screen940(which may be displayed responsive to the alert in step941), the application can request user input as to whether one or all attachments should be replaced. In some implementations, the option of replacing more than one but less than all attachments may be provided. In some cases, in response to user input, a process for instructing replacement of only one attachment is initiated. In screen942, a diagram of wearable sensor802locations can be displayed so that a user can input a selection of which wearable sensor802attachment should be replaced. Only some wearable sensors802may be displayed or allowed to be selected by the user, for example, only wearable sensors802that are currently active and/or in use. A display (not shown) can ask a user to confirm the selection of the wearable sensor802whose attachment is being replaced. The second portable computing device834can command the selected wearable sensor802to enter a sleep state. In screen944, a user is instructed to remove the selected wearable sensor802from its location on the scalp. In the example of screen944, the screen of the second portable computing device834displays a graphical representation of the wearable sensor802location. In screen924, a user is instructed to clean the area where the sticker is placed on the scalp. In screen926, a user is instructed to place a new attachment (adhesive sticker) onto the wearable sensor802. Subsequently, additional screens described in connection withFIG.9Bcan be displayed. The process may be repeated in sequence to replace attachments for one or more additional wearable sensors802. For instance, the process may repeat screens942,944,924,926, etc. for each of the additional wearable sensors802. As another example, the user can select multiple wearable sensor802attachments for replacement in screen942, and the process can repeat screens944,924,926, etc. for each of the selected wearable sensors. A rough sketch of this periodic reminder and per-sensor replacement loop follows.
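The periodic reminder and the per-sensor loop can be sketched as follows; the interval, screen labels, and callables here are illustrative assumptions rather than actual application code.

    import time

    REPLACEMENT_INTERVAL_S = 24 * 60 * 60   # assumed 24-hour cadence for the step 941 alert

    def replacement_due(last_replaced_s, now_s=None):
        """True once the configured interval has elapsed since the last replacement."""
        now_s = time.time() if now_s is None else now_s
        return now_s - last_replaced_s >= REPLACEMENT_INTERVAL_S

    # Screens roughly corresponding to the per-sensor flow described above.
    PER_SENSOR_SCREENS = ["remove sensor (944)", "clean area (924)",
                          "new attachment (926)", "reattach (928)",
                          "activate (930)", "impedance check (932)"]

    def guide_replacement(selected_sensor_ids, sleep_sensor, show_screen):
        """Walk the replacement screens for each user-selected sensor in turn."""
        for sensor_id in selected_sensor_ids:
            sleep_sensor(sensor_id)            # command the sensor into a sleep state
            for screen in PER_SENSOR_SCREENS:
                show_screen(sensor_id, screen)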
FIG.9Dis an illustration of a process for guiding a user through changing all attachments. In some cases, by all attachments, it is meant all adhesive attachments for all wearable sensors802currently in use. In screen940, the user can select the option for replacing "all stickers." In display950, the user can be instructed to remove each of the plurality of wearable sensors802from each location on the scalp. In display952, the user can be instructed to remove all attachments from each of the plurality of wearable sensors802. In display954, the user can be instructed to clean all areas on the scalp for placement of each of the plurality of wearable sensors802. In display956, a user can be instructed to place a new attachment onto each of the plurality of wearable sensors802. Subsequently, additional screens similar to those described in connection withFIG.9Bcan be displayed (such as screens928,930,932,934).

Provided herein are systems for monitoring of brain activity. The systems can include a plurality of wearable sensors802configured to detect EEG signals indicative of a brain activity of a user. Each wearable sensor802can include at least two electrodes configured to monitor the EEG signals when the wearable sensor802is positioned on a scalp of the user, and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes and wirelessly transmit processed EEG signals to a first portable computing device832. The systems can further include a first non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of the first portable computing device832, cause the at least one processor of the first portable computing device832to facilitate an activation of the plurality of wearable sensors802by displaying instructions for the activation on a display of the first portable computing device832. The executable instructions can further cause the at least one processor of the first portable computing device832to, subsequent to the activation of the plurality of wearable sensors802, transfer control of the plurality of wearable sensors802to a second portable computing device834to permit the second portable computing device834, which is configured to be worn by the user, to wirelessly receive processed EEG signals. The second portable computing device834may not include a display or may include a display that is smaller than the display of the first portable computing device832. The first portable computing device832can be configured to facilitate activating and positioning the plurality of wearable sensors802on the scalp of the user, and the second portable computing device834can be configured to facilitate monitoring of the brain activity of the user and detecting one or more disorders. The first portable computing device832can be a tablet and the second portable computing device834can be a smartwatch. The executable instructions can further cause the at least one processor of the first portable computing device832to, prior to transferring control to the second portable computing device834, authenticate the second portable computing device834. Authenticating the second portable computing device834can include scanning a QR code of the second portable computing device834. A compact sketch of this handoff follows.
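A compact sketch of this handoff, including the QR-based authentication step, might look like the following; the device objects and every method name are illustrative assumptions about the described behavior rather than an actual API.

    def transfer_control(first_device, second_device, sensors):
        """Hand off receipt of processed EEG signals from the set-up device
        (e.g., a tablet) to the device worn by the user (e.g., a smartwatch)."""
        token = first_device.scan_qr_code(second_device)   # authenticate the worn device
        if not second_device.accepts(token):
            raise PermissionError("second portable computing device not authenticated")
        for sensor in sensors:
            first_device.stop_receiving(sensor)       # set-up device ceases EEG receipt
            second_device.start_receiving(sensor)     # worn device begins EEG receipt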
The systems can further include a second non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of the second portable computing device834, cause the at least one processor of the second portable computing device834to cause display of an alert on the display of the second portable computing device834. The executable instructions can further cause the at least one processor of the second portable computing device834to cause display of instructions for resolving the alert on the display of the second portable computing device834. The executable instructions can further cause the at least one processor of the second portable computing device834to pause collection of the processed EEG signals. The executable instructions can cause the at least one processor of the second portable computing device834to detect and display the alert responsive to determining that an impedance of at least one wearable sensor of the plurality of wearable sensors does not satisfy an impedance threshold. For example, the signal issue alert912ofFIG.9Ais a display of an alert on the display of the second portable computing device834related to an impedance issue.

The instructions can be associated with replacing an attachment configured to removably attach the at least one wearable sensor802to the scalp of the user. The instructions can include causing removal of the at least one wearable sensor802, replacement of the attachment with another attachment, and repositioning the at least one wearable sensor802on the scalp of the user. The executable instructions can further cause the at least one processor of the second portable computing device834to, responsive to verifying the impedance of the at least one wearable sensor802after it has been repositioned on the scalp of the user, resume collection of the processed EEG signals. Verifying the impedance of the at least one wearable sensor802can include determining that the impedance of the at least one wearable sensor802satisfies the impedance threshold. The executable instructions can facilitate selection of the at least one wearable sensor802from the plurality of wearable sensors802. The user-facing instructions can display a position of the at least one wearable sensor802on the scalp of the user.

The executable instructions can cause the at least one processor of the second portable computing device834to cause display of the alert responsive to passage of a duration of time since replacement of a plurality of attachments configured to removably attach the plurality of wearable sensors802to the scalp of the user. The duration of time may be 6 hours or less, 12 hours, 18 hours, 24 hours, 30 hours, 36 hours, 42 hours, 48 hours, 54 hours, 60 hours, 66 hours, or 72 hours, 4 days, 5 days, 6 days, or 7 days or more, or the like, or a value therebetween, or a range constructed from any of the aforementioned values or values therebetween. The systems can further include a second non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of the second portable computing device834, cause the at least one processor of the second portable computing device834to, responsive to a detection of a possible seizure, cause display of instructions for confirming occurrence of a seizure. As described herein, the wearable sensors802can monitor EEG signals for detection of one or more physiological conditions, such as a seizure.
EEG data can be processed by one or more recognition techniques, such as machine learning techniques, to detect a seizure. To improve the detection (such as, to train one or more recognition techniques), it may be advantageous to have the user confirm whether a possible seizure has been detected correctly.FIG.9Eillustrates a process for confirming with a user the occurrence of a seizure. Recording in session screen916shows that EEG recording is in session. In response to user input (such as, a press of a button as described in connection withFIG.9A), the second portable computing device834can display a log event screen960. The second portable computing device834can display a screen (such as log event screen960) instructing a user to indicate whether or not a seizure just occurred. A user may select yes or no (that a seizure has or has not just occurred) by, for example, pressing a button (as described in connection withFIG.9A). If a user inputs that a seizure has occurred, the second portable computing device834can add a record of an event to the memory. If a user inputs that no seizure has occurred, the second portable computing device834may not add a record of an event to the memory. If a user inputs that a seizure has occurred, the second portable computing device834can display a confirm event screen962. The second portable computing device834can display a confirm event screen962automatically after a predetermined amount of time (such as 30 seconds or the like) has elapsed since the log event screen960was opened. The confirm event screen962can indicate to a user that an event indicating occurrence of a seizure has been logged. In response to user input or after the lapse of another predetermined amount of time since displaying confirm event screen962, recording in session screen916can be displayed again. A minimal sketch of this event-logging flow is provided at the end of this discussion.

In some cases, wireless scans (such as Bluetooth scans) are stopped during recording sessions. A wireless scan may start again if a wearable sensor802disconnects. Real-time data notifications can be enabled for all wearable sensors802. When EEG recording is in session, the second portable computing device834can receive messages containing real-time data/events from the plurality of wearable sensors802and communicate messages containing real-time data/events to a remote server or cloud server over the Internet. Session end messages can be communicated to a remote server or cloud server. In response to user input, the second portable computing device834can display an options screen (not shown) or a parental lockout screen (not shown).
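A minimal sketch of the event-logging decision is given below; the names are assumed for illustration, and the 30-second auto-advance is treated as one example value from above.

    import time

    CONFIRM_TIMEOUT_S = 30   # assumed auto-advance delay before screen 962 appears

    def handle_log_event(user_confirmed_seizure, event_log, now_s=None):
        """Record (or skip) a seizure event from the log event screen (960).

        Returns the next screen: the confirmation screen (962) when an event
        is logged, otherwise the recording-in-session screen (916)."""
        if user_confirmed_seizure:
            event_log.append({"type": "seizure",
                              "timestamp": time.time() if now_s is None else now_s})
            return "confirm event (962)"
        return "recording in session (916)"

For instance, handle_log_event(True, events) appends a time-stamped record to the events list and advances to the confirmation screen.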
Synchronizing Independent Wireless EEG Sensors

Each EEG sensor of a plurality of EEG sensors can independently monitor and collect EEG signals without communicating with the other EEG sensors. Collected EEG signals can be wirelessly transmitted to one or more portable computing devices for processing, which can include collating (or unifying, aligning, or synchronizing) and analyzing EEG signals to determine occurrence of one or more physiological conditions. At least some of the processing can be performed by a remote computing device. It can be advantageous to synchronize one or more of collection or transmission of EEG signals by the plurality of sensors so that processing is performed correctly. Provided herein are methods and systems for synchronized monitoring of brain activity by a plurality of independent EEG sensors configured to detect EEG signals indicative of a brain activity of a user (such as, a patient). Each sensor can be configured to detect EEG signals independent of the other sensors and may not communicate with the other sensors. Each EEG sensor can include at least two electrodes configured to monitor the EEG signals when the EEG sensor is positioned on a scalp of the user. Each EEG sensor can further include an electronic circuitry configured to, based on the signals detected by the at least two electrodes, process the EEG signals monitored by the at least two electrodes and wirelessly transmit the data associated with the brain activity of the user to one or more portable computing devices.

The system can further include a non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of the one or more portable computing devices, cause the at least one processor to wirelessly transmit a first message to the plurality of EEG sensors to listen for a second message. The executable instructions can cause the at least one processor to, subsequent to transmitting the first message, wirelessly transmit a second message to the plurality of EEG sensors, the second message comprising timing information. Transmission of the second message can cause the electronic circuitry of each EEG sensor to set an internal clock to substantially match internal clocks of the other EEG sensors, the internal clock being used for time stamping recorded signals indicative of the brain activity of the user. Data determined by each EEG sensor can be correlated with data determined by the other EEG sensors to within approximately 10 ms or less, 20 ms, 30 ms, 40 ms, 50 ms or more, or the like, or within a range constructed from any of the aforementioned values. In some cases, no EEG sensor communicates with another EEG sensor. The executable instructions can further cause the at least one processor to, subsequent to wirelessly transmitting the second message, confirm that the EEG sensors have set their internal clocks. The executable instructions can further cause the at least one processor to verify that the processed EEG signals received from each EEG sensor are correlated with the processed EEG signals determined by the other EEG sensors of the plurality of wearable sensors (for example, to within approximately 10 ms or less, 20 ms, 30 ms, 40 ms, 50 ms or more, or the like, or within a range constructed from any of the aforementioned values). The executable instructions can further cause the at least one processor to, responsive to the verification, transmit the processed EEG signals received from the plurality of EEG sensors to a remote computing device. The executable instructions can further cause the at least one processor to poll the plurality of EEG sensors for their internal clocks. The executable instructions can further cause the at least one processor to, in response to detecting that a difference between an internal clock of at least one EEG sensor and an expected internal clock satisfies a threshold, repeat wireless transmission of the first and second messages to cause the electronic circuitry of the at least one EEG sensor to set the internal clock. The threshold can be no more than about 10 ms, 20 ms, 50 ms, 75 ms, 90 ms, 100 ms, or 200 ms, or within a range constructed from any of the aforementioned values, and may be dependent on the specification for the clock synchronization. For instance, a threshold with a higher value would be used for monitoring a physiological signal that varies less frequently.
Monitoring such a signal may be performed even when the internal clocks are less accurately synchronized. As another example, a threshold with a lower value would be used for monitoring a physiological signal that varies more frequently. Monitoring such a signal may necessitate a greater accuracy of synchronization of the internal clocks.

Provided herein are methods for synchronized monitoring of brain activity. The methods can include wirelessly transmitting a first message to a plurality of EEG sensors configured to detect EEG signals indicative of a brain activity of a user (such as, a patient). Each EEG sensor can include at least two electrodes configured to monitor the EEG signals when the EEG sensor is positioned on a scalp of the user. Each EEG sensor can include an electronic circuitry configured to, based on the signals detected by the at least two electrodes, determine data associated with the brain activity of the user. The methods can further include, subsequent to transmitting the first message, wirelessly transmitting a second message to the plurality of EEG sensors. The second message can include timing information. Transmission of the second message can cause the electronic circuitry of each EEG sensor to set an internal clock to substantially match internal clocks of the other EEG sensors, for example, within a set of activated sensors for a sensor session. The internal clock can be used for time stamping recorded signals indicative of the brain activity of the user. The methods can further include wirelessly receiving the processed EEG signals from the plurality of EEG sensors and verifying that the processed EEG signals received from each EEG sensor are correlated with the processed EEG signals determined by the other EEG sensors (for example, to within approximately 10 ms or less, 20 ms, 30 ms, 40 ms, 50 ms or more, or the like, or within a range constructed from any of the aforementioned values). The methods can further include, responsive to the verification, transmitting the processed EEG signals received from the EEG sensors to a remote computing device. The remote computing device can be a portable computing device as described herein. In some cases, no EEG sensor communicates with another EEG sensor. The methods can further include confirming that the plurality of EEG sensors have set their internal clocks. The methods can further include polling the plurality of EEG sensors for their internal clocks. The methods can further include, in response to detecting that a difference between an internal clock of at least one EEG sensor and an expected internal clock satisfies a threshold, repeating wireless transmission of the first and second messages to cause the electronic circuitry of the at least one EEG sensor to set the internal clock. This way, any unacceptable clock drift can be detected and corrected.

FIG.10Aillustrates a method of synchronizing sensor data for a plurality of independent EEG sensors. The method can be executed by a portable computing device. In step1010, a first message is sent to each EEG sensor to cause the EEG sensor to listen (or transition to a listening state).
The first message can be a command sent from the portable computing device. The first message can be a directed message sent individually to each EEG sensor. In some cases, listening is a state of scanning and waiting for a second command (or second message) that includes clock information (such as, a time stamp). The second message can be sent, for instance, as an advertising message using the BLE protocol. BLE mesh capability may be used. The second message can be a single message broadcast to all EEG sensors (as compared to the first message, which is directly sent to each EEG sensor). Two messages can be used because the first message causes the EEG sensors to enter into the listening state, in which the EEG sensors look for a broadcast message that is received by all of the EEG sensors simultaneously. In some cases, a different wireless communication protocol can be used, such as the WiFi protocol, NFC protocol, RFID protocol, or the like. For protocols that support direct broadcast to the EEG sensors (such as, WiFi, which supports direct broadcast to all devices on a subnet), it would be sufficient to send a single broadcast message to all EEG devices. The broadcast message can include clock information, which can be a time stamp.

In step1020, the second message can be sent to each EEG sensor to synchronize the internal clocks of the EEG sensors. The second message can be a command to set the clock of the EEG sensor to the portable device clock (or some other clock value). Thus, the second message can include clock information (or a clock value). As described herein, each of the plurality of individual sensors can receive the second message simultaneously. This way, the internal clocks of the EEG sensors will be set to approximately the same clock value (which can be the clock value included in the second message), resulting in synchronous processing of EEG data received from the EEG sensors, since the EEG data can be transmitted by the EEG sensors along with internal clock values, as described herein. After all EEG sensors receive and process the second message, each EEG sensor can set the internal clock to the same time setting to a desired tolerance (for example, approximately 10 ms or less, 20 ms, 30 ms, 40 ms, 50 ms or more, or the like, or within a range constructed from any of the aforementioned values). Each individual EEG sensor can record EEG data with a time stamp derived from the internal clock. EEG data packets from sensors can be sent to a portable computing device independently and possibly at different times. The portable computing device may combine data from the plurality of EEG sensors based on time stamps from the individual sensors. In some cases, if an EEG sensor does not receive the first or second command and does not set its internal clock as described herein, when the EEG sensor tries to reconnect with the portable computing device, the portable computing device will recognize that the sensor has not synchronized its internal clock. The portable computing device may then restart the synchronization process ofFIG.10A. A minimal sketch of this two-message synchronization, the drift check, and the timestamp-based combining follows.
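The two-message synchronization, the drift check, and the timestamp-based combining might be sketched in Python as follows; the message formats, transport callables, and tolerance value are assumptions for illustration, not an actual BLE implementation.

    import time

    CLOCK_TOLERANCE_S = 0.05   # assumed 50 ms synchronization tolerance

    def synchronize_clocks(sensor_ids, send_directed, broadcast, clock_s=None):
        """Two-message sync: directed 'listen' commands, then one broadcast.

        send_directed(sensor_id, message) and broadcast(message) stand in for
        the wireless transport (e.g., BLE advertising for the broadcast); all
        sensors apply the broadcast clock value at approximately the same time."""
        for sensor_id in sensor_ids:
            send_directed(sensor_id, {"cmd": "listen"})         # first message
        clock_s = time.time() if clock_s is None else clock_s
        broadcast({"cmd": "set_clock", "clock_s": clock_s})     # second message
        return clock_s

    def drift_exceeds(reported_clock_s, expected_clock_s, threshold_s=CLOCK_TOLERANCE_S):
        """Check a polled internal clock; a True result would trigger re-synchronization."""
        return abs(reported_clock_s - expected_clock_s) > threshold_s

    def combine_packets(packets_by_sensor):
        """Merge independently received packets into one time-ordered stream
        using each packet's sensor-local timestamp."""
        merged = [p for packets in packets_by_sensor.values() for p in packets]
        return sorted(merged, key=lambda p: p["timestamp_s"])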
In some implementations, synchronization can be performed as follows. Each EEG sensor can be caused to process an electrical stimulation generated by another EEG sensor and sensed by at least two electrodes and record the electrical stimulation along with data associated with the brain activity of the user. Recording of the electrical stimulation facilitates combining and processing data associated with the brain activity of the user collected by the plurality of EEG sensors. As shown inFIG.10B, in step1030, an EEG sensor can stimulate the skin by applying an electrical signal with the electrodes. For example, the EEG sensor can send a signal (such as railing power through one of the electrodes), which stimulates the skin to create an electrical tap. In step1040, the other EEG sensors can sense the tap through the skin to synchronize the sensors. Rather than synchronizing the clocks, EEG data can be synchronized by including in the data information indicating that the tap was applied (for the EEG sensor applying the tap) and sensed (for the other EEG sensors). Accordingly, EEG data from different EEG sensors can be combined and aligned by using the information related to the tap. Synchronization can be initiated by a portable computing device, which receives data packets containing tap information. In some cases, synchronization can be performed as follows. A recordable event (such as a ping or an instruction to generate stimulation) can be provided via a portable computing device to one of the EEG sensors. The recordable event can be relayed by the EEG sensor to the other EEG sensors, and can be recorded by each of the EEG sensors. Data can be later synchronized using the techniques described in connection withFIG.10B.

ADDITIONAL EXAMPLES

Example 1: A system for monitoring brain activity comprising: a plurality of wearable sensors configured to record a brain activity of a user, each wearable sensor comprising a housing, at least two electrodes positioned on an exterior surface of the housing and configured to detect electroencephalogram (EEG) signals indicative of the brain activity of the user when the wearable sensor is positioned on a scalp of the user, an electronic circuitry supported by the housing and configured to process the EEG signals detected by the at least two electrodes, and a power source supported by the housing and configured to provide power to the electronic circuitry, the housing having an extended, rounded shape; and a plurality of attachments, each attachment including a first side shaped to substantially match the extended, rounded shape and configured to be attached to the exterior surface of the housing of a wearable sensor and a second side configured to removably position the wearable sensor on the scalp of the user, a number of attachments in the plurality of attachments being greater than a number of wearable sensors in the plurality of wearable sensors.

Example 2: The system of any of the preceding examples, further comprising a charger comprising a charger housing configured to receive and simultaneously charge power sources of at least two wearable sensors of the plurality of wearable sensors.

Example 3: The system of any of the preceding examples, wherein the extended, rounded shape of the housing is configured to fit around a hairline of the user such that the extended, rounded shape of the housing facilitates unobtrusive wear of the wearable sensor on the scalp of the user while facilitating collection of the EEG signals.

Example 4: The system of example 3, wherein the housing comprises a first portion having a first thickness and a second portion having a second thickness greater than the first thickness.

Example 5: The system of any of the preceding examples, wherein a surface area of the housing is between 16.0 cm2 and 10 cm2.
Example 6: The system of any of the preceding examples, wherein a volume of the housing is between 5.0 cm3 and 3.0 cm3.

Example 7: The system of any of the preceding examples, wherein the number of attachments in the plurality of attachments comprises the number of wearable sensors in the plurality of wearable sensors multiplied by a number of days during which the plurality of wearable sensors are configured to record the brain activity of the user.

Example 8: The system of any of the preceding examples, wherein the first side of each attachment is configured to be attached to a bottom surface of the housing.

Example 9: The system of any of the preceding examples, wherein each attachment of the plurality of attachments comprises hydrocolloid material on the second side of the attachment, the hydrocolloid material facilitating repositioning a wearable sensor on the scalp of the user.

Example 10: The system of any of the preceding examples, wherein each attachment comprises a plurality of layers including one or more of: a first layer comprising a thermoplastic resin; a second layer comprising a cured hydrogel; a third layer comprising an adhesive; a fourth layer comprising a non-woven fabric; a fifth layer comprising an adhesive; or a sixth layer comprising a thermoplastic resin.

Example 11: The system of example 10, wherein the thermoplastic resin comprises PET.

Example 12: The system of any of examples 10 to 11, wherein two or more of the first, second, third, fourth, fifth, or sixth layers are laminated to one another such that the cured hydrogel is disposed between the first layer and the third layer.

Example 13: The system of any of examples 10 to 12, wherein the third and fifth layers form apertures and one or more of the third layer, fourth layer, or the fifth layer includes the cured hydrogel.

Example 14: The system of example 13, wherein the apertures align with the at least two electrodes of a wearable sensor.

Example 15: A unitary, wireless, and wearable sensor configured for monitoring brain activity comprising: a housing with an extended, rounded shape configured to fit around a hairline of a user; at least two electrodes positioned on an exterior surface of the housing and configured to detect electroencephalogram (EEG) signals indicative of a brain activity of the user when the housing is positioned on a scalp of the user; and an electronic circuitry supported by the housing and configured to process the EEG signals detected by the at least two electrodes and wirelessly communicate processed EEG signals to a remote computing device, wherein the extended, rounded shape of the housing facilitates unobtrusive wear of the housing on the scalp of the user while facilitating collection of the EEG signals.

Example 16: The sensor of example 15, wherein the housing comprises a first portion having a first thickness and a second portion having a second thickness greater than the first thickness.

Example 17: The sensor of any of examples 15 to 16, wherein a surface area of the housing is between 16.0 cm2 and 10 cm2.

Example 18: The sensor of any of examples 15 to 17, wherein a volume of the housing is between 5.0 cm3 and 3.0 cm3.

Example 19: A kit comprising a plurality of sensors of any of examples 15 to 18, wherein each sensor is configured to detect electroencephalogram (EEG) signals independent of the other sensors.
Example 20: The kit of example 19, further comprising a plurality of attachments, each attachment including a first side shaped to substantially match the extended, rounded shape and configured to be attached to the exterior surface of the housing of a sensor of the plurality of sensors and a second side configured to removably position the sensor on the scalp of the user, a number of attachments in the plurality of attachments being greater than a number of sensors in the plurality of sensors.

Example 21: The kit of example 20, wherein the number of attachments in the plurality of attachments comprises the number of sensors in the plurality of sensors multiplied by a number of days during which the plurality of sensors are configured to record the brain activity of the user.

Example 22: A system for monitoring brain activity comprising: a plurality of unitary, wireless, and wearable sensors configured to record a brain activity of a user, each sensor comprising: a housing with an extended, rounded shape configured to fit around a hairline of a user; at least two electrodes positioned on an exterior surface of the housing and configured to detect electroencephalogram (EEG) signals indicative of a brain activity of the user when the housing is positioned on a scalp of the user; and an electronic circuitry supported by the housing and configured to process the EEG signals detected by the at least two electrodes and wirelessly communicate processed EEG signals to a remote computing device, wherein the extended, rounded shape of the housing facilitates unobtrusive wear of the housing on the scalp of the user while facilitating collection of the EEG signals; and a plurality of attachments, each attachment including a first side shaped to substantially match the extended, rounded shape and configured to be attached to the exterior surface of the housing of a sensor and a second side configured to removably position the sensor on the scalp of the user, a number of attachments in the plurality of attachments being greater than a number of sensors in the plurality of sensors.

Example 23: A method for monitoring brain activity comprising: detaching at least one wearable sensor of a plurality of wearable sensors configured to record a brain activity of a user, each wearable sensor comprising a housing having an extended, rounded shape and at least two electrodes positioned on an exterior surface of the housing and configured to detect electroencephalogram (EEG) signals indicative of the brain activity of the user; replacing a first attachment of a plurality of attachments with a second attachment of the plurality of attachments, the first and second attachments including a first side shaped to substantially match the extended, rounded shape and configured to be attached to the exterior surface of the housing of the at least one wearable sensor and a second side configured to removably position the at least one wearable sensor on a scalp of the user, a number of attachments in the plurality of attachments being greater than a number of wearable sensors in the plurality of wearable sensors; reattaching the at least one sensor to the scalp of the user by adhering the second side of the second attachment to the scalp of the user; and resuming recording of EEG signals indicative of the brain activity of the user.
Example 24: A system for monitoring brain activity comprising: a plurality of wearable sensors configured to detect electroencephalogram (EEG) signals indicative of a brain activity of a user, each wearable sensor comprising at least two electrodes configured to monitor the EEG signals when the wearable sensor is positioned on a scalp of the user and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes; and a non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of a portable computing device, cause the at least one processor to: provide instructions to position a wearable sensor of the plurality of wearable sensors in a location of a plurality of locations on the scalp of the user and activate the wearable sensor; verify an identification of the wearable sensor; responsive to verification of the identification of the wearable sensor, verify an impedance of the wearable sensor; and responsive to verification of the impedance of the wearable sensor, provide instructions to position and activate another wearable sensor of the plurality of wearable sensors and perform verification of an identification and an impedance of the another wearable sensor.

Example 25: The system of example 24, wherein the executable instructions further cause the at least one processor to sequentially provide instructions to position and activate, verify an identification, and verify an impedance of each wearable sensor of the plurality of wearable sensors.

Example 26: The system of example 25, wherein the executable instructions further cause the at least one processor to, responsive to verification of the identification and impedance of each wearable sensor of the plurality of wearable sensors, record the processed EEG signals wirelessly transmitted by the plurality of wearable sensors.

Example 27: The system of any of examples 24 to 26, wherein the executable instructions further cause the at least one processor to, responsive to not verifying that the impedance of the wearable sensor satisfies an impedance threshold, repeat providing instructions, verifying the identification, and verifying the impedance for the wearable sensor.

Example 28: The system of example 27, wherein the executable instructions further cause the at least one processor to, responsive to not verifying the impedance of the wearable sensor for a second time, restart providing instructions, verifying the identification, and verifying the impedance for the wearable sensor.

Example 29: The system of any of examples 24 to 28, wherein the executable instructions further cause the at least one processor to provide an alert in response to detecting that at least two wearable sensors of the plurality of wearable sensors have been activated for positioning in a location of the plurality of locations on the scalp of the user.

Example 30: The system of example 29, wherein the executable instructions further cause the processor to, responsive to detecting that at least two wearable sensors of the plurality of wearable sensors have been activated for positioning in a particular location of the plurality of locations on the scalp of the user, restart providing instructions, verifying the identification, and verifying the impedance for the plurality of wearable sensors.
Example 31: The system of example 30, wherein detecting that the at least two wearable sensors have been activated for positioning in the particular location comprises detecting that multiple sensors have been activated substantially simultaneously.

Example 32: The system of any of examples 24 to 31, wherein providing instructions to position the wearable sensor comprises displaying instructions on a screen of the portable computing device.

Example 33: The system of example 32, wherein providing instructions to position the wearable sensor comprises displaying the location on the screen of the portable computing device and instructions to activate the wearable sensor.

Example 34: The system of any of examples 24 to 33, wherein the executable instructions further cause the at least one processor to, prior to providing instructions to position the wearable sensor in the location on the scalp of the user, provide instructions to scan or enter the identification for the wearable sensor.

Example 35: The system of any of examples 24 to 34, wherein providing instructions to position the wearable sensor in the location on the scalp of the user comprises instructing a use of a plurality of attachments configured to removably attach the wearable sensor to the scalp of the user.

Example 36: A method for monitoring brain activity comprising: by at least one processor of a portable computing device: providing instructions to position a wearable sensor of a plurality of wearable sensors in a location of a plurality of locations on a scalp of a user and activate the wearable sensor, the plurality of wearable sensors configured to detect electroencephalogram (EEG) signals indicative of a brain activity of the user, each wearable sensor comprising at least two electrodes configured to monitor the EEG signals when the wearable sensor is positioned on the scalp of the user and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes; verifying an identification of the wearable sensor; responsive to verifying the identification of the wearable sensor, verifying an impedance of the wearable sensor; and responsive to verifying the impedance of the wearable sensor, providing instructions to position and activate another wearable sensor of the plurality of wearable sensors and perform verification of an identification and an impedance of the another wearable sensor.

Example 37: The method of example 36, further comprising sequentially providing instructions to position and activate, verify an identification, and verify an impedance of each wearable sensor of the plurality of wearable sensors.

Example 38: The method of example 37, further comprising, responsive to verifying the identification and impedance of each wearable sensor of the plurality of wearable sensors, recording the processed EEG signals wirelessly transmitted by the plurality of wearable sensors.

Example 39: The method of any of examples 36 to 38, further comprising, responsive to not verifying that the impedance of the wearable sensor satisfies an impedance threshold, repeating providing instructions, verifying the identification, and verifying the impedance for the wearable sensor.

Example 40: The method of example 39, further comprising, responsive to not verifying the impedance of the wearable sensor for a second time, restarting providing instructions, verifying the identification, and verifying the impedance for the wearable sensor.
Example 41: The method of any of examples 36 to 40, further comprising providing an alert in response to detecting that at least two wearable sensors of the plurality of wearable sensors have been activated for positioning in a location of the plurality of locations on the scalp of the user.

Example 42: The method of example 41, further comprising, responsive to detecting that at least two wearable sensors of the plurality of wearable sensors have been activated for positioning in a particular location of the plurality of locations on the scalp of the user, restarting providing instructions, verifying the identification, and verifying the impedance for the plurality of wearable sensors.

Example 43: The method of example 42, wherein detecting that the at least two wearable sensors have been activated for positioning in the particular location comprises detecting that multiple sensors have been activated substantially simultaneously.

Example 44: The method of any of examples 36 to 43, wherein providing instructions to position the wearable sensor comprises displaying instructions on a screen of the portable computing device.

Example 45: The method of example 44, wherein providing instructions to position the wearable sensor comprises displaying the location on the screen of the portable computing device and instructions to activate the wearable sensor.

Example 46: The method of any of examples 36 to 45, further comprising, prior to providing instructions to position the wearable sensor in the location on the scalp of the user, providing instructions to scan or enter the identification for the wearable sensor.

Example 47: The method of any of examples 36 to 46, wherein providing instructions to position the wearable sensor in the location on the scalp of the user comprises instructing a use of a plurality of attachments configured to removably attach the wearable sensor to the scalp of the user.

Example 48: A method for monitoring of brain activity comprising: activating a plurality of wearable sensors configured to detect electroencephalogram (EEG) signals indicative of a brain activity of a user and positioned in a plurality of locations on a scalp of the user, each wearable sensor comprising at least two electrodes configured to monitor the EEG signals when the wearable sensor is positioned on the scalp of the user and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes and wirelessly transmit processed EEG signals to a first portable computing device, the activating comprising following instructions displayed on a display of the first portable computing device; and subsequent to the activation of the plurality of wearable sensors, transferring control of the plurality of wearable sensors to a second portable computing device configured to be worn by the user to permit the second portable computing device to wirelessly receive the processed EEG signals, the second portable computing device not including a display or including a display that is smaller than the display of the first portable computing device, wherein the first portable computing device is configured to facilitate activating and positioning the plurality of wearable sensors on the scalp of the user and the second portable computing device is configured to facilitate monitoring of the brain activity of the user and detecting one or more disorders.
Example 49: The method of example 48, wherein transferring control causes the first portable computing device to cease wirelessly receiving the processed EEG signals.

Example 50: The method of any of examples 48 to 49, wherein the first portable computing device comprises a tablet and the second portable computing device comprises a smartwatch.

Example 51: The method of any of examples 48 to 50, further comprising, prior to transferring control to the second portable computing device, authenticating the second portable computing device.

Example 52: The method of example 51, wherein authenticating the second portable computing device comprises scanning a QR code of the second portable computing device.

Example 53: The method of any of examples 48 to 52, further comprising, responsive to an alert displayed on the display of the second portable computing device, causing the second portable computing device to display instructions for resolving the alert and following the instructions to resolve the alert.

Example 54: The method of example 53, wherein the instructions are associated with replacing an attachment configured to removably attach a wearable sensor of the plurality of wearable sensors to the scalp of the user, and wherein the method further comprises, responsive to the instructions, removing the wearable sensor, replacing the attachment with another attachment, and repositioning the wearable sensor on the scalp of the user.

Example 55: The method of any of examples 53 to 54, wherein the instructions are associated with replacing a plurality of attachments configured to removably attach the plurality of wearable sensors to the scalp of the user, and wherein the method further comprises, responsive to the instructions, removing the plurality of wearable sensors, replacing the plurality of attachments with another plurality of attachments, and repositioning the plurality of wearable sensors on the scalp of the user.
Example 56: A system for monitoring of brain activity comprising: a plurality of wearable sensors configured to detect electroencephalogram (EEG) signals indicative of a brain activity of a user, each wearable sensor comprising at least two electrodes configured to monitor the EEG signals when the wearable sensor is positioned on a scalp of the user and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes and wirelessly transmit processed EEG signals to a first portable computing device; a first non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of the first portable computing device, cause the at least one processor of the first portable computing device to: facilitate an activation of the plurality of wearable sensors by displaying instructions on a display of the first portable computing device; and subsequent to the activation of the plurality of wearable sensors, transfer control of the plurality of wearable sensors to a second portable computing device to permit the second portable computing device configured to be worn by the user to wirelessly receive processed EEG signals, the second portable computing device not including a display or including a display that is smaller than the display of the first portable computing device, wherein the first portable computing device is configured to facilitate activating and positioning the plurality of wearable sensors on the scalp of the user and the second portable computing device is configured to facilitate monitoring of the brain activity of the user and detecting one or more disorders.

Example 57: The system of example 56, wherein the first portable computing device comprises a tablet and the second portable computing device comprises a smartwatch.

Example 58: The system of any of examples 56 to 57, wherein the executable instructions further cause the at least one processor of the first portable computing device to, prior to transferring control to the second portable computing device, authenticate the second portable computing device.

Example 59: The system of example 58, wherein authenticating the second portable computing device comprises scanning a QR code of the second portable computing device.

Example 60: The system of any of examples 56 to 59 further comprising a second non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of the second portable computing device, cause the at least one processor of the second portable computing device to: cause display of an alert on the display of the second portable computing device; cause display of user instructions for resolving the alert on the display of the second portable computing device; and pause collection of the processed EEG signals.

Example 61: The system of example 60, wherein the executable instructions cause the at least one processor of the second portable computing device to detect the alert responsive to determining that an impedance of at least one wearable sensor of the plurality of wearable sensors does not satisfy an impedance threshold.
Example 62: The system of example 61, wherein: the user instructions are associated with replacing an attachment configured to removably attach the at least one wearable sensor to the scalp of the user, the instructions comprising causing removal of the at least one wearable sensor, replacement of the attachment with another attachment, and repositioning the at least one wearable sensor on the scalp of the user; and the executable instructions further cause the at least one processor of the second portable computing device to, responsive to verifying the impedance of the at least one wearable sensor after it has been repositioned on the scalp of the user, resume collection of the processed EEG signals.

Example 63: The system of example 62, wherein verifying the impedance of the at least one wearable sensor comprises determining that the impedance of the at least one wearable sensor satisfies the impedance threshold.

Example 64: The system of any of examples 62 to 63, wherein the executable instructions facilitate selection of the at least one wearable sensor from the plurality of wearable sensors, and wherein the instructions display a position of the at least one wearable sensor on the scalp of the user.

Example 65: The system of any of examples 60 to 64, wherein the executable instructions cause the at least one processor of the second portable computing device to cause display of the alert responsive to passage of a duration of time since replacement of a plurality of attachments configured to removably attach the plurality of wearable sensors to the scalp of the user.

Example 66: The system of example 65, wherein the duration of time comprises 24 hours.

Example 67: The system of any of examples 56 to 66 further comprising a second non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of the second portable computing device, cause the at least one processor of the second portable computing device to: responsive to a detection of a possible seizure, cause display of instructions for confirming occurrence of a seizure.
Example 68: A system for synchronized monitoring of brain activity comprising: a plurality of wearable sensors configured to detect electroencephalogram (EEG) signals indicative of a brain activity of a user, each wearable sensor comprising at least two electrodes configured to monitor the EEG signals when the wearable sensor is positioned on a scalp of the user and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes and wirelessly transmit processed EEG signals to a portable computing device; and a non-transitory computer readable medium storing executable instructions that, when executed by at least one processor of the portable computing device, cause the at least one processor to: wirelessly transmit a message including clock information to the plurality of wearable sensors; and cause the electronic circuitry of each wearable sensor of the plurality of wearable sensors to set an internal clock to the clock information so that the internal clock substantially matches internal clocks of the other wearable sensors of the plurality of wearable sensors, the internal clock being used for time stamping recorded signals indicative of the brain activity of the user, wherein data determined by each wearable sensor of the plurality of wearable sensors is correlated with data determined by the other wearable sensors of the plurality of wearable sensors to no more than 200 ms.

Example 69: The system of example 68, wherein no wearable sensor of the plurality of wearable sensors communicates with another wearable sensor of the plurality of wearable sensors.

Example 70: The system of any of examples 68 to 69, wherein data determined by each wearable sensor of the plurality of wearable sensors is correlated with data determined by the other wearable sensors of the plurality of wearable sensors to no more than 50 ms.

Example 71: The system of any of examples 68 to 70, wherein the executable instructions further cause the at least one processor to, subsequent to wirelessly transmitting the message, confirm that the plurality of wearable sensors have set their internal clocks.

Example 72: The system of any of examples 68 to 71, wherein the executable instructions further cause the at least one processor to verify that the processed EEG signals received from each wearable sensor of the plurality of wearable sensors are correlated with the processed EEG signals determined by the other wearable sensors of the plurality of wearable sensors to no more than 200 ms.

Example 73: The system of example 72, wherein the executable instructions further cause the at least one processor to, responsive to the verifying, transmit the processed EEG signals received from the plurality of wearable sensors to a remote computing device.

Example 74: The system of any of examples 68 to 73, wherein the executable instructions further cause the at least one processor to: poll the plurality of wearable sensors for their internal clocks; and in response to detecting that a difference between an internal clock of at least one wearable sensor of the plurality of wearable sensors and an expected internal clock satisfies a threshold, repeat wireless transmission of the message to cause the electronic circuitry of the at least one wearable sensor to set the internal clock.
Example 75: The system of any of examples 68 to 74, wherein the executable instructions cause the at least one processor to wirelessly transmit the message by: wirelessly transmitting a first message to each wearable sensor of the plurality of wearable sensors and causing the electronic circuitry of the plurality of wearable sensors to listen for a second message; and subsequent to transmitting the first message, wirelessly broadcasting a second message to the plurality of wearable sensors, the second message comprising the clock information. Example 76: The system of example 75, wherein the first and second messages are transmitted using Bluetooth low energy (BLE) protocol. Example 77: A method for synchronized monitoring of brain activity comprising: wirelessly transmitting a message including clock information to a plurality of wearable sensors, the plurality of wearable sensors configured to detect electroencephalogram (EEG) signals indicative of a brain activity of a user, each wearable sensor comprising at least two electrodes configured to monitor the EEG signals when the wearable sensor is positioned on a scalp of the user and an electronic circuitry configured to process the EEG signals monitored by the at least two electrodes and wirelessly transmit processed EEG signals to a portable computing device; causing the electronic circuitry of each wearable sensor of the plurality of wearable sensors to set an internal clock to the clock information so that the internal clock substantially matches internal clocks of the other wearable sensors of the plurality of wearable sensors, the internal clock being used for time stamping recorded signals indicative of the brain activity of the user; wirelessly receiving the processed EEG signals from the plurality of wearable sensors and verifying that the processed EEG signals received from each wearable sensor of the plurality of wearable sensors are correlated with the processed EEG signals determined by the other wearable sensors of the plurality of wearable sensors to no more than 200 ms; and responsive to the verifying, transmitting the processed EEG signals received from the plurality of wearable sensors to a remote computing device. Example 78: The method of example 77, wherein no wearable sensor of the plurality of wearable sensors communicates with another wearable sensor of the plurality of wearable sensors. Example 79: The method of any of examples 77 to 78, wherein verifying comprises verifying that the processed EEG signals received from each wearable sensor of the plurality of wearable sensors are correlated with the processed EEG signals determined by the other wearable sensors of the plurality of wearable sensors to no more than 50 ms. Example 80: The method of any of examples 77 to 79, further comprising confirming that the plurality of wearable sensors have set their internal clocks. Example 81: The method of example 80, further comprising: polling the plurality of wearable sensors for their internal clocks; and in response to detecting that a difference between an internal clock of at least one wearable sensor of the plurality of wearable sensors and an expected internal clock satisfies a threshold, repeating wireless transmission of the message to cause the electronic circuitry of the at least one wearable sensor to set the internal clock.
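The two-message handshake of Examples 75 and 76 (and, on the method side, Examples 82 and 83 below) can be modeled as a prepare phase followed by a broadcast phase. The sketch below only imitates that ordering with direct method calls; it is not a BLE implementation, and all names are hypothetical.

```python
# Sketch of the two-message handshake of Examples 75-76 (all names
# hypothetical). A first, individually addressed message arms each sensor to
# listen; a single broadcast then carries the clock information to all of
# them. Direct method calls stand in for the BLE transport.

import time


class ListeningSensorStub:
    """Stand-in for one wearable sensor's electronic circuitry."""

    def __init__(self) -> None:
        self.listening = False
        self.internal_clock = 0.0

    def receive_first_message(self) -> None:
        self.listening = True  # armed to listen for the second message

    def receive_broadcast(self, clock_info: float) -> None:
        if self.listening:
            self.internal_clock = clock_info  # set internal clock (Example 75)
            self.listening = False


def two_phase_sync(sensors: list[ListeningSensorStub]) -> None:
    # Phase 1: address each sensor individually so it listens for the broadcast.
    for sensor in sensors:
        sensor.receive_first_message()
    # Phase 2: one broadcast delivers the same clock information to every
    # armed sensor, so all internal clocks are set from one transmission.
    clock_info = time.time()
    for sensor in sensors:
        sensor.receive_broadcast(clock_info)


sensors = [ListeningSensorStub() for _ in range(4)]
two_phase_sync(sensors)
```

The appeal of this ordering, as the examples suggest, is that the clock value itself travels in a single broadcast, so every sensor sets its internal clock from the same transmission instant rather than from separately timed unicast messages.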
Example 82: The method of any of examples 77 to 81, wherein wirelessly transmitting the message comprises: wirelessly transmitting a first message to each wearable sensor of the plurality of wearable sensors and causing the electronic circuitry of the plurality of wearable sensors to listen for a second message; and subsequent to transmitting the first message, wirelessly broadcasting a second message to the plurality of wearable sensors, the second message comprising the clock information. Example 83: The method of example 82, wherein the first and second messages are transmitted using Bluetooth low energy (BLE) protocol. Example 84: A system for synchronized monitoring of brain activity comprising: a plurality of wearable sensors configured to record a brain activity of a user, each wearable sensor comprising at least two electrodes configured to detect signals indicative of the brain activity of the user when the wearable sensor is positioned on a scalp of the user and an electronic circuitry configured to, based on the signals detected by the at least two electrodes, determine data associated with the brain activity of the user; and a non-transitory computer readable medium storing instructions that, when executed by at least one processor of the electronic circuitry of a wearable sensor of the plurality of wearable sensors, cause the at least one processor to: cause the at least two electrodes of the wearable sensor to apply an electrical stimulation configured to be sensed by other wearable sensors of the plurality of wearable sensors; and cause an electronic circuitry of each wearable sensor of the other wearable sensors to process the electrical stimulation sensed by its at least two electrodes and record the electrical stimulation along with data associated with the brain activity of the user, wherein recording of the electrical stimulation facilitates combining and processing data associated with the brain activity of the user collected by the plurality of wearable sensors. One or more features of any one of the foregoing examples can be used with one or more features of any other example.

Other Variations

The general principles described herein may be extended to other scenarios. For example, for intensive care in pediatric and adult patients, two sensors, four sensors, eight sensors, or various combinations of sensors may be used. Various other configurations may also be used; particular elements that are depicted as being implemented in hardware may instead be implemented in software, firmware, or a combination thereof. One of ordinary skill in the art will recognize various alternatives to the specific embodiments described herein. The specification and figures describe particular embodiments which are provided for ease of description and illustration and are not intended to be restrictive. Embodiments may be implemented to be used in various environments without departing from the spirit and scope of the disclosure. At least some elements of a device of the present application can be controlled, and at least some steps of a method of the invention can be effectuated, in operation with a programmable processor governed by instructions stored in a memory. The memory may be random access memory (RAM), read-only memory (ROM), flash memory or any other memory, or combination thereof, suitable for storing control software or other instructions and data.
Those skilled in the art should also readily appreciate that instructions or programs defining the functions of the present invention may be delivered to a processor in many forms, including, but not limited to, information permanently stored on non-writable storage media (for example read-only memory devices within a computer, such as ROM, or devices readable by a computer I/O attachment, such as CD-ROM or DVD disks), information alterably stored on writable storage media (for example floppy disks, removable flash memory and hard drives) or information conveyed to a computer through communication media, including wired or wireless computer networks. In addition, while the invention may be embodied in software, the functions necessary to implement the invention may optionally or alternatively be embodied in part or in whole using firmware and/or hardware components, such as combinatorial logic, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs) or other hardware or some combination of hardware, software and/or firmware components. In various embodiments, input from a user may be requested. Examples of methods for receiving user input, such as receiving a button press from a user, are illustrative and not by way of limitation. Alternative methods of receiving user input may be used, including receiving a button press on a touch screen, a physical button press on a device, a swipe, a tap, any other touch gestures, a spoken (audio) input, etc. Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles and the novel features disclosed herein. Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Depending on the embodiment, certain acts, events, or functions of any of the processes or algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether. Moreover, in certain embodiments, operations or events can be performed concurrently, for example, through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of electronic hardware and executable software.
To clearly illustrate this interchangeability, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware, or as software that runs on hardware, depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure. Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a machine learning service server, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A machine learning service server can be or include a microprocessor, but in the alternative, the machine learning service server can be or include a controller, microcontroller, or state machine, combinations of the same, or the like configured to generate and publish machine learning services backed by a machine learning model. A machine learning service server can include electrical circuitry configured to process computer-executable instructions. Although described herein primarily with respect to digital technology, a machine learning service server may also include primarily analog components. For example, some or all of the modeling, simulation, or service algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few. The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a machine learning service server, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An illustrative storage medium can be coupled to the machine learning service server such that the machine learning service server can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the machine learning service server. The machine learning service server and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the machine learning service server and the storage medium can reside as discrete components in a user terminal (for example, access device or network service client device). 
Conditional language used herein, such as, among others, "can," "could," "might," "may," "for example," and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list. Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (for example, X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Unless otherwise explicitly stated, articles such as "a" or "an" should generally be interpreted to include one or more described items. Accordingly, phrases such as "a device configured to" are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, "a processor configured to carry out recitations A, B and C" can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As can be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of certain embodiments disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
11857331

Throughout the figures, the same parts are always denoted using the same reference characters so that, as a general rule, they will only be described once.

DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

An exemplary embodiment of the measurement and testing system is seen generally at100inFIG.1. In the illustrative embodiment, the force measurement system100generally comprises a force measurement assembly102that is operatively coupled to a data acquisition/data processing device104(i.e., a data acquisition and processing device or computing device that is capable of collecting, storing, and processing data), which, in turn, is operatively coupled to a subject visual display device107and an operator visual display device130. As illustrated inFIG.1, the force measurement assembly102is configured to receive a subject108thereon, and is capable of measuring the forces and/or moments applied to its substantially planar measurement surfaces114,116by the subject108. As shown inFIG.1, the data acquisition/data processing device104includes a plurality of user input devices132,134connected thereto. Preferably, the user input devices132,134comprise a keyboard132and a mouse134. In addition, the operator visual display device130may also serve as a user input device if it is provided with touch screen capabilities. While a desktop-type computing system is depicted inFIG.1, one of ordinary skill in the art will appreciate that another type of data acquisition/data processing device104can be substituted for the desktop computing system such as, but not limited to, a laptop or a palmtop computing device (i.e., a PDA). In addition, rather than providing a data acquisition/data processing device104, it is to be understood that only a data acquisition device could be provided without departing from the spirit and the scope of the invention. Referring again toFIG.1, it can be seen that the force measurement assembly102of the illustrated embodiment is in the form of a displaceable, dual force plate assembly. The displaceable, dual force plate assembly includes a first plate component110, a second plate component112, at least one force measurement device (e.g., a force transducer) associated with the first plate component110, and at least one force measurement device (e.g., a force transducer) associated with the second plate component112. In the illustrated embodiment, a subject108stands in an upright position on the force measurement assembly102and each foot of the subject108is placed on the top surfaces114,116of a respective plate component110,112(i.e., one foot on the top surface114of the first plate component110and the other foot on the top surface116of the second plate component112). The at least one force transducer associated with the first plate component110is configured to sense one or more measured quantities and output one or more first signals that are representative of forces and/or moments being applied to its measurement surface114by the left foot/leg108aof the subject108, whereas the at least one force transducer associated with the second plate component112is configured to sense one or more measured quantities and output one or more second signals that are representative of forces and/or moments being applied to its measurement surface116by the right foot/leg108bof subject108.
In one or more embodiments, when the subject is displaced on the force measurement assembly102, the subject108generally does not move relative to the displaceable force measurement assembly102(i.e., the subject108and the force measurement assembly102generally move together in synchrony). Also, in one or more embodiments, the top surfaces114,116of the respective plate components110,112are not rotated underneath the feet of the subject108, but rather remain stationary relative to the feet of the subject108(i.e., the top surfaces114,116are displaced in generally the same manner as the feet of the subject). In one non-limiting, exemplary embodiment, the force plate assembly102has a load capacity of up to approximately 500 lbs. (up to approximately 2,224 N) or up to 500 lbs. (up to 2,224 N). Advantageously, this high load capacity enables the force plate assembly102to be used with almost any subject requiring testing on the force plate assembly102. Also, in one non-limiting, exemplary embodiment, the force plate assembly102has a footprint of approximately eighteen (18) inches by twenty (20) inches. However, one of ordinary skill in the art will realize that other suitable dimensions for the force plate assembly102may also be used. Now, with reference toFIG.2, it can be seen that the displaceable force measurement assembly102is movably coupled to a base assembly106. The base assembly106generally comprises a substantially planar center portion106bwith two spaced-apart side enclosures106a,106cthat are disposed on opposed sides of the center portion106b. As shown inFIG.2, the displaceable force measurement assembly102is recessed-mounted into the top surface of the center portion106bof the base assembly106(i.e., it is recess-mounted into the top surface of the translatable sled assembly156which is part of the center portion106bof the base assembly106) so that its upper surface lies substantially flush with the adjacent stationary top surfaces122a,122bof the center portion106bof the base assembly106. The upper surface of the displaceable force measurement assembly102also lies substantially flush with the top surface of the translatable sled assembly156. Moreover, in the illustrated embodiment, it can be seen that the base assembly106further includes a pair of mounting brackets124disposed on the outward-facing side surfaces of each side enclosure106a,106c. Each mounting bracket124accommodates a respective support rail128. The support rails128can be used for various purposes related to the force measurement system100. For example, the support rails128can be used for supporting a safety harness system, which is worn by the subject during testing so as to prevent injury. Referring again toFIG.2, each side enclosure106a,106chouses a plurality of electronic components that generate a significant amount of waste heat that requires venting. Because the bottom of each side enclosure106a,106cis substantially open, the waste heat is vented through the bottom thereof. InFIG.2, it can be seen that the side enclosure106acomprises an emergency stop switch138(E-stop) provided in the rear, diagonal panel thereof. In one embodiment, the emergency stop switch138is in the form of a red pushbutton that can be easily pressed by a user of the force measurement system100in order to quasi-instantaneously stop the displacement of the force measurement assembly102. As such, the emergency stop switch138is a safety mechanism that protects a subject disposed on the displaceable force measurement assembly102from potential injury. 
Next, turning toFIG.3, the drive components of the base assembly106will be described in detail. Initially, the actuator system for producing the translation of the force measurement assembly102will be explained. InFIG.3, the front top cover of the center portion106bof the base assembly106has been removed to reveal the translation drive components. As shown in this figure, the force measurement assembly102is rotatably mounted to a translatable sled assembly156. The translatable sled assembly156is displaced forward and backward (i.e., in directions generally parallel to the sagittal plane SP of the subject (see e.g.,FIG.1) disposed on the force measurement assembly102) by means of a first actuator assembly158. That is, the first actuator assembly158moves the translatable sled assembly156backwards and forwards, without any substantial rotation or angular displacement (i.e., the first actuator assembly158produces generally pure translational movement). In the illustrated embodiment, the first actuator assembly158is in the form of a ball screw actuator, and includes an electric motor that drives a rotatable screw shaft which, in turn, is threadingly coupled to a nut fixedly secured to the translatable sled assembly156. As such, when the screw shaft of the first actuator assembly158is rotated by the electric motor, the translatable sled assembly is displaced forward and backward along a substantially linear path. The electric motor of the first actuator assembly158is operatively coupled to a gear box (e.g., a 4:1 gear box) which, in turn, drives the rotatable screw shaft. Advantageously, because the nut of the ball screw actuator runs on ball bearings, friction is minimized and the actuator assembly158is highly efficient. However, an undesirable consequence of the highly efficient ball screw actuator design is its back-driveability. This poses a potential safety hazard to a subject disposed on the displaceable force measurement assembly102because the force plate could inadvertently move when a subject's weight is applied thereto. In order to prevent the force measurement assembly102from inadvertently being translated, the first actuator assembly158is additionally provided with a brake assembly disposed adjacent to the electric motor thereof. The brake assembly of the first actuator assembly158prevents any unintentional translation of the force measurement assembly102. InFIG.42, a top view of the base assembly106is illustrated, while inFIG.43, a longitudinal cross-sectional view of the base assembly106is illustrated. As shown inFIGS.42and43, the force measurement assembly102is mounted on a rotatable carriage assembly157(i.e., a swivel frame157). The rotatable carriage assembly157is mounted to, and rotates relative to, the translatable sled assembly156(i.e., the translatable frame156). The rotatable carriage assembly157is rotated by a second actuator assembly160(seeFIG.3) about a rotational shaft163(seeFIG.43—the rotatable carriage assembly157is provided with diagonal hatching thereon). As indicated by the curved arrows159inFIG.43, the rotatable carriage assembly157is capable of either clockwise or counter-clockwise rotation about the transverse rotational axis TA inFIG.3(i.e., generally single degree-of-freedom rotation about the transverse axis TA). In contrast, as indicated by the straight arrows161inFIGS.42and43, the translatable sled assembly156is capable of forward and backward translational movement by virtue of being linearly displaced by the first actuator assembly158.
InFIGS.42and43, a rearwardly displaced position156aof the translatable sled assembly156is indicated using center lines, while a forwardly displaced position156bof the translatable sled assembly156is indicated using dashed lines with small dashes. Again, referring toFIG.3, the actuator system for producing the rotation of the force measurement assembly102will now be described. InFIG.3, the top cover of the side enclosure106cof the base assembly106has been removed to reveal the rotational drive components. The force measurement assembly102is rotated within the translatable sled assembly156by the second actuator assembly160. Like the first actuator assembly158, the second actuator assembly160is also in the form of a ball screw actuator, and includes an electric motor with a gear box (e.g., a 4:1 gear box) that drives a rotatable screw shaft which, in turn, is threadingly coupled to a nut that runs on ball bearings. Unlike the first actuator assembly158, however, the second actuator assembly160further includes a swing arm which is operatively coupled to the nut of the ball screw actuator. When the nut undergoes displacement along the screw shaft, the swing arm, which is attached to the rotatable carriage assembly157with the force measurement assembly102, is rotated. As such, when the swing arm is rotated, the rotatable carriage assembly157with the force measurement assembly102is also rotated about a transverse rotational axis TA (seeFIG.3). That is, the force measurement assembly102undergoes generally single degree-of-freedom rotation about the transverse rotational axis TA. In one embodiment, the imaginary transverse rotational axis TA approximately passes through the center of the ankle joints of the subject108when he or she is disposed on the force measurement assembly102. Because the second actuator assembly160is also in the form of a highly efficient ball screw actuator, it includes a brake assembly disposed adjacent to the electric motor to prevent it from being back-driven, similar to that of the first actuator assembly158. The brake assembly of the second actuator assembly160prevents the force measurement assembly102from being inadvertently rotated so as to protect a subject disposed thereon from its inadvertent movement. When the translatable sled assembly156is translated by the first actuator assembly158, the second actuator assembly160is translated with the sled assembly156and the force plate. In particular, when the translatable sled assembly156is translated backwards and forwards by the first actuator assembly158, the second actuator assembly160is displaced along a rail or rod of the base assembly106. In a preferred embodiment of the invention, both the first actuator assembly158and the second actuator assembly160are provided with two (2) electrical cables operatively coupled thereto. The first cable connected to each actuator assembly158,160is a power cable for the electric motor and brake of each actuator, while the second cable transmits positional information from the respective actuator encoder that is utilized in the feedback control of each actuator assembly158,160. Referring back toFIG.1, it can be seen that the base assembly106is operatively coupled to the data acquisition/data processing device104by virtue of an electrical cable118. The electrical cable118is used for transmitting data between the programmable logic controller (PLC) of the base assembly106and the data acquisition/data processing device104(i.e., the operator computing device104).
Various types of data transmission cables can be used for cable118. For example, the cable118can be a Universal Serial Bus (USB) cable or an Ethernet cable. Preferably, the electrical cable118contains a plurality of electrical wires bundled together that are utilized for transmitting data. However, it is to be understood that the base assembly106can be operatively coupled to the data acquisition/data processing device104using other signal transmission means, such as a wireless data transmission system. In the illustrated embodiment, the at least one force transducer associated with the first and second plate components110,112comprises four (4) pylon-type force transducers154(or pylon-type load cells) that are disposed underneath, and near each of the four (4) corners of the first plate component110and the second plate component112(seeFIG.4). Each of the eight (8) illustrated pylon-type force transducers has a plurality of strain gages adhered to the outer periphery of a cylindrically-shaped force transducer sensing element for detecting the mechanical strain of the force transducer sensing element imparted thereon by the force(s) applied to the surfaces of the force measurement assembly102. As shown inFIG.4, a respective base plate162can be provided underneath the transducers154of each plate component110,112for facilitating the mounting of the force plate assembly to the rotatable carriage assembly157of the translatable sled assembly156of the base assembly106. Alternatively, a plurality of structural frame members (e.g., formed from steel) could be used in lieu of the base plates162for attaching the dual force plate assembly to the rotatable carriage assembly157of the translatable sled assembly156of the base assembly106. In an alternative embodiment, rather than using four (4) pylon-type force transducers154on each plate component110,112, force transducers in the form of transducer beams could be provided under each plate component110,112. In this alternative embodiment, the first plate component110could comprise two transducer beams that are disposed underneath, and on generally opposite sides of the first plate component110. Similarly, in this embodiment, the second plate component112could comprise two transducer beams that are disposed underneath, and on generally opposite sides of the second plate component112. Similar to the pylon-type force transducers154, the force transducer beams could have a plurality of strain gages attached to one or more surfaces thereof for sensing the mechanical strain imparted on the beam by the force(s) applied to the surfaces of the force measurement assembly102. Rather than using four (4) force transducer pylons under each plate, or two spaced-apart force transducer beams under each plate, it is to be understood that the force measurement assembly102can also utilize the force transducer technology described in U.S. Pat. No. 8,544,347, the entire disclosure of which is incorporated herein by reference. In other embodiments of the invention, rather than using a force measurement assembly102having first and second plate components110,112, it is to be understood that a force measurement assembly102′ in the form of a single force plate may be employed (seeFIG.6). Unlike the dual force plate assembly illustrated inFIGS.1and4, the single force plate comprises a single measurement surface on which both of a subject's feet are placed during testing.
Similar to the measurement assembly102, the illustrated single force plate102′ comprises four (4) pylon-type force transducers154(or pylon-type load cells) that are disposed underneath, and near each of the four (4) corners thereof, for sensing the load applied to the surface of the force measurement assembly102′. Also, referring toFIG.6, it can be seen that the single force plate102′ may comprise a single base plate162′ disposed beneath the four (4) pylon-type force transducers154. Referring toFIGS.2and3, the base assembly106is preferably provided with a plurality of support feet126disposed thereunder. Preferably, each of the four (4) corners of the base assembly106is provided with a support foot126. In one embodiment, each support foot126is attached to a bottom surface of base assembly106. In one preferred embodiment, at least one of the support feet126is adjustable so as to facilitate the leveling of the base assembly106on an uneven floor surface (e.g., seeFIG.3, the support foot can be provided with a threaded shaft129that permits the height thereof to be adjusted). For example, referring toFIG.2, the right corner of the base assembly106may be provided with a removable cover plate127for gaining access to an adjustable support foot126with threaded shaft129. In one exemplary embodiment, with reference toFIG.2, the base assembly106has a length LBof approximately five feet (5′-0″), a width WBof approximately five feet (5′-0″), and a step height HBof approximately four (4) inches. In other words, the base assembly has an approximately 5′-0″ by 5′-0″ footprint with a step height of approximately four (4) inches. In other exemplary embodiments, the base assembly106has a width WBof slightly less than five feet (5′-0″), for example, a width WBlying in the range between approximately fifty-two (52) inches and approximately fifty-nine (59) inches (or between fifty-two (52) inches and fifty-nine (59) inches). Also, in other exemplary embodiments, the base assembly106has a step height lying in the range between approximately four (4) inches and approximately four and one-half (4½) inches (or between four (4) inches and four and one-half (4½) inches). Advantageously, the design of the base assembly106is such that its step height is minimized. For example, the placement of the second actuator assembly160above the top surface of the base assembly106facilitates a reduction in the step height of the base assembly106. It is highly desirable for the base assembly106to have as low a profile as possible. A reduced step height especially makes it easier for subjects having balance disorders to step on and off the base assembly106. This reduced step height is particularly advantageous for elderly subjects or patients being tested on the force measurement system100because it is typically more difficult for elderly subjects to step up and down from elevated surfaces. Now, with reference toFIGS.8-10, the subject visual display device107of the force measurement system100will be described in more detail. In the illustrated embodiment, the subject visual display device107generally comprises a projector164, a generally spherical mirror166(i.e., a convexly curved mirror that has the shape of a piece cut out of a spherical surface), and a generally hemispherical concave projection screen168with a variable radius (i.e., the radius of the hemispherical projection screen168becomes increasingly larger from its center to its periphery—see radii R1, R2, and R3inFIG.10).
As shown inFIGS.8-10, the hemispherical projection screen168may be provided with a peripheral flange169therearound. The lens of the projector164projects an image onto the generally spherical mirror166which, in turn, projects the image onto the generally hemispherical projection screen168(seeFIG.10). As shown inFIGS.8and10, the top of the generally hemispherical projection screen168is provided with a semi-circular cutout180for accommodating the projector light beam165in the illustrative embodiment. Advantageously, the generally hemispherical projection screen168is a continuous curved surface that does not contain any lines or points resulting from the intersection of adjoining planar or curved surfaces. Thus, the projection screen168is capable of creating a completely immersive visual environment for a subject being tested on the force measurement assembly102because the subject is unable to focus on any particular reference point or line on the screen168. As such, the subject becomes completely immersed in the virtual reality scene(s) being projected on the generally hemispherical projection screen168, and thus, his or her visual perception can be effectively altered during a test being performed using the force measurement system100(e.g., a balance test). In order to permit a subject to be substantially circumscribed by the generally hemispherical projection screen168on three sides, the bottom of the screen168is provided with a semi-circular cutout178in the illustrative embodiment. While the generally hemispherical projection screen168thoroughly immerses the subject108in the virtual reality scene(s), it advantageously does not totally enclose the subject108. Totally enclosing the subject108could cause him or her to become extremely claustrophobic. Also, the clinician would be unable to observe the subject or patient in a totally enclosed environment. As such, the illustrated embodiment of the force measurement system100does not utilize a totally enclosed environment, such as a closed, rotating shell, etc. Also, as shown inFIGS.1-3and8-10, the subject visual display device107is not attached to the subject108, and it is spaced apart from the force measurement assembly102disposed in the base assembly106. In one embodiment of the invention, the generally hemispherical projection screen168is formed from a suitable material (e.g., an acrylic, fiberglass, fabric, aluminum, etc.) having a matte gray color. A matte gray color is preferable to a white color because it minimizes the unwanted reflections that can result from the use of a projection screen having a concave shape. Also, in an exemplary embodiment, the projection screen168has a diameter (i.e., width WS) of approximately 69 inches and a depth Ds of approximately 40 inches (seeFIGS.8and9). In other exemplary embodiments, the projection screen168has a width WSlying in the range between approximately sixty-eight (68) inches and approximately ninety-two (92) inches (or between sixty-eight (68) inches and ninety-two (92) inches). For example, including the flange169, the projection screen168could have a width WSof approximately seventy-three (73) inches. In some embodiments, the target distance between the subject and the front surface of the projection screen168can lie within the range between approximately 25 inches and approximately 40 inches (or between 25 inches and 40 inches). 
Those of ordinary skill in the art will readily appreciate, however, that other suitable dimensions and circumscribing geometries may be utilized for the projection screen168, provided that the selected dimensions and circumscribing geometries for the screen168are capable of creating an immersive environment for a subject disposed on the force measurement assembly102(i.e., the screen168of the subject visual display device engages enough of the subject's peripheral vision such that the subject becomes, and remains, immersed in the virtual reality scenario). In one or more embodiments, the projection screen168fully encompasses the peripheral vision of the subject108(e.g., by the coronal plane CP of the subject being approximately aligned with the flange169of the projection screen168or by the coronal plane CP being disposed inwardly from the flange169within the hemispherical confines of the screen168). In other words, the output screen168of the at least one visual display107at least partially circumscribes three sides of a subject108(e.g., seeFIG.1). As shown inFIGS.8-10, a top cover171is preferably provided over the projector164, the mirror166, and the cutout180in the output screen168so as to protect these components, and to give the visual display device107a more finished appearance. In a preferred embodiment, the data acquisition/data processing device104is configured to convert a two-dimensional (2-D) image, which is configured for display on a conventional two-dimensional screen, into a three-dimensional (3-D) image that is capable of being displayed on the hemispherical output screen168without excessive distortion. That is, the data acquisition/data processing device104executes a software program that utilizes a projection mapping algorithm to "warp" a flat 2-D rendered projection screen image into a distorted 3-D projection image that approximately matches the curvature of the final projection surface (i.e., the curvature of the hemispherical output screen168), which takes into account both the distortion of the lens of the projector164and any optical surfaces that are used to facilitate the projection (e.g., generally spherical mirror166). In particular, the projection mapping algorithm utilizes a plurality of virtual cameras and projection surfaces (which are modeled based upon the actual projection surfaces) in order to transform the two-dimensional (2-D) images into the requisite three-dimensional (3-D) images. Thus, the projector164lens information, the spherical mirror166dimensional data, and the hemispherical projection screen168dimensional data are entered as inputs into the projection mapping algorithm software. When a human subject is properly positioned in the confines of the hemispherical output screen168, he or she will see a representation of the virtual reality scene wrapping around him or her instead of only seeing a small viewing window in front of him or her. Advantageously, using a software package comprising a projection mapping algorithm enables the system100to use previously created 3-D modeled virtual worlds and objects without directly modifying them. Rather, the projection mapping algorithm employed by the software package merely changes the manner in which these 3-D modeled virtual worlds and objects are projected into the subject's viewing area. Those of ordinary skill in the art will also appreciate that the subject visual display device107may utilize other suitable projection means.
For example, rather than using an overhead-type projector164as illustrated inFIGS.8-10, a direct or rear projection system can be utilized for projecting the image onto the screen168, provided that the direct projection system does not interfere with the subject's visibility of the target image. In such a rear or direct projection arrangement, the generally spherical mirror166would not be required. With reference toFIGS.28and29, in one exemplary embodiment, a single projector164′ with a fisheye-type lens and no mirror is utilized in the subject visual display system to project an image onto the screen168(e.g., the projector164′ is disposed behind the subject108). As illustrated in these figures, the projector164′ with the fisheye-type lens projects a light beam165′ through the cutout180in the top of the generally hemispherical projection screen168. In another exemplary embodiment, two projectors164′, each having a respective fisheye-type lens, are used to project an image onto the screen168(seeFIGS.30and31—the projectors164′ are disposed behind the subject108). As depicted inFIGS.30and31, the projectors164′ with the fisheye-type lenses project intersecting light beams165′ through the cutout180in the top of the generally hemispherical projection screen168. Advantageously, the use of two projectors164′ with fisheye-type lenses, rather than just a single projector164′ with a fisheye lens, has the added benefit of removing shadows that are cast on the output screen168by the subject108disposed on the force measurement assembly102. Another alternative embodiment of the projector arrangement is illustrated inFIG.44. As shown in this figure, a projector164″ having a fisheye lens182is mounted on the top of the hemispherical projection screen168. InFIG.44, it can be seen that the fisheye lens182is connected to the body of the projector164″ by an elbow fitting184. In other words, the fisheye lens182is disposed at a non-zero, angled orientation relative to a body of the projector164″. In the illustrated embodiment, the non-zero, angled orientation at which the fisheye lens182is disposed relative to the body of the projector164″ is approximately 90 degrees. The elbow fitting184comprises a one-way mirror disposed therein for changing the direction of the light beam emanating from the projector164″. As illustrated inFIG.44, the fisheye lens182is disposed at approximately the apex of the hemispherical projection screen168, and it extends down through the cutout180′ at the top of the screen168. Because a fisheye lens182is utilized in the arrangement ofFIG.44, the generally spherical mirror166is not required, similar to that which was described above for the embodiment ofFIGS.28and29. Referring again toFIG.44, it can be seen that the generally hemispherical projection screen168can be supported from a floor surface using a screen support structure186, which is an alternative design to that which is illustrated inFIGS.2and8-10. As described above for the screen support structure167, the screen support structure186is used to elevate the projection screen168a predetermined distance above the floor of a room. With continued reference toFIG.44, it can be seen that the illustrated screen support structure186comprises a plurality of lower leg members187(i.e., four (4) leg members187) that support an upper support cage portion189, which is disposed around the upper portion of the generally hemispherical projection screen168.
In particular, the upper support cage portion189is securely attached to the peripheral flange169of the hemispherical projection screen168(e.g., by using a plurality of fasteners on each side of the flange169). Because the upper support cage portion189is mostly attached to the upper portion (e.g., upper half) of the screen168, the screen168is generally supported above its center-of-gravity, which advantageously results in a screen mounting arrangement with high structural stability. As shown inFIG.44, one pair of the plurality of lower leg members187are disposed on each of the opposed lateral sides of the screen168. Also, it can be seen that each of the lower leg members187is provided with a height-adjustable foot188for adjusting the height of the screen168relative to the floor. Also, as shown inFIG.44, the projector164″ is supported on the top of the screen168by a projector support frame190, which is secured directly to the upper support cage portion189of the screen support structure186so as to minimize the transmission of vibrations from the projector164″ to the hemispherical projection screen168. Advantageously, the mounting arrangement of the projector164″ on the projector support frame190affords adjustability of the projector164″ in a front-to-back direction. It is highly desirable for the hemispherical projection screen168to be maintained in a stationary position essentially free from external vibrations so that the subject is completely immersed in the virtual environment being created within the hemispherical projection screen168. Advantageously, the structural rigidity afforded by the screen support structure186ofFIG.44virtually eliminates the transmission of vibrations to the projection screen168, including those vibrations emanating from the building itself in which the force measurement system100is located. In particular, the screen support structure186is designed to minimize any low frequency vibrations that are transmitted to the screen168. In addition, the elimination of the generally spherical mirror166from the projector arrangement inFIG.44, minimizes the transmission of visible vibrations to the screen image that is projected onto the hemispherical projection screen168by the projector164″. In one or more embodiments, the base assembly106has a width WB(see e.g.,FIG.2) measured in a direction generally parallel to the coronal plane CP of the subject (see e.g.,FIG.1) and a length LB(FIG.2) measured in a direction generally parallel to the sagittal plane SP of the subject (FIG.1). In these one or more embodiments, a width WSof the output screen168of the at least one visual display device107(seeFIG.9) is less than approximately 1.5 times the width WBof the base assembly106(or less than 1.5 times the width WBof the base assembly106), and a depth Ds of the output screen168of the at least one visual display device107(seeFIG.8) is less than the length LBof the base assembly106(FIG.2). As shown inFIG.9, in the illustrated embodiment, the width WSof the output screen168of the at least one visual display device107is greater than the width WBof the base assembly106. In some embodiments, a width WSof the output screen168of the at least one visual display device107(seeFIG.9) is greater than approximately 1.3 times the width WBof the base assembly106(or greater than 1.3 times the width WBof the base assembly106). As illustrated inFIGS.2and8-10, the generally hemispherical projection screen168can be supported from a floor surface using a screen support structure167. 
In other words, the screen support structure167is used to elevate the projection screen168a predetermined distance above the floor of a room. With continued reference toFIGS.2and8-10, it can be seen that the illustrated screen support structure167comprises a lower generally U-shaped member167a, an upper generally U-shaped member167b, and a plurality of vertical members167c,167d,167e. As best shown inFIGS.2,9, and10, the two vertical members167c,167dare disposed on opposite sides of the screen168, while the third vertical member167eis disposed generally in the middle of, and generally behind, the screen168. The screen support structure167maintains the projection screen168in a stationary position. As such, the position of the projection screen168is generally fixed relative to the base assembly106. In the side view ofFIG.10, it can be seen that the rearmost curved edge of the projection screen168is generally aligned with the back edge of the base assembly106. Next, referring again toFIG.1, the operator visual display device130of the force measurement system100will be described in more particularity. In the illustrated embodiment, the operator visual display device130is in the form of a flat panel monitor. Those of ordinary skill in the art will readily appreciate that various types of flat panel monitors having various types of data transmission cables140may be used to operatively couple the operator visual display device130to the data acquisition/data processing device104. For example, the flat panel monitor employed may utilize a video graphics array (VGA) cable, a digital visual interface (DVI or DVI-D) cable, a high-definition multimedia interface (HDMI or Mini-HDMI) cable, or a DisplayPort digital display interface cable to connect to the data acquisition/data processing device104. Alternatively, in other embodiments of the invention, the visual display device130can be operatively coupled to the data acquisition/data processing device104using wireless data transmission means. Electrical power is supplied to the visual display device130using a separate power cord that connects to a building wall receptacle. Also, as shown inFIG.1, the subject visual display device107is operatively coupled to the data acquisition/data processing device104by means of a data transmission cable120. More particularly, the projector164of the subject visual display device107is operatively connected to the data acquisition/data processing device104via the data transmission cable120. Like the data transmission cable140described above for the operator visual display device130, various types of data transmission cables120can be used to operatively connect the subject visual display device107to the data acquisition/data processing device104(e.g., the various types described above). Those of ordinary skill in the art will appreciate that the visual display device130can be embodied in various forms. For example, if the visual display device130is in the form of flat screen monitor as illustrated inFIG.1, it may comprise a liquid crystal display (i.e., an LCD display), a light-emitting diode display (i.e., an LED display), a plasma display, a projection-type display, or a rear projection-type display. The operator visual display device130may also be in the form of a touch pad display. 
For example, the operator visual display device130may comprise multi-touch technology which recognizes two or more contact points simultaneously on the surface of the screen so as to enable users of the device to use two fingers for zooming in/out, rotation, and a two finger tap. Now, turning toFIG.11, it can be seen that the illustrated data acquisition/data processing device104(i.e., the operator computing device) of the force measurement system100includes a microprocessor104afor processing data, memory104b(e.g., random access memory or RAM) for storing data during the processing thereof, and data storage device(s)104c, such as one or more hard drives, compact disk drives, floppy disk drives, flash drives, or any combination thereof. As shown inFIG.11, the programmable logic controller (PLC) of the base assembly106, the subject visual display device107, and the operator visual display device130are operatively coupled to the data acquisition/data processing device104such that data is capable of being transferred between these devices104,106,107, and130. Also, as illustrated inFIG.11, a plurality of data input devices132,134, such as the keyboard132and mouse134shown inFIG.1, are operatively coupled to the data acquisition/data processing device104so that a user is able to enter data into the data acquisition/data processing device104. In some embodiments, the data acquisition/data processing device104can be in the form of a desktop computer, while in other embodiments, the data acquisition/data processing device104can be embodied as a laptop computer. Advantageously, the programmable logic controller172of the base assembly106(see e.g.,FIGS.12and13), which is a type of data processing device, provides real-time control of the actuator assemblies158,160that displace the force measurement assembly102(i.e., force plate assembly102). The real-time control provided by the programmable logic controller172ensures that the motion control software regulating the displacement of the force plate assembly102operates at the design clock rate, thereby providing fail-safe operation for subject safety. In one embodiment, the programmable logic controller172comprises both the motion control software and the input/output management software, which controls the functionality of the input/output (I/O) module of the programmable logic controller172. In one embodiment, the programmable logic controller172utilizes EtherCAT protocol for enhanced speed capabilities and real-time control. In one or more embodiments, the input/output (I/O) module of the programmable logic controller172allows various accessories to be added to the force measurement system100. For example, an eye movement tracking system, such as that described by U.S. Pat. Nos. 6,113,237 and 6,152,564, could be operatively connected to the input/output (I/O) module of the programmable logic controller172. As another example, a head movement tracking system, which is instrumented with one or more accelerometers, could be operatively connected to the input/output (I/O) module. FIG.12graphically illustrates the acquisition and processing of the load data and the control of the actuator assemblies158,160carried out by the exemplary force measurement system100. Initially, as shown inFIG.12, a load L is applied to the force measurement assembly102by a subject disposed thereon. The load is transmitted from the first and second plate components110,112to their respective sets of pylon-type force transducers or force transducer beams.
As described above, in one embodiment of the invention, each plate component110,112comprises four (4) pylon-type force transducers154disposed thereunder. Preferably, these pylon-type force transducers154are disposed near respective corners of each plate component110,112. In a preferred embodiment of the invention, each of the pylon-type force transducers includes a plurality of strain gages wired in one or more Wheatstone bridge configurations, wherein the electrical resistance of each strain gage is altered when the associated portion of the associated pylon-type force transducer undergoes deformation resulting from the load (i.e., forces and/or moments) acting on the first and second plate components110,112. For each plurality of strain gages disposed on the pylon-type force transducers, the change in the electrical resistance of the strain gages brings about a consequential change in the output voltage of the Wheatstone bridge (i.e., a quantity representative of the load being applied to the measurement surface). Thus, in one embodiment, the four (4) pylon-type force transducers154disposed under each plate component110,112output a total of three (3) analog output voltages (signals). In some embodiments, the three (3) analog output voltages from each plate component110,112are then transmitted to an analog preamplifier board170in the base assembly106for preconditioning (i.e., signals SFPO1-SFPO6inFIG.12). The preamplifier board is used to increase the magnitudes of the transducer analog output voltages. Thereafter, the analog force plate output signals SAPO1-SAPO6are transmitted from the analog preamplifier170to the programmable logic controller (PLC)172of the base assembly106. In the programmable logic controller (PLC)172, analog force plate output signals SAPO1-SAPO6are converted into forces, moments, centers of pressure (COP), and/or a center of gravity (COG) for the subject. Then, the forces, moments, centers of pressure (COP), subject center of gravity (COG), and/or sway angle for the subject computed by the programmable logic controller172are transmitted to the data acquisition/data processing device104(operator computing device104) so that they can be utilized in reports displayed to an operator OP. Also, in yet another embodiment, the preamplifier board170additionally could be used to convert the analog voltage signals into digital voltage signals (i.e., the preamplifier board170could be provided with an analog-to-digital converter). In this embodiment, digital voltage signals would be transmitted to the programmable logic controller (PLC)172rather than analog voltage signals. When the programmable logic controller172receives the voltage signals SACO1-SACO6, it initially transforms the signals into output forces and/or moments by multiplying the voltage signals SACO1-SACO6by a calibration matrix (e.g., FLz, MLx, MLy, FRz, MRx, MRy). Thereafter, the center of pressure for each foot of the subject (i.e., the x and y coordinates of the point of application of the force applied to the measurement surface by each foot) is determined by the programmable logic controller172. Referring toFIG.5, which depicts a top view of the measurement assembly102, it can be seen that the center of pressure coordinates (xPL, yPL) for the first plate component110are determined in accordance with x and y coordinate axes142,144. Similarly, the center of pressure coordinates (xPR, yPR) for the second plate component112are determined in accordance with x and y coordinate axes146,148.
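By way of illustration, the voltage-to-load transformation and per-plate center of pressure computation described above can be sketched in Python as follows. This is a minimal sketch, not code from the patent: the identity calibration matrix, the example voltage values, and the COP sign convention (x_P = −M_y/F_z, y_P = M_x/F_z) are all assumptions, since the actual calibration values and axis conventions depend on the particular force plate.

```python
import numpy as np

# Hypothetical 6x6 calibration matrix C mapping the six conditioned voltage
# signals (SACO1-SACO6) to the load vector [FLz, MLx, MLy, FRz, MRx, MRy].
# Real values would come from factory calibration of the force plate.
C = np.eye(6)  # placeholder

def loads_from_voltages(v):
    """Multiply the voltage signal vector by the calibration matrix."""
    return C @ np.asarray(v, dtype=float)

def center_of_pressure(fz, mx, my):
    """COP coordinates for one plate component (assumed sign convention):
    x_P = -M_y / F_z,  y_P = M_x / F_z."""
    return -my / fz, mx / fz

# Example conditioned voltages (hypothetical values):
flz, mlx, mly, frz, mrx, mry = loads_from_voltages(
    [0.8, 0.1, -0.05, 0.9, 0.12, -0.04])
x_pl, y_pl = center_of_pressure(flz, mlx, mly)  # first (left) plate component
x_pr, y_pr = center_of_pressure(frz, mrx, mry)  # second (right) plate component
```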
If the force transducer technology described in U.S. Pat. No. 8,544,347 is employed, it is to be understood that the center of pressure coordinates (xPL, yPL, xPR, yPR) can be computed in the particular manner described in that patent. As explained above, rather than using a measurement assembly102having first and second plate components110,112, a force measurement assembly102′ in the form of a single force plate may be employed (seeFIGS.6and7, which illustrate a single force plate). As discussed hereinbefore, the single force plate comprises a single measurement surface on which both of a subject's feet are placed during testing. As such, rather than computing two sets of center of pressure coordinates (i.e., one for each foot of the subject), the embodiments employing the single force plate compute a single set of overall center of pressure coordinates (xP, yP) in accordance with x and y coordinate axes150,152. In one exemplary embodiment, the programmable logic controller172in the base assembly106determines the vertical forces FLz, FRzexerted on the surface of the first and second force plates by the feet of the subject and the center of pressure for each foot of the subject, while in another exemplary embodiment, the output forces of the data acquisition/data processing device104include all three (3) orthogonal components of the resultant forces acting on the two plate components110,112(i.e., FLx, FLy, FLz, FRx, FRy, FRz) and all three (3) orthogonal components of the moments acting on the two plate components110,112(i.e., MLx, MLy, MLz, MRx, MRy, MRz). In yet other embodiments of the invention, the output forces and moments of the data acquisition/data processing device104can be in the form of other forces and moments as well. In the illustrated embodiment, the programmable logic controller172converts the computed center of pressure (COP) to a center of gravity (COG) for the subject using a Butterworth filter. For example, in one exemplary, non-limiting embodiment, a second-order Butterworth filter with a 0.75 Hz cutoff frequency is used. In addition, the programmable logic controller172also computes a sway angle for the subject using a corrected center of gravity (COG′) value, wherein the center of gravity (COG) value is corrected to account for the offset position of the subject relative to the origin of the coordinate axes (142,144,146,148) of the force plate assembly102. For example, the programmable logic controller172computes the sway angle for the subject in the following manner:

$$\theta = \sin^{-1}\!\left(\frac{COG'}{0.55\,h}\right) - 2.3^{\circ} \tag{1}$$

where:
θ: sway angle of the subject;
COG′: corrected center of gravity of the subject; and
h: height of the center of gravity of the subject.

Now, referring again to the block diagram ofFIG.12, the manner in which the motion of the force measurement assembly102is controlled will be explained. Initially, an operator OP inputs one or more motion commands at the operator computing device104(data acquisition/data processing device104) by utilizing one of the user input devices132,134. Once the one or more motion commands are processed by the operator computing device104, the motion command signals are transmitted to the programmable logic controller172. Then, after further processing by the programmable logic controller172, the motion command signals are transmitted to the actuator control drive174.
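As an aside, the COP-to-COG conversion and the sway angle of equation (1) above can be sketched directly. In this minimal sketch, the sampling rate, the synthetic COP trace, and the use of zero-phase filtering (scipy's filtfilt) are illustrative assumptions; the document specifies only a second-order Butterworth filter with a 0.75 Hz cutoff and the relationship of equation (1).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed force plate sampling rate in Hz

def cop_to_cog(cop, fs=FS):
    """Low-pass the COP trace with a second-order Butterworth filter
    (0.75 Hz cutoff, per the embodiment above) to estimate the COG."""
    b, a = butter(2, 0.75 / (fs / 2.0))  # normalized cutoff frequency
    return filtfilt(b, a, cop)

def sway_angle_deg(cog_corrected, h):
    """Equation (1): theta = asin(COG' / (0.55*h)) - 2.3 degrees, where h is
    the height of the subject's center of gravity."""
    return np.degrees(np.arcsin(cog_corrected / (0.55 * h))) - 2.3

# Synthetic anterior-posterior COP sway at 0.3 Hz (hypothetical data):
t = np.arange(0.0, 10.0, 1.0 / FS)
cop_y = 0.02 * np.sin(2.0 * np.pi * 0.3 * t)
theta = sway_angle_deg(cop_to_cog(cop_y), h=1.0)
```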
Finally, the actuator control drive174transmits the direct-current (DC) motion command signals to the first and second actuator assemblies158,160so that the force measurement assembly102, and the subject disposed thereon, can be displaced in the desired manner. The actuator control drive174controls the position, velocity, and torque of each actuator motor. In order to accurately control the motion of the force measurement assembly102, a closed-loop feedback control routine may be utilized by the force measurement system100. As shown inFIG.12, the actuator control drive174receives the position, velocity, and torque of each actuator motor from the encoders provided as part of each actuator assembly158,160. Then, from the actuator control drive174, the position, velocity, and torque of each actuator motor are transmitted to the programmable logic controller172, wherein the feedback control of the first and second actuator assemblies158,160is carried out. In addition, as illustrated inFIG.12, the position, velocity, and torque of each actuator motor are transmitted from the programmable logic controller172to the operator computing device104so that they are capable of being used to characterize the movement of the subject on the force measurement assembly102(e.g., the motor positional data and/or torque can be used to compute the sway of the subject). Also, the rotational and translational positional data that is received from the first and second actuator assemblies158,160can be transmitted to the operator computing device104. Next, the electrical single-line diagram ofFIG.13, which schematically illustrates the power distribution system for the base assembly106, will be explained. As shown in this figure, the building power supply is electrically coupled to an isolation transformer176(also refer toFIG.3). In one exemplary embodiment, the isolation transformer176is a medical-grade isolation transformer that isolates the electrical system of the base assembly106from the building electrical system. The isolation transformer176greatly minimizes any leakage currents from the building electrical system, which could pose a potential safety hazard to a subject standing on the metallic base assembly106. The primary winding of the isolation transformer176is electrically coupled to the building electrical system, whereas the secondary winding of the isolation transformer176is electrically coupled to the programmable logic controller172(as schematically illustrated inFIG.13). Referring again toFIG.13, it can be seen that the programmable logic controller172is electrically coupled to the actuator control drive174via an emergency stop (E-stop) switch138. As explained above, in one embodiment, the emergency stop switch138is in the form of a red pushbutton that can be easily pressed by a user of the force measurement system100(e.g., a subject on the force measurement assembly102or an operator) in order to quasi-instantaneously stop the displacement of the force measurement assembly102. Because the emergency stop switch138is designed to fail open, the emergency stop switch138is a fail-safe means of aborting the operations (e.g., the software operations) performed by the programmable logic controller172.
Thus, even if the programmable logic controller172fails, the emergency stop switch138will not fail, thereby cutting the power to the actuator control drive174so that the force measurement assembly102remains stationary (i.e., the brakes on the actuator assemblies158,160will engage, and thus, prevent any unintentional movement thereof). Also, in one embodiment, the emergency stop switch assembly138includes a reset button for re-enabling the operation of the actuator control drive174after it has been shut down by the emergency stop switch. As shown inFIG.13, the first and second actuator assemblies158,160are powered by the actuator control drive174. While not explicitly shown inFIG.13, the electrical system of the base assembly106may further include a power entry module that includes a circuit breaker (e.g., a 20 A circuit breaker) and a filter. Also, the electrical system of the base assembly106may additionally include an electromagnetic interference (EMI) filter that reduces electrical noise so as to meet the requirements of the Federal Communications Commission (FCC). Now, specific functionality of the immersive virtual reality environment of the force measurement system100will be described in detail. It is to be understood that the aforedescribed functionality of the immersive virtual reality environment of the force measurement system100can be carried out by the data acquisition/data processing device104(i.e., the operator computing device) utilizing software, hardware, or a combination of both hardware and software. For example, the data acquisition/data processing device104can be specially programmed to carry out the functionality described hereinafter. In one embodiment of the invention, the computer program instructions necessary to carry out this functionality may be loaded directly onto an internal data storage device104cof the data acquisition/data processing device104(e.g., on a hard drive thereof) and subsequently executed by the microprocessor104aof the data acquisition/data processing device104. Alternatively, these computer program instructions could be stored on a portable computer-readable medium (e.g., a flash drive, a floppy disk, a compact disk, etc.), and then subsequently loaded onto the data acquisition/data processing device104such that the instructions can be executed thereby. In one embodiment, these computer program instructions are embodied in the form of a virtual reality software program executed by the data acquisition/data processing device104. In other embodiments, these computer program instructions could be embodied in the hardware of the data acquisition/data processing device104, rather than in the software thereof. It is also possible for the computer program instructions to be embodied in a combination of both the hardware and the software. In alternative embodiments of the invention, a force measurement assembly102in the form of a static force plate (i.e., the force plate surface is stationary and is not displaced relative to the floor or ground) can be used with the immersive virtual reality environment described herein. Such a static force plate does not have any actuators or other devices that translate or rotate the force measurement surface(s) thereof. For example, as shown inFIG.52, the static force plate102′ is disposed beneath the semi-circular cutout178of the generally hemispherical projection screen168of the visual display device107.
As depicted inFIG.52, the static force plate102′ is vertically aligned with the semi-circular cutout178in the bottom portion of the generally hemispherical projection screen168(i.e., when a subject108stands on the static force plate102′, his or her legs pass through the semi-circular cutout178in the bottom portion of the generally hemispherical projection screen168so that he or she is able to become fully immersed in the simulated environment created by the scenes displayed on the screen168). As described in detail hereinafter, the data acquisition/data processing device of the force measurement system illustrated inFIG.52may be programmed to perturb the visual input of the subject108during the performance of a balance test or training routine by manipulating the scenes on the output screen168of the visual display device107. During the performance of the balance test or training routine while the subject is disposed on the static force plate102′, the data acquisition/data processing device may be further programmed to utilize the output forces and/or moments computed from the output data of the static force plate102′ in order to assess a response of the subject to the visual stimuli on the generally hemispherical projection screen168of the visual display device107. For example, to assess the response of the subject108during the performance of the balance test or training routine, the output forces and/or moments determined using the static force plate102′ may be used to determine any of the scores or parameters (i)-(viii) described below in conjunction with the embodiment illustrated inFIG.53. As described above, in one or more embodiments of the invention, one or more virtual reality scenes are projected on the generally hemispherical projection screen168of the subject visual display device107so that the visual perception of a subject can be effectively altered during a test being performed using the force measurement system100(e.g., a balance test). In order to illustrate the principles of the invention, the immersive virtual reality environment of the force measurement system100will be described in conjunction with an exemplary balance assessment protocol, namely the Sensory Organization Test (“SOT”). However, those of ordinary skill in the art will readily appreciate that the immersive virtual reality environment of the force measurement system100can be utilized with various other assessment protocols as well. For example, the force measurement system100could also include protocols, such as the Center of Gravity (“COG”) Alignment test, the Adaptation Test (“ADT”), the Limits of Stability (“LOS”) test, the Weight Bearing Squat test, the Rhythmic Weight Shift test, and the Unilateral Stance test. In addition, the immersive virtual reality environment and the displaceable force measurement assembly102of the force measurement system100can be used with various forms of training, such as closed chain training, mobility training, quick training, seated training, and weight shifting training. A brief description of each of these five categories of training will be provided hereinafter. Closed chain training requires users to specify hip, knee, ankle, or lower back for target training. The training exercises associated with closed chain training are designed to gradually increase the amount of flexibility and to increase the overall amount of difficulty. Mobility training starts with elements from seated training and progresses up through a full stepping motion.
One goal of this training series is to help a patient coordinate the sit-to-stand movement and to help the patient regain control of normal activities of daily living. Quick training is designed to meet the basic needs of training in a quick and easy-to-set-up interface. A variety of different trainings can be chosen that range from simply standing still with the cursor in the center target to figure-eight motions. Seated training is performed while in a seated position. Seated training is typically performed using a twelve (12) inch block as a base together with a four (4) inch block, a foam block, or a rocker board placed on top of the twelve (12) inch block. These training exercises help a subject or patient begin to explore their base of support as well as coordinate core stability. Weight shifting training involves leaning or moving in four different directions: forward, backward, left, or right. Combined with movements on top of different surfaces, the goal of the weight shifting training is to get people more comfortable with moving beyond their comfort levels by challenging them to hit targets placed close together initially, and then moving the targets outward toward their theoretical limits. In general, each training protocol may utilize a series of targets or markers that are displayed on the screen168of the subject visual display device107. The goal of the training is for the subject or patient108to move a displaceable visual indicator (e.g., a cursor) into the stationary targets or markers that are displayed on the screen168. For example, as shown in the screen image232ofFIG.22, the output screen168of the subject visual display device107may be divided into a first, inner screen portion234, which comprises instructional information for a subject204performing a particular test or training protocol, and a second outer screen portion236, which comprises a displaceable background or one or more virtual reality scenes that are configured to create a simulated environment for the subject204. As shown inFIG.22, a plurality of targets or markers238(e.g., in the form of circles) are displayed on the first, inner screen portion234. In addition, a displaceable visual indicator or cursor240is also displayed on the first, inner screen portion234. The data acquisition/data processing device104controls the movement of the visual indicator240towards the plurality of stationary targets or markers238by using the one or more computed numerical values determined from the output signals of the force transducers associated with the force measurement assembly102. In the illustrated embodiment, the first, inner screen portion234is provided with a plain white background. In an illustrative embodiment, the one or more numerical values determined from the output signals of the force transducers associated with the force measurement assembly102,102′ comprise the center of pressure coordinates (xP, yP) computed from the ground reaction forces exerted on the force plate assembly102by the subject. For example, with reference to the force plate coordinate axes150,152ofFIG.7, when a subject leans to the left on the force measurement assembly102′ (i.e., when the x-coordinate xPof the center of pressure is positive), the cursor240displayed on the inner screen portion234is displaced to the left. Conversely, when a subject leans to the right on the force measurement assembly102′ (i.e., when the x-coordinate xPof the center of pressure is negative inFIG.7), the cursor240on the inner screen portion234is displaced to the right.
When a subject leans forward on the force measurement assembly102′ (i.e., when the y-coordinate yPof the center of pressure is positive inFIG.7), the cursor240displayed on the inner screen portion234is upwardly displaced. Conversely, when a subject leans backward on the force measurement assembly102′ (i.e., when the y-coordinate yPof the center of pressure is negative inFIG.7), the cursor240displayed on the inner screen portion234is downwardly displaced (a sketch of this center-of-pressure-to-cursor mapping appears after this paragraph). In the training scenario illustrated inFIG.22, the subject204may be instructed to move the cursor240towards each of the plurality of targets or markers238in succession. For example, the subject204may be instructed to move the cursor240towards successive targets238in a clockwise fashion (e.g., beginning with the topmost target238on the first, inner screen portion234). As illustrated inFIG.22, a virtual reality scenario is displayed on the second outer screen portion236of the output screen168of the subject visual display device107. The virtual reality scenario inFIG.22comprises a three-dimensional checkerboard room. As shown inFIG.22, the three-dimensional checkerboard room comprises a plurality of three-dimensional boxes or blocks205in order to give the subject204a frame of reference for perceiving the depth of the room (i.e., the boxes or blocks205enhance the depth perception of the subject204with regard to the virtual room). In one or more embodiments, the data acquisition/data processing device104is configured to generate a motion profile for the selective displacement of the virtual reality scenario. The data acquisition/data processing device104may generate the motion profile for the virtual reality scenario (e.g., the three-dimensional checkerboard room) in accordance with any one of: (i) a movement of the subject on the force measurement assembly (e.g., by using the center of pressure coordinates (xP, yP)), (ii) a displacement of the force measurement assembly102by one or more actuators (e.g., by using the motor positional data and/or torque from the first and second actuator assemblies158,160), and (iii) a predetermined velocity set by a system user (e.g., the virtual reality scenario may be displaced inwardly at a predetermined velocity, such as 5 meters per second). In an alternative embodiment, the subject could be instructed to adapt to a pseudorandom movement of the displaceable force measurement assembly102and/or the pseudorandom movement of the virtual reality scenario. Displacing the virtual reality scene inwardly on the visual display device107inhibits the sensory ability, namely the visual flow, of the subject by creating artificial visual inputs that he or she must differentiate from his or her actual surroundings. In other embodiments, rather than comprising a virtual reality scenario, the second outer screen portion236of the output screen168may comprise a displaceable background (e.g., a background comprising a plurality of dots). For either the virtual reality scenario or the displaceable background, the data acquisition/data processing device104is configured to displace the image displayed on the outer screen portion236using a plurality of different motion profiles. For example, when a displaceable background is displayed in the outer screen portion236, the displaceable background may be displaced, or scrolled, left-to-right, right-to-left, top-to-bottom, or bottom-to-top on the output screen168.
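The center-of-pressure-to-cursor mapping just described can be sketched as follows. Only the sign conventions are taken from the text andFIG.7; the screen resolution and scale factor are illustrative assumptions.

```python
def cop_to_cursor(x_p, y_p, scale=1000.0, center=(960, 540)):
    """Map COP coordinates (x_p, y_p) to a cursor position on the inner
    screen portion. Per the conventions described above: a positive x_p
    (lean left) moves the cursor left, and a positive y_p (lean forward)
    moves it up. The scale factor and 1920x1080 screen center are assumed."""
    cx, cy = center
    # Positive x_p displaces the cursor left, so subtract along screen x;
    # screen y grows downward, so a positive y_p (forward lean) also subtracts.
    return cx - scale * x_p, cy - scale * y_p

# Example: a slight lean to the left and backward.
cursor_x, cursor_y = cop_to_cursor(0.01, -0.005)
```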
In addition, the data acquisition/data processing device104may be configured to rotate the displaceable background about a central axis in any of the pitch, roll, or yaw direction (i.e., an axis passing centrally through the output screen168, such as along radius line R1inFIG.10, rotation in the roll direction). Moreover, the data acquisition/data processing device104may be configured to adjust the position of the central axis, about which the displaceable background rotates, based upon a subject height input value so that the central axis is approximately disposed at the eye level of the subject. It is to be understood that any of these motion profiles described in conjunction with the displaceable background also can be applied to the virtual reality scenario by the data acquisition/data processing device104. Preferably, the data acquisition/data processing device104is also specially programmed so as to enable a system user (e.g., a clinician) to selectively choose the manner in which the displaceable background is displaced during the training routines (i.e., the data acquisition/data processing device104is preferably provided with various setup options that allow the clinician to determine how the displaceable background moves during the training routines described herein). Also, preferably, the data acquisition/data processing device104is specially programmed so as to enable a system user (e.g., a clinician) to selectively choose from a plurality of different displaceable backgrounds that can be interchangeably used during various training routines. InFIG.22, the inner screen portion234is depicted with a border242, which separates the inner screen portion234from the second outer screen portion236. However, it is to be understood that, in other embodiments of the invention, the border242may be omitted. Also, in some embodiments, the data acquisition/data processing device104is specially programmed so as to enable a system user (e.g., a clinician) to selectively choose whether or not the inner screen portion234with the patient instructional information displayed thereon is displaced in accordance with the movement of the subject on the displaceable force measurement assembly102. In another embodiment, with reference toFIG.23, it can be seen that only instructional information for a subject204may be displayed on the output screen168of the subject visual display device107. As shown in this figure, a plurality of enlarged targets or markers238′ (e.g., in the form of circles) and an enlarged displaceable visual indicator or cursor240′ are displayed on the output screen168. As described above, during the execution of the training protocol, the subject204is instructed to move the cursor240′ into each of the plurality of targets or markers238′ in successive progression. InFIG.23, the plurality of targets or markers238′ and the displaceable visual indicator or cursor240′ are displayed on a plain white background that is not moving. In yet another embodiment, the configuration of the output screen168of the subject visual display device107could be similar to that which is depicted inFIG.22, except that the plurality of targets or markers238and the displaceable visual indicator or cursor240could be superimposed directly on the displaceable background, rather than being separated therefrom in the inner screen portion234.
Thus, unlike inFIG.22, where the targets238and the cursor240are superimposed on a plain white background, the targets238and the cursor240would be displayed directly on the displaceable background. In some scenarios, the targets238would be stationary on the output screen168of the subject visual display device107, while in other scenarios, the targets238could be displaced on the output screen168of the subject visual display device107while the subject204is undergoing training. In still another embodiment, the data acquisition/data processing device104is specially programmed with a plurality of options that can be changed in order to control the level of difficulty of the training. These options may include: (i) pacing (i.e., how fast a patient must move from target238,238′ to target238,238′), (ii) the percentage of limits of stability (which changes the spacing of the targets238,238′), (iii) percent weight bearing (i.e., the targets238,238′ can be adjusted in accordance with the percentage of a subject's weight that is typically placed on his or her right leg as compared to his or her left leg so as to customize the training for a particular subject, i.e., to account for a disability, etc.), and (iv) accessories that can be placed on the plate surface (i.e., type of surface to stand on (e.g., solid or foam), size of box to step on/over, etc.). In addition, the data acquisition/data processing device104can be specially programmed to adjust the magnitude of the response of the displaceable force measurement assembly102and the virtual reality environment on the screen168. For example, while a subject is undergoing training on the system100, the displacement of the virtual reality environment on the screen168could be set to a predetermined higher or lower speed. Similarly, the speed of rotation and/or translation of the displaceable force measurement assembly102could be set to a predetermined higher or lower speed. In yet another embodiment, the data acquisition/data processing device104is configured to compute the vertical forces FLz, FRzexerted on the surface of the first and second force plate components110,112by the respective feet of the subject, or alternatively, to receive these computed values for FLz, FRzfrom the programmable logic controller172. In this embodiment, with reference to the screen image258ofFIG.24, first and second displaceable visual indicators (e.g., in the form of adjacent displaceable bars260,262) are displayed on the output screen168of the subject visual display device107. As shown inFIG.24, the first displaceable bar260represents the percentage of a subject's total body weight that is disposed on his or her left leg, whereas the second displaceable bar262represents the percentage of a subject's total body weight that is disposed on his or her right leg. InFIG.24, because this is a black-and-white image, the different colors (e.g., red and green) of the displaceable bars260,262are indicated through the use of different hatching patterns (i.e., displaceable bar260is denoted using a crisscross type hatching pattern, whereas displaceable bar262is denoted using a diagonal hatching pattern). The target percentage line264inFIG.24(e.g., a line disposed at 60% of total body weight) gives the subject204a goal for maintaining a certain percentage of his body weight on a prescribed leg during the performance of a particular task. For example, the subject204may be instructed to move from a sitting position to a standing position while being disposed on the dual force measurement assembly102.
While performing the sit-to-stand task, the subject204is instructed to maintain approximately 60% of his or her total body weight on his or her left leg, or alternatively, maintain approximately 60% of his or her total body weight on his or her right leg. During the performance of this task, the data acquisition/data processing device104controls the respective positions of the displaceable bars260,262using the computed values for the vertical forces FLz, FRz(i.e., the bars260,262are displayed on the output screen168in accordance with the values for the vertical forces FLz, FRzdetermined over the time period of the sit-to-stand task). If the subject is able to maintain approximately the percentage weight goal on his or her prescribed leg, then the displaceable bar260,262for that leg (either right or left) will continually oscillate in close proximity to the target percentage line264. For the left bar260displayed inFIG.24, the percentage weight for the left leg is computed as follows:

$$\%W_{L} = \left(\frac{F_{ZL}}{F_{ZL}+F_{ZR}}\right) \times 100\% \tag{2}$$

where:
% WL: percentage of total body weight disposed on subject's left leg;
FZL: vertical force on subject's left leg (e.g., in Newtons); and
FZR: vertical force on subject's right leg (e.g., in Newtons).

For the right bar262displayed inFIG.24, the percentage weight for the right leg is computed as follows:

$$\%W_{R} = \left(\frac{F_{ZR}}{F_{ZL}+F_{ZR}}\right) \times 100\% \tag{3}$$

where:
% WR: percentage of total body weight disposed on subject's right leg;
FZL: vertical force on subject's left leg (e.g., in Newtons); and
FZR: vertical force on subject's right leg (e.g., in Newtons).

People maintain their upright posture and balance using inputs from somatosensory, vestibular and visual systems. In addition, individuals also rely upon inputs from their somatosensory, vestibular and visual systems to maintain balance when in other positions, such as seated and kneeling positions. During normal daily activity, where dynamic balance is to be maintained, other factors also matter. These factors are visual acuity, reaction time, and muscle strength. Visual acuity is important to see a potential danger. Reaction time and muscle strength are important to be able to recover from a potential fall. During the performance of the Sensory Organization Test (“SOT”), certain sensory inputs are taken away from the subject in order to determine which sensory systems are deficient or to determine if the subject is relying too much on one or more of the sensory systems. For example, the performance of the SOT protocol allows one to determine how much a subject is relying upon visual feedback for maintaining his or her balance. In one embodiment, the SOT protocol comprises six conditions under which a subject is tested (i.e., six test stages). In accordance with the first sensory condition, a subject simply stands in a stationary, upright position on the force plate assembly102with his or her eyes open. During the first condition, a stationary virtual reality scene is projected on the generally hemispherical projection screen168of the subject visual display device107, and the force plate assembly102is maintained in a stationary position. For example, the virtual reality scene displayed on the generally hemispherical projection screen168may comprise a checkerboard-type enclosure or room (e.g., seeFIG.14), or some other appropriate scene with nearfield objects (e.g., boxes or blocks205).
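Stepping back briefly to equations (2) and (3) above, the weight-bearing percentages lend themselves to a direct implementation. The force values in this sketch are hypothetical; the 60% target mirrors the sit-to-stand example described in connection withFIG.24.

```python
def weight_bearing_percentages(f_zl, f_zr):
    """Equations (2) and (3): percentage of total body weight on each leg,
    computed from the vertical forces (in Newtons) on the two plate
    components."""
    total = f_zl + f_zr
    pct_left = (f_zl / total) * 100.0
    pct_right = (f_zr / total) * 100.0
    return pct_left, pct_right

# Example: a subject asked to keep ~60% of body weight on the left leg.
pct_l, pct_r = weight_bearing_percentages(f_zl=480.0, f_zr=320.0)  # 60.0, 40.0
on_target = abs(pct_l - 60.0) < 5.0  # hypothetical tolerance around the target line
```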
In the illustrated embodiment, the virtual reality scene is in the form of a three-dimensional image, and the nature of the scene will remain consistent throughout the performance of the SOT protocol. As shown in the screen image200ofFIG.14, a subject204is disposed in an immersive virtual reality environment202comprising a three-dimensional checkerboard room. As shown inFIG.14, the three-dimensional checkerboard room comprises a plurality of three-dimensional boxes or blocks205in order to give the subject204a frame of reference for perceiving the depth of the room (i.e., the boxes or blocks205enhance the depth perception of the subject204with regard to the virtual room). In accordance with the second sensory condition of the SOT protocol, the subject is blindfolded so that he or she is unable to see the surrounding environment. Similar to the first condition, the force plate assembly102is maintained in a stationary position during the second condition of the SOT test. By blindfolding the subject, the second condition of the SOT effectively removes the visual feedback of the subject. During the third condition of the SOT protocol, like the first and second conditions, the force plate assembly102remains in a stationary position. However, in accordance with the third sensory condition of the test, the virtual reality scene displayed on the generally hemispherical projection screen168is moved in sync with the sway angle of the subject disposed on the force plate assembly102. For example, when the subject leans forward on the force plate assembly102, the virtual reality scene displayed on the screen168is altered so as to appear to the subject to be inwardly displaced on the output screen168. Conversely, when the subject leans backward on the force plate assembly102, the virtual reality scene is adjusted so as to appear to the subject to be outwardly displaced on the screen168. As in the first condition, the eyes of the subject remain open during the third condition of the SOT protocol. In accordance with the fourth sensory condition of the SOT protocol, the force plate assembly102and the subject disposed thereon are displaced (e.g., rotated), while the eyes of the subject remain open. The force plate assembly102is displaced according to the measured sway angle of the subject (i.e., the rotation of the force plate assembly102is synchronized with the computed sway angle of the subject). During the fourth condition, similar to the first condition, a stationary virtual reality scene is projected on the generally hemispherical projection screen168of the subject visual display device107. During the fifth condition of the SOT protocol, like the second condition thereof, the subject is blindfolded so that he or she is unable to see the surrounding environment. However, unlike during the second condition, the force plate assembly102does not remain stationary; rather, the force plate assembly102and the subject disposed thereon are displaced (e.g., rotated). As in the fourth condition, the force plate assembly102is displaced according to the measured sway angle of the subject (i.e., the rotation of the force plate assembly102is synchronized with the computed sway angle of the subject). As was described above for the second condition of the SOT protocol, by blindfolding the subject, the fifth condition of the SOT test effectively removes the visual feedback of the subject.
Lastly, during the sixth sensory condition of the SOT protocol, like the fourth and fifth conditions, the force plate assembly102and the subject disposed thereon are displaced (e.g., rotated). However, in accordance with the sixth sensory condition of the test, the virtual reality scene displayed on the generally hemispherical projection screen168is also moved in sync with the sway angle of the subject disposed on the force plate assembly102. As previously described for the fourth and fifth conditions, the displacement of the force plate assembly102is governed by the measured sway angle of the subject (i.e., the rotation of the force plate assembly102is synchronized with the computed sway angle of the subject). In an exemplary embodiment, when the subject is forwardly displaced on the force plate assembly102during the sixth condition of the SOT protocol, the virtual reality scene displayed on the screen168is altered so as to appear to the subject to be inwardly displaced on the output screen168. Conversely, when the subject is rearwardly displaced on the force plate assembly102, the virtual reality scene is adjusted so as to appear to the subject to be outwardly displaced on the screen168. As in the fourth condition, the eyes of the subject remain open during the sixth condition of the SOT protocol. During the performance of the SOT protocol, the scene or screen image displayed on the generally hemispherical projection screen168may also comprise one of the images illustrated inFIGS.47and48. Initially, turning toFIG.47, it can be seen that the screen image276on the hemispherical projection screen168comprises a plurality of substantially concentric bands278that are configured to create a disorienting visual stimulus for the subject disposed on the force measurement assembly102. The specially programmed data acquisition/data processing device104of the force measurement system100generates the screen image276ofFIG.47, and the projector164″ projects the generated image onto the screen168. As depicted inFIG.47, it can be seen that each of the plurality of substantially concentric bands278comprises blurred edge portions without any clearly defined boundary lines so that the subject is unable to establish a particular focal point of reference on the output screen168. In other words, each band or ring278comprises a gradient-type edge portion280that is very diffuse in nature, and does not comprise any hard line transitions. As shown inFIG.47, the plurality of substantially concentric bands278generated by the specially programmed data acquisition/data processing device104of the force measurement system100are three-dimensionally arranged on the screen168so as to create a three-dimensional tunnel effect for the subject. Advantageously, because the blurred or gradient-type edge portions280of the bands or rings278do not include any clearly defined boundary lines or fixation points, the subject is unable to establish a particular focal point of reference on the output screen168. When a subject is performing conditions three and six of the SOT protocol, it is desired that the subject believe that he or she is not moving, when in fact, he or she is actually moving on the surface of the force measurement assembly102.
Advantageously, the absence of all hard lines and defined points in the screen image276ofFIG.47eliminates the frame of reference that the subject would otherwise utilize to visually detect their movement on the force measurement assembly102, and thus greatly enhances the effectiveness of the SOT protocol. The screen image276ofFIG.47enhances the effectiveness of the SOT protocol by precluding the visual input normally available from a defined reference point or line on the screen in front of the subject. While each of the bands or rings278in the screen image276ofFIG.47is generally circular in shape, it is to be understood that the invention is not so limited. Rather, other suitable shapes may be used for the bands or rings278as well. For example, in other embodiments of the invention, the bands or rings generated by the one or more data processing devices, and projected on the hemispherical projection screen168, may comprise one or more of the following other configurations: (i) a plurality of elliptical concentric bands or rings, (ii) a plurality of rectangular concentric bands or rings, (iii) a plurality of square concentric bands or rings, and (iv) a plurality of concentric bands or rings having generally straight side portions with rounded corner portions. In an exemplary embodiment, when the plurality of concentric bands or rings have generally straight side portions with rounded corner portions, the straight side portion of each band or ring may comprise one-third (⅓) of its overall height or width, while the radius of each rounded corner portion may comprise one-third (⅓) of its overall height or width. As such, in the exemplary embodiment, the straight side portion of each band or ring comprises one-third (⅓) of its overall height or width, the first rounded corner portion comprises one-third (⅓) of its overall height or width, and the second rounded corner portion comprises the remaining one-third (⅓) of its overall height or width. However, in other embodiments, it is to be understood that the straight side portions and the rounded corner portions may comprise other suitable ratios of the overall height or width of the bands or rings. Next, turning toFIG.48, it can be seen that the screen image276′ on the hemispherical projection screen168comprises a plurality of substantially concentric bands278′, which are similar in many respects to those depicted inFIG.47, except that the bands278′ inFIG.48have a different overall shape. In particular, the plurality of concentric bands or rings278′ inFIG.48are generally elliptical or oval in shape, rather than circular in shape. As was described above for the image276ofFIG.47, the data acquisition/data processing device104of the force measurement system100is specially programmed to generate the screen image276′ ofFIG.48, and the projector164″ projects the generated image onto the screen168. Like the bands or rings278ofFIG.47, the concentric bands or rings278′ ofFIG.48also comprise gradient-type edge portions280′ without any clearly defined boundary lines so that the subject is unable to establish a particular focal point of reference on the output screen168. In addition, the substantially concentric bands278′ ofFIG.48are three-dimensionally arranged on the screen168so as to create a three-dimensional tunnel effect for the subject.
As was explained above for the screen image ofFIG.47, the absence of all hard lines and defined points in the screen image276′ ofFIG.48eliminates the frame of reference that the subject would otherwise utilize to visually detect their movement on the force measurement assembly102, and thus greatly enhances the effectiveness of the SOT protocol. In some embodiments, the concentric bands278′ inFIG.48are provided with an elliptical or oval shape in order to emulate a passageway of a building (i.e., so the subject viewing the screen image276′ has the illusion that he or she is traveling down a hallway or passageway of a building). Also, in some embodiments, the screen image276′ ofFIG.48, as well as the screen image276ofFIG.47, is rotated about a central axis in any one of the pitch, roll, or yaw direction (i.e., an axis passing centrally through the output screen168, such as along radius line R1inFIG.10, rotation in the roll direction) during the performance of the SOT protocol. The movement of the screen image276,276′ may also be synchronized with the movement of the subject such that the screen image276,276′ is rotated and inwardly or outwardly displaced on the screen in sync with a sway angle of the subject (e.g., when the subject leans forward on the force plate, the image is rotated downwardly and displaced into the screen, and is oppositely displaced when the subject leans backward on the force plate). Referring again toFIG.48, it can be seen that the screen image276′ also comprises a horizontal line282disposed laterally across the plurality of concentric bands or rings278′. That is, the horizontal line282laterally intersects the plurality of concentric bands or rings278′. As shown inFIG.48, the horizontal line282is disposed closer to the top of each elliptically-shaped band or ring278′ than to the bottom of each elliptically-shaped band or ring278′, so as to be generally aligned with the line of sight of a subject of average height. In one or more embodiments, the horizontal line282is utilized as a visual reference line for the subject during the performance of conditions one, two, four, and five of the SOT protocol (i.e., the conditions of the SOT protocol during which the screen image on the hemispherical projection screen168is not displaced). In these one or more embodiments, the horizontal line282is configured to be selectively turned on and off by the specially programmed data acquisition/data processing device104of the force measurement system100so that it is capable of being displayed during conditions one, two, four, and five of the SOT protocol (i.e., when the screen image is not displaced), but then turned off during conditions three and six of the SOT protocol (i.e., when the screen image is displaced). In addition, while the screen images276,276′ ofFIGS.47and48are particularly suitable for use in the SOT protocol, it is to be understood that these screen images276,276′ may be utilized in conjunction with other balance and postural stability tests and protocols as well. For example, the screen images276,276′ ofFIGS.47and48may also be employed in the Adaptation Test (“ADT”) and the Motor Control Test (“MCT”) while testing the balance of a subject or patient.
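For illustration, a gradient-edged concentric-band image of the general kind shown inFIGS.47and48can be generated procedurally. The sketch below, based on a radial cosine profile, is one possible construction and is not taken from the patent; the resolution, band count, and shaping exponent are arbitrary choices, and scaling one axis of the radius would yield the elliptical bands ofFIG.48.

```python
import numpy as np

def ring_stimulus(size=512, n_rings=6, exponent=3.0):
    """Render a grayscale image of concentric bands whose intensity varies
    smoothly in radius, so the band edges are gradient-like with no hard
    boundary lines; all parameter values here are illustrative."""
    y, x = np.mgrid[-1.0:1.0:size * 1j, -1.0:1.0:size * 1j]
    r = np.hypot(x, y)  # use np.hypot(x / 1.3, y) for elliptical bands
    # A radial cosine gives evenly spaced bands; raising the 0..1 profile
    # to a power narrows the bright bands while keeping the falloff smooth.
    profile = 0.5 * (1.0 + np.cos(2.0 * np.pi * n_rings * r))
    return profile ** exponent

image = ring_stimulus()  # values in [0, 1], ready for a projector pipeline
```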
In another embodiment, the data acquisition/data processing device104of the force measurement system100may be specially programmed to execute a modified version of the SOT protocol wherein two or more of the six conditions of the protocol are combined with one another so as to more accurately and efficiently perform the test. The SOT protocol described above comprises six separate conditions (i.e., discrete sub-tests), each of which is performed for a predetermined period of time (e.g., each of the six conditions of the SOT protocol may be performed for approximately 20 seconds). In the modified version of the SOT protocol that will be described hereinafter, the displacement velocity (e.g., gain) of the screen background image is gradually incremented over time by the data acquisition/data processing device104from an initial static condition to a final dynamic condition. At the final dynamic condition, the screen image on the hemispherical projection screen168is displaced at a velocity that is equal to, or greater than, the subject's movement (e.g., the displacement velocity of the screen image may be synchronized with the subject's computed sway angle velocity or it may be greater than the subject's sway angle velocity). Rather than the plurality of discrete conditions or sub-tests that are utilized in the SOT protocol explained above, the modified version of the SOT protocol that will be described hereinafter utilizes a continuum of different test conditions that continually vary over the time duration of the SOT protocol. Advantageously, the modified protocol allows the SOT testing to be performed more efficiently because, if it is determined that the subject is successfully performing the protocol as the velocity of the screen image is continually increased, the testing can simply be stopped after a shorter period (i.e., the test may be stopped after 20 seconds, 30 seconds, etc.). The modified version of the SOT protocol may initially comprise generating, by using the specially programmed data acquisition/data processing device104of the force measurement system100, one or more screen images (e.g., screen images276,276′) that are displayed on the hemispherical projection screen168. At the commencement of the SOT protocol, the one or more screen images are displayed on the output screen168in an initial static condition wherein the one or more images are generally stationary (i.e., the images are not moving on the screen168). Subsequent to the initial static condition, the one or more images are displaced on the output screen168in accordance with a velocity value that incrementally increases over time as the SOT protocol progresses (e.g., the velocity value of the one or more images may be incremented by a predetermined amount every five (5) seconds so that the velocity value gets progressively higher as the SOT protocol progresses). The displacement of the one or more images on the output screen168may comprise, for example, left-to-right displacement, right-to-left displacement, top-to-bottom displacement, and/or bottom-to-top displacement on the screen168. The displacement of the one or more images on the output screen168also may comprise displacements corresponding to: (i) a medial-lateral direction of the subject, (ii) an anterior-posterior direction of the subject, and/or (iii) a superior-inferior direction of the subject.
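The incrementally increasing velocity schedule just described can be sketched as a simple function of elapsed time. The step size, the increment interval, and the velocity cap in this sketch are assumed values; the document states only that the velocity value is incremented by a predetermined amount (e.g., every five (5) seconds), starting from an initial static condition.

```python
def image_velocity(t, step=0.05, interval=5.0, v_max=1.0):
    """Displacement velocity of the screen image during the modified SOT
    protocol: zero during the initial static condition, then incremented
    by `step` every `interval` seconds, capped at `v_max`. Units are
    arbitrary here (e.g., meters per second or degrees per second)."""
    return min(v_max, step * int(t // interval))

# Velocity schedule over a 30-second trial, sampled once per second:
schedule = [image_velocity(t) for t in range(31)]  # 0.0 for t < 5, then ramps
```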
In addition, the data acquisition/data processing device104of the force measurement system100may be specially programmed to rotate the displaceable image(s) about a central axis in any of the pitch, roll, or yaw direction (i.e., an axis passing centrally through the output screen168, such as along radius line R1inFIG.10, rotation in the roll direction) during the SOT protocol. As such, the image velocity value that is incrementally increased during the modified SOT protocol may comprise a linear velocity value or an angular velocity value. While the modified SOT protocol is being performed, and the screen image is gradually being displaced at an incrementally higher velocity, the performance of the subject or patient is evaluated in order to assess a postural stability of the subject (e.g., by evaluating the subject's center-of-pressure, center-of-gravity, and/or sway angle over time). As an alternative to displacing one or more images on the screen168during the performance of the modified SOT protocol, it is to be understood that a displaceable visual surround device, which at least partially circumscribes three sides of a subject (i.e., three sides of a torso of a subject), may be employed. In particular, with reference toFIG.46, the visual surround device191may comprise a visual surround portion192that is displaceable by means of one or more actuators in an actuator device193that are operatively coupled to the specially programmed data acquisition/data processing device104of the force measurement system100. In this alternative embodiment, the one or more data processing devices generate a motion profile for the visual surround device191, rather than generating a motion profile for a displaceable image on the screen168. At the commencement of the SOT protocol, the visual surround portion192of the visual surround device191is maintained in an initial static condition wherein the visual surround portion192is generally stationary. Subsequent to the initial static condition, the visual surround portion192is displaced by the one or more actuators in the actuator device193in accordance with a velocity value that incrementally increases over time as the SOT protocol progresses (e.g., the displacement velocity value of the visual surround portion192may be incremented by a predetermined amount every five (5) seconds so that the velocity value gets progressively higher as the SOT protocol progresses). Similar to that which was described above for the displaceable screen images, the displacement of the visual surround portion192may comprise, for example, left-to-right displacement, right-to-left displacement, top-to-bottom displacement, and/or bottom-to-top displacement, depending on the quantity and the placement of the actuators. The displacement of the visual surround portion192also may comprise displacements corresponding to: (i) a medial-lateral direction of the subject, (ii) an anterior-posterior direction of the subject, and/or (iii) a superior-inferior direction of the subject. In addition, the data acquisition/data processing device104of the force measurement system100may be specially programmed to rotate the visual surround portion192about a central axis in any of the pitch, roll, or yaw direction (i.e., an axis passing centrally through the visual surround portion192, rotation in the roll direction) during the SOT protocol.
As such, as was described above for the displaceable screen images, the velocity value of the visual surround portion192that is incrementally increased during the modified SOT protocol may comprise a linear velocity value or an angular velocity value. While the modified SOT protocol is being performed, and the visual surround portion192is gradually being displaced at an incrementally higher velocity, the performance of the subject or patient is evaluated in order to assess a postural stability of the subject (e.g., by evaluating the subject's center-of-pressure, center-of-gravity, and/or sway angle over time). Referring again toFIG.46, it can be seen that, in the illustrated embodiment, the visual surround portion192is rotationally displaced (i.e., as indicated by curved arrows194) about a transverse horizontal axis TAVS. Similar to that described above for the displacement of the screen image, the displacement velocity (e.g., gain) of the visual surround portion192may be gradually incremented over time from an initial static condition to a final dynamic condition. At the final dynamic condition, the visual surround portion192is displaced at a velocity that is equal to, or greater than, the subject's movement (e.g., the displacement velocity of the visual surround portion192may be synchronized with the subject's computed sway angle velocity or it may be greater than the subject's sway angle velocity). For example, in the illustrated embodiment ofFIG.46, the angular velocity of the visual surround portion192about the transverse axis TAVSmay be equal to the subject's sway angle velocity at the final dynamic condition. In the modified SOT protocol, the force measurement assembly102, and the subject disposed thereon, may be incrementally displaced in a manner similar to that described above for the screen image on the screen168and the visual surround portion192. That is, the second actuator assembly160may be used to rotate the force measurement assembly102, and the subject disposed thereon, at an incrementally higher angular velocity during the modified SOT protocol (seeFIG.2). At the commencement of the SOT protocol, the force measurement assembly102, and the subject disposed thereon, are maintained in an initial static condition wherein the force measurement assembly102and the subject are stationary. Subsequent to the initial static condition, the force measurement assembly102, and the subject disposed thereon, are rotated in accordance with a velocity value that incrementally increases over time as the SOT protocol progresses (e.g., the angular velocity value of the force measurement assembly102and the subject may be incremented by a predetermined amount every five (5) seconds so that the velocity value gets progressively higher as the SOT protocol progresses). While the modified SOT protocol is being performed, and the force measurement assembly102carrying the subject is gradually being displaced at an incrementally higher angular velocity, the performance of the subject or patient is evaluated in order to assess a postural stability of the subject (e.g., by evaluating the subject's center-of-pressure, center-of-gravity, and/or sway angle over time). Similar to that described above for the displacement of the screen image and the visual surround portion192, the rotational velocity (e.g., gain) of the force measurement assembly102carrying the subject may be gradually incremented over time from an initial static condition to a final dynamic condition.
At the final dynamic condition, the force measurement assembly 102 carrying the subject is displaced at an angular velocity that is equal to, or greater than, the velocity of the subject's movement (e.g., the angular displacement velocity of the force measurement assembly 102 may be synchronized with the subject's computed sway angle velocity or it may be greater than the subject's sway angle velocity). During the performance of the modified SOT protocol, the displacement velocity of either the screen image or the visual surround portion 192 may be generally equal to the angular displacement velocity of the force measurement assembly 102 and the subject (i.e., the force measurement assembly 102 and the subject disposed thereon may be displaced using an angular velocity that is synchronized with the displacement velocity of either the screen image or the visual surround portion 192). Advantageously, because the velocity of the screen image, the visual surround portion 192, and/or the force measurement assembly 102 carrying the subject is incrementally increased during the performance of the modified SOT protocol, the SOT protocol is capable of being performed in a far more efficient manner. That is, rather than laboriously executing six separate conditions for fixed time durations, the protocol conditions are combined with one another so that it takes far less time to determine the actual performance of the subject. In one or more embodiments, the data acquisition/data processing device 104 of the force measurement system 100 may be specially programmed to determine the time duration of the SOT protocol in accordance with a quasi real-time assessment of the postural stability of the subject (e.g., by evaluating the subject's center-of-pressure, center-of-gravity, and/or sway angle over time). As such, when an accurate assessment of the subject's performance is obtained, the modified SOT protocol is simply concluded. As explained above, the modified version of the SOT protocol combines two or more conditions with one another so as to more accurately and efficiently perform the test. For example, the first SOT condition described above may be combined with the third SOT condition. Similarly, the fourth SOT condition described above may be combined with the sixth SOT condition, while the second condition (blindfolded subject, stationary force measurement assembly 102) may be combined with the fifth SOT condition (blindfolded subject, displaced force measurement assembly 102). Also, in some embodiments, three conditions of the SOT protocol may be combined with one another. For example, the first, third, and sixth conditions of the SOT protocol may be combined with one another so that either the displacement velocity of the screen image or the visual surround portion 192 is simultaneously incremented together with the angular displacement velocity of the force measurement assembly 102 and the subject disposed thereon. In one exemplary embodiment of the modified SOT protocol, the combined first, third, and sixth conditions of the SOT protocol may be performed initially, and then the combined second and fifth conditions may be performed thereafter. Also, because the subject's performance is evaluated in quasi real-time during the performance of the modified SOT protocol, the data acquisition/data processing device 104 of the force measurement system 100 may be specially programmed so as to automatically combine different sequences of conditions with one another based upon the subject's performance during the modified SOT protocol.
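As an illustration of the quasi real-time assessment described above, the following Python sketch concludes the protocol as soon as a rolling estimate of the subject's sway variability stabilizes. The sampling callable, the window size, and the convergence tolerance are all hypothetical; the passage does not specify the actual stopping criterion implemented in the data acquisition/data processing device 104.

```python
import statistics

def run_modified_sot(sample_sway, window=100, tolerance=0.05, max_samples=6000):
    """Illustrative early-termination loop for the modified SOT protocol.

    sample_sway: hypothetical callable returning the subject's current sway
    angle (or COP excursion) each time it is called. The protocol concludes
    once the rolling postural-stability estimate converges, rather than
    running six fixed-duration conditions.
    """
    samples, previous, estimate = [], None, None
    while len(samples) < max_samples:
        samples.append(sample_sway())
        if len(samples) % window == 0:
            # Sway variability over the most recent window serves as the
            # (assumed) postural-stability estimate.
            estimate = statistics.pstdev(samples[-window:])
            if (previous is not None and previous > 0
                    and abs(estimate - previous) / previous < tolerance):
                break  # assessment has stabilized; conclude the test early
            previous = estimate
    return estimate
```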
Also, the data acquisition/data processing device 104 of the force measurement system 100 may be specially programmed so as to allow the system operator (e.g., a clinician) to select different conditions to be combined with one another while the subject or patient is executing the modified SOT protocol. That is, the system operator may monitor the subject's performance during the execution of the modified SOT protocol, and then select a particular sequence of conditions based upon that performance. During the performance of the modified SOT protocol, it is to be understood that the screen images 276, 276′, which were described in conjunction with FIGS. 47 and 48 above, may be displayed on the generally hemispherical projection screen 168 while one or more of the conditions are being performed. For example, the screen images 276, 276′ of FIGS. 47 and 48 may be utilized during the performance of any combination of the first, third, fourth, and sixth conditions. Also, in one or more alternative embodiments, during the performance of the balance test (e.g., the SOT protocol or modified SOT protocol described above), the subject or patient 108 may be outfitted with augmented reality glasses (i.e., reality-altering glasses) in order to perturb the subject's visual input during the performance of the balance test. For example, in these embodiments, the screen images 276, 276′ of FIGS. 47 and 48 may be displayed using the augmented reality glasses (i.e., reality-altering glasses), rather than on the generally hemispherical projection screen 168. The augmented reality glasses may comprise one or more high-resolution liquid-crystal displays (LCDs) (e.g., two (2) LCD displays) that generate a large virtual screen image (e.g., a 60″ to 80″ virtual screen viewed from 10 feet) for the user, wherein the augmented reality glasses are capable of displaying both two-dimensional (2D) and three-dimensional (3D) video for the user thereof. The augmented reality glasses may further comprise one or more discrete video graphics array (VGA) cameras (e.g., two (2) VGA cameras) for capturing both two-dimensional (2D) and three-dimensional (3D) stereoscopic video images of the environment surrounding the user. In addition, the augmented reality glasses may comprise an inertial measurement unit (IMU) integrated therein, which comprises three (3) accelerometers, three (3) gyroscopes, and three (3) magnetometers for tracking the head movement of the user so that the user's current head direction and angle of view may be determined. Also, the augmented reality glasses may comprise one or more interfaces (e.g., a high-definition multimedia interface (HDMI) or a suitable wireless interface) that operatively connect the augmented reality glasses to an external data processing device (e.g., data acquisition/data processing device 104 or 330). It is to be understood that the augmented reality glasses worn by the subject or patient 108 during the performance of a balance test (e.g., the SOT protocol or modified SOT protocol) could achieve the same effect described above with regard to the images of FIGS. 47 and 48. Also, the augmented reality glasses may be worn by the subject or patient 108 while performing the modified SOT protocol described above so that the one or more images, which are displaced at incrementally higher velocities during the performance of the protocol, are displayed on a virtual output screen projected in front of the subject.
Also, in one or more alternative embodiments, the reality-altering glasses worn by the subject or patient 108 may be designed such that the subject or patient 108 has no direct view of the surrounding environment. That is, the only view that the subject or patient 108 has of the surrounding environment is the view projected on the one or more visual display devices of the reality-altering glasses (i.e., the reality-altering glasses may be in the form of virtual reality glasses). In these one or more alternative embodiments, the data acquisition/data processing device 330 may be specially programmed to alter, displace, or both alter and displace one or more video images of an environment surrounding the subject so that, when the one or more video images are displayed to the subject on the one or more visual displays of the augmented reality glasses, the one or more video images no longer depict an accurate representation of the environment surrounding the subject so as to perturb a visual input of the subject. For example, the data acquisition/data processing device 330 may be specially programmed to laterally, vertically, or rotationally displace the images of the environment surrounding the subject so that, when they are viewed by the subject using the one or more visual displays of the augmented reality glasses, the images of the environment are skewed relative to the actual environmental setting. As such, after being altered by the data acquisition/data processing device 330, the images of the environment surrounding the subject, as viewed by the subject through the augmented reality glasses, are visually distorted relative to the actual environment that the subject would see if he or she were looking directly at the environment and not wearing the augmented reality glasses. Thus, the manipulation of the images of the environment by the data acquisition/data processing device 330 results in the conveyance of a distorted view of reality to the subject, thereby perturbing the visual input of the subject. In addition, in one or more alternative embodiments, it is to be understood that the augmented reality glasses may be used alone to perturb the subject's visual input, or in combination with a force measurement assembly (e.g., the force measurement assembly 102 described herein). For example, the augmented reality glasses may be used to perturb the subject's visual input while the subject 108 is simultaneously displaced on the force measurement assembly 102 during the performance of a balance test, such as the SOT protocol or modified SOT protocol described herein. In these one or more alternative embodiments, the data acquisition/data processing device 330 may also be specially programmed to perform the functionality that is illustrated in FIGS. 51A-51C. Initially, with reference to FIG. 51A, it can be seen that a subject 108 wearing augmented reality glasses 326 is disposed on a force measurement assembly 102′ (i.e., a force plate) in a generally upright position (i.e., so that a vertical reference axis passing through the subject 108 is perpendicular to the top surface of the force plate 102′). In the configuration of FIG. 51A, the screen image 332 that is viewed by the subject 108 through the augmented reality glasses 326 matches the actual view of the scenery 334 that is captured by the one or more cameras 328 of the augmented reality glasses 326.
That is, in FIG. 51A, the scenery that is disposed in front of the one or more cameras 328 of the augmented reality glasses 326, and in front of the subject 108, is not altered by the data acquisition/data processing device 330 that is operatively coupled to the augmented reality glasses 326. A reference line 340 is superimposed on each of the screen images 334, 336, 338 in FIGS. 51A, 51B, 51C in order to more clearly illustrate the manner in which the image captured by the camera 328 is shifted when the subject 108 is in the sway angle θ1, θ2 positions on the force plate 102′. Next, turning to FIG. 51B, it can be seen that the subject 108 who is wearing the augmented reality glasses 326 and is disposed on the force measurement assembly 102′ (i.e., a force plate) is disposed in a rearwardly inclined sway angle position. That is, as illustrated in FIG. 51B, the longitudinal reference axis passing through the subject 108 is disposed at a rearward angle θ2 relative to a vertical reference axis that is disposed perpendicular to the force plate top surface. In the configuration of FIG. 51B, the screen image 332 that is viewed by the subject 108 through the augmented reality glasses 326 does not match the actual view of the scenery 336 that is captured by the one or more cameras 328 of the augmented reality glasses 326. Rather, in the scenario illustrated by FIG. 51B, the data acquisition/data processing device 330 that is operatively coupled to the augmented reality glasses 326 is specially programmed to alter the actual view captured by the one or more cameras 328 of the augmented reality glasses 326 so that the subject 108 sees the exact same view through the glasses 326 that he would see if he were standing in a straight upright position on the force plate 102′ (i.e., in the standing position of FIG. 51A). As such, by using the data acquisition/data processing device 330 to alter the actual view of the surrounding scenery that is captured by the one or more cameras 328 of the augmented reality glasses 326, the augmented reality system creates the illusion to the subject 108 that he has not moved at all (i.e., the subject's visual sense of perception is altered so that he is unable to perceive that he is disposed in a rearwardly inclined position). In the scenario of FIG. 51B, in order to make the screen image 332 that is viewed by the subject 108 through the augmented reality glasses 326 match the upright position image of FIG. 51A, the data acquisition/data processing device 330 is specially programmed to slightly enlarge and downwardly rotate the actual view of the scenery captured by the one or more cameras 328 of the augmented reality glasses 326 so as to correct for the rearwardly inclined orientation of the subject 108. In the FIG. 51B scenario, in order to determine the magnitude of the magnification and the downward rotation angle of the actual view of the scenery captured by the one or more cameras 328, the data acquisition/data processing device 330 may initially determine the center of pressure (COP) for the subject 108 from the force measurement assembly 102′. Then, in the manner described above, the data acquisition/data processing device 330 may determine the center of gravity (COG) for the subject based upon the center of pressure. Thereafter, the sway angle θ2 may be determined for the subject 108 using the center of gravity (COG) in the manner described above (e.g., see equation (1) above).
Finally, once the sway angle θ2 is determined for the subject 108, the displacement angle and magnification of the image of the surrounding environment displayed to the subject 108 using the augmented reality or reality-altering glasses 326 may be determined using geometric relationships between the sway angle θ2 of the subject 108 and the displacement angle of the video image of the surrounding environment captured by the one or more cameras 328 of the reality-altering glasses 326 (e.g., the sway angle θ2 of the subject 108 is generally equal to the displacement angle of the video image). In an alternative embodiment, rather than enlarging and downwardly rotating the actual view of the scenery captured by the one or more cameras 328 of the reality-altering glasses 326, the data acquisition/data processing device 330 is specially programmed to capture an initial image of the environment surrounding the subject before the subject displaces his or her body into the rearwardly inclined sway angle position (i.e., while the subject is still standing in the straight upright position of FIG. 51A). Then, once the subject has displaced his or her body into the rearwardly inclined sway angle position of FIG. 51B, the data acquisition/data processing device 330 is specially programmed to display the initial image to the subject using the one or more visual displays of the reality-altering glasses 326 so as to create the illusion to the subject 108 that he or she is still in the straight upright position of FIG. 51A. Next, turning to FIG. 51C, it can be seen that the subject 108 who is wearing the augmented reality glasses 326 and is disposed on the force measurement assembly 102′ (i.e., a force plate) is disposed in a forwardly inclined sway angle position. That is, as illustrated in FIG. 51C, the longitudinal reference axis passing through the subject 108 is disposed at a forward angle θ1 relative to a vertical reference axis that is disposed perpendicular to the force plate top surface. In the configuration of FIG. 51C, the screen image 332 that is viewed by the subject 108 through the augmented reality glasses 326 does not match the actual view of the scenery 338 that is captured by the one or more cameras 328 of the augmented reality glasses 326. Rather, in the scenario illustrated by FIG. 51C, the data acquisition/data processing device 330 that is operatively coupled to the augmented reality glasses 326 is specially programmed to alter the actual view captured by the one or more cameras 328 of the augmented reality glasses 326 so that the subject 108 sees the exact same view through the glasses 326 that he would see if he were standing in a straight upright position on the force plate 102′ (i.e., in the standing position of FIG. 51A). As such, as described above with respect to FIG. 51B, by using the data acquisition/data processing device 330 to alter the actual view of the surrounding scenery that is captured by the one or more cameras 328 of the augmented reality glasses 326, the augmented reality system creates the illusion to the subject 108 that he has not moved at all (i.e., the subject's visual sense of perception is altered so that he is unable to perceive that he is disposed in a forwardly inclined position).
In the scenario of FIG. 51C, in order to make the screen image 332 that is viewed by the subject 108 through the augmented reality glasses 326 match the upright position image of FIG. 51A, the data acquisition/data processing device 330 is specially programmed to slightly demagnify and upwardly rotate the actual view of the scenery captured by the one or more cameras 328 of the augmented reality glasses 326 so as to correct for the forwardly inclined orientation of the subject 108. In the FIG. 51C scenario, similar to that described above for the FIG. 51B scenario, in order to determine the magnitude of the demagnification and the upward rotation angle of the actual view of the scenery captured by the one or more cameras 328, the data acquisition/data processing device 330 may initially determine the center of pressure (COP) for the subject 108 from the force measurement assembly 102′. Then, in the manner described above, the data acquisition/data processing device 330 may determine the center of gravity (COG) for the subject based upon the center of pressure. Thereafter, the sway angle θ1 may be determined for the subject 108 using the center of gravity (COG) in the manner described above (e.g., see equation (1) above). Finally, once the sway angle θ1 is determined for the subject 108, the displacement angle and demagnification of the image of the surrounding environment displayed to the subject 108 using the augmented reality or reality-altering glasses 326 may be determined using geometric relationships between the sway angle θ1 of the subject 108 and the displacement angle of the video image of the surrounding environment captured by the one or more cameras 328 of the reality-altering glasses 326 (e.g., the sway angle θ1 of the subject 108 is generally equal to the displacement angle of the video image). In an alternative embodiment, rather than demagnifying and upwardly rotating the actual view of the scenery captured by the one or more cameras 328 of the reality-altering glasses 326, the data acquisition/data processing device 330 is specially programmed to capture an initial image of the environment surrounding the subject before the subject displaces his or her body into the forwardly inclined sway angle position (i.e., while the subject is still standing in the straight upright position of FIG. 51A). Then, once the subject has displaced his or her body into the forwardly inclined sway angle position of FIG. 51C, the data acquisition/data processing device 330 is specially programmed to display the initial image to the subject using the one or more visual displays of the reality-altering glasses 326 so as to create the illusion to the subject 108 that he or she is still in the straight upright position of FIG. 51A. Rather than using the stationary-type force measurement assembly 102′ depicted in FIGS. 51A-51C, it is to be understood that, in one or more other embodiments, the displaceable force measurement assembly 102 described above may alternatively be used during the balance test described above, wherein the subject's visual input is perturbed using the reality-altering glasses 326. In these one or more other embodiments, because the displaceable force measurement assembly 102 is movably coupled to the base assembly 106, the rearwardly and forwardly inclined angular positions of the subject 108 (as shown in FIGS. 51B and 51C, respectively) may be achieved by rotating the force measurement assembly 102 with the subject 108 disposed thereon in accordance with a predetermined angle to achieve the forward and rearward angular displacements.
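The COP-to-COG-to-sway-angle chain described above for FIGS. 51B and 51C may be sketched in code as follows. Because equation (1) is not reproduced in this passage, the sketch substitutes a simple small-angle geometric relation for the sway angle, and the magnification law is likewise an assumed first-order model; only the sign conventions follow the description above (rearward lean: enlarge and downwardly rotate; forward lean: demagnify and upwardly rotate).

```python
import math

def image_correction(cog_excursion, cog_height, k=0.5):
    """Illustrative computation of the image rotation angle and
    magnification applied to the camera view of the surrounding scenery.

    cog_excursion: anterior-posterior excursion of the subject's COG,
    derived from the force plate COP (metres); positive = forward lean.
    cog_height: assumed height of the COG above the plate (metres).
    k: hypothetical magnification gain; the actual law is not specified.
    """
    # Small-angle stand-in for equation (1): sway angle from COG geometry.
    sway = math.atan2(cog_excursion, cog_height)
    # Per the text, the image displacement angle is generally equal to the
    # sway angle; the image is rotated opposite the lean (downward for a
    # rearward lean, upward for a forward lean, per the chosen convention).
    rotation = -sway
    # Rearward lean (sway < 0) -> magnification > 1 (enlarge); forward
    # lean (sway > 0) -> magnification < 1 (demagnify).
    magnification = 1.0 - k * math.sin(sway)
    return rotation, magnification
```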
Further, in one or more alternative embodiments, the subject or patient 108 may be outfitted with another type of head-mounted visual display device that is different than the augmented reality glasses depicted in FIGS. 51A-51C. For example, as shown in FIG. 53, the subject 108 disposed on the base assembly 106 with the displaceable force measurement assembly 102 may be outfitted with a head-mounted visual display device 344 having an output screen that at least partially circumscribes the head of the subject 108, such that the output screen engages enough of the peripheral vision of the subject 108 for the subject 108 to become immersed in the simulated environment created by the scenes displayed on the output screen. In one or more embodiments, the head-mounted visual display device may comprise a virtual reality headset or an augmented reality headset that has a wraparound shape in order to at least partially circumscribe the head of the subject 108. The base assembly 106 and the displaceable force measurement assembly 102 depicted in FIG. 53 have the same constituent components and functionality as described above. In the embodiment of FIG. 53, similar to that described above, the displaceable force measurement assembly 102 may be operatively connected to a programmable logic controller (PLC) 172 and a data acquisition/data processing device 104. In this embodiment, the programmable logic controller (PLC) 172 and/or the data acquisition/data processing device 104 may be programmed to displace the force measurement assembly 102 so as to perturb a somatosensory or proprioceptive input of the subject 108 during the performance of a balance test (e.g., the Sensory Organization Test, the Adaptation Test, or the Motor Control Test) or a training routine where one or more sensory inputs of the subject are modified. During the performance of a balance test or training routine where the force measurement assembly 102 is displaced, the data acquisition/data processing device 104 may be further programmed to utilize the output forces and/or moments computed from the output data of the force measurement assembly 102 in order to assess a response of the subject 108 to the displacement of the force measurement assembly 102.
For example, to assess the response of the subject 108 during the performance of the balance test or training routine, the output forces and/or moments determined using the force measurement assembly 102 may be used to determine: (i) a quantitative score of the subject's sway during the trials of the test or training routine (e.g., see the sway angle calculations described above), (ii) the type of strategy used by the subject 108 to maintain his or her balance (e.g., a hip or ankle strategy) during the trials of the test or training routine, (iii) the changes in the center of gravity of the subject 108 during the trials of the test or training routine (e.g., refer to the center of gravity determination described above), (iv) one or more quantitative sensory scores indicative of which sensory system(s) are impaired (i.e., indicative of whether one or more of the somatosensory, vestibular, and visual systems are impaired), (v) the latency time of the subject 108 (e.g., the amount of time that it takes for the subject 108 to respond to a translational or rotational perturbation of the force measurement assembly 102), (vi) the weight symmetry of the subject 108 (i.e., how much weight is being placed on the right leg versus the left leg), (vii) the amount of force that the subject 108 is able to exert in order to bring his or her body back to equilibrium after a perturbation (e.g., the amount of force exerted by the subject 108 in response to a translational or rotational perturbation of the force measurement assembly 102), and (viii) the sway energy of the subject 108 in response to a perturbation (e.g., the anterior-posterior sway of the subject 108 in response to a translational or rotational perturbation of the force measurement assembly 102). In one or more embodiments, the head-mounted visual display device 344 may have an organic light-emitting diode (OLED) display or a liquid crystal display (LCD) with a resolution of at least 2160 pixels in the horizontal direction by 1200 pixels in the vertical direction (or a 1080 by 1200 pixel resolution for each eye of the subject). Also, in one or more embodiments, the head-mounted visual display device 344 may have a refresh rate of at least 59 Hertz, or alternatively, at least 90 Hertz. In one or more further embodiments, the head-mounted visual display device 344 may have a refresh rate between approximately 59 Hertz and approximately 240 Hertz, inclusive (or between 59 Hertz and 240 Hertz, inclusive). Moreover, in one or more embodiments, the display latency or display time lag of the head-mounted visual display device 344 (i.e., the amount of time that it takes for the pixels of the display to update in response to the head movement of the user) is between approximately 50 milliseconds and approximately 70 milliseconds, inclusive (or between 50 milliseconds and 70 milliseconds, inclusive). In one or more further embodiments, the head-mounted visual display device 344 may have a display latency or display time lag between approximately 10 milliseconds and approximately 50 milliseconds, inclusive (or between 10 milliseconds and 50 milliseconds, inclusive). Furthermore, in one or more embodiments, the data acquisition/data processing device 104 that is operatively coupled to the head-mounted visual display device 344 may execute a machine learning algorithm for predictive tracking of the subject's head movement so as to predict how the subject is going to move and pre-render the correct image for that view, thereby significantly decreasing the display latency or display time lag.
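Two of the response measures enumerated above, the latency time (v) and the weight symmetry (vi), lend themselves to compact illustration. The following Python sketch assumes a dual force plate providing separate left and right vertical force components, and a simple threshold-crossing definition of response latency; both assumptions are illustrative rather than prescribed by the text.

```python
def weight_symmetry(f_left, f_right):
    """Item (vi): fraction of body weight on each leg, computed from the
    vertical force components measured under the left and right feet
    (a dual force plate is assumed; names are illustrative)."""
    total = f_left + f_right
    if total == 0:
        return 0.5, 0.5
    return f_left / total, f_right / total

def response_latency(times, forces, t_perturb, threshold):
    """Item (v): time from the perturbation onset until the subject's
    shear force first exceeds `threshold`, i.e. a simple threshold-based
    estimate of the latency of the postural response.

    times, forces: sampled time stamps and corresponding force values.
    Returns None if no response is detected."""
    for t, f in zip(times, forces):
        if t >= t_perturb and abs(f) > threshold:
            return t - t_perturb
    return None
```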
Also, in one or more embodiments, the head-mounted visual display device344may have a horizontal field of view of at least 50 degrees and a vertical field of view of at least 50 degrees. More particularly, in one or more further embodiments, the head-mounted visual display device344may have a horizontal field of view of at least 110 degrees and a vertical field of view of at least 90 degrees. In yet one or more further embodiments, the head-mounted visual display device344may have a horizontal field of view of at least 210 degrees and a vertical field of view of at least 130 degrees. In still one or more further embodiments, the head-mounted visual display device344may have a horizontal field of view between approximately 100 degrees and approximately 210 degrees, inclusive (or between 100 degrees and 210 degrees, inclusive), and a vertical field of view between approximately 60 degrees and approximately 130 degrees, inclusive (or between 60 degrees and 130 degrees, inclusive). Advantageously, maximizing the horizontal and vertical fields of view results in a more immersive experience for the subject because a greater portion of the subject's peripheral vision is covered. In one or more embodiments, the head-mounted visual display device344may be operatively coupled to the data acquisition/data processing device104by one or more wired connections. For example, the video signal(s) for the head-mounted visual display device344may be transmitted using a high-definition multimedia interface (HDMI) cable and the data signal(s) for the head-mounted visual display device344may be transmitted using a Universal Serial Bus (USB) cable. The head-mounted visual display device344may also include a wired power connection. In one or more alternative embodiments, the head-mounted visual display device344may be operatively coupled to the data acquisition/data processing device104using a wireless connection rather than hardwired connection(s). In one or more embodiments, in order to effectively handle the data processing associated with the head-mounted visual display device344, the data acquisition/data processing device104coupled to the head-mounted visual display device344may have a high performance microprocessor, one or more high performance graphics cards, and sufficient random-access memory (RAM). For example, in an illustrative embodiment, the data acquisition/data processing device104coupled to the head-mounted visual display device344may have an Intel® Core i5 processor or greater, one or more NVIDIA® GeForce 900 series graphics processing units (GPU) or a higher series GPU, and eight (8) gigabytes of random-access memory (RAM) or greater. In one or more embodiments, the head-mounted visual display device344may incorporate an integral inertial measurement unit (IMU) with an accelerometer, magnetometer, and gyroscope for sensing the head movement of the subject. Also, the head-mounted visual display device344may include optical-based outside-in positional tracking for tracking the position and orientation of the head-mounted visual display device344in real time. The optical-based outside-in positional tracking may include remote optical photosensors that detect infrared light-emitting diode (LED) markers on the head-mounted visual display device344, or conversely, remote infrared light-emitting diode (LED) beams that are detected by photosensors on the head-mounted visual display device344. 
In one or more embodiments, the head-mounted visual display device344may be formed using lightweight materials (e.g., lightweight polymeric materials or plastics) so as to minimize the weight of the head-mounted visual display device344on the subject. Referring again toFIG.53, it can be seen that the force measurement system illustrated therein may also be provided with a wall-mounted visual display device346comprising a plurality of flat display screens348arranged or joined together in a concave arrangement so as to at least partially circumscribe the three sides of the torso of the subject108. As such, rather than using the head-mounted visual display device344, the scenes creating the simulated environment for the subject108may be displayed on the plurality of flat display screens348inFIG.53. In an alternative embodiment, rather than using the plurality of flat display screens348arranged or joined together in the concave arrangement ofFIG.53, a continuously curved display screen may be used to display the immersive scenes to the subject108, the curved display screen engaging enough of the peripheral vision of the subject108such that the subject108becomes immersed in the simulated environment. In another alternative embodiment, rather than using the plurality of flat display screens348arranged or joined together in the concave arrangement ofFIG.53or a continuously curved display screen, a wall-mounted flat display screen may be used to display the immersive scenes to the subject108. While in the aforedescribed alternative embodiments, these other visual display screens may be used with the base assembly106and the displaceable force measurement assembly102depicted inFIG.53, it is to be understood that these other wall-mounted visual displays do not have an immersive effect that is equivalent to the generally hemispherical projection screen168described above because the subject108is not as encapsulated by these alternative visual displays (i.e., these alternative visual displays lack the wraparound side portions and wraparound top and bottom portions that are provided by the generally hemispherical projection screen168). As described above in conjunction with the preceding embodiments, the data acquisition/data processing device104of the force measurement system illustrated inFIG.53may further perturb the visual input of the subject108during the performance of the balance test or training routine by manipulating the scenes on the output screen of the visual display device. Also, in the embodiment ofFIG.53, and the other embodiments described above, the force measurement system may further comprise an eye movement tracking device configured to track eye movement and/or eye position of the subject108while the subject108performs the balance test or training routine (e.g., the eye movement tracking device312described hereinafter in conjunction withFIG.50). In this embodiment, the eye movement tracking device outputs one or more eye tracking signals to the data acquisition/data processing device104, and the data acquisition/data processing device104utilizes the eye tracking signals in order to assess a response of the subject108to the perturbed visual input (i.e., by measuring the eye movement of the subject108in response to a displaced image on the output screen of the visual display device). In one or more embodiments, the eye movement tracking device may be incorporated into the head-mounted visual display device344depicted inFIG.53. 
In yet one or more alternative embodiments of the invention, a force measurement assembly102′ in the form of a static force plate, such as that illustrated inFIG.52, may be used with the head-mounted visual display device344, rather than the displaceable force measurement assembly102and base assembly106ofFIG.53. Similar to that described above in conjunction with the preceding embodiments, the data acquisition/data processing device of the static force plate system may perturb the visual input of the subject during the performance of the balance test or training routine by manipulating the scenes on the output screen of the head-mounted visual display device344. During the performance of the balance test or training routine while the subject is disposed on the static force plate, the data acquisition/data processing device may be further programmed to utilize the output forces and/or moments computed from the output data of the static force plate in order to assess a response of the subject to the visual stimuli on the output screen of the head-mounted visual display device344. For example, to assess the response of the subject108during the performance of the balance test or training routine, the output forces and/or moments determined using the force measurement assembly102may be used to determine any of the scores or parameters (i)-(viii) described above in conjunction with the embodiment illustrated inFIG.53. In one or more embodiments, the data acquisition/data processing device104coupled to the head-mounted visual display device344is programmed to generate one or more scenes of a simulated and/or augmented environment displayed on the head-mounted visual display device344, and further generate one or more clinician screens (e.g., one or more screens with test results) that are displayed on an additional visual display device visible to a clinician (e.g., operator visual display device130inFIG.1). In these one or more embodiments, the one or more scenes of the simulated and/or augmented environment that are displayed on the head-mounted visual display device344comprise a plurality of targets or markers (e.g., the plurality of targets or markers238′ inFIG.23) and at least one displaceable visual indicator (e.g., the displaceable visual indicator or cursor240′ inFIG.23), and the data acquisition/data processing device104is programmed to control the movement of the at least one displaceable visual indicator240′ towards the plurality of targets or markers238′ based upon one or more computed numerical values (e.g., the center of pressure coordinates) determined using the output forces and/or moments of the force measurement assembly102. In further embodiments of the invention, the data acquisition/data processing device104is configured to control the movement of a game element of an interactive game displayed on the immersive subject visual display device107by using one or more numerical values determined from the output signals of the force transducers associated with the force measurement assembly102. Referring to screen images210,210′,218illustrated inFIGS.16-21and27, it can be seen that the game element may comprise, for example, an airplane214,214′ that can be controlled in a virtual reality environment212,212′ or a skier222that is controlled in a virtual reality environment220. 
With particular reference toFIG.16, because a subject204is disposed within the confines of the generally hemispherical projection screen168while playing the interactive game, he or she is completely immersed in the virtual reality environment212.FIG.16illustrates a first variation of this interactive game, whereasFIGS.17-21illustrate a second variation of this interactive game (e.g., each variation of the game uses a different airplane214,214′ and different scenery). AlthoughFIGS.17-21depict generally planar images, rather than a concave image projected on a generally hemispherical screen168as shown inFIG.16, it is to be understood that the second variation of the interactive game (FIGS.17-21) is equally capable of being utilized on a screen that at least partially surrounds a subject (e.g., a generally hemispherical projection screen168). In the first variation of the interactive airplane game illustrated inFIG.16, arrows211can be provided in order to guide the subject204towards a target in the game. For example, the subject204may be instructed to fly the airplane214through one or more targets (e.g., rings or hoops) in the virtual reality environment212. When the airplane is flown off course by the subject204, arrows211can be used to guide the subject204back to the general vicinity of the one or more targets. With reference toFIGS.25and26, a target in the form of a ring or hoop216is illustrated in conjunction with the second variation of the interactive airplane game. In this virtual reality scenario212″, a subject is instructed to fly the airplane214′ through the ring216. InFIG.25, the airplane214′ is being displaced to the left by the subject through the ring216, whereas, inFIG.26, the airplane214′ is being displaced to the right by the subject through the ring216. Advantageously, the airplane simulation game is a type of training that is more interactive for the patient. An interactive type of training, such as the airplane simulation game, improves patient compliance by more effectively engaging the subject in the training. In addition, in order to further increase patient compliance, and ensure that the subject exerts his or her full effort, a leaderboard with scores may be utilized in the force measurement system100. To help ensure subject performance comparisons that are fair to the participants, separate leaderboards may be utilized for different age brackets. In some embodiments, the position of the rings or hoops216in the virtual reality scenario212″ could be selectively displaceable in a plurality of different positions by a user or operator of the system100. For example, the data acquisition/data processing device104could be specially programmed with a plurality of predetermined difficulty levels for the interactive airplane game. A novice difficulty level could be selected by the user or operator of the system100for a subject that requires a low level of difficulty. Upon selecting the novice difficulty level, the rings or hoops216would be placed in the easiest possible locations within the virtual reality scenario212″. For a subject requiring a higher level of difficulty, the user or operator of the system100could select a moderate difficulty level, wherein the rings or hoops216are placed in locations that are more challenging than the locations used in the novice difficulty level. 
Finally, if a subject requires a maximum level of difficulty, the user or operator could select a high difficulty level, wherein the rings or hoops216are placed in extremely challenging locations in the virtual reality scenario212″. In addition, in some embodiments, the position of the rings or hoops216in the virtual reality scenario212″ could be randomly located by the data acquisition/data processing device104so that a subject undergoing multiple training sessions using the interactive airplane game would be unable to memorize the locations of the rings or hoops216in the scenario212″, thereby helping to ensure the continued effectiveness of the training. In yet a further embodiment of the invention, the interactive type of subject or patient training may comprise an interactive skiing game. For example, as illustrated in the screen image218ofFIG.27, the immersive virtual reality environment220may comprise a scenario wherein the subject controls a skier222on a downhill skiing course. Similar to the interactive airplane game described above, the interactive skiing game may comprise a plurality of targets in the forms of gates224that the subject is instructed to contact while skiing the downhill course. To make the interactive skiing game even more engaging for the subject, a plurality of game performance parameters may be listed on the screen image, such as the total distance traveled226(e.g., in meters), the skier's speed228(e.g., in kilometers per hour), and the skier's time230(e.g., in seconds). In an illustrative embodiment, the one or more numerical values determined from the output signals of the force transducers associated with the force measurement assembly102comprise the center of pressure coordinates (xP, yP) computed from the ground reaction forces exerted on the force plate assembly102by the subject. For example, with reference to the force plate coordinate axes150,152ofFIG.7, when a subject leans to the left on the force measurement assembly102′ (i.e., when the x-coordinate xPof the center of pressure is positive), the airplane214′ in the interactive airplane game is displaced to the left (see e.g.,FIG.18) or the skier222in the interactive skiing game is displaced to the left (see e.g.,FIG.27). Conversely, when a subject leans to the right on the force measurement assembly102′ (i.e., when the x-coordinate xPof the center of pressure is negative inFIG.7), the airplane214′ in the interactive airplane game is displaced to the right (see e.g.,FIG.19) or the skier222in the interactive skiing game is displaced to the right (see e.g.,FIG.27). When a subject leans forward on the force measurement assembly102′ (i.e., when the y-coordinate yPof the center of pressure is positive inFIG.7), the altitude of the airplane214′ in the interactive airplane game is increased (see e.g.,FIG.20) or the speed of the skier222in the interactive skiing game is increased (see e.g.,FIG.27). Conversely, when a subject leans backward on the force measurement assembly102′ (i.e., when the y-coordinate yPof the center of pressure is negative inFIG.7), the altitude of the airplane214′ in the interactive airplane game is decreased (see e.g.,FIG.21) or the speed of the skier222in the interactive skiing game is decreased (see e.g.,FIG.27). 
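The center-of-pressure sign conventions described in the preceding paragraph may be summarized in code. The following Python sketch is a minimal illustration of the mapping; the function and return values are hypothetical names, but the lean-to-command directions follow the FIG. 7 convention stated above.

```python
def game_command(x_p, y_p):
    """Map the computed center-of-pressure coordinates (x_P, y_P) to game
    inputs, per the force plate coordinate convention described above:

    x_P > 0 (lean left)     -> steer the airplane/skier to the left
    x_P < 0 (lean right)    -> steer to the right
    y_P > 0 (lean forward)  -> increase altitude (airplane) or speed (skier)
    y_P < 0 (lean backward) -> decrease altitude or speed
    """
    steer = 'left' if x_p > 0 else ('right' if x_p < 0 else 'center')
    throttle = 'increase' if y_p > 0 else ('decrease' if y_p < 0 else 'hold')
    return steer, throttle
```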
In still a further embodiment, a force and motion measurement system is provided that includes both the force measurement system 100 described above together with a motion detection system 300 that is configured to detect the motion of one or more body gestures of a subject (see FIGS. 32 and 33). For example, during a particular training routine, a subject may be instructed to reach for different targets on the output screen 168. While the subject reaches for the different targets on the output screen 168, the motion detection system 300 detects the motion of the subject's body gestures (e.g., the motion detection system 300 tracks the movement of one of the subject's arms while reaching for an object on the output screen 168). As shown in FIG. 33, a subject 108 is provided with a plurality of markers 304 disposed thereon. These markers 304 are used to record the position of the limbs of the subject in 3-dimensional space. A plurality of cameras 302 are disposed in front of the subject visual display device 107 (and behind the subject 108), and are used to track the position of the markers 304 as the subject moves his or her limbs in 3-dimensional space. While three (3) cameras are depicted in FIGS. 32 and 33, one of ordinary skill in the art will appreciate that more or fewer cameras can be utilized, provided that at least two cameras 302 are used. In one embodiment of the invention, the subject has a plurality of single markers applied to anatomical landmarks (e.g., the iliac spines of the pelvis, the malleoli of the ankle, and the condyles of the knee), or clusters of markers applied to the middle of body segments. As the subject 108 executes particular movements on the force measurement assembly 102, and within the hemispherical subject visual display device 107, the data acquisition/data processing device 104 calculates the trajectory of each marker 304 in three (3) dimensions. Then, once the positional data is obtained using the motion detection system 300 (i.e., the motion acquisition/capture system 300), inverse kinematics can be employed in order to determine the joint angles of the subject 108. While the motion detection system 300 described above employs a plurality of markers 304, it is to be understood that the invention is not so limited. Rather, in another embodiment of the invention, a markerless motion detection/motion capture system is utilized. The markerless motion detection/motion capture system uses a plurality of high-speed video cameras to record the motion of a subject without requiring any markers to be placed on the subject. Both of the aforementioned marker and markerless motion detection/motion capture systems are optical-based systems. In one embodiment, the optical motion detection/motion capture system 300 utilizes visible light, while in another alternative embodiment, the optical motion detection/motion capture system 300 employs infrared light (e.g., the system 300 could utilize an infrared (IR) emitter to project a plurality of dots onto objects in a particular space as part of a markerless motion capture system). For example, as shown in FIG. 50, a motion capture device 318 with one or more cameras 320, one or more infrared (IR) depth sensors 322, and one or more microphones 324 may be used to provide full-body three-dimensional (3D) motion capture, facial recognition, and voice recognition capabilities.
It is also to be understood that, rather than using an optical motion detection/capture system, a suitable magnetic or electro-mechanical motion detection/capture system can also be employed in the system100described herein. In some embodiments, the motion detection system300, which is shown inFIGS.32and33, is used to determine positional data (i.e., three-dimensional coordinates) for one or more body gestures of the subject108during the performance of a simulated task of daily living. The one or more body gestures of the subject108may comprise at least one of: (i) one or more limb movements of the subject, (ii) one or more torso movements of the subject, and (iii) a combination of one or more limb movements and one or more torso movements of the subject108. In order to simulate a task of daily living, one or more virtual reality scenes can be displayed on the subject visual display device107. One such exemplary virtual reality scene is illustrated inFIG.37. As illustrated in the screen image244ofFIG.37, the immersive virtual reality environment246simulating the task of daily living could comprise a scenario wherein a subject204is removing an object248(e.g., a cereal box) from a kitchen cabinet250. While the subject204is performing this simulated task, the data acquisition/data processing device104could quantify the performance of the subject204during the execution of the task (e.g., removing the cereal box248from the kitchen cabinet250) by analyzing the motion of the subject's left arm252, as measured by the motion detection system300. For example, by utilizing the positional data obtained using the motion detection system300, the data acquisition/data processing device104could compute the three-dimensional (3-D) trajectory of the subject's left arm252through space. The computation of the 3-D trajectory of the subject's left arm252is one exemplary means by which the data acquisition/data processing device104is able to quantify the performance of a subject during the execution of a task of daily living. At the onset of the training for a subject204, the 3-D trajectory of the subject's left arm252may indicate that the subject204is taking an indirect path (i.e., a zigzag path or jagged path256indicated using dashed lines) in reaching for the cereal box248(e.g., the subject's previous injury may be impairing his or her ability to reach for the cereal box248in the most efficient manner). However, after the subject204has been undergoing training for a continuous period of time, the 3-D trajectory of the subject's left arm252may indicate that the subject204is taking a more direct path (i.e., an approximately straight line path254) in reaching for the cereal box248(e.g., the training may be improving the subject's ability to reach for the cereal box248in an efficient fashion). As such, based upon a comparison of the subject's left arm trajectory paths, a physical therapist treating the subject or patient204may conclude that the subject's condition is improving over time. Thus, advantageously, the motion detection system300enables a subject's movement to be analyzed during a task of daily living so that a determination can be made as to whether or not the subject's condition is improving. Moreover, in other embodiments, the motion detection system300may also be used to determine the forces and/or moments acting on the joints of a subject108. 
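One simple way to quantify the difference between the jagged path 256 and the approximately straight-line path 254 described above is a path-directness ratio. The following Python sketch is an assumed metric offered purely for illustration; the passage does not specify how the 3-D trajectory comparison is actually scored.

```python
import numpy as np

def path_directness(points):
    """Quantify how directly the subject's arm travels to a target, as in
    the reaching comparison described above (illustrative metric only).

    points: (N, 3) array of 3-D arm positions sampled by the motion
    detection system. Returns a value in (0, 1]; 1.0 indicates a perfectly
    straight-line reach, while smaller values indicate a zigzag or jagged
    path.
    """
    points = np.asarray(points, dtype=float)
    straight = np.linalg.norm(points[-1] - points[0])   # endpoint distance
    traveled = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    return straight / traveled if traveled > 0 else 1.0
```

A rising directness score across training sessions would be consistent with the improvement a physical therapist looks for in the trajectory comparison described above.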
In particular, FIG. 34 diagrammatically illustrates an exemplary calculation procedure 400 for the joint angles, velocities, and accelerations carried out by the force and motion measurement system that includes the motion detection system 300 depicted in FIGS. 32 and 33. Initially, as shown in block 402 of FIG. 34, the plurality of cameras 302 are calibrated using the image coordinates of calibration markers and the three-dimensional (3-D) relationship of calibration points such that a plurality of calibration parameters are generated. In one exemplary embodiment of the invention, the calibration of the plurality of cameras 302 is performed using a Direct Linear Transformation ("DLT") technique and yields eleven (11) DLT parameters. However, it is to be understood that, in other embodiments of the invention, a different technique can be used to calibrate the plurality of cameras 302. Then, in block 404, the perspective projection of the image coordinates of the body markers 304 is performed using the calibration parameters so that the image coordinates are transformed into actual three-dimensional (3-D) coordinates of the body markers 304. Because the digitization of the marker images involves a certain amount of random error, a digital filter is preferably applied to the three-dimensional (3-D) coordinates of the markers to remove the inherent noise in block 406. It is to be understood, however, that the use of a digital filter is optional, and thus it is omitted in some embodiments of the invention. In block 408, local coordinate systems are utilized to determine the orientation of the body segments relative to each other. Thereafter, in block 410, rotational parameters (e.g., angles, axes, matrices, etc.) and the inverse kinematics model are used to determine the joint angles. The inverse kinematics model contains the details of how the angles are defined, such as the underlying assumptions that are made regarding the movement of the segments relative to each other. For example, in the inverse kinematics model, the hip joint could be modeled as three separate revolute joints acting in the frontal, horizontal, and sagittal plane, respectively. In block 412, differentiation is used to determine the joint velocities and accelerations from the joint angles. One of ordinary skill in the art will appreciate, however, that, in other embodiments of the invention, both differentiation and analytical curve fitting could be used to determine the joint velocities and accelerations from the joint angles. In addition, FIG. 34 diagrammatically illustrates the calculation procedure for the joint forces and moments that is also carried out by the force and motion measurement system, which comprises the force measurement system 100 and the motion detection system 300 of FIGS. 32-33. Referring again to this figure, anthropometric data is applied to a segment model in block 416 in order to determine the segment inertial parameters. By using the segment inertial parameters together with the joint velocities and accelerations and the force plate measurements, joint and body segment kinetics are used in block 414 to determine the desired joint forces and moments. In a preferred embodiment of the invention, Newton-Euler formulations are used to compute the joint forces and moments. However, it is to be understood that the invention is not so limited. Rather, in other embodiments of the invention, the kinetics analysis could be carried out using a different series of equations.
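Blocks 406 and 412 of the calculation procedure 400 may be illustrated as follows. The moving-average filter and the central-difference differentiation below are common choices but are assumptions; the passage does not identify the specific digital filter or differentiation scheme employed.

```python
import numpy as np

def smooth(signal, k=5):
    """Block 406 sketch: a simple moving-average digital filter applied to
    one axis of the reconstructed 3-D marker coordinates to suppress
    digitization noise (the actual system may use a different filter,
    e.g. a Butterworth low-pass filter)."""
    kernel = np.ones(k) / k
    return np.convolve(signal, kernel, mode='same')

def joint_kinematics(theta, dt):
    """Block 412 sketch: numerically differentiate a sampled joint-angle
    trajectory to obtain joint angular velocity and acceleration.

    theta: 1-D array of joint angles (radians) sampled at interval dt."""
    omega = np.gradient(theta, dt)   # first derivative: angular velocity
    alpha = np.gradient(omega, dt)   # second derivative: angular acceleration
    return omega, alpha
```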
In order to more clearly illustrate the requisite calculations for determining the joint forces and moments, the determination of the joint reaction forces and joint moments of the subject will be explained using an exemplary joint of the body. In particular, the computation of the joint reaction forces and joint moments of the subject will be described in reference to an exemplary determination of the forces and moments acting on the ankle. The force measurement assembly 102 is used to determine the ground reaction forces and moments associated with the subject being measured. These ground reaction forces and moments are used in conjunction with the joint angles computed from the inverse kinematics analysis in order to determine the net joint reaction forces and net joint moments of the subject. In particular, inverse dynamics is used to calculate the net joint reaction forces and net joint moments of the subject by using the computed joint angles, angular velocities, and angular accelerations of a musculoskeletal model, together with the ground reaction forces and moments measured by the force measurement assembly 102. An exemplary calculation of the forces and moments at the ankle joint will be explained with reference to the foot diagram of FIG. 35 and the free body diagram 500 depicted in FIG. 36. In FIGS. 35 and 36, the ankle joint 502 is diagrammatically represented by the point "A", whereas the gravitational center 504 of the foot is diagrammatically represented by the circular marker labeled "GF". In FIG. 36, the point of application for the ground reaction forces $\vec{F}_{Gr}$ (i.e., the center of pressure 506) is diagrammatically represented by the point "P" in the free body diagram 500. The force balance equation and the moment balance equation for the ankle are as follows:

$$m_F \cdot \vec{a}_{GF} = \vec{F}_{Gr} + \vec{F}_A \quad (4)$$

$$\check{J}_F \, \dot{\vec{\omega}}_F + \vec{\omega}_F \times \check{J}_F \, \vec{\omega}_F = \vec{M}_A + \vec{T} + (\vec{r}_{GA} \times \vec{F}_A) + (\vec{r}_{GP} \times \vec{F}_{Gr}) \quad (5)$$

where:
$m_F$: mass of the foot
$\vec{a}_{GF}$: acceleration of the gravitational center of the foot
$\vec{F}_{Gr}$: ground reaction forces
$\vec{F}_A$: forces acting on the ankle
$\check{J}_F$: rotational inertia of the foot
$\dot{\vec{\omega}}_F$: angular acceleration of the foot
$\vec{\omega}_F$: angular velocity of the foot
$\vec{M}_A$: moments acting on the ankle
$\vec{T}$: torque acting on the foot
$\vec{r}_{GA}$: position vector from the gravitational center of the foot to the center of the ankle
$\vec{r}_{GP}$: position vector from the gravitational center of the foot to the center of pressure

In the above equations (4) and (5), the ground reaction forces $\vec{F}_{Gr}$ are equal in magnitude and opposite in direction to the externally applied forces $\vec{F}_e$ that the body exerts on the supporting surface through the foot (i.e., $\vec{F}_{Gr} = -\vec{F}_e$).
Then, in order to solve for the desired ankle forces and moments, the terms of equations (4) and (5) are rearranged as follows:

$$\vec{F}_A = m_F \cdot \vec{a}_{GF} - \vec{F}_{Gr} \quad (6)$$

$$\vec{M}_A = \check{J}_F \, \dot{\vec{\omega}}_F + \vec{\omega}_F \times \check{J}_F \, \vec{\omega}_F - \vec{T} - (\vec{r}_{GA} \times \vec{F}_A) - (\vec{r}_{GP} \times \vec{F}_{Gr}) \quad (7)$$

By using the above equations, the magnitudes and directions of the ankle forces and moments can be determined. The net joint reaction forces and moments for the other joints in the body can be computed in a similar manner. In an alternative embodiment, the motion detection system that is provided in conjunction with the force measurement system 100 may comprise a plurality of inertial measurement units (IMUs), rather than taking the form of a marker-based or markerless motion capture system. As described above for the motion detection system 300, the IMU-based motion detection system 300′ may be used to detect the motion of one or more body gestures of a subject (see, e.g., FIG. 49). As shown in FIG. 49, a subject or patient 108 may be provided with a plurality of inertial measurement units 306 disposed thereon. The one or more body gestures of the subject 108 may comprise one or more limb movements of the subject, one or more torso movements of the subject, or a combination of one or more limb movements and one or more torso movements of the subject. As shown in FIG. 49, a subject or patient 108 may be outfitted with a plurality of different inertial measurement units 306 for detecting motion. In the illustrative embodiment, the subject 108 is provided with two (2) inertial measurement units 306 on each of his legs 108a, 108b (e.g., on the side of his legs 108a, 108b). The subject is also provided with two (2) inertial measurement units 306 on each of his arms 108c, 108d (e.g., on the side of his arms 108c, 108d). In addition, the subject 108 of FIG. 49 is provided with an inertial measurement unit 306 around his waist (e.g., with the IMU located on the back side of the subject 108), and another inertial measurement unit 306 around his or her chest (e.g., with the IMU located on the front side of the subject 108 near his sternum). In the illustrated embodiment, each of the inertial measurement units 306 is operatively coupled to the data acquisition/data processing device 104 by wireless means, such as Bluetooth, or another suitable type of personal area network wireless means. In the illustrated embodiment of FIG. 49, each of the inertial measurement units 306 is coupled to the respective body portion of the subject 108 by a band 308. As shown in FIG. 49, each of the inertial measurement units 306 comprises an IMU housing 310 attached to an elastic band 308. The band 308 is resilient so that it is capable of being stretched while being placed on the subject 108 (e.g., to accommodate the hand or the foot of the subject 108 before it is fitted in place on the arm 108c, 108d or the leg 108a, 108b of the subject 108). The band 308 can be formed from any suitable stretchable fabric, such as neoprene, spandex, and elastane. Alternatively, the band 308 could be formed from a generally non-stretchable fabric, and be provided with latching means or clasp means for allowing the band 308 to be split into two portions (e.g., the band 308 could be provided with a snap-type latching device).
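Equations (6) and (7) above translate directly into code. The following Python sketch evaluates the ankle reaction force and moment from the quantities defined with equations (4) and (5); the argument names mirror those symbols, and NumPy is used for the vector cross products. It is a minimal illustration, not the system's actual implementation.

```python
import numpy as np

def ankle_forces_moments(m_f, a_gf, F_gr, J_f, omega_f, omega_dot_f,
                         T, r_ga, r_gp):
    """Evaluate equations (6) and (7) for the ankle joint.

    m_f: mass of the foot (scalar); J_f: 3x3 rotational inertia of the
    foot; all other arguments are 3-component numpy arrays corresponding
    to the symbols defined with equations (4) and (5) above.
    """
    # Equation (6): ankle reaction force.
    F_a = m_f * a_gf - F_gr
    # Equation (7): ankle reaction moment.
    M_a = (J_f @ omega_dot_f
           + np.cross(omega_f, J_f @ omega_f)
           - T
           - np.cross(r_ga, F_a)
           - np.cross(r_gp, F_gr))
    return F_a, M_a
```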
In other embodiments, it is possible to attach the inertial measurement units306to the body portions of the subject108using other suitable attachment means. For example, the inertial measurement units306may be attached to a surface (e.g., the skin or a clothing item of the subject108) using adhesive backing means. The adhesive backing means may comprise a removable backing member that is removed just prior to the inertial measurement unit306being attached to a subject108or object. Also, in some embodiments, the adhesive backing means may comprise a form of double-sided bonding tape that is capable of securely attaching the inertial measurement unit306to the subject108or another object.

In one or more embodiments, each inertial measurement unit306may comprise a triaxial (three-axis) accelerometer sensing linear acceleration $\vec{a}'$, a triaxial (three-axis) rate gyroscope sensing angular velocity $\vec{\omega}'$, a triaxial (three-axis) magnetometer sensing the magnetic north vector $\vec{n}'$, and a central control unit or microprocessor operatively coupled to each of the accelerometer, the gyroscope, and the magnetometer. In addition, each inertial measurement unit306may comprise a wireless data interface for electrically coupling the inertial measurement unit306to the data acquisition/data processing device104.

In one or more embodiments, the motion of the subject108may be detected by the plurality of inertial measurement units306while one or more images are displayed on the hemispherical projection screen168of the subject visual display device107. The one or more images that are displayed on the screen168may comprise one or more simulated tasks, interactive games, training exercises, or balance tests. The data acquisition/data processing device104is specially programmed to quantify the performance of a subject108during the execution of the one or more simulated tasks, interactive games, training exercises, or balance tests by analyzing the motion of the one or more body gestures of the subject108detected by the plurality of inertial measurement units306. For example, as described above with regard to the motion detection system300, the inertial measurement units306of the motion detection system300′ may be used to determine positional data (i.e., three-dimensional coordinates) for one or more body gestures of the subject108during the performance of a simulated task of daily living. In order to simulate a task of daily living, one or more virtual reality scenes can be displayed on the subject visual display device107. One such exemplary virtual reality scene is illustrated inFIG.50. As illustrated in the screen image244′ ofFIG.50, the immersive virtual reality environment246′ simulating the task of daily living could comprise a scenario wherein a subject204′ is pointing and/or grabbing towards an object248′ (e.g., a cereal box) that he is about ready to grasp from a kitchen cabinet250′. While the subject204′ is performing this simulated task, the data acquisition/data processing device104may quantify the performance of the subject204′ during the execution of the task (e.g., reaching for, and removing, the cereal box248′ from the kitchen cabinet250′) by analyzing the motion of the subject's right arm251, as measured by the motion detection system300′.
For example, by utilizing the positional data obtained using the motion detection system300′ (with inertial measurement units (IMUs)306), the data acquisition/data processing device104may compute the three-dimensional (3-D) position and orientation of the subject's right arm251in space. The computation of the 3-D position and orientation of the subject's right arm251is one exemplary means by which the data acquisition/data processing device104is able to quantify the performance of a subject during the execution of a task of daily living. Thus, advantageously, the motion detection system300′ enables a subject's movement to be quantified and analyzed during a task of daily living.

Next, an illustrative manner in which the data acquisition/data processing device104of the force measurement system100performs the inertial measurement unit (IMU) calculations will be explained in detail. In particular, this calculation procedure will describe the manner in which the orientation and position of one or more body portions (e.g., limbs) of a subject108,204′ could be determined using the signals from the plurality of inertial measurement units (IMUs)306of the motion detection system300′. As explained above, in one or more embodiments, each inertial measurement unit306includes the following three triaxial sensor devices: (i) a three-axis accelerometer sensing linear acceleration $\vec{a}'$, (ii) a three-axis rate gyroscope sensing angular velocity $\vec{\omega}'$, and (iii) a three-axis magnetometer sensing the magnetic north vector $\vec{n}'$. Each inertial measurement unit306senses in the local (primed) frame of reference attached to the IMU itself. Because each of the sensor devices in each IMU is triaxial, the vectors $\vec{a}'$, $\vec{\omega}'$, $\vec{n}'$ are each 3-component vectors. A prime symbol is used in conjunction with each of these vectors to symbolize that the measurements are taken in accordance with the local reference frame. The unprimed vectors that will be described hereinafter are in the global reference frame.

The objective of these calculations is to find the orientation $\vec{\theta}(t)$ and position $\vec{R}(t)$ in the global, unprimed, inertial frame of reference. Initially, the calculation procedure begins with a known initial orientation $\vec{\theta}_0$ and position $\vec{R}_0$ in the global frame of reference. For the purposes of the calculation procedure, a right-handed coordinate system is assumed for both global and local frames of reference. The global frame of reference is attached to the Earth. The acceleration due to gravity is assumed to be a constant vector $\vec{g}$. Also, for the purposes of the calculations presented herein, it is presumed that the sensor devices of the inertial measurement units (IMUs) provide calibrated data. In addition, all of the signals from the IMUs are treated as continuous functions of time, although it is to be understood that the general form of the equations described herein may be readily discretized to account for IMU sensor devices that take discrete time samples from a bandwidth-limited continuous signal.
The orientation $\vec{\theta}(t)$ is obtained by single integration of the angular velocity as follows:

$$\vec{\theta}(t) = \vec{\theta}_0 + \int_0^t \vec{\omega}(t)\,dt \tag{8}$$

$$\vec{\theta}(t) = \vec{\theta}_0 + \int_0^t \vec{\Theta}(t)\,\vec{\omega}'(t)\,dt \tag{9}$$

where $\vec{\Theta}(t)$ is the matrix of the rotation transformation that rotates the instantaneous local frame of reference into the global frame of reference.

The position is obtained by double integration of the linear acceleration in the global reference frame. The triaxial accelerometer of each IMU senses the acceleration $\vec{a}'$ in the local reference frame. The acceleration $\vec{a}'$ has the following contributors: (i) the acceleration due to translational motion, (ii) the acceleration of gravity, and (iii) the centrifugal, Coriolis, and Euler accelerations due to rotational motion. All contributors but the first have to be removed as a part of the change of reference frames. The centrifugal and Euler accelerations are zero when the acceleration measurements are taken at the origin of the local reference frame. The first integration gives the linear velocity as follows:

$$\vec{v}(t) = \vec{v}_0 + \int_0^t \left\{\vec{a}(t) - \vec{g}\right\} dt \tag{10}$$

$$\vec{v}(t) = \vec{v}_0 + \int_0^t \left\{\vec{\Theta}(t)\left[\vec{a}'(t) + 2\vec{\omega}' \times \vec{v}'(t)\right] - \vec{g}\right\} dt \tag{11}$$

where $2\vec{\omega}' \times \vec{v}'(t)$ is the Coriolis term, and where the local linear velocity is given by the following equation:

$$\vec{v}'(t) = \vec{\Theta}^{-1}(t)\,\vec{v}(t) \tag{12}$$

The initial velocity $\vec{v}_0$ can be taken to be zero if the motion is being measured for short periods of time in relation to the duration of the Earth's rotation. The second integration gives the position as follows:

$$\vec{R}(t) = \vec{R}_0 + \int_0^t \vec{v}(t)\,dt \tag{13}$$

At the initial position, the IMU's local-to-global rotation matrix has an initial value $\vec{\Theta}(0) \equiv \vec{\Theta}_0$. This value can be derived by knowing the local and global values of both the magnetic north vector and the acceleration of gravity. Those two vectors are usually non-parallel. This is the requirement for $\vec{\Theta}_0(\vec{g}', \vec{n}', \vec{g}, \vec{n})$ to be unique. The knowledge of either of those vectors in isolation gives a family of non-unique solutions $\vec{\Theta}_0(\vec{g}', \vec{g})$ or $\vec{\Theta}_0(\vec{n}', \vec{n})$ that are unconstrained in one component of rotation. The computation of $\vec{\Theta}_0(\vec{g}', \vec{n}', \vec{g}, \vec{n})$ has many implementations, with a common one being the Kabsch algorithm. As such, using the calculation procedure described above, the data acquisition/data processing device104of the force measurement system100may determine the orientation $\vec{\theta}(t)$ and position $\vec{R}(t)$ of one or more body portions of the subject108,204′.
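As a concrete illustration of how equations (9), (11), and (13) might be discretized for IMU sensor devices that take discrete time samples, consider the following Python/NumPy sketch. The function and variable names are illustrative assumptions, and a simple incremental (Rodrigues) update is used for the local-to-global rotation matrix; a production implementation would typically use quaternions and sensor fusion to limit integration drift.

```python
import numpy as np

def rotation_from_axis_angle(phi):
    """Rodrigues' formula: rotation matrix for the rotation vector phi."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3)
    k = phi / angle
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def integrate_imu(theta0, R0, v0, Theta0, omega_local, a_local, g, dt):
    """Discretized sketch of equations (9), (11), and (13).

    omega_local and a_local are (N, 3) arrays of calibrated gyroscope and
    accelerometer samples in the local (primed) frame, taken at interval dt.
    """
    theta, R, v, Theta = theta0.copy(), R0.copy(), v0.copy(), Theta0.copy()
    for w_loc, a_loc in zip(omega_local, a_local):
        w_glob = Theta @ w_loc                # rotate angular velocity into the global frame
        theta = theta + w_glob * dt           # equation (9): single integration
        Theta = rotation_from_axis_angle(w_glob * dt) @ Theta  # update local-to-global rotation
        v_loc = Theta.T @ v                   # equation (12): local linear velocity
        a_glob = Theta @ (a_loc + 2.0 * np.cross(w_loc, v_loc)) - g  # equation (11) integrand
        v = v + a_glob * dt                   # equation (11): first integration
        R = R + v * dt                        # equation (13): second integration
    return theta, R
```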
For example, the orientation of a limb of the subject108,204′ (e.g., the right arm251of the subject204′ inFIG.50) may be determined by computing the orientation $\vec{\theta}(t)$ and position $\vec{R}(t)$ of two points on the limb of the subject108,204′ (i.e., at the respective locations of two inertial measurement units (IMUs)306disposed on the limb of the subject108,204′).

Referring again toFIG.50, it can be seen that the subject204′ is also provided with an eye movement tracking device312that is configured to track the eye movement and/or eye position of the subject204′ (i.e., the eye movement, the eye position, or the eye movement and the eye position of the subject204′) while he performs the one or more simulated tasks, interactive games, training exercises, or balance tests. The eye movement tracking device312may be utilized in conjunction with the motion detection system300′. For example, in the virtual reality environment246′ ofFIG.50, the eye movement tracking device312may be used to determine the eye movement and/or eye position of the subject204′ while he performs the one or more simulated tasks, interactive games, training exercises, or balance tests. The eye movement tracking device312may be in the form of the eye movement tracking devices described in U.S. Pat. Nos. 6,113,237 and 6,152,564, the entire disclosures of which are incorporated herein by reference. The eye movement tracking device312is configured to output one or more first signals that are representative of the detected eye movement and/or eye position of the subject204′ (e.g., the saccadic eye movement of the subject). As explained above, the eye movement tracking device312may be operatively connected to the input/output (I/O) module of the programmable logic controller172, which, in turn, is operatively connected to the data acquisition/data processing device104.

As will be described in more detail hereinafter, referring again toFIG.50, a head position detection device (i.e., an inertial measurement unit306) may also be provided on the head of the subject204′ so that a head position of the subject204′ is capable of being determined together with the eye movement and/or eye position of the subject204′ determined using the eye movement tracking device312. The head position detection device306is configured to output one or more second signals that are representative of the detected position of the head of the subject204′. As such, using the one or more first output signals from the eye movement tracking device312and the one or more second output signals from the head position detection device306, the data acquisition/data processing device104may be specially programmed to determine one or more gaze directions of the subject204′ (as diagrammatically indicated by dashed line342inFIG.50) during the performance of the one or more simulated tasks, interactive games, training exercises, or balance tests. In addition, as will be described in further detail hereinafter, the data acquisition/data processing device104may be further configured to compare the one or more gaze directions of the subject204′ to the position of one or more objects (e.g., cereal box248′) in the one or more scene images of the at least one visual display device so as to determine whether or not the eyes of the subject204′ are properly directed at the object248′ (e.g., cereal box) that is about to be grasped by the subject204′.
In one or more embodiments, the data acquisition/data processing device104determines the one or more gaze directions of the subject204′ as a function of the eye angular position (θE) of the subject204′ determined by the eye movement tracking device312and the angular position of the subject's head (θH) determined by the head position detection device306. More particularly, in one or more embodiments, the data acquisition/data processing device104is specially programmed to determine the one or more gaze directions of the subject204′ by computing the algebraic sum of the eye angular position (θE) of the subject204′ (as determined by the eye movement tracking device312) and the angular position (θH) of the subject's head (as determined by the head position detection device306). In addition, the data acquisition/data processing device104may be specially programmed to determine a position of one or more objects (e.g., the cereal box248′ inFIG.50) in the one or more scene images244′ of the visual display device. Once the position of the one or more objects on the screen of the visual display device is determined, the data acquisition/data processing device104may be further specially programmed to compare the one or more gaze directions of the subject204′ (as detected from the output of the eye movement tracking device312and the head position detection device306) to the position of the one or more objects on the screen of the visual display device, e.g., by using a ray casting technique to project the imaginary sight line342inFIG.50towards one or more objects (e.g., the cereal box248′) in a virtual world. That is, one or more objects (e.g., the cereal box248′) displayed on the visual display device may be mapped into the virtual environment so that an intersection or collision between the projected sight line342and the one or more objects may be determined.

Alternatively, or in addition to comparing the one or more gaze directions of the subject to the position of one or more objects on the screen, the data acquisition/data processing device104may be specially programmed to compute a time delay between a movement of the one or more objects (e.g., the cereal box248′) in the one or more scene images of the visual display device and a change in the gaze direction of the subject204′. For example, the data acquisition/data processing device104may be specially programmed to move or displace the object across the screen, then subsequently determine how much time elapses (e.g., in seconds) before the subject changes his or her gaze direction in response to the movement of the object. In an exemplary scenario, a clinician may instruct a patient to continually direct his or her eyes at a particular object on the screen. When the object is displaced on the screen, the time delay (or reaction time of the subject) would be a measure of how long it takes the subject to change the direction of his or her eyes in response to the movement of the object on the screen (i.e., so that the subject is still staring at that particular object).
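For illustration, the algebraic-sum gaze computation and the ray casting comparison described above might be sketched as follows in Python/NumPy, here reduced to a single plane of rotation for clarity; the names and the spherical bound placed around the on-screen object are illustrative assumptions, not part of the described system.

```python
import numpy as np

def gaze_hits_object(theta_eye, theta_head, eye_origin, object_center, object_radius):
    """Planar sketch: the gaze angle is the algebraic sum of the eye-in-head
    angle and the head angle (both in radians); the projected sight line is
    then tested for intersection with a sphere bounding the virtual object."""
    theta_gaze = theta_eye + theta_head            # algebraic sum of the two angular positions
    direction = np.array([np.cos(theta_gaze), np.sin(theta_gaze), 0.0])
    oc = object_center - eye_origin
    t = max(oc @ direction, 0.0)                   # closest approach along the cast ray
    closest = eye_origin + t * direction
    return np.linalg.norm(object_center - closest) <= object_radius
```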
When utilizing the eye movement tracking device312, the data acquisition/data processing device104may be specially programmed to assess a performance of the subject204′ while performing one or more simulated tasks, interactive games, training exercises, or balance tests using the comparison of the one or more gaze directions of the subject204′ to the position of one or more objects or the computed time delay of the subject204′. For example, if the comparison of the one or more gaze directions of the subject204′ to the position of one or more objects reveals that there is a large distance (e.g., 10 inches or more) between the projected sight line342of the subject's gaze direction and the position determined for the one or more objects on the screen of the visual display device, then the data acquisition/data processing device104may determine that the performance of the subject204′ during the one or more simulated tasks, interactive games, training exercises, or balance tests is below a baseline normative value (i.e., below average performance). Conversely, if the comparison of the one or more gaze directions of the subject204′ to the position of one or more objects reveals that there is a small distance (e.g., 3 inches or less) between the projected sight line342of the subject's gaze direction and the position determined for the one or more objects on the screen of the visual display device, then the data acquisition/data processing device104may determine that the performance of the subject204′ during the one or more simulated tasks, interactive games, training exercises, or balance tests is above a baseline normative value (i.e., above average performance). Similarly, if the computed time delay between a movement of the one or more objects on the screen of the visual display device and a change in the subject's gaze direction is large (e.g., a large time delay of 1 second or more), then the data acquisition/data processing device104may determine that the performance of the subject204′ during the one or more simulated tasks, interactive games, training exercises, or balance tests is below a baseline normative value (i.e., below average performance). Conversely, if the computed time delay between a movement of the one or more objects on the screen of the visual display device and a change in the subject's gaze direction is small (e.g., a small time delay of 0.25 seconds or less), then the data acquisition/data processing device104may determine that the performance of the subject204′ during the one or more simulated tasks, interactive games, training exercises, or balance tests is above a baseline normative value (i.e., above average performance).

Also, as illustrated inFIG.50, the subject204′ is provided with a scene camera314mounted to the eye movement tracking device312so that one or more video images of an environment surrounding the subject may be captured, as the one or more gaze directions of the subject204′ are determined using the output of the eye movement tracking device312and head position detection device306. That is, the scene camera314is a forward-facing head-mounted camera that records the environment surrounding the subject204′ in relation to the video it captures. Similar to the eye movement tracking device312, the scene camera314may be operatively connected to the input/output (I/O) module of the programmable logic controller172, which, in turn, is operatively connected to the data acquisition/data processing device104.
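The threshold comparisons described above lend themselves to a simple decision routine. The following Python sketch uses the example distance and time-delay values from the text; the function name and return format are illustrative assumptions.

```python
def assess_gaze_performance(gaze_distance_in=None, time_delay_s=None):
    """Apply the example thresholds from the text: gaze-to-object distance in
    inches and reaction time delay in seconds; intermediate values are left
    unclassified relative to the baseline normative value."""
    findings = []
    if gaze_distance_in is not None:
        if gaze_distance_in >= 10.0:
            findings.append("below baseline: gaze far from object")
        elif gaze_distance_in <= 3.0:
            findings.append("above baseline: gaze close to object")
    if time_delay_s is not None:
        if time_delay_s >= 1.0:
            findings.append("below baseline: slow reaction")
        elif time_delay_s <= 0.25:
            findings.append("above baseline: fast reaction")
    return findings
```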
As such, using the one or more output signals from the scene camera314, the data acquisition/data processing device104may be specially programmed to utilize the one or more video images captured by the scene camera in a virtual reality environment246′ displayed on the visual display device107during the performance of the one or more simulated tasks, interactive games, training exercises, or balance tests. For example, in one or more embodiments, the scene camera314may be used to add to the imagery in a virtual reality environment in which the subject204′ is immersed. In one or more alternative embodiments, the scene camera314may be used to capture the gaze position of the subject204′ while he or she interacts in a virtual reality environment (e.g., the virtual reality environment246′ inFIG.50).

Turning once again toFIG.50, it can be seen that the head position detection device (i.e., the inertial measurement unit306) is disposed on the head of the subject204′ so that a head position and orientation of the subject204′ are capable of being determined together with the eye movement and/or eye position of the subject204′ determined using the eye movement tracking device312, and the one or more video images of an environment surrounding the subject204′ determined using the scene camera314. As such, using the one or more output signals from the head-mounted inertial measurement unit306, the data acquisition/data processing device104may be specially programmed to calculate the head position and orientation of the subject204′ during the performance of the one or more simulated tasks, interactive games, training exercises, or balance tests (i.e., by using the calculation procedure described above for the IMUs306). For example, in the virtual reality environment246′ ofFIG.50, the head-mounted inertial measurement unit306may be used to determine whether or not the head of the subject204′ is properly pointing toward the object248′ (e.g., cereal box) that is about to be grasped by the subject204′.

In addition, as shown inFIG.50, the subject204′ may also be provided with an instrumented motion capture glove316on his right hand in order to detect one or more finger motions of the subject while the subject performs the one or more simulated tasks, interactive games, training exercises, or balance tests. The instrumented motion capture glove316may comprise a plurality of different sensor devices, which may include a plurality of finger flexion or bend sensors on each finger, a plurality of abduction sensors, one or more palm-arch sensors, one or more sensors measuring thumb crossover, one or more wrist flexion sensors, and one or more wrist abduction sensors. The sensor devices of the instrumented motion capture glove316may be attached to an elastic material that fits over the hand of the subject204′, which permits the subject204′ to manipulate his hand without any substantial decrease in mobility due to the instrumented glove316. The instrumented motion capture glove316outputs a plurality of signals that are representative of the detected finger movement of the subject204′. The instrumented motion capture glove316may be operatively connected to the data acquisition/data processing device104of the force measurement system100.
The data acquisition/data processing device104may be specially programmed to determine the finger positions and orientations of the subject during the performance of the one or more simulated tasks, interactive games, training exercises, or balance tests using the plurality of signals outputted by the instrumented motion capture glove316(e.g., by executing calculations similar to those described above for the IMUs). In the illustrated embodiment, the instrumented motion capture glove316is operatively connected to the data acquisition/data processing device104by wireless means, such as Bluetooth, or another suitable type of personal area network wireless means. In a further embodiment, a measurement and analysis system is provided that generally includes a visual display device together with an eye movement tracking device312and a data acquisition/data processing device104. The eye movement tracking device312functions in the same manner described above with regard to the preceding embodiments. In addition, the measurement and analysis system may also comprise the scene camera314and the head-mounted inertial measurement unit306explained above. In yet a further embodiment, a measurement and analysis system is provided that generally includes a visual display device having an output screen, the visual display device configured to display one or more scene images on the output screen so that the images are viewable by a subject; an object position detection system, the object position detection system configured to detect a position of a body portion of a subject108,204′ and output one or more first signals representative of the detected position; and a data acquisition/data processing device104operatively coupled to the visual display device and the object position detection system. In this further embodiment, the object position detection system may comprise one or more of the following: (i) one or more inertial measurement units306(e.g., seeFIGS.49and50), (ii) a touchscreen interface of the visual display device130(e.g., seeFIG.1), (iii) one or more infrared (IR) sensing devices322(e.g., seeFIG.50), and (iv) one or more cameras (e.g., seeFIGS.32and33). In this further embodiment, the data acquisition/data processing device104is specially programmed to receive the one or more first signals that are representative of the position of the body portion (e.g., an arm) of the subject108,204′ and to compute the position, orientation, or both the position and orientation, of the body portion (e.g., the arm) of the subject108,204′ using the one or more first signals. For example, the data acquisition/data processing device104may be specially programmed to determine the position and orientation of an arm251of the subject108,204′ using the output signals from a plurality of inertial measurement units306attached along the length of the subject's arm (e.g., as illustrated inFIG.50). As explained above, the positional coordinates of the subject's arm may be initially determined relative to a local coordinate system, and then subsequently transformed to a global coordinate system. In addition, the data acquisition/data processing device104may be specially programmed to determine a position of one or more objects (e.g., the cereal box248′ inFIG.50) in the one or more scene images244′ of the visual display device. 
For example, the pixel coordinates (x pixels by y pixels) defining the position of the object (e.g., the cereal box248′) on the screen may be transformed into dimensional coordinates (e.g., x inches by y inches) using the physical size of the screen (e.g., 40 inches by 30 inches). As such, the position of the object (e.g., the cereal box248′) on the screen may be defined in terms of a global coordinate system having an origin at the center of the visual display device. Also, the positional coordinates of the subject's arm may be transformed such that they are also defined in accordance with the same global coordinate system having its origin at the center of the visual display device. Once the position and/or orientation of the body portion (e.g., the arm) of the subject108,204′ and the position of the one or more objects on the screen of the visual display device are defined relative to the same coordinate system, the data acquisition/data processing device104may be further specially programmed to compute a difference value between the computed position and/or orientation of the body portion (e.g., the arm) of the subject and the position determined for the one or more objects on the screen of the visual display device. For example, the data acquisition/data processing device104may compute a distance value between the coordinates of the arm of the subject and the coordinates of the one or more objects on the screen in order to assess how close the subject's arm is to the intended object (e.g., the cereal box248′) on the screen (e.g., to determine if he or she is pointing at, or reaching for, the correct object on the screen).

Alternatively, or in addition to computing the difference value, the data acquisition/data processing device104may be specially programmed to compute a time delay between a movement of the one or more objects on the screen of the visual display device and a movement of the body portion (e.g., an arm) of the subject. For example, the data acquisition/data processing device104may be specially programmed to move or displace the object across the screen, then subsequently determine how much time elapses (e.g., in seconds) before the subject moves his or her arm in response to the movement of the object. In an exemplary scenario, a clinician may instruct a patient to continually point to a particular object on the screen. When the object is displaced on the screen, the time delay (or reaction time of the subject) would be a measure of how long it takes the subject to move his or her arm in response to the movement of the object on the screen (i.e., so that the subject is still pointing at that particular object).

In this further embodiment, the data acquisition/data processing device104may be specially programmed to utilize a ray casting technique in order to project an imaginary arm vector of the subject204′, the orientation and position of which may be determined using the output signal(s) from one or more inertial measurement units306on the arm of the subject204′ (seeFIG.50), towards one or more objects (e.g., the cereal box248′) in a virtual world. That is, one or more objects (e.g., the cereal box248′) displayed on the visual display device may be mapped into the virtual environment so that an intersection or collision between the projected arm vector and the one or more objects may be determined.
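As an illustration of the coordinate transformation described above, the following Python/NumPy sketch maps pixel coordinates to a screen-centered frame in inches and computes the arm-to-object distance; the names and the example screen resolution are illustrative assumptions.

```python
import numpy as np

def pixels_to_screen_coords(px, py, screen_w_px, screen_h_px, screen_w_in, screen_h_in):
    """Map pixel coordinates to inches in a global frame whose origin is at the
    center of the display; pixel rows grow downward, so the y-axis is flipped."""
    x_in = (px / screen_w_px - 0.5) * screen_w_in
    y_in = (0.5 - py / screen_h_px) * screen_h_in
    return np.array([x_in, y_in])

# Example: on a 40 inch by 30 inch screen rendered at 1280 x 960 pixels, the
# pixel (640, 480) maps to the origin (0, 0) at the center of the display.
object_xy = pixels_to_screen_coords(640, 480, 1280, 960, 40.0, 30.0)
arm_xy = np.array([2.0, -1.5])   # arm position expressed in the same screen-centered frame
distance = np.linalg.norm(arm_xy - object_xy)   # difference value between arm and object
```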
As such, in one exemplary scenario, the data acquisition/data processing device104is capable of determining whether or not a subject204′ is correctly pointing his or her arm in the direction of a particular object (e.g., cereal box248′) on the screen of the visual display device. Also, in this further embodiment, the data acquisition/data processing device104may be specially programmed to assess a performance of the subject204′ while performing one or more simulated tasks, interactive games, training exercises, or balance tests using the computed difference value or the computed time delay of the subject204′. For example, if the computed difference value between the calculated position and/or orientation of the body portion (e.g., the arm) of the subject and the position determined for the one or more objects on the screen of the visual display device is large (e.g., a large distance of 20 inches or more), then the data acquisition/data processing device104may determine that the performance of the subject204′ during the one or more simulated tasks, interactive games, training exercises, or balance tests is below a baseline normative value (i.e., below average performance). Conversely, if the computed difference value between the calculated position and/or orientation of the body portion (e.g., the arm) of the subject and the position determined for the one or more objects on the screen of the visual display device is small (e.g., a small distance of 3 inches or less), then the data acquisition/data processing device104may determine that the performance of the subject204′ during the one or more simulated tasks, interactive games, training exercises, or balance tests is above a baseline normative value (i.e., above average performance). Similarly, if the computed time delay between a movement of the one or more objects on the screen of the visual display device and a movement of the body portion (e.g., an arm) of the subject is large (e.g., a large time delay of 1 second or more), then the data acquisition/data processing device104may determine that the performance of the subject204′ during the one or more simulated tasks, interactive games, training exercises, or balance tests is below a baseline normative value (i.e., below average performance). Conversely, if the computed time delay between a movement of the one or more objects on the screen of the visual display device and a movement of the body portion (e.g., an arm) of the subject is small (e.g., a small time delay of 0.2 seconds or less), then the data acquisition/data processing device104may determine that the performance of the subject204′ during the one or more simulated tasks, interactive games, training exercises, or balance tests is above a baseline normative value (i.e., above average performance).

Also, in one or more other embodiments, the measurement and analysis system described above may further comprise a force measurement assembly (e.g., force measurement assembly102) configured to receive a subject. In addition, in one or more other embodiments, the measurement and analysis system may additionally include the instrumented motion capture glove described in detail above.

In yet a further embodiment, a modified version of the force measurement system100′ may comprise a force measurement device600in the form of an instrumented treadmill. Like the force measurement assemblies102,102′ described above, the instrumented treadmill600is configured to receive a subject thereon.
Referring toFIG.38, it can be seen that the subject visual display device107′ is similar to that described above, except that the screen168′ of the subject visual display device107′ is substantially larger than the screen168utilized in conjunction with the force measurement system100(e.g., the diameter of the screen168′ is approximately two (2) times larger than that of the screen168). In one exemplary embodiment, the projection screen168′ has a width WS′ lying in the range between approximately one-hundred and eighty (180) inches and approximately two-hundred and forty (240) inches (or between one-hundred and eighty (180) inches and two-hundred and forty (240) inches). Also, rather than being supported on a floor surface using the screen support structure167explained above, the larger hemispherical screen168′ ofFIG.38rests directly on the floor surface. In particular, the peripheral edge of the semi-circular cutout178′, which is located at the bottom of the screen168′, rests directly on the floor. The hemispherical screen168′ ofFIG.38circumscribes the instrumented treadmill600. Because the other details of the subject visual display device107′ and the data acquisition/data processing device104are the same as those described above with regard to the aforementioned embodiments, no further description of these components104,107′ will be provided for this embodiment.

As illustrated inFIG.38, the instrumented treadmill600is attached to the top of a motion base602. The treadmill600has a plurality of top surfaces (i.e., a left and right rotating belt604,606) that are each configured to receive a portion of a body of a subject (e.g., the left belt604of the instrumented treadmill600receives a left leg108aof a subject108, whereas the right belt606of the instrumented treadmill600receives a right leg108bof the subject108). In a preferred embodiment, a subject108walks or runs in an upright position atop the treadmill600with the feet of the subject contacting the top surfaces of the treadmill belts604,606. The belts604,606of the treadmill600are rotated by one or more electric actuator assemblies608, which generally comprise one or more electric motors. Similar to the force measurement assemblies102,102′ described above, the instrumented treadmill600is operatively connected to the data acquisition/data processing device104by an electrical cable. While it is not readily visible inFIG.38due to its location, the force measurement assembly610, like the force measurement assemblies102,102′, includes a plurality of force transducers (e.g., four (4) pylon-type force transducers) disposed below each rotating belt604,606of the treadmill600so that the loads being applied to the top surfaces of the belts604,606can be measured. Similar to that described above for the force measurement assembly102, the separated belts604,606of the instrumented treadmill600enable the forces and/or moments applied by the left and right legs108a,108bof the subject108to be independently determined. The arrows T1, T2, T3disposed adjacent to the motion base602inFIG.38schematically depict the displaceable nature (i.e., the translatable nature) of the instrumented treadmill600, which is effectuated by the motion base602, whereas the curved arrows R1, R2, R3inFIG.38schematically illustrate the ability of the instrumented treadmill600to be rotated about a plurality of different axes, the rotational movement of the instrumented treadmill600being generated by the motion base602.
The primary components of the motion base602are schematically depicted inFIGS.39A and39B. As depicted in these figures, the motion base602comprises a movable top surface612that is preferably displaceable (i.e., translatable, as represented by straight arrows) and rotatable (as illustrated by curved arrows R1, R2) in 3-dimensional space by means of a plurality of actuators614. In other words, the motion base602is preferably a six (6) degree-of-freedom motion base. The instrumented treadmill600is disposed on the movable top surface612. The motion base602is used for the dynamic testing of subjects when, for example, the subject is being tested, or is undergoing training, in a virtual reality environment. While the motion base602is preferably translatable and rotatable in 3-dimensional space, it is to be understood that the present invention is not so limited. Rather, motion bases602that are only capable of one-dimensional or two-dimensional motion could be provided without departing from the spirit and the scope of the claimed invention. Also, motion bases602that are only capable of either linear motion or rotational motion are encompassed by the present invention.

Another modified version of the force measurement system100″, which comprises a force measurement device600′ in the form of an instrumented treadmill, is illustrated inFIG.45. Similar to the instrumented treadmill inFIG.38, the instrumented treadmill ofFIG.45comprises left and right rotating belts604,606and a force measurement assembly610disposed underneath the treadmill belts604,606. The force measurement system100″ ofFIG.45is similar in many respects to the force measurement system100′ ofFIG.38, except that the projector arrangement is different from that of theFIG.38embodiment. In particular, in the embodiment ofFIG.45, two (2) projectors164″, each having a respective fisheye-type lens182, are used to project an image onto the generally hemispherical projection screen168″. As illustrated inFIG.45, each of the projectors164″ generally rests on the top surface of the floor, and has a fisheye-type lens182that is angled upward at an approximately 90 degree angle. Similar to that described above with regard toFIGS.30and31, the projectors164″ with the fisheye-type lenses182project intersecting light beams onto the generally hemispherical projection screen168″. Advantageously, the use of two projectors164″ with fisheye-type lenses182, rather than just a single projector164″ with a fisheye lens182, accommodates the larger diameter projection screen168″ that is utilized with the instrumented treadmill600′, and it also has the added benefit of removing shadows that are cast on the output screen168″ by the subject108disposed on the instrumented treadmill600′.

In still a further embodiment of the invention, the virtual reality environment described herein may include the projection of an avatar image onto the hemispherical projection screen168of the subject visual display device107. For example, as illustrated in the screen image266ofFIG.40, the immersive virtual reality environment268may comprise a scenario wherein an avatar270is shown walking along a bridge207. The avatar image270on the screen168represents and is manipulated by the subject108disposed on the force measurement assembly102or the instrumented treadmill600.
The animated movement of the avatar image270on the screen168is controlled based upon the positional information acquired by the motion acquisition/capture system300described above, as well as the force and/or moment data acquired from the force measurement assembly102or the instrumented treadmill600. In other words, an animated skeletal model of the subject108is generated by the data acquisition/data processing device104using the acquired data from the motion capture system300and the force measurement assembly102or the instrumented treadmill600. The data acquisition/data processing device104then uses the animated skeletal model of the subject108to control the movement of the avatar image270on the screen168.

The avatar image270illustrated in the exemplary virtual reality scenario ofFIG.40has a gait disorder. In particular, it can be seen that the left foot272of the avatar270is positioned in an abnormal manner, which is indicative of the subject108who is controlling the avatar270having a similar disorder. In order to bring this gait abnormality to the attention of the subject108and the clinician conducting the evaluation and/or training of the subject, the left foot272of the avatar270is shown in a different color on the screen168(e.g., the left foot turns "red" in the image in order to clearly indicate the gait abnormality). InFIG.40, because this is a black-and-white image, the different color (e.g., red) of the left foot272of the avatar270is indicated using a hatching pattern (i.e., the avatar's left foot272is denoted using crisscross type hatching). It is to be understood that, rather than changing the color of the left foot272, the gait abnormality may be indicated using other suitable means in the virtual reality environment268. For example, a circle could be drawn around the avatar's foot272to indicate a gait disorder. In addition, a dashed image of an avatar having normal gait could be displayed on the screen168together with the avatar270so that the subject108and the clinician could readily ascertain the irregularities present in the subject's gait, as compared to a virtual subject with normal gait.

InFIG.41, another virtual reality environment utilizing an avatar270′ is illustrated. This figure is similar in some respects toFIG.37described above, except that the avatar270′ is incorporated into the virtual reality scenario. As shown in the screen image244′ ofFIG.41, the immersive virtual reality environment246′ simulates a task of daily living comprising a scenario wherein the avatar270′, which is controlled by the subject108, is removing an object248(e.g., a cereal box) from a kitchen cabinet250. Similar to that described above in conjunction withFIG.40, the avatar image270′ on the screen168represents and is manipulated by the subject108disposed on the force measurement assembly102or the instrumented treadmill600. The animated movement of the avatar image270′ on the screen168is controlled based upon the positional information acquired by the motion acquisition/capture system300described above, as well as the force and/or moment data acquired from the force measurement assembly102or the instrumented treadmill600. In other words, the manner in which the avatar270′ removes the cereal box248from the kitchen cabinet250is controlled based upon the subject's detected motion. Similar to that explained above forFIG.40, a disorder in a particular subject's movement may be animated in the virtual reality environment246′ by making the avatar's left arm274turn a different color (e.g., red).
As such, any detected movement disorder is brought to the attention of the subject108and the clinician conducting the evaluation and/or training of the subject. In this virtual reality scenario246′, the cameras302of the motion acquisition/capture system300may also be used to detect the head movement of the subject108in order to determine whether or not the subject is looking in the right direction when he or she is removing the cereal box248from the kitchen cabinet250. That is, the cameras302may be used to track the direction of the subject's gaze. It is also to be understood that, in addition to the cameras302ofFIGS.32and33, a head-mounted camera on the subject108may be used to track the subject's gaze direction. The head-mounted camera could also be substituted for one or more of the cameras inFIGS.32and33.

In yet further embodiments of the invention incorporating the avatar270,270′ on the screen168, the data acquisition/data processing device104is specially programmed so as to enable a system user (e.g., a clinician) to selectively choose customizable biofeedback options in the virtual reality scenarios246′ and268. For example, the clinician may selectively choose whether or not the color of the avatar's foot or arm would be changed in the virtual reality scenarios246′,268so as to indicate a disorder in the subject's movement. As another example, the clinician may selectively choose whether or not the dashed image of an avatar having normal gait could be displayed on the screen168together with the avatar270,270′ so as to provide a means of comparison between a particular subject's gait and that of a "normal" subject. Advantageously, these customizable biofeedback options may be used by the clinician to readily ascertain the manner in which a particular subject deviates from normal movement(s), thereby permitting the clinician to focus the subject's training on the aspects of the subject's movement requiring the most correction.

In other further embodiments of the invention, the force measurement system100described herein is used for assessing the visual flow of a particular subject and, at least in some cases, the impact of a subject's visual flow on the vestibular system. In one or more exemplary embodiments, the assessment of visual flow is concerned with determining how well a subject's eyes are capable of tracking a moving object.

In still further embodiments, the force measurement system100described herein is used for balance sensory isolation, namely selectively isolating or eliminating one or more pathways of reference (i.e., proprioceptive, visual, and vestibular). As such, it is possible to isolate the particular deficiencies of a subject. For example, the elderly tend to rely too heavily upon visual feedback in maintaining their balance. Advantageously, tests performed using the force measurement system100described herein could reveal an elderly person's heavy reliance upon his or her visual inputs.

In yet further embodiments, the virtual reality scenarios described above may include reaction time training and hand/eye coordination training (e.g., catching a thrown ball, looking and reaching for an object, etc.). In order to effectively carry out the reaction time training routines and the hand/eye coordination training routines, the system100could be provided with the motion capture system300described above, as well as an eye movement tracking system for tracking the eye movement (gaze) of the subject or patient.
In yet further embodiments, the data acquisition/data processing device104of the force measurement system100is programmed to determine a presence of a measurement error resulting from a portion of the load from the at least one portion of the body of the subject108being applied to an external object rather than the intended top surfaces114,116of the force measurement assembly102. As illustrated in the graphs ofFIGS.56-58, the data acquisition/data processing device104may be configured to determine the presence of the measurement error by computing a maximum drop in the vertical component (FZ) of the output force for a predetermined duration of time. Also, as illustrated in the graphs ofFIGS.56-58, the data acquisition/data processing device104may be configured to determine the presence of the measurement error by computing an average drop in the vertical component (FZ) of the output force for a predetermined duration of time. For example, a vertical force curve (i.e., an FZcurve) generated for a test trial where the subject108is pulling on the harness352while standing still is illustrated inFIG.56. As shown in this figure, the y-axis362of the graph360is the vertical component (FZ) of the output force in Newtons (N), while the x-axis364of the graph360is the time in seconds (sec). In the graph360ofFIG.56, it can be seen that the vertical force curve366has a minimum point at368. As another example, a vertical force curve (i.e., an FZcurve) generated for a test trial where the subject108steps off the force measurement assembly102with one foot, and places his or her foot back onto the force measurement assembly102, is illustrated inFIG.57. As shown in this figure, the y-axis372of the graph370is the vertical component (FZ) of the output force in Newtons (N), while the x-axis374of the graph370is the time in seconds (sec). In the graph370ofFIG.57, it can be seen that the vertical force curve376has a minimum point at378. As yet another example, a vertical force curve (i.e., an FZcurve) generated for a test trial where the subject108steps off the force measurement assembly102with both feet, but does not return to the force measurement assembly102, is illustrated inFIG.58. As shown in this figure, the y-axis382of the graph380is the vertical component (FZ) of the output force in Newtons (N), while the x-axis384of the graph380is the time in seconds (sec). In the graph380ofFIG.58, it can be seen that the vertical force curve386has a minimum point and endpoint at388where the subject108steps off the force measurement assembly102with both feet. In these further embodiments, the force measurement system100may further comprise an external force sensor (i.e., a load transducer) configured to measure a force exerted on an external object by the subject. The external force sensor (i.e., a load transducer) is operatively coupled to the data acquisition/data processing device104of the force measurement system100so that the load data acquired by the external force sensor (i.e., a load transducer) may be transmitted to the data acquisition/data processing device104. When the external force sensor is used to measure the force exerted on the external object by the subject, the data acquisition/data processing device104may be configured to determine the presence of the measurement error by determining whether the force measured by the external force sensor is greater than a predetermined threshold value (e.g., greater than 10 Newtons). 
In the illustrative embodiment, with reference toFIG.54, the external object on which the subject108is exerting the force may comprise a safety harness352worn by the subject108to prevent the subject from falling, and the safety harness352may be provided with the external force sensor350(seeFIGS.54and55). As shown inFIGS.54and55, the external force sensor350may be connected between the harness support structure128′ and the safety harness352. More particularly, in the illustrative embodiment, the top of the external force sensor350is connected to the harness support structure128′ by the upper harness connectors354, and the bottom of the external force sensor350is connected to the harness ropes358by the lower harness connectors356. The safety harness352is suspended from the lower harness connectors356by the harness ropes358. Also, in the illustrative embodiment, with reference now toFIG.42, another external object on which the subject108is exerting the force may comprise stationary portions122a,122bof the base assembly106of the force measurement system100, and the stationary portions122a,122bof the base assembly106may be provided with respective external force sensors390,392for measuring the forces exerted thereon by the subject108.

In the illustrative embodiment, the data acquisition/data processing device104is further configured to classify the type of action by the subject108that results in the subject108exerting the force on the external object (e.g., on the harness352or the stationary portions122a,122bof the base assembly106). For example, in the illustrative embodiment, the type of action by the subject108that results in the subject108exerting the force on the external object is selected from the group consisting of: (i) pulling on a safety harness352, (ii) stepping at least partially off the top surfaces114,116of the force measurement assembly102, and (iii) combinations thereof.

In these further embodiments, the data acquisition/data processing device104is further configured to generate an error notification on the output screen of the operator visual display device130when the data acquisition/data processing device104determines the presence of the measurement error. In addition, the data acquisition/data processing device104may be further configured to classify the type of action by the subject108that results in the portion of the load being applied to the external object (e.g., the harness352or the stationary portion122a,122bof the base assembly106). The error notification generated on the output screen of the operator visual display device130by the data acquisition/data processing device104may include the classification of the type of action by the subject108that results in the portion of the load being applied to the external object.

In these further embodiments, the data acquisition/data processing device104may compute the normalized maximum drop in the vertical component (FZdropMAX) of the output force during a test trial by using the following equation:

$$FZdrop_{MAX} = \frac{StartMean\,F_Z - Min\,F_Z}{StartMean\,F_Z} \tag{14}$$

where:
StartMean $F_Z$: mean value of the vertical force ($F_Z$) for the first 500 milliseconds of the trial; and
Min $F_Z$: minimum value of the vertical force ($F_Z$) for the trial.

For example, for the vertical force curve366depicted inFIG.56, the FZdropMAX value is approximately 27.8% for the trial in which the subject108is pulling on the harness352while standing still.
As another example, for the vertical force curve376depicted inFIG.57, the FZdropMAX value is approximately 78.3% for the trial in which the subject108steps off the force measurement assembly102with one foot, and then places his or her foot back onto the force measurement assembly102. As yet another example, for the vertical force curve386depicted inFIG.58, the FZdropMAX value is approximately 95.0% for the trial in which the subject108steps off the force measurement assembly102with both feet, and does not return to the force measurement assembly102.

In these further embodiments, the data acquisition/data processing device104may compute the normalized average drop in the vertical component (FZdropAVG) of the output force during a test trial by using the following equation:

$$FZdrop_{AVG} = \frac{StartMean\,F_Z - Mean\,F_Z}{StartMean\,F_Z} \tag{15}$$

where:
StartMean $F_Z$: mean value of the vertical force ($F_Z$) for the first 500 milliseconds of the trial; and
Mean $F_Z$: mean value of the vertical force ($F_Z$) from 501 milliseconds to the end of the trial.

For example, for the vertical force curve366depicted inFIG.56, the FZdropAVG value is approximately 2.7% for the trial in which the subject108is pulling on the harness352while standing still. As another example, for the vertical force curve376depicted inFIG.57, the FZdropAVG value is approximately 3.8% for the trial in which the subject108steps off the force measurement assembly102with one foot, and then places his or her foot back onto the force measurement assembly102. As yet another example, for the vertical force curve386depicted inFIG.58, the FZdropAVG value is approximately 4.7% for the trial in which the subject108steps off the force measurement assembly102with both feet, and does not return to the force measurement assembly102.

In these further embodiments, the data acquisition/data processing device104may be configured to generate an error notification on the output screen of the operator visual display device130based upon comparing the FZdropMAX and FZdropAVG values computed for a particular trial to predetermined threshold values. For example, the data acquisition/data processing device104may be configured to determine if the FZdropMAX value computed for a particular trial is greater than 0.50 and if the FZdropAVG value computed for a particular trial is greater than 0.02. When the data acquisition/data processing device104determines that the FZdropMAX value is greater than 0.50 and the FZdropAVG value is greater than 0.02 (i.e., both of these two conditions are true), the error notification outputted by the data acquisition/data processing device104on the operator visual display device130may indicate that the subject has likely fallen during the trial (e.g., by outputting a message on the screen, such as "Subject has most likely fallen during trial, it is suggested that trial be repeated."). However, if the data acquisition/data processing device104determines that the FZdropAVG value computed for a particular trial is greater than 0.01, but at least one of the preceding two conditions is not true (i.e., the FZdropMAX value is not greater than 0.50 and/or the FZdropAVG value is not greater than 0.02), then the error notification outputted by the data acquisition/data processing device104on the operator visual display device130may indicate that the subject has likely pulled on the harness352(e.g., by outputting a message on the screen, such as "Subject has most likely pulled on harness, it is suggested that trial be repeated.").
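For illustration, equations (14) and (15), together with the notification thresholds described above (and the no-notification case addressed in the next paragraph), might be computed from a sampled vertical force record as in the following Python/NumPy sketch; the function names and the sample-rate parameter are illustrative assumptions.

```python
import numpy as np

def fz_drop_metrics(fz, sample_rate_hz):
    """Equations (14) and (15): normalized maximum and average drops in the
    vertical force, with the baseline taken as the mean FZ over the first
    500 milliseconds of the trial."""
    n_start = int(0.5 * sample_rate_hz)          # number of samples in the first 500 ms
    start_mean = np.mean(fz[:n_start])
    fz_drop_max = (start_mean - np.min(fz)) / start_mean             # equation (14)
    fz_drop_avg = (start_mean - np.mean(fz[n_start:])) / start_mean  # equation (15)
    return fz_drop_max, fz_drop_avg

def classify_trial(fz_drop_max, fz_drop_avg):
    """Threshold logic from the text: a large maximum and average drop suggests
    a fall; a smaller average drop alone suggests a harness pull; otherwise no
    error notification is generated."""
    if fz_drop_max > 0.50 and fz_drop_avg > 0.02:
        return "Subject has most likely fallen during trial, it is suggested that trial be repeated."
    if fz_drop_avg > 0.01:
        return "Subject has most likely pulled on harness, it is suggested that trial be repeated."
    return None  # good trial, no error notification
```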
Otherwise, if the data acquisition/data processing device104determines that the first set of criteria is not satisfied (i.e., the FZdropMAX value is not greater than 0.50 and/or the FZdropAVG value is not greater than 0.02) and the second criterion is not satisfied (i.e., the FZdropAVG value is not greater than 0.01), then no error notification will be outputted by the data acquisition/data processing device104on the operator visual display device130because, based on the computed FZdropMAX and FZdropAVG values, it appears to have been a good trial.
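For illustration, the drop-metric computations of Equations (14) and (15) and the threshold logic described above can be sketched in Python as follows; the function names, the assumed 1000 Hz sampling rate, and the returned message strings are illustrative assumptions, while the 500 ms window and the 0.50, 0.02, and 0.01 thresholds come from the text:

```python
import numpy as np

def fz_drop_metrics(fz, fs=1000.0):
    """Compute FZdropMAX (Eq. 14) and FZdropAVG (Eq. 15) for one trial.

    fz -- vertical force (FZ) samples for the trial, in Newtons
    fs -- sampling rate in Hz (an assumption; the text does not specify one)
    """
    n_start = int(0.5 * fs)                # samples in the first 500 ms
    start_mean_fz = np.mean(fz[:n_start])  # StartMean FZ
    fz_drop_max = (start_mean_fz - np.min(fz)) / start_mean_fz
    fz_drop_avg = (start_mean_fz - np.mean(fz[n_start:])) / start_mean_fz
    return fz_drop_max, fz_drop_avg

def classify_trial(fz_drop_max, fz_drop_avg):
    """Map the two drop metrics onto the error notifications described above."""
    if fz_drop_max > 0.50 and fz_drop_avg > 0.02:
        return "Subject has most likely fallen during trial; repeat suggested."
    if fz_drop_avg > 0.01:
        return "Subject has most likely pulled on harness; repeat suggested."
    return None  # a good trial -- no error notification is generated
```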
In still further embodiments, the data acquisition/data processing device104of the force measurement system is programmed to determine a type of balance strategy that the subject is using to maintain his or her balance on the force measurement assembly. In these further embodiments, the type of balance strategy determined by the data acquisition/data processing device104is selected from the group consisting of: (i) an ankle strategy, (ii) a hip strategy, (iii) a step strategy, and (iv) combinations thereof. As will be described hereinafter, the data acquisition/data processing device104may determine the type of balance strategy that the subject is using to maintain his or her balance on the force measurement assembly by using output data from a variety of different devices. In one or more of these further embodiments, the force measurement assembly may be in the form of a force plate or a balance plate (e.g., the displaceable force plate102depicted inFIG.44or the static force plate102′ depicted inFIG.52). When the force measurement assembly is in the form of the displaceable force plate102depicted inFIG.44, the force measurement system further includes the base assembly106described above, which has a stationary portion and a displaceable portion. In this arrangement, as described above, the force measurement assembly102forms a part of the displaceable portion of the base assembly106, and the force measurement system additionally comprises a plurality of actuators158,160coupled to the data acquisition/data processing device104. As explained above, the first actuator158is configured to translate the displaceable portion of the base assembly106, which includes the force measurement assembly102, relative to the stationary portion of the base assembly106, while the second actuator160is configured to rotate the force measurement assembly102about a transverse rotational axis TA relative to the stationary portion of the base assembly106. In these further embodiments, the best type of balance strategy that can be employed by the subject depends on the particular task that the subject is being asked to perform. That is, for one particular task, an ankle balance strategy may be the best strategy for the subject to use, while for another particular task, a hip balance strategy may be the best strategy for the subject to use. For other tasks, a step balance strategy may be the best option for the subject. For example, because the ankle balance strategy does not offer as much opportunity to change the subject's center of gravity, it is not the best balance option for all situations. Also, a particular subject may have physical limitations that affect his or her balance strategy (e.g., an older person with stiff joints may have a significantly harder time using an ankle balance strategy as compared to a younger person with more flexible joints). In one or more other further embodiments, the force measurement assembly may be in the form of an instrumented treadmill600(seeFIG.38), rather than a force measurement assembly102,102′. In one or more of these further embodiments, the data acquisition/data processing device104is programmed to determine the type of balance strategy that the subject is using to maintain his or her balance on the force measurement assembly based upon one or more of the output forces and/or moments determined from the one or more signals of the force measurement assembly (i.e., the one or more signals of the force measurement assembly102,102′ or instrumented treadmill600). In these further embodiments, the one or more output forces and/or moments used by the data acquisition/data processing device104to determine the type of balance strategy comprise one or more shear forces, one or more vertical forces, or one or more moments used to compute the center of pressure of the subject. For example, when the data acquisition/data processing device104utilizes the shear force or a parameter based on the shear force to determine the type of balance strategy, a shear force approximately equal to zero is representative of an all ankle strategy by the subject, whereas a substantially non-zero shear force is indicative of a hip strategy by the subject. In such a case, if the shear force measured by the force measurement assembly is greater than a predetermined magnitude, then the subject is using his or her hips, rather than his or her ankles, to maintain balance. The data acquisition/data processing device104may determine if the subject108uses a step balance strategy by evaluating the center of pressure of the subject108determined from the force measurement assembly (i.e., a step by the subject108will be evidenced by a characteristic change in the center of pressure of the subject). In another one or more of these further embodiments, the force measurement system may further comprise a motion capture system with one or more motion capture devices configured to detect the motion of the subject108(e.g., the marker-based motion capture system300with cameras302depicted inFIGS.32and33). In these further embodiments, the motion capture system300is operatively coupled to the data acquisition/data processing device104, and the data acquisition/data processing device104is programmed to determine the type of balance strategy that the subject108is using to maintain his or her balance on the force measurement assembly based upon the output data from the one or more motion capture devices of the motion capture system (i.e., the limb movements of the subject determined from the motion capture data). For example, when the data acquisition/data processing device104utilizes the motion capture system to determine the type of balance strategy, the images captured by the motion capture system are indicative of whether the subject108is using a hip strategy or an ankle strategy to maintain his or her balance. In yet another one or more of these further embodiments, the force measurement system may further comprise at least one camera (e.g., at least one web camera) configured to capture the motion of the subject108.
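Before the camera-based and sensor-based approaches are elaborated, the force-based strategy logic described above can be sketched as follows; the shear-force and COP-step thresholds are assumed values, not taken from the disclosure:

```python
import numpy as np

def strategy_from_plate_outputs(shear, cop,
                                shear_threshold_n=20.0,   # Newtons (assumed)
                                step_threshold_m=0.10):   # meters (assumed)
    """shear -- horizontal (shear) force samples (N);
    cop -- center-of-pressure positions along one axis (m)."""
    # A step shows up as a characteristic, abrupt change in the COP trace.
    if np.max(np.abs(np.diff(cop))) > step_threshold_m:
        return "step strategy"
    # A shear force of approximately zero indicates an all-ankle strategy;
    # a substantially non-zero shear force indicates a hip strategy.
    if np.max(np.abs(shear)) > shear_threshold_n:
        return "hip strategy"
    return "ankle strategy"
```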
In these further embodiments, the camera is operatively coupled to the data acquisition/data processing device104, and the data acquisition/data processing device104is programmed to determine the type of balance strategy that the subject108is using to maintain his or her balance on the force measurement assembly based upon the output data from the at least one camera. For example, when the data acquisition/data processing device104utilizes the at least one camera (e.g., at least one web camera) to determine the type of balance strategy, a vision model (e.g., PoseNet) that employs a convolutional neural network (CNN) may be used to estimate the balance strategy of the subject by estimating the locations of the key body joints of the subject108. In one or more of these further embodiments, a markerless motion capture system comprising a plurality of cameras (e.g., a plurality of web cameras) may be mounted on elements of the force measurement system in order to capture the motion of the subject in a variety of different planes. For example, with reference toFIG.44, a first camera may be mounted on the screen168facing the subject in order to capture the coronal plane of the subject. A second camera may be mounted on the first side bar of the harness support structure128′ (seeFIG.54), and angled in a direction facing the subject so as to capture the sagittal plane of the subject from a first side. A third camera may be mounted on the second side bar of the harness support structure128′ (seeFIG.54), and angled in a direction facing the subject so as to capture the sagittal plane of the subject from a second side. In still another one or more of these further embodiments, the force measurement system may further comprise one or more inertial measurement units (e.g., one or more inertial measurement units306as depicted inFIG.49) configured to detect the motion of the subject108. In these further embodiments, the one or more inertial measurement units306are operatively coupled to the data acquisition/data processing device104, and the data acquisition/data processing device104is programmed to determine the type of balance strategy that the subject108is using to maintain his or her balance on the force measurement assembly based upon the output data from the one or more inertial measurement units306. For example, when the data acquisition/data processing device104utilizes one or more inertial measurement units306to determine the type of balance strategy, a first one of the inertial measurement units306may be mounted on the torso of the subject108, a second one of the inertial measurement units306may be mounted near a hip of the subject108, and a third one of the inertial measurement units306may be mounted near an ankle of the subject108(e.g., refer toFIG.49). In yet another one or more of these further embodiments, the force measurement system may further comprise a radar-based sensor configured to detect the posture of the subject108. In these further embodiments, the radar-based sensor is operatively coupled to the data acquisition/data processing device104, and the data acquisition/data processing device104is programmed to determine the type of balance strategy that the subject108is using to maintain his or her balance on the force measurement assembly based upon the output data from the radar-based sensor. 
For example, when the data acquisition/data processing device104utilizes the radar-based sensor to determine the type of balance strategy, the radar-based sensor may be mounted on one of the side bars of the harness support structure128′ (seeFIG.54), and angled in a direction facing the subject so as to capture the sagittal plane of the subject from a side. As one example, the radar-based sensor may utilize a miniature radar chip to detect touchless gesture or pose interactions, such as in the Google® Soli device. In still another one or more of these further embodiments, the force measurement system may further comprise an infrared sensor configured to detect the posture of the subject108. In these further embodiments, the infrared sensor is operatively coupled to the data acquisition/data processing device104, and the data acquisition/data processing device104is programmed to determine the type of balance strategy that the subject108is using to maintain his or her balance on the force measurement assembly based upon the output data from the infrared sensor. For example, as described above, the infrared sensor may be part of a motion detection/motion capture system that employs infrared light (e.g., the system could utilize an infrared (IR) emitter to project a plurality of dots onto objects in a particular space as part of a markerless motion capture system). As shown in the exemplary system ofFIG.50, the motion detection/motion capture system employing infrared light may comprise one or more cameras320, one or more infrared (IR) depth sensors322, and one or more microphones324to provide full-body three-dimensional (3D) motion capture, facial recognition, and voice recognition capabilities. In these further embodiments where the balance strategy of the subject108is determined, the force measurement system further comprises at least one visual display device having an output screen (e.g., the subject visual display device107and/or operator visual display device130described above and depicted inFIG.1). In these further embodiments, the data acquisition/data processing device104is configured to generate a visual indicator (e.g., see the virtual representation394inFIGS.59and60) indicative of the type of balance strategy that the subject is using to maintain his or her balance, and to display the visual indicator394in the one or more images396,397on the output screen of the at least one visual display device. In these further embodiments where the balance strategy of the subject108is determined, the force measurement system further comprises one or more user input devices, such as the keyboard132and/or mouse134depicted inFIG.1and described above. In these further embodiments, the user input device132,134is configured to output an input device signal based upon an input by a user, and the data acquisition/data processing device104is configured to set a parameter related to the balance strategy of the subject108based upon the input by the user entered using the user input device132,134. Also, in these further embodiments, the data acquisition/data processing device104is further programmed to generate and display visual feedback to the subject108on the output screen of the at least one visual display device based upon the parameter entered by the user. For example, the clinician may set a goal for a particular angular displacement of the subject's hip angle θHor the subject's ankle angle θA, and then a line may be displayed on the subject visual display device that denotes that particular angle.
As the subject displaces his or her body on the force measurement assembly, the virtual representation394of the subject on the force plate surface398disposed in the screen image396,397may be displaced in accordance with the subject's movement so that the subject108is able to visualize the virtual representation394of himself or herself approaching the line marking his or her joint angle displacement goal. In these further embodiments, the visual feedback provided to the subject108regarding his or her balance strategy may be provided in conjunction with a balance assessment and training regime. First, an assessment may be performed on the subject to determine if there are particular weaknesses in the balance strategy of the subject. For example, as described above, a motion capture system may be used to determine the hip and ankle joint angles θH, θAof the subject in the sagittal plane. Second, based on the results of the balance assessment, a balance training program for the subject may be developed. For example, the balance training program may involve scenarios that would require the subject to use each one of the three balance strategies (i.e., the ankle strategy, the hip strategy, and the step strategy) depending on the scenario. During the training, the clinician may use the visual feedback functionality of the force measurement system in order to set the required range of motion for the subject (e.g., the angular range of displacement for the hip joint angle and/or the ankle joint angle of the subject). Then, during the training, the visual feedback may be modified when the subject reaches a certain target angular displacement (e.g., the line displayed on the subject visual display device that denotes a particular goal angle may be shifted to another rotational position once the goal is achieved). The data acquisition/data processing device104of the force measurement system may be programmed to perform all of the above-described assessment and training functionality. InFIG.59, a first exemplary screen image396on the subject visual display device107is illustrated. In the screen image396ofFIG.59, the virtual representation394of the subject is depicted using an ankle balance strategy where the hip joint angle θHis approximately equal to the ankle joint angle θA. InFIG.60, a second exemplary screen image397on the subject visual display device107is illustrated. In the screen image397ofFIG.60, the virtual representation394of the subject is depicted using a combination hip and ankle balance strategy where the hip joint angle θHis not equal to the ankle joint angle θA. In general, when the hip joint angle θHis equal to or approximately equal to the ankle joint angle θA, then an ankle balance strategy is being used by the subject. Conversely, when the hip joint angle θHand the ankle joint angle θAare different, then a hip balance strategy is being used by the subject.
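Tying together the camera-based pose estimation described earlier and the hip/ankle angle comparison just described, a minimal sketch might look as follows; the keypoint names, the two-dimensional sagittal-plane simplification, and the 5 degree tolerance are assumptions:

```python
import numpy as np

def segment_angle(p_proximal, p_distal):
    """Angle of a body segment from vertical, in degrees (2-D sagittal view;
    the sign convention depends on the image coordinate frame)."""
    dx = p_distal[0] - p_proximal[0]
    dy = p_distal[1] - p_proximal[1]
    return np.degrees(np.arctan2(dx, dy))

def strategy_from_keypoints(kp, tol_deg=5.0):
    """kp maps assumed keypoint names to (x, y) coordinates, e.g. from a
    PoseNet-style model viewing the subject's sagittal plane."""
    trunk = segment_angle(kp["shoulder"], kp["hip"])
    thigh = segment_angle(kp["hip"], kp["knee"])
    shank = segment_angle(kp["knee"], kp["ankle"])
    theta_hip = trunk - thigh  # hip joint angle (theta_H)
    theta_ankle = shank        # ankle angle (theta_A), foot assumed flat
    # theta_H approximately equal to theta_A indicates an ankle strategy;
    # appreciably different angles indicate a hip strategy.
    if abs(theta_hip - theta_ankle) <= tol_deg:
        return "ankle strategy", theta_hip, theta_ankle
    return "hip strategy", theta_hip, theta_ankle
```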
In still further embodiments, the data acquisition/data processing device104of the force measurement system is programmed to determine a difference between a center of pressure and a center of mass for the subject from output force and/or moment data, and the one or more data processing devices are additionally programmed to determine the fall risk of the subject based upon the difference between the center of pressure and the center of mass for the subject. For example, a large difference between the center of pressure and the center of mass for a subject will typically be indicative of a high fall risk. Conversely, if the difference between the center of pressure and the center of mass for a subject is small, then the fall risk of the subject is low. In one or more of these further embodiments, the data acquisition/data processing device104may determine the difference between a center of pressure and a center of mass for the subject from output force and/or moment data generated by a force measurement assembly. The force measurement assembly may be in the form of a force plate or a balance plate (e.g., the displaceable force plate102depicted inFIG.44or the static force plate102′ depicted inFIG.52). When the force measurement assembly is in the form of the displaceable force plate102depicted inFIG.44, the force measurement system further includes the base assembly106described above, which has a stationary portion and a displaceable portion. In this arrangement, as described above, the force measurement assembly102forms a part of the displaceable portion of the base assembly106, and the force measurement system additionally comprises a plurality of actuators158,160coupled to the data acquisition/data processing device104. As explained above, the first actuator158is configured to translate the displaceable portion of the base assembly106, which includes the force measurement assembly102, relative to the stationary portion of the base assembly106, while the second actuator160is configured to rotate the force measurement assembly102about a transverse rotational axis TA relative to the stationary portion of the base assembly106. In these further embodiments, the data acquisition/data processing device104may be configured to determine the difference between the center of pressure and the center of mass for the subject by using a combination of a high pass filter and a low pass filter on center of pressure data determined from the output force and/or moment data. For example, in one exemplary, non-limiting embodiment, a low pass filter with a 0.75 Hz cutoff frequency and a high pass filter with a 10.0 Hz cutoff frequency are applied to the center of pressure data in order to determine the difference between the center of pressure and the center of mass. The high pass filter eliminates the signal noise associated with the center of pressure data. Alternatively, in another example, the center of pressure and the center of mass may be computed separately, and then the difference between the center of pressure and the center of mass can be computed. In one or more of these further embodiments, the data acquisition/data processing device104is configured to determine the difference between the center of pressure and the center of mass for the subject while the subject is standing stationary on the top surface of the force measurement assembly. Alternatively, in one or more other ones of these further embodiments, the data acquisition/data processing device104is configured to determine the difference between the center of pressure and the center of mass for the subject while the subject is displacing his or her body (e.g., swaying in the medial-lateral direction and/or superior-inferior direction) while standing on the top surface of the force measurement assembly. In one or more other further embodiments, the force measurement assembly may be in the form of an instrumented treadmill600(seeFIG.38), rather than a force measurement assembly102,102′.
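A sketch of the filtering approach described above for separating the center of mass estimate from the center of pressure, assuming SciPy is available and an assumed 1000 Hz sampling rate; the 0.75 Hz low-pass cutoff is the value given in the text, and the COM is approximated here as the low-pass-filtered COP:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def cop_com_difference(cop, fs=1000.0):
    """Estimate the COM as low-pass-filtered COP, then return COP - COM.

    cop -- center-of-pressure samples along one axis (m)
    fs -- sampling rate in Hz (an assumption)
    """
    b, a = butter(2, 0.75, btype="low", fs=fs)  # 0.75 Hz low-pass (from the text)
    com_estimate = filtfilt(b, a, cop)          # zero-phase filtering
    return cop - com_estimate
```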
In one or more of these further embodiments, the data acquisition/data processing device104is programmed to determine the center of pressure data and the center of mass data based upon one or more of the output forces and/or moments determined from the one or more signals of the force measurement assembly (i.e., the one or more signals of the force measurement assembly102,102′ or instrumented treadmill600). In these further embodiments, the data acquisition/data processing device104of the force measurement system is further programmed to determine one or more balance parameters of the subject from the output force and/or moment data while the subject is stepping onto the top surface of the force measurement assembly and/or stepping off the top surface of the force measurement assembly. The one or more balance parameters of the subject determined by the data acquisition/data processing device104include a displacement of a center of pressure for at least one foot of the subject (e.g., for the left foot of the subject, for the right foot of the subject, or for both the right and left feet of the subject). The center of pressure may be computed by the data acquisition/data processing device104in the manner described above with regard toFIGS.5and7. During the transitional period when the subject is stepping onto the top surface of the force measurement assembly and/or stepping off the top surface of the force measurement assembly, the center of pressure will be displaced over a boundary edge of the force measurement assembly. In these further embodiments, the displacement of the center of pressure for the one or more feet of the subject may be used to predict the fall risk of the subject. For example, if the displacement path of the center of pressure for the subject is erratic with many jagged fluctuations, then the subject may have a high risk of falling. Conversely, if the displacement path of the center of pressure for the subject is generally smooth without many jagged fluctuations, then the subject may have a low risk of falling. In these further embodiments, when the subject is stepping onto the top surface of the force measurement assembly and/or stepping off the top surface of the force measurement assembly, the one or more balance parameters of the subject determined by the data acquisition/data processing device104may further include a weight distribution on each foot of the subject quantified by a percentage of the subject's total body weight on each foot (e.g., 60% of total body weight being placed on the right leg versus 40% of total body weight being placed on the left leg) and/or weight symmetry between the feet of the subject (e.g., approximately symmetric weight distribution or asymmetric weight distribution). In these further embodiments, the data acquisition/data processing device104utilizes the vertical force (FZ) measured by the force measurement assembly to determine the weight distribution on each foot of the subject and/or weight symmetry between the feet of the subject (i.e., the vertical force is used to get the weight of the subject on each foot). In these further embodiments, the data acquisition/data processing device104also utilizes the vertical force (FZ) as a measure of stabilization of the subject (e.g., a shift in the weight of the subject from one foot to the other foot is indicative of the stability of the subject). 
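The smoothness of the COP displacement path discussed above could be quantified in many ways; the following sketch uses one assumed metric, the RMS of the second difference of the COP trace normalized by the total path length, which grows as the path becomes more jagged:

```python
import numpy as np

def cop_jaggedness(cop_x, cop_y):
    """Assumed smoothness metric for the COP displacement path: RMS of the
    second difference, normalized by path length (larger = more jagged)."""
    accel = np.hypot(np.diff(cop_x, 2), np.diff(cop_y, 2))   # second differences
    path = np.sum(np.hypot(np.diff(cop_x), np.diff(cop_y)))  # total path length
    return float(np.sqrt(np.mean(accel ** 2)) / max(path, 1e-9))
```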
In these further embodiments, the weight distribution on each foot of the subject and/or weight symmetry between the feet of the subject may be used to predict the fall risk of the subject. For example, if the weight distribution of the subject is highly asymmetric between the feet of the subject, then the subject may have a high risk of falling. Conversely, if the weight distribution of the subject is nearly symmetric between the feet of the subject, then the subject may have a low risk of falling. In these further embodiments, when the subject is stepping onto the top surface of the force measurement assembly and/or stepping off the top surface of the force measurement assembly, the one or more balance parameters of the subject determined by the data acquisition/data processing device104may further include a time duration for the subject to reach a stable position on the force measurement assembly. For example, when the subject is stepping onto the top surface of the force measurement assembly, it may take the subject approximately 2 to 10 seconds after the second foot of the subject reaches the force measurement assembly to reach a stable position on the force measurement assembly. When the subject is stepping off the top surface of the force measurement assembly, the stability of the subject is considered after the subject lifts his or her first foot off the force measurement assembly. The data acquisition/data processing device104may be configured to determine that the stable position of the subject on the force measurement assembly has been reached when a displacement of a center of pressure for the subject remains within a predetermined sway envelope (e.g., the center of pressure does not move outside the boundary of a circular or elliptical sway envelope while the subject is standing on the force measurement assembly). A sway envelope for a subject with a balance disorder is typically larger than for a normal subject. Measurements taken during the stabilization phase of the subject (e.g., the approximately 2 to 10 second time period) are indicative of the subject's overall stability and likelihood of falling. In these further embodiments, the time duration for the subject to reach a stable position may be used to predict the fall risk of the subject. For example, if the subject takes a long time to reach a stable position, then the subject may be unstable and have a high risk of falling. Conversely, if the subject takes only a short period of time to reach a stable position, then the subject may have a low risk of falling. In these further embodiments, the data acquisition/data processing device104may be configured to determine the one or more balance parameters of the subject while the subject is either stepping onto the top surface of the force measurement assembly for a body weight measurement (e.g., during a physical exam where the subject is being weighed) or stepping off the top surface of the force measurement assembly after the body weight measurement (e.g., during the physical exam where the subject is being weighed). The vertical force (FZ) measured by the force measurement assembly is used to determine the weight of the subject.
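The weight-distribution and stabilization-time parameters described above might be computed along the following lines; the dual-plate force inputs, the circular sway-envelope radius, the envelope centering, and the hold duration are assumptions for illustration:

```python
import numpy as np

def weight_distribution(fz_left, fz_right):
    """Percent of total body weight on each foot, from per-plate vertical forces."""
    total = np.mean(fz_left) + np.mean(fz_right)
    return 100.0 * np.mean(fz_left) / total, 100.0 * np.mean(fz_right) / total

def time_to_stability(cop_x, cop_y, fs=1000.0, radius_m=0.02, hold_s=2.0):
    """First time (s) at which the COP stays inside a circular sway envelope
    (centered on the trial-mean COP, a simplification) for hold_s seconds;
    returns None if a stable position is never reached."""
    r = np.hypot(cop_x - np.mean(cop_x), cop_y - np.mean(cop_y))
    inside = r < radius_m
    need = int(hold_s * fs)
    run = 0
    for i, ok in enumerate(inside):
        run = run + 1 if ok else 0
        if run >= need:
            return (i - need + 1) / fs
    return None
```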
In these further embodiments, the data acquisition/data processing device104may be further configured to determine a fall risk of the subject by inputting one or more of the above-described balance parameters (i.e., the difference between the center of pressure and the center of mass for the subject, the displacement of the center of pressure, the weight distribution on each foot, and/or the time duration for the subject to reach a stable position) into a trained neural network (e.g., a trained convolutional neural network (CNN)), and then predicting the fall risk of the subject based upon each of these balance parameters (e.g., the neural network can be trained so as to determine normal and abnormal values for the balance parameters, as well as to apply weights to the balance parameters based upon their correlation with a fall risk of the subject).
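As a purely illustrative stand-in for the trained neural network described above, the following sketch combines the four balance parameters into a single fall-risk score using a logistic model; the weights and bias are made-up placeholders, not learned parameters, and a real implementation would use a network trained on labeled trials:

```python
import numpy as np

def fall_risk_score(cop_com_diff, cop_displacement,
                    weight_asymmetry, stabilization_time):
    """Combine the four balance parameters into a risk score in [0, 1]."""
    x = np.array([cop_com_diff, cop_displacement,
                  weight_asymmetry, stabilization_time])
    w = np.array([0.8, 0.6, 0.5, 0.4])  # hypothetical weights (placeholders)
    b = -1.5                            # hypothetical bias (placeholder)
    return float(1.0 / (1.0 + np.exp(-(w @ x + b))))
```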
In yet further embodiments, referring toFIGS.61and62, the force measurement system generally comprises: a force measurement assembly102configured to receive a subject108; at least one visual display device344,348configured to display one or more images; and one or more data processing devices104operatively coupled to the force measurement assembly102and the at least one visual display device344,348. In these further embodiments, as shown inFIGS.61and62, the one or more data processing devices104are further configured to generate a first image portion349,355and display the first image portion349,355using the at least one visual display device344,348, and to generate a second image portion351,357and display the second image portion351,357using the at least one visual display device344. The first image portion349,355displayed using the at least one visual display device344,348comprises a primary screen image for viewing by the subject108, and the second image portion351,357displayed using the at least one visual display device344comprises a virtual screen surround configured to at least partially circumscribe three sides of a torso of the subject108and to substantially encompass a peripheral vision of the subject108. As shown inFIGS.61and62, in the illustrative embodiments, the force measurement assembly102is in the form of a force plate or a balance plate. Although, in other embodiments, the force measurement assembly may be in the form of an instrumented treadmill (e.g., the instrumented treadmill600,600′ shown inFIGS.38and45). Also, in the embodiments ofFIGS.61and62, the force measurement system further includes the base assembly106described above, which has a stationary portion and a displaceable portion. In this arrangement, as described above, the force measurement assembly102forms a part of the displaceable portion of the base assembly106, and the force measurement system additionally comprises a plurality of actuators158,160coupled to the one or more data processing devices104. As explained above, the first actuator158is configured to translate the displaceable portion of the base assembly106, which includes the force measurement assembly102, relative to the stationary portion of the base assembly106, while the second actuator160is configured to rotate the force measurement assembly102on the rotatable carriage assembly157about a transverse rotational axis TA relative to the stationary portion of the base assembly106(see e.g.,FIGS.3,42, and43). Now, with reference toFIG.61, in one further embodiment, the at least one visual display device344,348comprises a first visual display device348and a second visual display device344. In the embodiment ofFIG.61, it can be seen that the first visual display device comprises a flat display screen348. In other embodiments, the first visual display device may alternatively comprise a curved display screen. As shown inFIG.61, the first image portion349with the primary screen image is displayed on the flat display screen348of the first visual display device. In the embodiment ofFIG.61, it can be seen that the second visual display device is in the form of a head-mounted visual display device344. For example, in the embodiment ofFIG.61, the head-mounted visual display device344may comprise an augmented reality headset that is capable of supplementing real-world objects, such as the flat display screen348, with computer-generated virtual objects. The head-mounted visual display device344may have the headset performance parameters described above (e.g., the aforedescribed field of view range, refresh rate range, and display latency range). As shown inFIG.61, the second image portion351with the virtual screen surround is displayed using the head-mounted visual display device344. Advantageously, similar to the physical dome-shaped projection screen168described above, the virtual screen surround351is capable of creating an immersive environment for the subject108disposed on the force measurement assembly102(i.e., the virtual screen surround351engages enough of the subject's peripheral vision such that the subject becomes, and remains, immersed in the primary screen image that is being displayed on the flat display screen348). Next, with reference toFIG.62, in another further embodiment, the at least one visual display device comprises the head-mounted visual display device344without a separate physical display device. As shown inFIG.62, the first image portion355with the primary screen image is displayed using the head-mounted visual display device344. InFIG.62, the second image portion with the virtual screen surround357is additionally displayed using the head-mounted visual display device344. In the embodiment ofFIG.62, the head-mounted visual display device344may comprise a virtual reality headset that generates entirely virtual objects or an augmented reality headset that is capable of supplementing real-world objects with computer-generated virtual objects. In the embodiments ofFIGS.61and62, it can be seen that the virtual screen surround351,357generated by the one or more data processing devices104and displayed by the at least one visual display device344comprises a virtual cutout353,359configured to receive a portion of the body of the subject108therein. Similar to that described above for the cutout178in the physical dome-shaped projection screen168, the semi-circular virtual cutout353,359permits the subject108to be substantially circumscribed by the generally hemispherical virtual screen surround351,357on three sides. Also, in the embodiments ofFIGS.61and62, the virtual screen surround351,357generated by the one or more data processing devices104and displayed by the at least one visual display device344has a concave shape. More specifically, in the illustrative embodiments ofFIGS.61and62, the virtual screen surround351,357generated by the one or more data processing devices104and displayed by the at least one visual display device344,348has a hemispherical shape.
In addition, as shown in the embodiments ofFIGS.61and62, the primary screen image349,355in the first image portion may comprise a subject test screen or subject training screen with a plurality of targets or markers238(e.g., in the form of circles) and a displaceable visual indicator or cursor240. As described above, the one or more data processing devices104control the movement of the visual indicator240towards the plurality of stationary targets or markers238based upon output data determined from the output signals of the force transducers associated with the force measurement assembly102. For example, in one testing or training scenario, the subject108may be instructed to move the cursor240towards each of the plurality of targets or markers238in succession. For example, the subject108may be instructed to move the cursor240towards successive targets238in a clockwise fashion (e.g., beginning with the topmost target238in the primary screen image349,355). In one or more other embodiments, rather than comprising a subject test screen or subject training screen, the primary screen image349,355in the first image portion displayed by the at least one visual display device344,348may alternatively comprise one of: (i) an instructional screen for the subject, (ii) a game screen, and (iii) an immersive environment or virtual reality environment. In these further embodiments, the virtual screen surround351,357depicted inFIGS.61and62may be displaced by the one or more data processing devices104in order to compensate for the movement of the head of the subject108. For example, a head position detection device (e.g., an inertial measurement unit306as depicted inFIG.50) may be provided on the head of the subject108in order to measure the position of the head of the subject108, and then the one or more data processing devices104may adjust the position of the virtual screen surround351,357in accordance with the subject's head position so that the virtual screen surround351,357always substantially encompasses a peripheral vision of the subject regardless of the gazing direction of the subject108. In other words, the virtual screen surround351,357rotates with the head of the subject108so that the subject108is always generally gazing at the center portion of the virtual screen surround351,357(i.e., the one or more data processing devices104displace the virtual screen surround351,357to track the position of the subject's head). In other embodiments, rather than an inertial measurement unit, the head position measurement device for measuring the head position of the subject108may comprise one or more of the following: (i) a video camera, (ii) an infrared sensor, (iii) an ultrasonic sensor, and (iv) a markerless motion capture device. Also, in these further embodiments, the one or more data processing devices104may be programmed to activate or turn “on” the virtual screen surround351,357inFIGS.61and62when the weight of the subject108is detected on the force measurement assembly102(e.g., when the force measurement assembly102detects a vertical force FZthat meets or exceeds a predetermined threshold value, for example, FZ≥200 Newtons). 
Conversely, the one or more data processing devices104may be programmed to deactivate or turn "off" the virtual screen surround351,357inFIGS.61and62when the weight of the subject108is not detected on the force measurement assembly102(e.g., when the force measurement assembly102detects a vertical force FZthat is less than a predetermined threshold value, for example, FZ<200 Newtons). Also, the one or more data processing devices104may be programmed to deactivate or turn "off" the virtual screen surround351,357inFIGS.61and62if it is determined that the subject108has likely fallen during testing or training (e.g., when the one or more processing devices104determine that the FZdropMAX value is greater than 0.50 and the FZdropAVG value is greater than 0.02 as explained above). In addition, in these further embodiments, the one or more data processing devices104may be programmed to visually indicate when the subject108is placing an excessive amount of weight (e.g., greater than 60% of his or her body weight) on one of his or her feet compared to the other of his or her feet. For example, when the subject108inFIGS.61and62is placing an excessive amount of his or her weight (e.g., greater than 60% of his or her body weight) on his or her left foot as detected by the first plate component110(i.e., the left plate component110) of the dual force plate102, the one or more data processing devices104may be programmed to make the left half of the virtual screen surround351,357brighter and/or change the color of the left half of the virtual screen surround351,357(e.g., change the color to "red"). Similarly, in this example, when the subject108inFIGS.61and62is placing an excessive amount of his or her weight (e.g., greater than 60% of his or her body weight) on his or her right foot as detected by the second plate component112(i.e., the right plate component112) of the dual force plate102, the one or more data processing devices104may be programmed to make the right half of the virtual screen surround351,357brighter and/or change the color of the right half of the virtual screen surround351,357(e.g., change the color to "red"). In these further embodiments, the data acquisition/data processing device104may be further programmed to generate a virtual representation of the subject and a visual element with which the virtual representation of the subject is able to interact, and to display the virtual representation of the subject and the visual element in the one or more images on the output screen of the at least one visual display device (e.g., the subject visual display device107). For example, as described above with regard toFIG.41, a virtual representation of the subject (e.g., an avatar270′) may interact with a visual element (e.g., a cereal box248in a kitchen cabinet250) in a virtual reality scene. As another example, as illustrated inFIG.15, a virtual representation of the subject204may interact with another type of visual element (e.g., a bridge207) in a virtual reality scene.
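Gathering the virtual screen surround behaviors described above (the 200 Newton activation threshold, the fall-detection deactivation based on the FZdropMAX and FZdropAVG values, the 60% weight-asymmetry highlighting, and the head tracking from the preceding passage) into a single hypothetical update routine; the surround object and its rendering methods are placeholder assumptions, not an actual API:

```python
def update_virtual_surround(surround, fz_left, fz_right,
                            fz_drop_max, fz_drop_avg,
                            head_yaw_deg, head_pitch_deg):
    """surround -- hypothetical renderer object for the virtual screen surround;
    fz_left, fz_right -- vertical forces on the two plate components (N)."""
    fz_total = fz_left + fz_right
    likely_fell = fz_drop_max > 0.50 and fz_drop_avg > 0.02
    if fz_total < 200.0 or likely_fell:  # subject absent, or has likely fallen
        surround.set_visible(False)      # deactivate ("turn off") the surround
        return
    surround.set_visible(True)           # activate ("turn on") the surround
    # Slave the surround to the measured head orientation so that it always
    # substantially encompasses the subject's peripheral vision.
    surround.set_rotation(yaw=head_yaw_deg, pitch=head_pitch_deg)
    # Highlight the half of the surround under an overloaded foot (> 60%).
    left_pct = 100.0 * fz_left / max(fz_total, 1e-9)
    if left_pct > 60.0:
        surround.highlight(side="left", color="red")
    elif left_pct < 40.0:
        surround.highlight(side="right", color="red")
    else:
        surround.clear_highlight()
```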
In these embodiments, the data acquisition/data processing device104may be further programmed to generate tactile feedback for the subject108using at least one of the first and second actuators158,160on the base assembly106based upon the virtual representation of the subject interacting with the visual element in the one or more images on the output screen of the at least one visual display device (e.g., in the bridge scene206, if the virtual representation of the subject204is walking up an incline on the bridge207, the second actuator160may rotate the force measurement assembly102relative to the base assembly106so as to simulate the incline of the bridge207in the scene206). In some of these embodiments, the visual element in the one or more images on the output screen of the at least one visual display device may comprise an obstacle disposed in a virtual walking path of the virtual representation of the subject, and the data acquisition/data processing device104may be programmed to generate the tactile feedback for the subject108using the at least one of the first and second actuators158,160on the base assembly106when the virtual representation of the subject on the output screen collides with the obstacle disposed in the virtual walking path in the one or more images displayed on the at least one visual display device (e.g., in the bridge scene206, if the virtual representation of the subject204collides with one of the sides of the bridge207, the subject108will receive a slight jolt from one of the actuators158,160). As another example, if the virtual representation of the subject is walking down an endless grocery aisle and collides with a box in the grocery aisle, the first actuator158of the base assembly106may be used to provide a slight jolt to the subject108to indicate the collision. Now, with reference to the block diagrams inFIGS.63and64, several illustrative biomechanical analysis systems in which the aforedescribed force measurement assembly102or instrumented treadmill600,600′ are used with a three-dimensional (3D) pose estimation system will be explained. In these one or more illustrative embodiments, the 3D pose estimation system may comprise the 3D pose estimation system described in U.S. Pat. No. 10,853,970, the entire disclosure of which is incorporated herein by reference. Initially, in the block diagram710ofFIG.63, it can be seen that the 3D pose estimation system716receives images of a scene from one or more RGB video cameras714. The 3D pose estimation system716extracts the features from the images of the scene for providing inputs to a convolutional neural network. Then, the 3D pose estimation system716generates one or more volumetric heatmaps using the convolutional neural network, and applies a maximization function to the one or more volumetric heatmaps in order to obtain a three dimensional pose of one or more persons in the scene. As shown inFIG.63, the 3D pose estimation system716determines one or more three dimensional coordinates of the one or more persons in the scene for each image frame, and outputs the three dimensional coordinates to a kinetic core software development kit (SDK). In addition, as shown inFIG.63, user input and/or calibration parameters712may also be received as inputs to the 3D pose estimation system716. 
In the illustrative embodiment ofFIG.63, in addition to the three dimensional coordinates for each image frame from the 3D pose estimation system716, the kinetic core SDK718may also receive one or more device signals720from one or more force plates and/or an instrumented treadmill as inputs. For example, the instrumented treadmill and the one or more force plates may comprise the force measurement assembly102or the instrumented treadmill600,600′ described above. In addition, as shown inFIG.63, the kinetic core SDK718may receive a monitor/display signal722as an input (e.g., an input signal from a touchscreen display). Further, as shown inFIG.63, the kinetic core SDK718may receive one or more motion base signals724(e.g., one or more signals from the base assembly106described above). Then, the kinetic core SDK718determines and outputs one or more biomechanical performance parameters in an application desired output/report726using the three dimensional coordinates from the 3D pose estimation system716and the one or more signals720,722,724from the connected devices. The illustrative biomechanical analysis system ofFIG.63does not include trained CNN backpropagation, but another illustrative biomechanical analysis system that will be described hereinafter does include trained CNN backpropagation. Next, referring toFIG.64, a second illustrative biomechanical analysis system in which the pose estimation system may be utilized will be explained. With reference to the block diagram730ofFIG.64, it can be seen that the second illustrative biomechanical analysis system is similar in many respects to the first illustrative biomechanical analysis system described above. As such, for the sake of brevity, the features that the second illustrative biomechanical analysis system has in common with the first illustrative biomechanical analysis system will not be discussed because these features have already been explained above. Although, unlike the first illustrative biomechanical analysis system, the second illustrative biomechanical analysis system ofFIG.64includes trained CNN backpropagation. More specifically, in the illustrative embodiment ofFIG.64, the kinetic core SDK718is operatively coupled to one or more trained convolutional neural networks (CNNs)717, which in turn, are operatively coupled to the 3D pose estimation system716so that better accuracy may be obtained from the 3D pose estimation system716. In the illustrative embodiment ofFIG.64, in addition to the three dimensional coordinates for each image frame from the 3D pose estimation system716, the kinetic core SDK718receives the device signals720,722,724from the connected external devices. Then, the kinetic core SDK718determines and outputs one or more biomechanical performance parameters in a biomechanical output report728using the three dimensional coordinates from the 3D pose estimation system716and the signals720,722,724from the connected external devices. As shown inFIG.64, the biomechanical output report728may include annotated datasets and/or kinematic and kinetic profiles for the one or more persons in the scene. Now, the user input/calibration712, the kinetic core SDK718, and the application output726and728of the illustrative biomechanical analysis systems710and730will be described in further detail. In the illustrative embodiments described above, some user input712may augment the automatic system calibration tasks performed.
One source of input may involve the user selecting the XY pixel location of the four force plate corners from multiple RGB video images. The locations can be triangulated from this information. Additional calibration may require the user to hold an object, such as a checkerboard or ArUco pattern. The person holding the calibration target will then perform a sequence of tasks, moving the calibration target at the optimal angle to the respective cameras and to the optimal positions for calibration within the capture volume. Another form of calibration may involve having the user stand on the force plate in the capture volume. The system will capture the user rotating his or her body around the vertical axis with the arms at 45 degrees and 90 degrees of shoulder abduction. The 3D pose estimation system716then calibrates based on the plausible parameters (lengths) of the subject's body segments and combined shape. In the illustrative embodiment ofFIG.64, there are one or more trained CNN modules717which are used to obtain better accuracy from the 3D pose estimation system716. One of these models may be a "plausible physics" model. This model determines the plausibility of the estimated pose in the physical domain. In addition, this model may consider the temporal parameters of the physics, including: (i) body inertia, (ii) ground/floor contact in regard to foot position, (iii) body segment lengths, (iv) body segment angular velocities, and (v) joint ranges of motion. In the illustrative embodiment, an additional CNN may be applied for allowable human poses. This is a general model which will prevent unrealistic body representations and 3D reconstructions. In the illustrative embodiments ofFIGS.63and64, the desired application output726,728is a biomechanical analysis of the actions performed in the capture volume. This includes output, such as an annotated dataset in which calculated values, such as the rate of force development, maximum force, and other descriptors, are displayed. A general report of the movement performed may also be generated, along with algorithmically determined kinetic and kinematic insights from both traditional manually devised algorithms and machine-learned algorithms derived from the analysis of large datasets of similar movements. The specific output is determined by the movement performed. As an example, analyzing a baseball swing is quite different from analyzing the balance of a subject after physical or visual perturbation. Each has its own key performance indicators (KPIs). Using the key point information from the 3D pose estimation system716and the associated algorithms for movement-specific analysis, the system becomes an "expert system" which is capable of diagnosing and providing rehabilitation and training interventions to improve the subject's performance during the tasks performed in the capture volume. This requires a large amount of training data, which consists of recordings of the actions performed in the capture space. In the illustrative biomechanical analysis systems710,730described above, the center of mass may be determined in order to guide the visual representation of the person in the visual scene. Other desired outputs may be the trunk, knee, and head positions and the hand positions. From the positions of these variables, angular and linear velocities can be calculated, and inputs essential for balance estimation can be provided.
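One of the temporal plausibility checks listed above, the body segment length check, might look like the following sketch; the keypoint index pairs (a COCO-style ordering is assumed) and the 10% tolerance are assumptions:

```python
import numpy as np

# Hypothetical segment definitions as pairs of keypoint indices per frame
# (a COCO-style keypoint ordering is assumed here).
SEGMENTS = {"trunk": (5, 11), "thigh": (11, 13), "shank": (13, 15)}

def implausible_frames(poses, tol=0.10):
    """Flag frames whose segment lengths deviate from the trial median by
    more than tol (fractional); poses -- (n_frames, n_keypoints, 3) array."""
    flags = np.zeros(len(poses), dtype=bool)
    for a, b in SEGMENTS.values():
        lengths = np.linalg.norm(poses[:, a] - poses[:, b], axis=1)
        nominal = np.median(lengths)  # the subject's nominal segment length
        flags |= np.abs(lengths - nominal) > tol * nominal
    return flags
```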
Also, for a functional force or balance plate where a subject can traverse the plate, the desired kinematic body segment position variables may be the upper limb, trunk, hip, knee, and ankle positions. These variables would provide a gait analysis in combination with the ground reaction force output provided by the force plate. The user can be required to walk, or walk over a step or variations of it, plus perform other range of motion activities. The segment positions will provide linear and angular velocities and general kinematic outputs. The illustrative biomechanical analysis systems710,730may further include training models provided as part of the systems that enable the building of dynamic visual scenes. For example, when a participant uses the system710,730for the first time, he or she is asked to walk on the treadmill or sway on the plate. Based on these movements, the current kinematics/kinetics, COM movements, and ground reaction forces are estimated. This is used to build scenes; for example, if, while walking, the subject does not lift his or her foot enough, the obstacle height in the visual scene will be low at first. Different levels can then be built into the training protocol to progressively increase the obstacle height and encourage the person to lift his or her leg to a required height. In addition, with upper limb positions, a system user can perform dual-task activities similar to daily life activities, where he or she would be walking or standing while pointing at or grabbing objects. Such activities can be used for assessment and training, as already demonstrated by previous research. In another illustrative biomechanical application, a therapist may review a captured video and force plate data, and write notes on the performance of the subject and any thoughts regarding the subject's condition. Additionally, the expert may provide a review of the kinematic analysis while using the force plate data as additional information for making the decision. One key aspect of the biomechanical analysis systems710,730is determining the sway strategy of the patient. The kinematic information, derived from the 3D pose estimation system716, is used by the therapist to determine a "sway strategy" or "balance strategy" of the patient. In the system, the subject is assumed to use an ankle strategy when regaining his or her balance in response to a known perturbation of the floor. The therapist may use the kinematic information to rate the strategy and determine if the amount of ankle versus hip movement is acceptable for the test. If deemed acceptable, the strategy employed by the subject and the therapist annotation (acceptable sway strategy or not) will be saved and used to train the algorithm. In time, the algorithm will provide instant feedback to the patient on the acceptability of the trial's sway strategy and provide a recommendation on how to improve the strategy (e.g., focus on bending at the ankles and keeping the torso upright). Also, the trunk and head position of the patient can offer a differential analysis of balance and of how a patient performs a task. With the upper limb positions, a patient can perform tasks related to hand-eye coordination, ranges of motion, and dual tasks. These tasks are known for assessment and training in several types of populations, from neurological to orthopedic. In one or more illustrative embodiments, the performance of the user suggestions on the sway strategy of the subsequent trial may be used to provide more useful recommendations.
By grading the performance on the subsequent trial thousands of times, the machine learned algorithm learns what to suggest to the patient to obtain the desired result. For a functional force or balance plate where a subject can traverse the plate, the 3D pose estimation system716may be used to estimate gait and upper body events during tasks, such as gait over obstacles, squats and range of motion activities. Aligning with the ground reaction forces provided by the force plate, a clinician will be able to determine not only body sway, but quantify errors in tasks, such as tandem gait. In the illustrative biomechanical analysis systems710,730described above, one or more data processing devices104may be configured to predict one or more balance parameters of the subject using the 3D pose estimation system716. The one or more balance parameters predicted by the one or more data processing devices104may comprise at least one of: (i) a center of pressure, (ii) a center of mass, (iii) a center of gravity, (iv) a sway angle, and (v) a type of balance strategy. Also, the one or more data processing devices104of the illustrative biomechanical analysis systems710,730may be further configured to provide feedback to the subject regarding his or her balance based upon the one or more predicted balance parameters of the subject determined using the 3D pose estimation system716. In one or more further illustrative embodiments, the biomechanical analysis systems710,730may further include a sensory output device configured to generate sensory feedback for delivery to a system user. The sensory feedback may comprise at least one of a visual indicator, an audible indicator, and a tactile indicator. For example, the sensory output device may comprise one or more of the types of sensory output devices described in U.S. Pat. No. 9,414,784, the entire disclosure of which is incorporated herein by reference. In one or more further illustrative embodiments, using the principles of inverse dynamics, the biomechanical analysis systems710,730may further map the energy flow of the subject performing a balance activity in the capture space. The forces and torques occurring at each joint in the body may be determined by the kinematic positions and ground reaction forces (predicted and/or real) and mapped from the body segments and joints in contact with the force plate. Additionally, a temporal plausible physics algorithm may be used to correct for the inertia of the body segments from the previous body movements. Also, the biomechanical analysis systems710,730may automatically calculate joint stresses using inverse dynamics. For example, the biomechanical analysis systems710,730may automatically calculate the knee torque in one such application. In still a further illustrative embodiment, with reference toFIGS.65-66, a modified version of the force measurement system800may comprise a force plate830mounted on a displaceable platform828of a motion base assembly810, an immersive visual display device107, and one or more data processing devices104operatively coupled to the force plate830, the actuation system of the motion base assembly810, and the immersive visual display device107. In this further embodiment, the one or more data processing devices104are configured to receive the one or more signals that are representative of the forces and/or moments being applied to the top surface of the force plate830by the subject, and to convert the one or more signals into output forces and/or moments. 
The one or more data processing devices 104 are further configured to selectively displace the force plate 830 using the actuation system of the motion base assembly 810. The motion base assembly 810 will be described in detail hereinafter. Because the details of the one or more data processing devices 104, the subject visual display device 107, and the force plate 830 are generally the same as those described above with regard to the aforementioned embodiments, no further description of these components 104, 107, and 830 will be provided for this embodiment.

Referring now to FIG. 66, the components of the motion base assembly 810 will be described in detail. As shown in FIG. 66, the motion base assembly 810 generally comprises a support structure 812, a displaceable carriage 828 coupled to the force plate 830, and an actuation system including a plurality of actuators 832, 834, 836, 838, 840, 842 operatively coupling the displaceable carriage 828 to the support structure 812. The plurality of actuators 832, 834, 836, 838, 840, 842 are configured to displace the displaceable carriage 828 relative to the support structure 812. As shown in FIGS. 65 and 66, the displaceable carriage 828 is suspended below a portion of the support structure 812 (i.e., the displaceable carriage 828 is suspension-mounted from the underside of the top wall 823 of the support structure 812).

As depicted in FIG. 66, the displaceable carriage 828 of the motion base assembly 810 is preferably displaceable (i.e., translatable) and rotatable in 3-dimensional space by means of the plurality of actuators 832, 834, 836, 838, 840, 842. In other words, the motion base assembly 810 is preferably a six (6) degree-of-freedom motion base. In the illustrative embodiment, the motion base assembly 810 is used for the dynamic testing of subjects when, for example, the subject is being tested, or is undergoing training, in a virtual reality environment. Also, in the illustrative embodiment, the motion base assembly 810 is able to accommodate any type of perturbations as inputs (i.e., any type of perturbations generated by the one or more data processing devices 104). While the displaceable carriage 828 of the motion base assembly 810 is preferably translatable and rotatable in 3-dimensional space, it is to be understood that the motion base is not so limited. Rather, in alternative embodiments, the motion base assembly 810 is provided with fewer degrees of freedom.

In the illustrative embodiment, as shown in FIG. 66, the support structure 812 of the motion base assembly 810 comprises a plurality of sidewalls 814, 816, 818, 820, 822 and a top wall 823 attached to the upper edges of the plurality of sidewalls 814, 816, 818, 820, 822. The top wall 823 of the support structure 812 defines an opening 824 for accommodating a subject 108 in a standing position on the force plate 830 (see FIG. 65). Also, as shown in the illustrative embodiment of FIG. 66, the support structure 812 of the motion base assembly 810 has an open back 826 for enabling the subject 108 to more easily get on and off the force plate 830. In the illustrative embodiment, the support structure 812 of the motion base assembly 810 generally has the shape of half of a regular octagon, where the sidewalls 814, 816, 818, 820, 822 of the support structure 812 form interior angles of approximately 135 degrees and exterior angles of approximately 45 degrees with one another. As shown in FIG. 66, the support structure 812 partially surrounds the displaceable carriage 828 in the illustrative embodiment.
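For a six degree-of-freedom motion base of this general type, a commanded carriage pose must be converted into six actuator lengths, which is the classic inverse-kinematics problem of a hexapod- or Stewart-style platform. The following is a minimal sketch under assumed geometry; the joint coordinates, actuator pairing, and rotation convention are illustrative and are not taken from the embodiment.

```python
import numpy as np

def actuator_lengths(upper_pts, lower_pts, translation, rpy):
    """Inverse kinematics for a six-actuator motion base.

    upper_pts   : (6, 3) upper joint positions on the support structure,
                  expressed in the fixed frame
    lower_pts   : (6, 3) lower joint positions in the carriage's own frame
    translation : (3,) commanded carriage translation (m)
    rpy         : (roll, pitch, yaw) commanded carriage rotation (rad)
    Returns the six required actuator lengths (m).
    """
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx  # yaw-pitch-roll composition (an assumed convention)
    # Move each lower joint into the fixed frame, then measure the distance
    # to its paired upper joint; that distance is the required leg length.
    world_pts = (R @ np.asarray(lower_pts, dtype=float).T).T + np.asarray(translation, dtype=float)
    return np.linalg.norm(world_pts - np.asarray(upper_pts, dtype=float), axis=1)
```

Because each actuator terminates in a three-rotational-degree-of-freedom ball joint at both ends, as described below, only the leg lengths need to be commanded; the joints passively accommodate the changing leg orientations.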
Referring again to FIG. 66, in the illustrative embodiment, the displaceable carriage 828 is in the form of a displaceable platform suspended below a top portion of the support structure 812 (i.e., suspended below the top wall 823 of the support structure 812). In the illustrative embodiment, the displaceable carriage 828 is affixedly attached to the force plate 830 so that the force plate 830 is able to be displaced together with the displaceable carriage 828. In other alternative embodiments, other objects may be attached to the displaceable carriage 828 of the motion base assembly 810, such as an instrumented treadmill or other objects for which displacement is desired. For example, when the displaceable carriage 828 is being used to displace an instrumented treadmill, the structure of the displaceable carriage 828 may be modified accordingly to accommodate the increased size of the instrumented treadmill.

Next, with reference again to FIG. 66, the actuation system of the motion base assembly 810 will be described in detail. As shown in this figure, in the illustrative embodiment, the actuation system of the motion base assembly 810 generally comprises six (6) linear actuators 832, 834, 836, 838, 840, 842 configured to displace the displaceable carriage 828 and the force plate 830 supported thereon relative to the support structure 812 of the motion base assembly 810. In FIG. 66, it can be seen that each of the linear actuators 832, 834, 836, 838, 840, 842 is connected between an upper surface of the displaceable carriage (i.e., displaceable platform 828) and a lower surface of the top wall 823 of the support structure 812. More specifically, as shown in FIG. 66, the displaceable platform 828 is provided with three (3) protruding actuator connector portions 844, 846, 848 for accommodating the linear actuators 832, 834, 836, 838, 840, 842. The first and second linear actuators 832, 834 are connected between the first actuator connector portion 844 of the displaceable platform 828 and the lower surface of the top wall 823 of the support structure 812. The third and fourth linear actuators 836, 838 are connected between the second actuator connector portion 846 of the displaceable platform 828 and the lower surface of the top wall 823 of the support structure 812. The fifth and sixth linear actuators 840, 842 are connected between the third actuator connector portion 848 of the displaceable platform 828 and the lower surface of the top wall 823 of the support structure 812.

In the illustrated embodiment, each of the linear actuators 832, 834, 836, 838, 840, 842 may be in the form of an electric cylinder, which is powered by an electric servo motor. However, in alternative embodiments, other types of linear actuators may be used in lieu of electric cylinders, such as hydraulic actuators or pneumatic actuators. Turning again to FIG. 66, it can be seen that the upper end of each linear actuator 832, 834, 836, 838, 840, 842 of the actuation system is rotatably connected to the lower surface of the top wall 823 of the support structure 812 by means of an upper joint member having three rotational degrees of freedom. In the illustrative embodiment, the upper joint member rotatably coupling each linear actuator 832, 834, 836, 838, 840, 842 to the lower surface of the top wall 823 of the support structure 812 comprises an inline ball joint 850 or spherical joint 850 providing the three rotational degrees of freedom.
Also, as best shown in FIG. 66, it can be seen that the lower end of each linear actuator 832, 834, 836, 838, 840, 842 of the actuation system is rotatably connected to the respective actuator connector portion 844, 846, 848 of the displaceable platform 828 by means of a lower joint member having three rotational degrees of freedom. In the illustrative embodiment, the lower joint member rotatably coupling each linear actuator 832, 834, 836, 838, 840, 842 to the respective actuator connector portion 844, 846, 848 of the displaceable platform 828 comprises an inline ball joint or spherical joint, like spherical joint 850, providing the three rotational degrees of freedom.

In further embodiments, the data acquisition/data processing device 104 of the force measurement system 100 is further programmed to receive the one or more input signals from the one or more user input devices 132, 134 based upon the one or more selections by the user (e.g., a clinician or operator OP), where a first one of the one or more selections by the user is a scene movement setting for the one or more scenes on the output screen of the at least one visual display 107, and a second one of the one or more selections by the user is a force measurement assembly movement setting for the force measurement assembly 102; to display the one or more scenes on the output screen of the at least one visual display 107 based upon the scene movement setting selected by the user; and to control the force measurement assembly 102 based upon the force measurement assembly movement setting selected by the user. While the functionality of these further embodiments is described with reference to the force measurement system 100 of FIG. 1, it is to be understood that this system functionality may be used in conjunction with other force measurement systems, such as the force measurement system 100′ of FIG. 38 that comprises a force measurement device 600 in the form of an instrumented treadmill.

Turning now to FIGS. 67 and 68, in an illustrative embodiment, the data acquisition/data processing device 104 is programmed to generate a perturbation selection screen 900 (FIG. 67) for enabling a user to select one or more desired perturbations, and a perturbation report information screen 940 (FIG. 68) for displaying testing or training performance information regarding a subject. Referring initially to FIG. 67, it can be seen that the perturbation selection screen 900 includes a scene movement selection indicator 914 and a plurality of scene movement selectable choices 916. In the example of FIG. 67, the user has selected "Static" for the scene movement. In the illustrative embodiment, the scene movement setting selected by the user may be selected from the group consisting of: (i) static, with no scene movement; (ii) plate-referenced, where a movement of the one or more scenes mimics a movement of the force measurement assembly; (iii) sway-referenced, where a movement of the one or more scenes is based on a sway of the subject; (iv) roll rotation, where a movement of the one or more scenes is a predetermined angular displacement in a clockwise or counterclockwise direction; and (v) random, where a movement of the one or more scenes is random.

With reference again to FIG. 67, it can be seen that the perturbation selection screen 900 further includes a force plate movement selection indicator 918 and a plurality of force plate movement selectable choices 920. In the example of FIG. 67, the user has selected "Random" for the force plate movement.
In the illustrative embodiment, the force plate movement setting or force measurement assembly movement setting selected by the user is selected from the group consisting of: (i) none, with no force measurement assembly movement; (ii) rotation, where the force measurement assembly is rotated; (iii) translation, where the force measurement assembly is translated; and (iv) random, where there is a random selection among the none, rotation, and translation settings.

As shown in FIG. 67, in the illustrative embodiment, the perturbation selection screen 900 further includes a force plate direction selection indicator 922 and a plurality of force plate direction selectable choices 924. In the example of FIG. 67, the user has selected "Random" for the force plate direction. In the illustrative embodiment, the force plate direction setting or force measurement assembly direction setting controls a direction of force plate or force measurement assembly movement. In the illustrative embodiment, the force measurement assembly direction setting selected by the user may be selected from the group consisting of: (i) forward, where the force measurement assembly is displaced in a forward direction; (ii) backward, where the force measurement assembly is displaced in a backward direction; and (iii) random, where there is a random selection between the forward and backward directions.

As also shown in FIG. 67, in the illustrative embodiment, the perturbation selection screen 900 further includes a movement level selection indicator 926 and a plurality of movement level selectable choices 928. In the example of FIG. 67, the user has selected "Random" for the movement level. In the illustrative embodiment, the movement level controls an amplitude of force measurement assembly movement and/or an amplitude of scene movement. In the illustrative embodiment, the movement level selected by the user may be selected from the group consisting of: (i) a first level, where the amplitude of the force measurement assembly or scene movement is the smallest; (ii) a second level, where the amplitude of the force measurement assembly or scene movement is at a medium level; (iii) a third level, where the amplitude of the force measurement assembly or scene movement is the highest; and (iv) random, where there is a random selection among the first level, second level, and third level.

Referring again to FIG. 67, in the illustrative embodiment, the perturbation selection screen 900 further includes a movement speed selection indicator 930 and a plurality of movement speed selectable choices 932. In the example of FIG. 67, the user has selected "Slow" for the movement speed. In the illustrative embodiment, the movement speed controls a speed of force measurement assembly movement and/or a speed of the scene movement. In the illustrative embodiment, the movement speed may be selected from the group consisting of: (i) a slow speed, where the speed of the force measurement assembly or scene is the lowest; (ii) a medium speed, where the speed of the force measurement assembly or scene is at a medium level; and (iii) a high speed, where the speed of the force measurement assembly or scene is the highest.

In the illustrative embodiment, the translational stimuli selection for the force plate movement or force measurement assembly movement results in the force platform moving forward or backward in the anteroposterior (AP) direction.
In the perturbation training protocol, the amplitude of translation is selectable by the user among the following movement level options: small, medium, and large (and random). The small, medium, and large options correspond to 4, 6, and 8 degrees of subject lean, respectively. The distance that the force platform travels is calculated using both the degree of lean selected and the subject's height (a sketch of this computation follows this passage). Therefore, the platform distance traveled can be different for subjects of differing heights. However, the stimulus should induce the same lean angle for each subject in the same translation perturbation condition.

In the illustrative embodiment, the rotational stimuli selection for the force plate movement or force measurement assembly movement results in the force platform rotating forward or backward about the ankle joints in the AP plane. The amplitude of rotation is user-selectable among the same movement level options: small, medium, and large (and random). The small, medium, and large options in the rotational setting correspond to 4, 6, and 8 degrees of force platform rotation, respectively. The angle that the force platform rotates in each condition remains consistent for all subjects regardless of patient height.

Additionally, in the illustrative embodiment, two speeds are available for the force platform's motion in both rotational and translational movements. This option is called movement speed. The duration of force platform movement for the "Slow" speed option is 400 ms, whereas the duration for the "Fast" option is 250 ms. Force platform motions at faster speeds will require the patient to react faster to the perturbation. By incorporating higher speeds, clinicians can increase the challenge to the subject and ultimately improve performance. In the illustrative embodiment, for trials where only the visual scene is moved, the clinician or operator OP can choose the "Static" option for the force platform motion. With this setting selected, the force platform does not move during the trial.

In the illustrative embodiment, in the "Static" setting for scene movement, the visual scene does not move. In the "Sway-Referenced" setting for scene movement, the visual scene motion mimics the subject's sway. That is, if the subject leans in one direction, the scene will move proportionally in the same direction to match the subject's lean. This setting is similar to the scene motion in conditions 3 and 6 of the Sensory Organization Test (SOT). In the "Plate-Referenced" setting for scene movement, the visual scene motion mimics the force platform motion. For example, if the force platform motion is a large, fast forward translation, the visual scene will simultaneously make a large, fast translation in the same direction. In the "Roll Rotation" setting for scene movement, the scene rotates 15 degrees clockwise or counterclockwise in the roll plane at a speed of 45 degrees per second. The direction of rotation is randomized. This visual perturbation will also happen in conjunction with the force platform motion. In addition to the above settings, the clinician or operator OP can select the scene to be "Blank", in which case scene motion will not be visible. The many scene movement profile options are designed to introduce visual stimuli that are sometimes consistent with and sometimes contradictory to the information from other sensory mechanisms. The aim is to train the subject to effectively integrate the information from different sensory mechanisms.
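The protocol above states only that platform travel is derived from the selected lean angle and the subject's height. One plausible way to realize that mapping is to model the subject as an inverted pendulum pivoting at the ankles, so that a lean of theta corresponds to a horizontal excursion of roughly L*sin(theta), with L the ankle-to-center-of-mass distance. The sketch below takes that route; the center-of-mass fraction of stature is an assumed anthropometric constant, not a value given in the text.

```python
import math

# Movement level options and their target lean angles, per the protocol above.
LEAN_DEG = {"small": 4.0, "medium": 6.0, "large": 8.0}

def translation_distance(level, subject_height_m, com_fraction=0.55):
    """Approximate platform travel (m) needed to induce the target lean.

    com_fraction is the assumed height of the center of mass as a
    fraction of stature (a common anthropometric approximation).
    """
    theta = math.radians(LEAN_DEG[level])
    com_height = com_fraction * subject_height_m
    return com_height * math.sin(theta)

# Taller subjects require more travel for the same lean angle:
print(round(translation_distance("medium", 1.80), 3))  # ~0.104 m
print(round(translation_distance("medium", 1.60), 3))  # ~0.092 m
```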
Also, as shown in FIG. 67, in the illustrative embodiment, the perturbation selection screen 900 further includes a delay start time selection indicator 902 and a plurality of delay start time selectable choices 904 (e.g., "None" or "3 seconds"). In the example of FIG. 67, the user has selected "None" for the delay start time. In the illustrative embodiment, the delay start time is a time delay between a start of a trial and a start of movement of the force measurement assembly and/or scene, and the delay start time allows for data to be collected from the force measurement assembly prior to the movement of the force measurement assembly and/or scene.

Referring again to FIG. 67, in the illustrative embodiment, the perturbation selection screen 900 further includes a hold time selection indicator 906 and a plurality of hold time selectable choices 908 (e.g., "2 seconds", "5 seconds", or "10 seconds"). In the example of FIG. 67, the user has selected "2 seconds" for the hold time. In the illustrative embodiment, the hold time is a time delay between an end of movement of the force measurement assembly and/or scene and an end of the trial, and the force measurement assembly and/or scene is configured to return to a home position after the hold time period has elapsed.

In the illustrative embodiment, in addition to the force platform motion profiles and visual motion profiles, the perturbations training allows the clinician or operator OP to select a Delay Start and a Hold Time. Delay Start is a feature that allows the system to collect 3 seconds of data from the force platform prior to the perturbation occurring. This delay in platform motion does not affect scoring or patient performance. Hold Time is another feature that influences the trial time, as it is the time added to the trial after the perturbation has occurred. During this hold time, the force platform will remain in its final position after the perturbation and will not re-home until the hold time has elapsed. Similarly, during the hold time, the scene motion will remain visible until the hold time has elapsed. Hold Time can be between 2 seconds and 10 seconds. The option selected may influence patient performance based on duration and the perceived difficulty, along with the other options selected in the training.

As also shown in FIG. 67, in the illustrative embodiment, the perturbation selection screen 900 further includes a scene selection indicator 910 and a plurality of scene selectable choices 912. In the example of FIG. 67, the user has selected "Blank Field" for the scene. In the illustrative embodiment, the scene may be selected from the group consisting of: (i) a rock wall, (ii) a checkered room, (iii) an airport, (iv) a fountain, and (v) a blank field.

Next, turning to FIG. 68, in the illustrative embodiment, it can be seen that the perturbation report information screen 940 includes a plurality of columns 942-956 and 964-978 displaying output data relating to the training routines performed for the subject. More specifically, in the example of FIG. 68, for trials 1-8, the report columns include a trial column 942, a scene movement column 944, a plate movement column 946, a movement direction column 948, a movement level column 950, a movement speed column 952, a movement score column 954, and a latency column 956.
Similarly, in the example of FIG. 68, for trials 9-13, the report columns include a trial column 964, a scene movement column 966, a plate movement column 968, a movement direction column 970, a movement level column 972, a movement speed column 974, a movement score column 976, and a latency column 978. Referring again to the example of FIG. 68, for trials 1-8, the Time Delay selection 958 is set to "None", the Hold Time selection 960 is set to "2 seconds", and the Scene selection 962 is set to "Airport". In the example of FIG. 68, for trials 9-12, the Time Delay selection 980 is set to "None", the Hold Time selection 982 is set to "2 seconds", and the Scene selection 984 is set to "Fountain".

During the perturbation training, the subject is instructed to maintain his or her balance with as little movement as possible during and after the perturbation. In the illustrative embodiment, the perturbation results are quantified by the Movement Score (e.g., columns 954, 976 in FIG. 68). To determine the Movement Score, first the total COP path traveled in both the anterior-posterior (AP) and mediolateral (ML) directions over the 2 seconds after the onset of the subject's response is determined. The average movement velocity is then determined by dividing the total path by 2 seconds. In order to compare the Movement Score across trials, the average velocity is divided by the velocity of the stimulus to yield a score that is scaled for different stimuli (see the sketch below). Higher movement scores indicate more subject movement. Scores can be compared to each other; however, rotation movement scores are generally lower than translation movement scores. For translational perturbations, the latency in columns 956, 978 of FIG. 68 is also calculated (i.e., the amount of time that it takes for the subject to respond to a translational perturbation of the force measurement assembly 102). For rotational perturbations, the latency is minimal, and the response is almost immediate.

In the illustrative embodiment, the end-of-test report (e.g., the report depicted in FIG. 68) is compiled after completion of a test. This report will include the subject information, the settings from the test, and any data/metrics pertaining to the test. As described above, the perturbations training includes a Movement Score for all trials. Any trials that are translations will also include a Latency score.

Although the invention has been shown and described with respect to a certain embodiment or embodiments, it is apparent that this invention can be embodied in many different forms and that many other modifications and variations are possible without departing from the spirit and scope of this invention. In particular, while an interactive airplane game is described in the embodiment above, those of ordinary skill in the art will readily appreciate that the invention is not so limited. For example, as illustrated in the screen image 206 of FIG. 15, the immersive virtual reality environment 208 could alternatively comprise a scenario wherein the subject 204 is walking along a bridge 207. Also, the interactive game could involve navigating through a maze, walking down an endless grocery aisle, traversing an escalator, walking down a path in the woods, or driving around a course. For example, an exemplary interactive driving game may comprise various driving scenarios. In the beginning of the game, the scenario may comprise an open road on which the subject drives. Then, a subsequent driving scenario in the interactive driving game may comprise driving through a small, confined roadway tunnel.
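Returning to the Movement Score defined above, the calculation reduces to a path-length and velocity computation over the COP trajectory. The following is a minimal sketch of that arithmetic; the function signature, the units, and the availability of a pre-detected response onset index are assumptions.

```python
import numpy as np

def movement_score(cop_ap, cop_ml, fs, onset_idx, stimulus_velocity):
    """Movement Score: mean COP velocity scaled by the stimulus velocity.

    cop_ap, cop_ml    : COP coordinates (m) in the AP and ML directions
    fs                : sampling rate of the force plate (Hz)
    onset_idx         : sample index of the onset of the subject's response
    stimulus_velocity : velocity of the perturbation stimulus (m/s)
    """
    window = slice(onset_idx, onset_idx + int(2 * fs))  # 2 s after onset
    dx = np.diff(np.asarray(cop_ap, dtype=float)[window])
    dy = np.diff(np.asarray(cop_ml, dtype=float)[window])
    total_path = float(np.sum(np.hypot(dx, dy)))  # total COP path length (m)
    mean_velocity = total_path / 2.0              # averaged over the 2 s window
    return mean_velocity / stimulus_velocity      # higher = more movement
```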
As such, the subject would encounter different conditions while engaging in the interactive driving game (e.g., a light-to-dark transition as a result of starting out on the open road and transitioning to the confines of the tunnel), and thus, the interactive game would advantageously challenge various senses of the subject. Of course, the interactive driving game could also be configured such that the subject first encounters the tunnel and then subsequently encounters the open road (i.e., a dark-to-light transition). In addition, any other suitable game and/or protocol involving a virtual reality scenario can be used in conjunction with the aforedescribed force measurement system (e.g., any other interactive game that focuses on weight shifting by the subject and/or a virtual reality scenario that imitates depth in a 2-D painting). As such, the claimed invention may encompass any such suitable game and/or protocol.

Moreover, while reference is made throughout this disclosure to, for example, "one embodiment" or a "further embodiment", it is to be understood that some or all aspects of these various embodiments may be combined with one another as part of an overall embodiment of the invention. That is, any of the features or attributes of the aforedescribed embodiments may be used in combination with any of the other features and attributes of the aforedescribed embodiments as desired.

Furthermore, while exemplary embodiments have been described herein, one of ordinary skill in the art will readily appreciate that the exemplary embodiments set forth above are merely illustrative in nature and should not be construed as to limit the claims in any manner. Rather, the scope of the invention is defined only by the appended claims and their equivalents, and not by the preceding description. | 300,667 |
11857332 | In the following, reference is made to the attached drawings, which form part of the present application and which, for illustrative purposes, show specific embodiments in which the present disclosure can be practiced. It is understood that other embodiments may be used and structural, functional, or logical modifications may be made without departing from the scope of protection of the present disclosure. In this respect, directional terminology such as "top", "bottom", "front", "back", "rear", etc. is used with reference to the orientation of the figure(s) described. Since components of embodiments can be positioned in several different orientations, the terminology of directions is for illustrative purposes only and is in no way restrictive. It is understood that the characteristics of the various exemplary designs described herein may be combined, unless specifically stated otherwise. The following detailed description is therefore not to be understood in a restrictive sense, and the scope of protection of the present disclosure is defined by the appended claims and equivalents thereof.

FIG. 1 shows an arrangement 100 for determining a degree of stretching of hair. Arrangement 100 has an acquisition unit 110, an evaluation unit 120, and a user interface 130. The acquisition unit 110 is designed to acquire properties or characteristics of hair. For this purpose, the acquisition unit 110 emits infrared light in the direction of an area 12 of the analysis object 10 to be examined (e.g., of human hair) and detects the emitted light in order to detect an absorption coefficient of the hair sample in a wavelength range from about 800 to about 2500 nm. The light emitted from the area 12 to be examined is picked up by the acquisition unit 110 and allows conclusions to be drawn about the degree to which the hair is stretched, because the structure that changes when the hair is stretched or overstretched has a distinctive absorption spectrum in the detected wavelength range.

The acquisition unit 110 has a suitable source of electromagnetic waves. This source is a light emitter or laser emitter, also known as a radiation source, and is located on or in the acquisition unit 110. The radiation source may be placed on or in the acquisition unit 110 in such a way that, when the electromagnetic waves 112 are emitted, the radiation source maintains a predetermined distance from the area 12 to be examined, in particular when the acquisition unit 110 is placed on the area to be examined. The distance of the radiation source from the area to be examined can be variable and can be changed by actuators or manually.

The acquisition unit 110 is connected to the evaluation unit 120 via a data transmission connection 114. The data transmission connection 114 can enable unidirectional or bidirectional data transmission between the acquisition unit 110 and the evaluation unit 120. Thus, the acquisition unit 110 delivers signals concerning the detected characteristics of the hair to the evaluation unit 120, whereas the evaluation unit 120 can deliver control commands to the acquisition unit 110, whereby the control commands determine how the acquisition unit 110 operates. In the case of a unidirectional data transmission connection 114, which only allows data transmission from the acquisition unit 110 to the evaluation unit 120, control parameters can be specified via input elements (buttons, switches, rotary knobs, etc., not shown) on the acquisition unit 110.
The acquisition unit 110 may have display elements (not shown) that indicate a status of the acquisition unit or the set control parameters. Alternatively, the acquisition unit 110 can also transmit the set control parameters to the evaluation unit 120, where they can optionally be displayed.

The evaluation unit 120 has a processor 126 and a local memory 128. The evaluation unit 120 receives signals concerning the characteristics of the examined area 12 of the hair sample 10 and determines a recommendation for a non-therapeutic treatment of the examined hair based on these characteristics. The non-therapeutic treatment may include recommendations on treatment products and/or treatment instructions or application instructions for the hair examined. Treatment instructions and application instructions are used as synonyms in the context of this description and refer to instructions for non-therapeutic treatment of the examined area (hair) 12 using selected treatment products or even without the use of treatment products. Treatment instructions may include the use of a treatment agent, or measures to be taken or not to be taken by the user. For example, the treatment instructions may include an indication of desirable or undesirable behavior after the use of a treatment product.

To determine a non-therapeutic treatment to be recommended, the recorded characteristics of the investigated area 12 can be compared with areas of application, effects, and instructions for use of treatment agents and/or treatment instructions. Information on the treatment agents and/or treatment instructions can be stored in a data storage unit 140. The data storage unit 140 can be located outside of and spatially separated from the evaluation unit 120. The evaluation unit 120 can access the data storage unit 140 via a data network 122 and call up information on the treatment products and/or treatment instructions stored there. This retrieved information is compared by the evaluation unit 120 with the recorded characteristics of the examined area 12 to determine appropriate recommendations for the non-therapeutic treatment of the examined hair.

In other words, the data storage unit is queried using the acquired hair characteristics (or determined hair properties). From the data storage unit, a large amount of stored information can first be retrieved and then filtered using the determined hair properties and, if applicable, treatment targets, to determine which of the treatment agents and/or treatment instructions are relevant. For this purpose, the data can be loaded from the data memory into a volatile working memory. Alternatively, the determined hair properties can already be used when retrieving the information from the data memory, so that only the relevant information is retrieved from the data memory. For the purposes of this description, these two variants can be considered equivalent in their effect.

The data network 122 may be a public data transmission network comprising wired or wireless transmission sections. For example, the evaluation unit 120 may establish a wireless connection to an access point (not shown) of the data network 122 to establish a corresponding connection to the data storage unit 140.

The user interface 130 is connected to the evaluation unit 120 via the data transmission connection 124. The user interface 130 has an input unit 132 and an output unit 134. The input unit 132 enables a user to set parameters for the operation and configuration of the evaluation unit 120, the acquisition unit 110, and/or the user interface 130.
The input unit 132 can record information via various interfaces: a keyboard, a mouse, a touch-sensitive display, or a microphone (so-called voice control). Any interface via which a human user can communicate with a computing unit and enter or transfer data is conceivable. The input unit can be used, for example, to enter the ambient air humidity or a moisture level of the hair. The output unit 134 can be a display or other display unit that provides visual information to a user. The output unit 134 can also have a loudspeaker via which acoustic information can be output. Visual information can be output on a touch-sensitive output unit so that the output unit also allows a user to make entries.

The evaluation unit 120 has a processor 126 and a local memory 128. The processor 126 executes instructions to perform its intended function or functions. The local memory 128 can store the characteristics of the hair detected by the acquisition unit 110 or the associated signals or values.

It is a particular aspect of this embodiment that the acquisition unit 110 can be operated with an evaluation unit 120 and a user interface 130 that are implemented in a portable device of a user or consumer. This makes it particularly easy to couple an acquisition unit 110, which provides advanced analysis and examination possibilities for the hair of a human user, with a portable computerized data processing device. The portable data processing device can be, for example, a smartphone, a tablet, or a home computer. The acquisition unit 110 can be mechanically, electrically, and in terms of signals connected or coupled to the portable data processing device via a defined interface.

FIG. 2 shows a data carrier 300. A computer program product is stored on the data carrier, which is designed to be executed on a portable computing unit 120 and to instruct a processor 126 of the portable computing unit to perform the following steps (cf. also the process steps in FIG. 3): irradiating (410) a hair sample with electromagnetic waves in the infrared region; detecting (420) the light emitted from the hair sample; detecting (430) an absorbance of the hair sample in a wavelength range of 800 to 2500 nm; generating (440) an absorption spectrum of the hair sample in the wavelength range of 800 to 2500 nm; and matching (450) the generated absorption spectrum with a calibration model and determining the degree of elongation of hair based on the absorption spectrum and the calibration model.

The data carrier 300 may use magnetic, optical, or electrical storage techniques (or combinations thereof) to hold the instructions of the computer program product in a machine-readable form. These instructions can be executed directly by the processor 126 of a portable computing unit 120 (the evaluation unit 120 from the embodiment of FIG. 1). Alternatively, the computer program product can be loaded into an internal memory of the portable computing unit 120 for execution. This internal memory can be the local memory 128 shown in FIG. 1. The data carrier 300 can be a mobile and/or portable data storage device. Alternatively, the computer program product can be loaded by accessing the data carrier 300 from a portable computing unit via a data network. The computer program product can be downloaded via a data network to a user's portable device and installed on the portable device for use by the user.
In addition to FIG. 2, FIG. 3 shows a procedure 400 with the following steps (these steps correspond to the functions of the computer program product): irradiating (410) a hair sample with electromagnetic waves in the infrared region; detecting (420) the light emitted from the hair sample; detecting (430) an absorbance of the hair sample in a wavelength range of from about 800 to about 2500 nm; generating (440) an absorption spectrum of the hair sample in the wavelength range of from about 800 to about 2500 nm; and matching (450) the generated absorption spectrum with a calibration model and determining the degree of elongation of hair based on the absorption spectrum and the calibration model. The computer program product contains instructions that instruct the processor 126 of the portable computing unit 120 to perform these steps 410 to 450.

Of course, the procedure 400 or its steps 410 to 450 can be modified in accordance with one of the embodiments of arrangement 100, as shown with reference to FIG. 1 and the rest of the description. This means that the functions of arrangement 100 or one of its components described herein, such as the evaluation unit 120, can be implemented as steps of procedure 400. It is not necessary to repeat the functions of the evaluation unit at this point; rather, the skilled person will recognize that, and how, these functions are implemented as procedural steps.

The different process steps as well as the components of the arrangement can be realized by one or more circuits. In an embodiment, a "circuit" is to be understood as any entity that implements a logic, which may be hardware, software, firmware, or a combination thereof. Thus, a "circuit" in one embodiment may be a hard-wired logic circuit or a programmable logic circuit, such as a programmable processor, e.g., a microprocessor or a field programmable gate array (FPGA) device. A "circuit" can also be a processor that executes software, e.g., any kind of computer program, such as a computer program in programming code for a virtual machine (a delimited runtime environment), such as a Java computer program. A "circuit" can be understood as any type of implementation of the functions described below.

FIG. 4 shows a schematic diagram of an acquisition unit 110. The acquisition unit has a surface 111 on which a light emitter 116 and an NIR sensor 118 are shown. The NIR sensor 118 can be a spectrometer. The light emitter 116 is shown as circular and the NIR sensor 118 as square. The surface 111 of the acquisition unit 110 shown in the figure is the one that faces the user's hair during a detection process. In other words, the light emitter 116 emits the light rays out of the drawing plane towards an observer. When the hair of a human user is irradiated with light (e.g., laser light), part of this light is re-emitted depending on the chemical composition and/or structure of the hair.

The processor 126 (FIG. 1) can implement control functions and issue control commands to the light emitter 116. For example, the processor 126 can control the light emitter to emit light of a certain intensity, wavelength, and/or spectral distribution (these can be called parameters of the light). The evaluation unit 120 with the processor 126 (FIG. 1) also receives the signals from the NIR sensor 118 and can classify the examined hair based on these signals. In other words, the signals delivered by the NIR sensor 118 are characteristic of the examined hair. These signals can also be called signal patterns and can be used to determine and output a product recommendation and/or application notes.
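To make the matching step (450) concrete, the sketch below estimates the degree of elongation by comparing a measured NIR spectrum against a set of calibration spectra with known elongation. A correlation-based nearest-neighbor match is only one plausible realization of the calibration model; the disclosure leaves the model's form open (it could equally be, say, a partial-least-squares regression), and all names here are illustrative.

```python
import numpy as np

def estimate_elongation(measured, calib_spectra, calib_elongation):
    """Match a measured NIR absorption spectrum against a calibration model.

    measured         : (n_wavelengths,) absorbance sampled over ~800-2500 nm
    calib_spectra    : (n_samples, n_wavelengths) reference spectra of hair
                       tresses stretched to known degrees
    calib_elongation : (n_samples,) known degree of elongation (%) per spectrum
    Returns the elongation of the best-matching calibration spectrum.
    """
    def normalize(s):
        # Standardize each spectrum so the comparison reflects spectral
        # shape rather than absolute absorbance level.
        s = np.asarray(s, dtype=float)
        return (s - s.mean()) / s.std()

    m = normalize(measured)
    # Correlation with each calibration spectrum; the highest score wins.
    scores = [float(np.mean(m * normalize(c))) for c in calib_spectra]
    return calib_elongation[int(np.argmax(scores))]
```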
It is conceivable that a typical signal pattern is assigned to a product and/or an application note, where the product and/or the application note can be sensibly applied to the examined hair to achieve a desired treatment result. This assigned signal pattern of the products and/or application notes can be compared with the actual signal pattern from the acquisition unit. When the signal pattern detected or supplied by the spectrometer matches the signal pattern assigned to a product and/or application note to a sufficient degree, the corresponding products and/or application notes can then be output. The signals can be examined for qualitative similarity (do the shapes or profiles of the signals correspond?) and/or quantitative similarity (do similar input values, i.e., emitted light, produce similar output values, i.e., detected light?). It is also conceivable that, depending on user input, a factor may be determined and applied to the signal detected by the spectrometer before this input signal is compared with the signal patterns of the products or application notes. This has the advantage that a correction factor can be applied to the acquired signal to improve the accuracy of product recommendations and/or application notes for a particular user.

FIG. 5 shows an exemplary absorption diagram 500 in a wavenumber range from about 1600 cm−1 to about 1720 cm−1 (a wavelength range between about 6250 nm and about 5813 nm). The wavelength range shown in FIG. 5 lies above the NIR (near infrared) range and serves to aid understanding of the present disclosure. The diagram shows the absorption spectrum of hair in a mid-infrared wavelength range. The wavenumber is plotted on the horizontal axis 510, and an absorption coefficient, measured as the proportion of absorbed light to emitted light (normalized to 1), is plotted on the vertical axis 520. In the mentioned wavenumber range, the absorption spectrum 530 of β-sheet structures, the absorption spectrum 540 of α-helices, the absorption spectrum 550 of other randomly distributed protein fragments of the hair, and the resulting total absorption 560 are shown. The total absorption 560 results from the superposition of curves 530, 540, and 550. The qualitative shape and quantitative absorption values of curve 560 are characteristic of the degree of elongation of the hair sample.

However, measuring the absorption spectrum above the NIR range (above a wavelength of about 3000 nm) can be expensive. The present disclosure therefore proposes to use a near infrared sensor that detects an absorbance of a hair sample in a wavelength range between about 800 nm and about 2500 nm and generates the corresponding absorption spectrum. In the wavelength range between about 800 nm and about 2500 nm, in particular between about 2000 nm and about 2500 nm, harmonics of the α-helix and β-sheet absorption bands can be detected.

FIG. 6 shows an exemplary, qualitative stress-strain diagram 600 for human hair. The horizontal axis 610 shows the elongation and the vertical axis 620 shows the tension in the hair. In human hair, the stretching process is completely reversible up to about 3% to about 5% elongation (depending on the individual hair). This means that after relief of the load, the hair returns to its original length before stretching. This range is marked 630 and indicates the range of linear-elastic behavior. From approximately 10% to about 15% elongation, the irreversible transformation of α-helices into β-sheet structures occurs.
Due to the simple transformation of the secondary structures, this overstretching requires only small forces, as can be seen from the small or almost non-existent gradient of the curve in region 640. At about 25% elongation, the hair tears. This transformation of the hair is undesirable from a cosmetic point of view, as the hair in the transition region 640 clearly loses its mechanical stability. Following the region 640 there is another region 650, which corresponds to further increasing elongation. Here the tension continues to increase as the hair is stretched until it finally tears.

The analysis of the ratio of α-helices to β-sheet structures is therefore, in addition to parameters such as oxidative damage, reductive damage, moisture, and surface damage, another important variable for the holistic assessment of hair condition. It has now been found that the described overstretching of hair can be easily determined using NIR sensors by comparing the absorption spectrum recorded with the NIR sensor with the absorption spectra of a calibration model, as described above. Due to its compactness and low cost, the process is also suitable for consumers, hairdressing salons, and drugstores ("point of sale").

While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples and are not intended to limit the scope, applicability, or configuration of the various embodiments in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment as contemplated herein, it being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope of the various embodiments as set forth in the appended claims. | 19,412 |
11857333 | DESCRIPTION OF THE PREFERRED EMBODIMENTS The present invention is related to a device and method of titrating a sleep disorder treatment, particularly positive airway pressure (PAP) and continuous positive airway pressure (CPAP) treatment for sleep apneas. The present invention is further related to the devices used in executing the method. The present invention includes various embodiments of a method of titrating a sleep disorder treatment device. These embodiments include but are not limited to one or more of the steps described herein. The present invention further includes various embodiments of a device used to titrate a sleep disorder treatment, particularly a PAP or CPAP device.

The subjects referred to in the present invention can be any form of animal. Preferably the subject is a mammal, and more preferably a human. Most preferably, the subject is a human being treated for a sleep-related breathing disorder with a PAP or CPAP device.

Various embodiments of the present invention include a step of applying at least one sensor to the subject. The sensors can be applied at any location. Preferably, the sensors are applied in a physician's office or place of business. The physician's place of business includes but is not limited to an office building, a freestanding sleep center, a location within a hospital, a mobile vehicle or trailer, leased space, or a similar location. Just as preferably, the sensors will be mailed to the subject's home or other sleeping location, and the subject will then apply them independently. The subject's sleeping location includes but is not limited to the subject's home, apartment, and the like, as well as a hotel, nursing facility, or other location where an individual could sleep and where this analysis could be done more controllably and/or less expensively than in a sleep lab or hospital setting.

Similarly, the sensors can be applied by a variety of individuals, including but not limited to a physician, nurse, sleep technician, or other healthcare professional. Just as preferably, the sensors could be applied by the subject or the subject's spouse, friend, roommate, or other individual capable of attaching the various sensors. More preferably, the sensors could be applied by the subject or such other individual with guidance and instruction. Such guidance and instruction can include static information such as pamphlets, audio recordings (on cassettes, compact discs, and the like), video recordings (on videocassettes, digital video discs, and the like), websites, and the like, as well as dynamic information such as direct real-time communication via telephone, cell phone, videoconference, and the like.

The sensors that are used with various embodiments of the present invention are described herein but can also be any of those known to those skilled in the art for the applications of this method. The collected physiological, kinetic, and environmental signals can be obtained by any method known in the art. Preferably, those sensors include, but are not limited to, wet or dry electrodes, photodetectors, accelerometers, pneumotachometers, sphygmomanometers (for measuring blood pressure), strain gauges, thermal sensors, pH sensors, chemical sensors, gas sensors (such as oxygen and carbon dioxide sensors), transducers, piezo sensors, magnetometers, pressure sensors, static charge-sensitive beds, microphones, audio recorders, video cameras, and the like.
Optionally, the data includes a video channel. The invention is envisioned to include those sensors subsequently developed by those skilled in the art to detect these types of signals. For example, the sensors can be magnetic sensors. Because electro-physiological signals are, in general, electrical currents that produce associated magnetic fields, the present invention further anticipates methods of sensing those magnetic fields to acquire the signal. For example, new magnetic sensors could collect brain wave signals similar to those that can be obtained through a traditional electrode applied to the subject's scalp.

Various embodiments of the present invention include a step for applying sensors to the subject. This step can be performed or accomplished in a number of ways. In the simplest form, one sensor is applied to the subject to measure a single channel of physiological or kinetic data. In a more complex form, two sensors are applied to the subject and one additional sensor is contained within the PAP or CPAP device. Preferably, the set of sensors includes one pulse oximeter applied to the subject's index finger, one thoracic respiratory effort belt applied around the subject's chest, and one airflow or air pressure transducer contained within the PAP or CPAP device. In a still more complex form of this step, multiple sensors are applied to the subject to collect data sufficient for a full PSG test. If PSG data are to be collected, the preferred minimal set of sensors includes sensors for two EEG channels, one EOG channel, one chin EMG channel, one airflow channel, one ECG channel, one thoracic respiratory effort channel, one abdominal respiratory effort channel, one pulse oximetry channel, and one shin or leg EMG channel. More preferably, the minimal set of PSG sensors is augmented with at least one additional channel of EOG, one channel of snore, one channel of body position (e.g., an accelerometer), one channel of video, and optionally one channel of audio.

Electro-physiological signals such as EEG, ECG, EMG, EOG, electroneurogram (ENG), electroretinogram (ERG), and the like can be collected via electrodes placed at one or several relevant locations on the subject's body. For example, when measuring brain wave or EEG signals, electrodes may be placed at one or several locations on the subject's scalp. In order to obtain a good electro-physiological signal, it is desirable to have low impedances for the electrodes. Typical electrodes placed on the skin may have an impedance in the range of 5 to 10 kΩ. It is generally desirable to reduce such impedance levels to below 2 kΩ. A conductive paste or gel may be applied to the electrode to create a connection with an impedance below 2 kΩ. Alternatively or in conjunction with the conductive gel, a subject's skin may be mechanically abraded, the electrode may be amplified, or a dry electrode may be used. Dry physiological recording electrodes of the type described in U.S. Pat. No. 7,032,301, which is herein incorporated by reference, may be used. Dry electrodes are advantageous because they use no gel that can dry out, skin abrasion or cleaning is unnecessary, and the electrode can be applied in a hairy area such as the scalp. Additionally, if electrodes are used as the sensors, preferably at least two electrodes are used for each channel of data: one signal electrode and one reference electrode. Optionally, a single reference electrode may be used for more than one channel.
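The preferred minimal and augmented PSG montages described above amount to a channel configuration. A hypothetical sketch of how such a montage might be declared in software follows; the channel names and counts mirror the text, but the data structure itself is an illustration, not part of the invention.

```python
# Channel counts for the preferred minimal PSG montage described above.
MINIMAL_PSG_MONTAGE = {
    "EEG": 2,
    "EOG": 1,
    "chin_EMG": 1,
    "airflow": 1,
    "ECG": 1,
    "thoracic_effort": 1,
    "abdominal_effort": 1,
    "pulse_oximetry": 1,
    "leg_EMG": 1,
}

# Additional channels for the more preferred, augmented montage.
AUGMENTED_EXTRAS = {
    "EOG": 1,
    "snore": 1,
    "body_position": 1,  # e.g., an accelerometer
    "video": 1,
    "audio": 1,          # optional
}
```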
When electrodes are used to collect EEG or brain wave signals, common locations for the electrodes include frontal (F), parietal (P), mastoid process (A), central (C), and occipital (O). Preferably for the present invention, when electrodes are used to collect EEG or brain wave data, at least one electrode is placed in the occipital position and referenced against an electrode placed on the mastoid process (A). More preferably, when electrodes are used to collect EEG or brain wave data, electrodes are placed to obtain a second channel of data from the central location. If further EEG or brain wave signal channels are desired, the number of electrodes required will depend on whether separate reference electrodes or a single reference electrode is used.

If electrodes are used to collect cardiac signals using an ECG, they may be placed at specific points on the subject's body. The ECG is used to measure the rate and regularity of heartbeats, determine the size and position of the heart chambers, assess any damage to the heart, and diagnose sleeping disorders. An ECG is important as a tool to detect the cardiac abnormalities that can be associated with respiratory-related disorders. As the heart undergoes depolarization and repolarization, electrical currents spread throughout the body because the body acts as a volume conductor. The electrical currents generated by the heart are commonly measured by an array of twelve electrodes placed on the arms, legs, and chest. Although a full ECG test typically involves twelve electrodes, only two are required for many tests such as a sleep study. When electrodes are used to collect ECG with the present invention, preferably only two electrodes are used. When two electrodes are used to collect ECG, preferably one is placed on the subject's left ribcage under the armpit, and the other on the right shoulder near the clavicle bone. Optionally, a full set of twelve ECG electrodes may be used, for example if the subject is suspected of having a cardiac disorder. The specific location of each electrode on a subject's body is well known to those skilled in the art and varies between both individuals and types of subjects.

If electrodes are used to collect ECG, preferably the electrode leads are connected to a component of the data acquisition system that includes a processing or pre-processing module that measures potential differences between selected electrodes to produce ECG tracings. The two basic types of ECG leads are bipolar and unipolar. Bipolar leads (standard limb leads) have a single positive and a single negative electrode between which electrical potentials are measured. Unipolar leads (augmented leads and chest leads) have a single positive recording electrode and use a combination of the other electrodes to serve as a composite negative electrode. Either type of lead is acceptable for collecting ECG signals in the present invention.

Other sensors can be used to measure various parameters of a subject's respirations. Airflow is preferably measured using sensors or devices such as a pneumotachometer, strain gauges, thermal sensors, transducers, piezo sensors, magnetometers, pressure sensors, static charge-sensitive beds, and the like. These sensors or devices also preferably measure nasal pressure, respiratory inductance plethysmography, thoracic impedance, expired carbon dioxide, tracheal sound, snore sound, blood pressure, and the like.
Respiratory effort is preferably measured by a piezo-electric respiration sensor, inductive plethysmography, esophageal pressure, surface diaphragmatic EMG, and the like. Oxygenation and ventilation are preferably measured by pulse oximetry, transcutaneous oxygen monitoring, transcutaneous carbon dioxide monitoring, expired end-tidal carbon dioxide monitoring, and the like.

Optionally, sensors for directly or indirectly measuring respirations can be located in the conduit connecting the PAP or CPAP blower to the gas delivery mechanism. These sensors can include airflow sensors, air pressure sensors, or other sensors to measure characteristics of the gas. Further optionally, sensors can be located near the blower mechanism. These blower sensors can estimate or indirectly measure airflow or air pressure by measuring fan speed or power consumption. Methods of determining airflow or air pressure from sensors placed in or on a PAP or CPAP device are generally known in the art, and any such method is appropriate for the present invention.

One example of such a sensor for measuring respirations either directly or indirectly is a respiration belt. Respiration belts can be used to measure a subject's abdominal and/or thoracic expansion over a measurement time period. The respiration belts may contain a strain gauge, a piezo-electric sensor, a pressure transducer, or other sensors that can indirectly measure a subject's respirations and the variability of respirations by providing a signal that correlates to the expansion and contraction of the subject's thoracic/abdominal cavity. If respiration belts are used, they may be placed at one or several locations on the subject's torso or in any other manner known to those skilled in the art. Preferably, when a thoracic respiration belt is used, it is positioned below the axilla to measure rib cage excursions. When an abdominal respiration belt is used, it is positioned at the level of the umbilicus to measure abdominal excursions. Optionally, at least two belts are used, with one positioned at the axilla and the other at the umbilicus.

Another example of a sensor or method for measuring respirations either directly or indirectly is a nasal cannula or a facemask used to measure the subject's respiratory airflow. Nasal or oral airflow can be measured quantitatively and directly with a pneumotachograph consisting of a pressure transducer connected to either a standard oxygen nasal cannula placed in the nose, a facemask over the subject's mouth and nose, or the PAP or CPAP gas delivery mechanism. Airflow can also be estimated by measuring nasal or oral airway pressure, which decreases during inspiration and increases during expiration. Inspiration and expiration produce fluctuations in the pressure transducer's signal that are proportional to airflow (see the sketch below). A single pressure transducer can be used to measure the combined oral and nasal airflow. Alternatively, the oral and nasal components of these measurements can be acquired directly through the use of at least two pressure transducers, one transducer for each component. Optionally, the pressure transducer(s) are internal to the patient interface box. If two transducers are used for nasal and oral measurements, preferably each has a separate air port into the patient interface box.
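As a hedged illustration of the pressure-based airflow estimate just described: in practice, cannula pressure grows roughly with the square of flow, so a square-root transform is a common way to linearize the signal. That transform is a general signal-processing convention rather than a formula stated in this embodiment, and the function below is purely illustrative.

```python
import numpy as np

def flow_from_nasal_pressure(pressure):
    """Estimate relative airflow from a nasal cannula pressure signal.

    Output is an uncalibrated, relative flow signal (arbitrary units):
    negative during inspiration and positive during expiration, matching
    the pressure polarity described in the text.
    """
    p = np.asarray(pressure, dtype=float)
    p = p - np.median(p)                     # remove baseline offset
    return np.sign(p) * np.sqrt(np.abs(p))   # square-root linearization
```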
When respirations are measured via airflow or air pressure transducers, preferably the sensors are internal to the PAP or CPAP device itself, either in the PAP or CPAP gas delivery mechanism (i.e., the mask or cannula) or positioned near the blower as described above. Transducers in the PAP or CPAP mask or cannula operate identically to those in the masks and cannulae described above. Optionally, sensors can be located in the conduit connecting the PAP or CPAP blower to the gas delivery mechanism. Methods of determining airflow or air pressure from sensors placed in or on a PAP or CPAP device are generally known in the art, and any such method is appropriate for the present invention.

Sensors placed on a mask or cannula can also be used to determine other physiological characteristics. Software filtering can obtain "snore signals" from a single pressure transducer signal by extracting the high frequency portion of the transducer signal. This method can eliminate the need for a separate sensor, such as a microphone or another transducer, and also reduces the system resources required to detect both snore and airflow. A modified nasal cannula or facemask connected to a carbon dioxide or oxygen sensor may be used to measure respective concentrations of these gases. In addition, a variety of other sensors can be connected with either a nasal cannula or facemask to measure a subject's respirations directly or indirectly.
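The following sketch illustrates one way such software filtering might separate a low-frequency airflow component from a high-frequency snore component of a single pressure transducer signal. The fourth-order Butterworth filters, the 10 Hz cutoff, and the sampling rate are illustrative assumptions, not parameters taken from the present invention.

import numpy as np
from scipy.signal import butter, filtfilt

def split_airflow_and_snore(pressure, fs, cutoff_hz=10.0):
    """Split a nasal-pressure signal into a low-frequency airflow
    component and a high-frequency snore component. Respiration sits
    roughly at 0.1-1 Hz; snore vibrations sit well above the cutoff."""
    b_lo, a_lo = butter(4, cutoff_hz / (fs / 2), btype="low")
    b_hi, a_hi = butter(4, cutoff_hz / (fs / 2), btype="high")
    airflow = filtfilt(b_lo, a_lo, pressure)  # breathing waveform
    snore = filtfilt(b_hi, a_hi, pressure)    # vibration/snore band
    return airflow, snore

# Synthetic example: 0.25 Hz breathing plus a 60 Hz snore burst.
fs = 500
t = np.arange(0, 30, 1 / fs)
sig = np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 60 * t) * (t > 15)
airflow, snore = split_airflow_and_snore(sig, fs)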
Still another example of a sensor or method of directly or indirectly measuring respirations of the subject is a pulse oximeter. The pulse oximeter can measure the oxygenation of the subject's blood by producing a source of light at two wavelengths (typically at 650 nm and 905, 910, or 940 nm). Hemoglobin partially absorbs the light by amounts that differ depending on whether it is saturated or desaturated with oxygen. Calculating the absorption at the two wavelengths leads to an estimate of the proportion of oxygenated hemoglobin. Preferably, pulse oximeters are placed on a subject's earlobe or fingertip. More preferably, the pulse oximeter is placed on the subject's index finger. In one embodiment of the present invention, a pulse oximeter is built in or hard-wired to the interface box. Alternatively, the pulse oximeter can be a separate unit in communication with either the interface box or the base station via either a wired or wireless connection.

Kinetic data can be obtained by accelerometers placed on the subject. For example, several accelerometers can be placed in various locations on the subject, such as on the head, wrists, torso, and legs. These accelerometers can provide both motion and general position/orientation data by measuring gravity. These accelerometers can be used to detect when subjects go to sleep and to detect movements during sleep, both of which are important factors in assessing the actual sleep time. Actual sleep time is an important parameter for generating an accurate assessment of nocturnal respiratory events, such as the apnea/hypopnea index (AHI), which is the sum of all apneas and hypopneas divided by total sleep time. A video signal can also provide some kinetic data after processing. Alternatively, stereo video signals can provide three-dimensional position and motion information. Kinetic data includes, but is not limited to, frequent tossing and turning indicative of an unsuitable mattress, excessive movement of bedding indicating unsuitable sleeping temperatures, unusual movement patterns indicating pain, and the subject's sleeping position.

Other sensors can be used to measure various parameters of a subject's physiological, kinetic, or environmental conditions. These other parameters are preferably measured using sensors or devices such as photodetectors, light meters, accelerometers, pneumotachometers, sphygmomanometers (for measuring blood pressure), strain gauges, thermal sensors, pH sensors, chemical sensors, gas sensors (such as carbon monoxide detectors), transducers, piezo sensors, magnetometers, pressure sensors, static charge-sensitive beds, audio monitors, microphones, reflective markers, video monitors, hygrometers, and the like. Because the system is programmable, potentially any transducer-type sensor that outputs an electrical signal can be used with the system.

Various embodiments of the present invention include the step of connecting sensors to a data acquisition system. The sensors can be connected to the data acquisition system either before or after they are applied to the subject. The sensors can be permanently hardwired to at least part of the data acquisition system. More preferably, the sensors are connected to at least part of the data acquisition system via a releasable connector. Optionally, the sensors can be connected to at least part of the data acquisition system via a non-releasable connector that does not permit disconnection without destruction of the connector. The physiological sensors are generally hardwired (permanently or via a connector) to the data acquisition system, but the ongoing evolution in wireless sensor technology may allow sensors to contain transmitters. Optionally, such sensors are wirelessly connected to the data acquisition system. As such, these sensors and the wireless connection method are considered to be part of the present invention. With the advances in microelectromechanical systems (MEMS) sensor technology, the sensors may have integrated analog amplification, integrated A/D converters, and integrated memory cells for calibration, allowing for some signal conditioning directly on the sensor before transmission.

Preferably, the sensors are all connected in the same way at the same time, although this certainly is not required. It is possible, but less preferable, to connect the sensors with a combination of methods (i.e., wired or wireless) at a combination of times (i.e., some before application to the subject, and some after application to the subject). The sensors can be connected to various parts of the data acquisition system. For example, a thoracic respiratory effort belt can be connected to a patient interface box while a pulse oximeter can be connected to a base station. Further, some sensors may not be attached to the subject at all. Examples of such sensors include airflow sensors that are part of the PAP or CPAP device and video cameras or microphones that are placed in the subject's sleeping area. Although these sensors are not attached to the subject, they are still connected to at least one component of the data acquisition system.

Various embodiments of the present invention use a data acquisition system capable of both (a) receiving signals from the sensors applied to or placed near the subject; and (b) retransmitting the signals or transmitting another signal based at least in part on at least one of the collected signals. In its simplest form, the data acquisition system preferably should interface with the sensors and retransmit the signals from the sensors.
Preferably, the data acquisition system wirelessly transmits the signals from the sensors. Optionally, the data acquisition system also pre-processes the signals from the sensors and transmits the pre-processed signals. Further optionally, the data acquisition system is also capable of storing the signals from the sensors and/or any pre-processed signals.

Optionally, the data acquisition system can be a single box, such as a patient interface box, containing a sensor interface module, a pre-processor module, and a transmitter module. Further optionally, the data acquisition system could consist of several boxes that communicate with each other, each box containing one or more modules. For example, the data acquisition system could consist of: (a) a patient interface box containing a sensor interface module, a pre-processor, a transmitter, and a receiver; and (b) a base station box containing a second pre-processor, a transmitter, and a receiver. In this example, the transmitter and receiver of the patient interface box are used to communicate with the base station box. The transmitter and receiver of the base station box are used to communicate both with the patient interface box and with a remote monitoring station, remote analysis station, remote data storage station, and the like. Similarly, the data acquisition system could consist of: (a) a patient interface box containing a sensor interface module, a transmitter, and a receiver; (b) a processor box containing a pre-processor, a transmitter, and a receiver; and (c) a base station box containing only a receiver and a transmitter. In these configurations, it is not necessary for the transmitters to be of the same type. For example, the transmitter in the patient interface box can be a wired, Bluetooth, or other transmitter designed for short distances, and the transmitter in the base station box can be a WiFi, IEEE 802.11, TCP/IP, or other transmitter designed to establish connections over larger distances.
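The box-and-module configurations above can be summarized in a small configuration sketch. The class and module names below are illustrative stand-ins for the hardware just described, not identifiers used by the present invention.

from dataclasses import dataclass, field

@dataclass
class Box:
    """One enclosure in the data acquisition system."""
    name: str
    modules: list = field(default_factory=list)  # e.g., "sensor_interface"
    link: str = "none"                           # transport to the next hop

# The two-box example from the text: a short-range link out of the
# patient interface box, a longer-range link out of the base station.
patient_box = Box("patient interface box",
                  ["sensor_interface", "pre_processor", "transmitter", "receiver"],
                  link="Bluetooth (short range)")
base_station = Box("base station box",
                   ["pre_processor", "transmitter", "receiver"],
                   link="WiFi/IEEE 802.11/TCP-IP (longer range)")

for box in (patient_box, base_station):
    print(f"{box.name}: {', '.join(box.modules)} via {box.link}")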
Several data acquisition systems are suitable for use with the present invention. Preferably, the data acquisition system is a device from Cleveland Medical Devices, Inc. (CleveMed). All current suitable CleveMed data acquisition systems include a patient interface box and a base station. The three CleveMed patient interface boxes described below allow for data backup and storage on a removable SD memory card, with a single 1 GB card providing over 60 hours of recording. The current CleveMed data acquisition systems also each include a base station weighing 130 g that is powered by USB. The USB cable also provides a wired link between the base station and a PC. The CleveMed patient interfaces and base stations contain integrated wireless technology for real-time data transmission within 100 feet line of sight. The CleveMed patient interface boxes currently suitable for use with the present invention include the SleepScout™, Crystal Monitor® 20, and Sapphire™ PSG systems.

The SleepScout™ is a wireless patient interface box that includes a total of 9 input channels for ECG, EMG, thoracic and abdominal respiratory efforts, snore, and a generic auxiliary DC input. Two of the channels are fully programmable, adding flexibility by allowing for any combination of EEG, ECG, EOG, or EMG. SleepScout™ also includes several built-in sensors, including body position, pulse oximetry, pressure-based airflow, and a differential pressure transducer that allows for PAP or CPAP titration studies. The SleepScout™ patient interface box weighs 190 g and is powered by two AA lithium batteries. The SleepScout™ transmits data in the 2.4-2.484 GHz ISM band.

The CleveMed Sapphire™ is a wireless patient interface box that includes a total of 22 input channels, including 6 EEG, 2 EOG, and 5 EMG channels, as well as ECG, temperature, body position, and respiratory effort. The six EEG channels allow the Sapphire™ to meet guidelines for conducting polysomnogram tests. The Sapphire™ also includes several built-in sensors, including body position, pulse oximetry, and a generic auxiliary DC input. The Sapphire™ patient interface box weighs 538 g and is powered by two AA lithium batteries. The Sapphire™ transmits data in multiple bands, allowing dynamic selection of Wireless Medical Telemetry Service (WMTS) bands (608-614 MHz, 1427-1432 MHz) and two ISM bands (902-928 MHz or 2.4-2.485 GHz), depending on the availability and saturation of transmission bands in the testing location.

The CleveMed Crystal® Monitor 20 is a family of wireless patient interface boxes. Each Crystal® Monitor 20 includes a total of 14 input channels, including two each for EEG, EOG, and EMG, as well as ECG and thoracic and abdominal respiratory efforts. The Crystal® Monitor 20 also includes several built-in sensors, including body position, pulse oximetry, pressure-based airflow, and a generic auxiliary DC input. The Crystal® Monitor 20 patient interface box weighs 210 g and is powered by two AA lithium batteries. The Crystal® Monitor family transmits data in multiple bands; the Crystal® 20-B transmits in the 900 MHz ISM band, and the Crystal® 20-S transmits in the 2.4 GHz ISM band. Selection of the appropriate Crystal® Monitor depends upon the availability and saturation of transmission bands in the testing location.

The data acquisition system is preferably portable. By portable, it is meant, among other things, that the device is capable of being transported relatively easily. Relative ease in transport means that the device is easily worn and carried, generally in a carrying case, to the point of use or application and then worn by the subject without significantly affecting any range of motion. Furthermore, any components of the data acquisition system that are attached to or worn by the subject, such as the sensors and patient interface box, should also be lightweight. Preferably, these subject-contacting components of the device (including the sensors and the patient interface box) weigh less than about 10 lbs., more preferably less than about 7.5 lbs., even more preferably less than about 5 lbs., and most preferably less than about 2.5 lbs. The subject-contacting components of the device preferably are battery-powered and use a data storage memory card and/or wireless transmission of data, allowing the subject to be untethered.

Furthermore, the entire data acquisition system (including the subject-contacting components as well as any other sensors, a base station, or other components) preferably should be relatively lightweight. By relatively lightweight, it is meant that the entire data acquisition system, including all components such as any processors, computers, video screens, cameras, and the like, preferably weighs less in total than about 20 lbs., more preferably less than about 15 lbs., and most preferably less than about 10 lbs. This data acquisition system preferably can fit in a reasonably sized carrying case so the subject or assistant can easily transport the system.
By being lightweight and compact, the device should gain greater acceptance for use by the subject.

Various embodiments of the present invention use a data acquisition system capable of storing and/or retransmitting the signals from the sensors, or storing and/or transmitting another signal based at least in part on at least one of the signals. The data acquisition system can be programmed to send all signal data to the removable memory, to transmit all data, or to both transmit all data and send a copy of the data to the removable memory. When the data acquisition system is programmed to store a signal or pre-processed signal, the signals from the sensors can be saved on a medium in order to be retrieved and analyzed at a later date. Media on which data can be saved include, but are not limited to, chart recorders, hard drives, floppy disks, computer networks, optical storage, solid-state memory, magnetic tape, punch cards, etc. Preferably, data are stored on removable memory.

For both storing and transmitting or retransmitting data, flexible use of removable memory can either buffer signal data or store the data for later transmission. Preferably, nonvolatile removable memory can be used to customize the system's buffering capacity and completely store the data. If the data acquisition system is configured to transmit the data, the removable memory acts as a buffer. In this situation, if the data acquisition system loses its connection with the receiving station, the data acquisition system will temporarily store the data in the removable memory until the connection is restored and data transmission can resume. If, however, the data acquisition system is configured to send all data to the removable memory for storage, then the system does not transmit any information at that time. In this situation, the data stored on the removable memory can be retrieved either by transmission from the data acquisition system or by removing the memory for direct reading. The method of directly reading will depend on the format of the removable memory.

Preferably, the removable memory is easily removable and can be removed instantly or almost instantly without tools. The memory is preferably in the form of a card and most preferably in the form of a small, easily removable card with an imprint (or upper or lower surface) area of less than about two square inches. If the removable memory is being used for data storage, preferably it can write data as fast as the data is produced by the system, and it possesses enough memory capacity for the duration of the test. These demands will obviously depend on the type of test being conducted; tests requiring more sensors, higher sampling rates, and longer durations will require faster write speeds and larger data capacity. The type of removable memory used can be almost any type that meets the needs of the test being applied. Some examples of the possible types of memory that could be used include, but are not limited to, flash memory such as CompactFlash, SmartMedia, Miniature Card, SD/MMC, Memory Stick, or xD-Picture Card. Alternatively, a portable hard drive, CD-RW burner, DVD-RW burner, or other data storage peripheral could be used. Preferably, an SD/MMC flash memory card is used due to its small size. A PCMCIA card is least preferable because of its size and weight.
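A minimal sketch of this buffer-or-store behavior appears below: records stream upstream while the link is up, spill to the removable memory when it drops, and the backlog is flushed once the connection is restored. All names and the callback structure are illustrative assumptions.

class StoreAndForwardBuffer:
    """Sketch of the buffering described above. transmit() sends one
    record upstream; write_to_card() persists one record to removable
    memory; read_from_card() yields the persisted backlog in order."""

    def __init__(self, transmit, write_to_card, read_from_card):
        self.transmit = transmit
        self.write_to_card = write_to_card
        self.read_from_card = read_from_card
        self.link_up = True

    def push(self, record):
        if self.link_up:
            try:
                self.transmit(record)
            except ConnectionError:
                self.link_up = False          # link just dropped
                self.write_to_card(record)    # start buffering
        else:
            self.write_to_card(record)

    def on_reconnect(self):
        self.link_up = True
        for record in self.read_from_card():  # drain backlog, oldest first
            self.transmit(record)

# Usage: simulate a dropped link between two samples.
sent, card = [], []
buf = StoreAndForwardBuffer(sent.append, card.append, lambda: list(card))
buf.push({"spo2": 96})
buf.link_up = False            # receiving station goes out of range
buf.push({"spo2": 94})         # spills to the memory card
buf.on_reconnect()             # backlog flushed upstream
print(sent)                    # both records delivered in order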
When the data acquisition system is programmed to retransmit the signals from the sensors, preferably the data acquisition system transmits the signals to a processor for analysis. More preferably, the data acquisition system immediately retransmits the signals to a processor for analysis. Optionally, the data acquisition system receives the signals from one or more of the aforementioned sensors and stores the signals for later transmission and analysis. Optionally, the data acquisition system both stores the signals and immediately retransmits the signals. When the data acquisition system is programmed to retransmit the signals from the sensors or transmit a signal based at least in part on the signal from the sensors (collectively "to transmit" in this section), the data acquisition system can transmit through either a wireless system, a tethered system, or some combination thereof.

When the system is configured to transmit data, preferably the data transmission step utilizes two-way (bi-directional) data transmission. Using two-way data transmission significantly increases data integrity. By transmitting redundant information, the receiver (the processor, monitoring station, or the like) can recognize errors and request a renewed transmission of the data. In the presence of excessive transmission problems, such as transmission over excessive distances or obstacles absorbing the signals, the data acquisition system can control the data transmission or independently manipulate the data. With control of data transmission it is also possible to control or re-set the parameters of the system, e.g., changing the transmission channel or encryption scheme. For example, if the transmitted signal is superimposed by other sources of interference, the receiving component could secure a flawless transmission by changing the channel. Another example would be if the transmitted signal is too weak, the receiving component could transmit a command to increase the transmitting power. Still another example would be for the receiving component to change the data format of the transmission, e.g., in order to increase the redundant information in the data flow. Increased redundancy allows easier detection and correction of transmission errors. In this way, safe data transmissions are possible even with the poorest transmission qualities. This technique opens a simple way to reduce the transmission power requirements, thereby reducing the energy requirements and providing longer battery life. Another advantage of a bi-directional digital data transmission lies in the possibility of transmitting test codes in order to filter out external interferences, for example, refraction or scatter from the transmission current. In this way, it is possible to reconstruct falsely transmitted data.

Data compression using lossless encoding techniques can provide basic throughput optimization, while certain lossy encoding techniques will offer far greater throughput while still providing useful data. Lossy encoding techniques may include, but are not limited to, decimation or transmission of a compressed image of the data. The preferred method for encoding will include special processing from the transmitter that will preprocess the data according to user-selectable options, such as digital filtering, and take into account the desired visual representation of that information, such as pixel height and target image width. Facilities can be made within the system to control the encoding in order to optimize utilization on any given network. Control over the encoding methods may include, but is not limited to, selection of a subset of the entire set of signals, target image size, and decimation ratio.
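As a minimal sketch of the error-detection side of such two-way transmission, the framing below appends a sequence number and a CRC-32 checksum to each payload; a receiver that detects a mismatch can request retransmission of that sequence number. The framing layout is an illustrative assumption, not the format used by any particular device described herein.

import zlib

def frame(seq: int, payload: bytes) -> bytes:
    """Prefix a sequence number and append a CRC-32 so the receiver
    can detect corruption and ask for a resend (illustrative only)."""
    body = seq.to_bytes(4, "big") + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def check(framed: bytes):
    """Return (seq, payload) if the checksum verifies, else None,
    signaling the receiver to request retransmission."""
    body, crc = framed[:-4], int.from_bytes(framed[-4:], "big")
    if zlib.crc32(body) != crc:
        return None
    return int.from_bytes(body[:4], "big"), body[4:]

# Round trip: an intact frame verifies; a corrupted byte is caught.
f = frame(7, b"SpO2=96;flow=1.2")
assert check(f) == (7, b"SpO2=96;flow=1.2")
assert check(f[:5] + b"X" + f[6:]) is None   # single-byte corruption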
Data encryption can be applied to secure data transmissions over any network. Encryption methods may include, but are not limited to, simple obfuscation and sophisticated ciphers. The preferred embodiment of secure data transmission that is compatible with HIPAA and HCFA guidelines will be implemented using a virtual private network. More preferably, the virtual private network will be implemented using a specialized security appliance, such as the PIX 506E from Cisco Systems, Inc., capable of implementing IKE and IPSec VPN standards using data encryption techniques such as 168-bit 3DES, 256-bit AES, and the like. Still more preferably, secure transmission will be provided by a third-party service provider or by the healthcare facility's information technology department. The system will offer configuration management facilities to allow it to adapt to changing guidelines for protecting patient health information (PHI).

Several preferable embodiments of this method employ a wireless data acquisition system. This wireless data acquisition system consists of several components, each wirelessly connected. Data is collected from the sensors described above by a patient interface box. The patient interface box then wirelessly transmits the data to a separate signal pre-processing module, which then wirelessly transmits the pre-processed signal to a receiver. Alternatively, the patient interface box processes the signal and then transmits the processed signal directly to the receiver using wireless technology. Further alternatively, the patient interface box wirelessly transmits the signals to the receiver, which then pre-processes the signal. Preferably, the wireless technology used by the data acquisition system components is radio frequency based. Most preferably, the wireless technology is digital radio frequency based. The signals from the sensors and/or the pre-processed signals are transmitted wirelessly to a receiver, which can be a base station, a transceiver hooked to a computer, a personal digital assistant (PDA), a cellular phone, a wireless network, or the like. Most preferably, the physiological signals are transmitted wirelessly in digital format to a receiver.

Wireless signals between the wireless data acquisition system components are both received and transmitted via frequencies preferably less than about 2.0 GHz. More preferably, the frequencies are primarily 902-928 MHz, but the Wireless Medical Telemetry Service (WMTS) bands, 608-614 MHz, 1395-1400 MHz, or 1427-1432 MHz, can also be used. The present invention may also use other, less preferable frequencies above 2.0 GHz for data transmission, including but not limited to such standards as Bluetooth, WiFi, IEEE 802.11, and the like.

When a component of the wireless data acquisition system is configured to wirelessly transmit data, it is preferably capable of conducting an RF sweep to detect an occupied frequency or possible interference. The system is capable of operating in either "manual" or "automatic" mode. In the manual mode, the system conducts an RF sweep and displays the results of the scan on the system monitor. The user of the system can then manually choose which frequency or channel to use for data transmission. In automatic mode, the system conducts an RF sweep and automatically chooses which frequencies to use for data transmission. The system also preferably employs a form of frequency hopping to avoid interference and improve security. The system scans the RF environment and then picks a channel over which to transmit based on the amount of interference occurring in the frequency range.
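At its simplest, the automatic mode just described reduces to picking the quietest channel from the sweep results. The sketch below assumes a hypothetical sweep output mapping channel numbers to measured interference power; the data structure and values are illustrative.

def pick_channel(sweep_results):
    """Given RF-sweep results as {channel: interference_power_dBm},
    return the quietest channel, mirroring the "automatic" mode."""
    return min(sweep_results, key=sweep_results.get)

# Example sweep over four candidate channels: channel 3 is quietest.
print(pick_channel({1: -62.0, 2: -55.5, 3: -80.3, 4: -71.9}))  # -> 3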
In this application, transmitting the data wirelessly means that the data is transmitted wirelessly during at least part of the data transfer process. This means, for example, that the data may be transmitted wirelessly from the patient interface box to the base station, which then transmits the data via either a wireless method, such as a wireless cellular card, local wireless network, satellite communication system, and the like, or a wired method, such as a wired internet connection, the testing facility's LAN, and the like. Transmitting the data wirelessly also means, for example, that the data may be transmitted via wired connection from the patient interface box to a base station, which then transmits the data via any wireless method, such as Bluetooth, IEEE 802.11, wireless cellular card, satellite communication system, and the like, to a database that distributes the data over a hardwired system to a sleep unit or lab. Transmitting the data wirelessly also means, for example, that the data may be wirelessly transmitted directly from the patient interface box via WiFi or IEEE 802.11, Bluetooth, wireless cellular card, and the like to a processor, which then transmits the processed data to the sleep unit or laboratory. Preferably, the patient interface box wirelessly transmits the data. This allows for a simplified subject hookup and improved subject mobility.

Preferably, the data acquisition system retransmits the signals from the sensors applied to the subject, or transmits a signal based at least in part on at least one of the physiological, kinetic, or environmental signals, at substantially a same time as the signal is received or generated. At substantially the same time preferably means within approximately one hour. More preferably, at substantially the same time means within thirty minutes. Still more preferably, at substantially the same time means within ten minutes. Still more preferably, at substantially the same time means within approximately one minute. Still more preferably, at substantially the same time means within milliseconds of when the signal is received or generated. Most preferably, at substantially the same time means that the signal is transmitted or retransmitted nearly instantaneously after it is received or generated. Transmitting or retransmitting the signal at substantially the same time allows the physician or monitoring service to review the subject's physiological, kinetic, and environmental signals and, if necessary, to make a determination, which could include modifying the subject's treatment protocols or asking the subject to adjust the sensors.

The receiver (base station, remote communication station, or the like) of various embodiments of the wireless data acquisition system can be any device known to those skilled in the art for receiving RF data transmissions. By way of example but not limitation, the receiver can include a communications device for relaying the transmission, a communications device for re-processing the transmission, a communications device for re-processing the transmission and then relaying it to another remote communication station, a computer with wireless capabilities, a PDA with wireless capabilities, a processor, a processor with display capabilities, and combinations of these devices.
Optionally, the receiver can further transmit data to another device and/or back. Further optionally, two different receivers can be used, one for receiving transmitted data and another for sending data. For example, with the wireless data acquisition system used in the present invention, the receiver can be a wireless router that establishes a broadband internet connection and transmits the physiological signal to a remote Internet site for analysis, preferably by the subject's physician or another clinician. Other examples of a receiver are a PDA, computer, or cell phone that receives the data transmission, optionally re-processes the information, and re-transmits the information via cell towers, land phone lines, or cable to a remote processor or remote monitoring site for analysis. Other examples of a receiver are a computer or processor that receives the data transmission and displays the data or records it on some recording medium that can be displayed or transferred for analysis at a later time. Optionally, two or more receivers can be used simultaneously. For example, the patient interface box can transmit signals to a base station receiver that processes and retransmits the signals, as well as to a PDA receiver that displays the signals for a clinician to review.

One or more of the aforementioned sensors are used to develop the data or signals used in the present invention for, optionally, determining a quantitative level of severity of a subject's sleeping disorder and/or symptoms, and more preferably for developing a quantitative measurement of the level of severity of a subject's sleep apnea. The signals from the one or more sensors used in various embodiments of the present invention are preferably analyzed using a processor and software that can quantitatively estimate or determine the severity of the subject's sleeping disorder or symptoms. Using either a microcontroller of a data acquisition system, a separate computer, base station or processor, a PDA, a processor on a device for treating the subject's sleeping disorder, or a combination of these processors, the severity of the subject's sleeping disorder and/or symptoms including apneas is determined and is used at least in part to regulate the physical or chemical treatment of the subject. Also optionally, the one or more sensors used in the system of the present invention can be tethered to a computer, base station, cell phone, a PDA, or some other form of processor or microprocessor.

The processor or microprocessor of various embodiments of the present invention can be part of a remote communication station or base station. The remote communication station or base station can also be used only to relay a pre- or post-processed signal. Preferably, the remote communication station or base station can be any device known to receive RF transmissions such as those transmitted by the wireless data acquisition system described herein. The remote communication station or base station can include, by way of example but not limitation, a communications device for relaying the transmission, a communications device for re-processing the transmission, a communications device for re-processing the transmission and then relaying it to another remote communication station, a computer with wireless capabilities, a PDA with wireless capabilities, a processor, a processor with display capabilities, and combinations of these devices.
Optionally, the remote communication station can further transmit data to another device, including the subject's treatment device, and/or back. Further optionally, two different remote communication stations can be used, one for receiving transmitted data and another for sending data. For example, with the sleep diagnosis and treatment system of the present invention, the remote communication system can be a wireless router, which establishes a broadband internet connection and transmits the physiological signal to a remote internet site for analysis, preferably for further input by the subject's physician or another clinician. Another example is where the remote communication system is a PDA, computer, or cell phone, which receives the physiological data transmission, optionally re-processes the information, and re-transmits the information via cell towers, land phone lines, satellite, radio frequencies, or cable to a remote site for analysis. Another example is where the remote communication system is a computer or processor, which receives the data transmission and displays the data or records it on some recording medium, which can be displayed or transferred for analysis at a later time.

The quantitative method for estimating or determining the severity of the subject's sleeping disorder or symptoms is preferably accomplished by using signals or data from the one or more sensors described herein. More preferably, this quantitative method is accomplished in real-time, allowing the subject's symptoms to be treated as they occur. By real-time it is meant that the quantitative diagnosis step is accomplished predictively or within a short period of time after symptoms occur, which allows for immediate treatment, thereby more effectively reducing the health effects of such disorders while at the same time minimizing side effects of the treatment chosen. By real-time, preferably the diagnosis is accomplished within 24 hours of receiving the signals from the one or more sensors on the subject, more preferably within 8 hours, even more preferably within 4 hours, still even more preferably within 1 hour, still even more preferably within 20 minutes, still even more preferably within 5 minutes, still even more preferably within 1 minute, still even more preferably within 10 seconds, still even more preferably within 1 second, still even more preferably within 0.1 seconds, and most preferably within 0.01 seconds. Various algorithms known to those skilled in the art are used to filter out noise from the signal or data and to then quantify the level of severity of the subject's sleeping disorder or symptoms. This filtered data is then preferably analyzed using the techniques described in the following paragraphs.

In addition to these sleeping disorder data or signal analysis techniques, various controller schemes can be used. Various sleeping disorders have symptoms that can be predicted based on various combinations of physiological signals or data. Various embodiments of the present invention include an approach to identifying these symptoms prior to onset by detecting various characteristic shifts in the power spectrum of the sensors being used to monitor these physiological conditions. This characteristic shift in these signals or data can be identified and used to trigger an actuator on the physical or chemical treatment device(s) to provide for delivery of a certain level of treatment.
The various embodiments of the present invention include, but are not limited to, the following signal-processing techniques that are utilized to predict the onset of these symptoms. These are: (i) the standard deviation technique, (ii) a recursively fit ARMAX system identification model, (iii) the Short-Time Fourier Transform (STFT) technique, and (iv) time-frequency signal analysis with a variety of different kernels. The present invention would also include other on-line signal processing algorithms known to those skilled in the art, such as wavelet analysis, which is similar to time-frequency analysis with a particular kernel function, to identify the shift in power spectrum associated with imminent flow separation that is discussed herein.

The standard deviation technique operates on the principle that there is an increase in pressure fluctuation as the flow begins to separate from the surface of an airfoil, due to either increasing angle of attack or unsteady flow. A sharp increase in the standard deviation of pressure data is observed immediately prior to stall. To trigger deployment of the flow effectors and initiate fluid flow control, a threshold standard deviation can be calculated for each pressure sensor and programmed into the control strategy.

The second embodiment of a method to identify the shift in the measured power spectrum of the pressure transducer signal utilizes a recursively identified system model, particularly an Auto-Regressive Moving Average (ARMA) model. Advantageously, the controller is the ORICA™ controller, an extended horizon, adaptive, predictive controller produced by Orbital Research, Inc. and patented under U.S. Pat. No. 5,424,942, which is incorporated herein by reference. The ARMA recursive identification method attempts to fit specific models to the measured data or signals. Evaluation of this data reveals distinct, identifiable model order shifts, which can be used to actuate the treatment device at various levels. Further analysis of the frequency spectrum of the physiological data related to various sleeping disorders reveals recognizable changes in this data or signals. This clear characterization, alongside the model order shifts, allows the ORICA identifier to classify discrete models based upon various physiological conditions of the subject, thus allowing precisely controlled treatments to be delivered to the subject or patient. A simple function minimization based upon the error associated with each model will enable adaptive model selection for the subject's physiological condition. As the subject's physiological condition moves toward various critical conditions or symptoms, the model with the best fit to the data will shift into a higher order model. This model shift foretells the onset of the symptom. A second sub-method of identifying impending symptoms using the ARMA and other related models is to track the poles of the identified system model for the subject over time. As the subject's physiological condition moves toward certain designated critical symptoms, the poles of the identified system model will move toward a condition of symptom onset, thereby indicating to the control system that certain critical symptoms are impending. Either of these two signal identification techniques based on fitting a mathematical model to the system can be utilized to predict the onset of the subject's symptoms. The ARMA model can be adapted to resemble other canonical model forms, thereby demonstrating similarity to other system identification methods based on Kalman filtering and similar approaches.
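A minimal sketch of the standard deviation technique described above follows: a rolling standard deviation is computed over the most recent window of sensor samples and compared against a pre-programmed threshold. The window length, threshold, and synthetic data are illustrative assumptions.

import numpy as np

def std_trigger(samples, window, threshold):
    """Return the first index at which the rolling standard deviation
    of the signal exceeds the per-sensor threshold, or None. In the
    control strategy described above, crossing the threshold triggers
    the actuator."""
    x = np.asarray(samples, dtype=float)
    for i in range(window, len(x)):
        if x[i - window:i].std() > threshold:
            return i
    return None

# A quiet signal followed by a burst of fluctuation trips the trigger.
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 0.1, 500), rng.normal(0, 1.0, 100)])
print(std_trigger(sig, window=50, threshold=0.5))  # index shortly after 500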
A third embodiment of a method for quantifying the power spectrum measured by the one or more sensors is by using Short-Time Fourier Transforms (STFT). The discrete Fourier transform (DFT) and its numerically efficient complement, the fast Fourier transform (FFT), both provide frequency information about a digitized signal or data from the sensors. The DFT and FFT both assume that the signal being measured is stationary in time. However, in the case of the subject being tested and treated, the measured signal or data is not stationary in time, which means a typical DFT/FFT approach is inapplicable. However, for short time periods the signal may be considered to be stationary. Therefore, it is possible to estimate the mean power spectrum by segmenting the physiological data or signals into epochs lasting anywhere from 0.1-5 seconds each, and then applying a discrete-time Fourier transform (DFT) to the windowed data. The DFT is used to calculate the power spectrum of the signal for that epoch. Then the spectral mean and median density are calculated from the power spectrum of each epoch. Using this method it is possible to identify specific frequency content in the data. As the subject begins to experience the onset of various critical symptoms, the frequency spectrum of the measured and analyzed data will shift, which indicates to the control system that the symptom is beginning.

A fourth embodiment of a signal processing method, which can provide indications to the control system that various symptoms are impending and thereby enable actuation of the treatment device, is to analyze the sensor data using a time-frequency transform. A time-frequency transform enables both frequency resolution and estimation stability for highly non-stationary signals, which typifies some of the data or signals related to various physiological conditions. This is accomplished by devising a joint function of both time and frequency, a distribution that describes the energy density of a signal simultaneously in both time and frequency. The general form of the time-frequency transform is given by:

P(t,\omega) = \frac{1}{4\pi^2} \iiint e^{-j\theta t - j\tau\omega + j\theta u}\, \phi(\theta,\tau)\, s^{*}\!\left(u - \tfrac{1}{2}\tau\right) s\!\left(u + \tfrac{1}{2}\tau\right)\, du\, d\tau\, d\theta \qquad \text{(e-1)}

This transform can be used to calculate instantaneous power spectra of a given signal. The actual transformation distribution is selected by changing the kernel, \phi(\theta,\tau). The formulation (e-1) is interesting since it is possible to identify any distribution invariant to time and frequency shifts by means of its kernel, and the properties of the kernel are strictly related to the properties of the distribution given by (e-1).
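The sketch below illustrates the epoch-based STFT method (the third embodiment above): the signal is segmented into short epochs assumed stationary, each epoch is windowed and transformed, and the spectral mean and median frequencies are computed per epoch. The Hann window, epoch length, and test signal are illustrative assumptions.

import numpy as np

def epoch_spectral_stats(signal, fs, epoch_sec=1.0):
    """Per-epoch power spectra with spectral mean and median
    frequencies, following the epoch-segmentation approach above.
    Epoch length is a tuning choice in the stated 0.1-5 s range."""
    n = int(epoch_sec * fs)
    stats = []
    for start in range(0, len(signal) - n + 1, n):
        seg = signal[start:start + n] * np.hanning(n)  # reduce leakage
        psd = np.abs(np.fft.rfft(seg)) ** 2            # power spectrum
        freqs = np.fft.rfftfreq(n, 1 / fs)
        mean_f = (freqs * psd).sum() / psd.sum()       # spectral centroid
        cum = np.cumsum(psd)
        median_f = freqs[np.searchsorted(cum, cum[-1] / 2)]
        stats.append((mean_f, median_f))
    return stats

# A chirp's spectral mean shifts upward epoch by epoch, the kind of
# spectral drift the control system watches for.
fs = 200
t = np.arange(0, 10, 1 / fs)
chirp = np.sin(2 * np.pi * (1 + 0.5 * t) * t)
for mean_f, median_f in epoch_spectral_stats(chirp, fs, epoch_sec=2.0):
    print(f"mean {mean_f:5.2f} Hz   median {median_f:5.2f} Hz")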
The diagnostic device of the present invention is used to provide an output, which is then used either automatically to adjust the treatment device, or by a clinician or the subject to adjust the device that provides the physical or chemical treatment, which is another part of the system of the present invention. There are clearly many embodiments of the present invention, and we will attempt to describe a few herein.

Also optionally, the signals or data received from the sensors through the data acquisition system can be used to train the treatment or therapeutic device. During a titration or adjustment period, the rich diagnostic data can be used to train the treatment or therapeutic device to recognize more detailed physiological symptoms or signs of a sleeping disorder, and more particularly of sleep apnea, by correlating the more robust or rich diagnostic data collected with the data acquisition device against the more limited sensor data from the therapeutic or treatment device. For example, certain conditions which routinely are recognized by a number of sensors can be correlated to the signature of the more limited data from the sensors on the therapeutic or treatment device. For instance, a central sleep apnea is best recognized by a respiratory effort belt and pulse oximetry. Data from a diagnostic period can be compared with the sensor data from the treatment device to determine the signature in such data that indicates a central apneic event occurred. Preferably, the treatment device can include a neural network as part of its control mechanism, which allows the treatment device to correlate the limited sensor data with the more robust data from the diagnostic period. Optionally, the treatment device can further include a library of events recorded from one or more subjects that allows for more accurate control of the treatment device and more effective treatment of the subject.
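A minimal sketch of this training idea follows, using scikit-learn's small neural network classifier (the library choice, feature set, and synthetic labels are assumptions; the passage only calls for "a neural network"). Windows of the treatment device's limited features are labeled using events scored from the rich diagnostic montage, and the trained model then recognizes those events from the limited features alone.

# pip install scikit-learn numpy  (illustrative dependencies)
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Hypothetical per-window features available to the CPAP itself:
# [mean flow, flow variance, pressure swing]
X = rng.normal(size=(500, 3))
# Labels scored from the rich diagnostic data during titration
# (0 = normal, 1 = obstructive event, 2 = central event); synthetic here.
y = np.clip((X[:, 0] < -0.5).astype(int) + 2 * (X[:, 1] > 1.0), 0, 2)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])                  # train on the titration period
print("held-out accuracy:", model.score(X[400:], y[400:]))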
Various embodiments of the present invention include a treatment interface device comprising at least one electronic component for receiving a signal transmitted from the data acquisition system, optionally processing the signal from the data acquisition system, and retransmitting the signal from the data acquisition system or transmitting a signal based at least in part on at least one of the signals from the data acquisition system. The treatment interface device operates essentially as part of the data acquisition system, with the exception that it also transmits to a treatment device (i.e., a PAP or CPAP device). Preferably, the treatment interface device can be used for the titration and then detached from the treatment device or removed from the subject's sleeping location. This would allow the treatment interface device to be used during titration, collected from the subject, and then reused for titration with another subject. Preferably, the treatment interface device receives a signal from a component of the data acquisition system and transmits a command signal to the PAP or CPAP device.

Like a component of the data acquisition system, the treatment interface device preferably contains a transmitter and a receiver. More preferably, the treatment interface device contains a wireless receiver and/or a wireless transmitter. The transmissions sent and received by the treatment interface device do not necessarily use the same method. For example, the treatment interface device could include both a wireless receiver to receive wireless transmissions from the data acquisition system and a USB transmitter to transmit command signals to the PAP or CPAP device. Optionally, the treatment interface device also contains a receiver or transceiver to receive data from the PAP or CPAP device. Such data could include, for example, PAP or CPAP device status information (e.g., whether the device is on or off, error codes, blower speed, etc.), fluid characteristics of the pressurized gas delivered to the patient (e.g., airflow, air pressure, humidity, etc.), and the like. The treatment interface device also preferably contains a processor.

Preferably, the treatment interface device uses a processor to execute an algorithm for titrating or adjusting the PAP or CPAP device. The treatment interface device processor can be used to relate all the received signals (from the subject, the environment, and the PAP or CPAP device) to each other, and to predict or determine the next appropriate treatment setting. For example, the treatment interface device could receive a pulse oximetry signal, a thoracic effort signal, and a room temperature signal from a data acquisition system, and an airflow signal from the PAP or CPAP device. The treatment interface device processor would then use the signals to calculate the next appropriate treatment setting. For example, the treatment interface device processor could use the airflow, pulse oximetry signal, thoracic effort, and room temperature to determine that the PAP pressure should be increased by 2 cm H2O. The treatment interface device processor would then create a command signal to instruct the PAP device to increase the pressure appropriately.

The treatment interface device processor is preferably capable of executing closed-loop titration, thereby automatically determining a set of final treatment values for the treatment device. The set of final treatment values comprises the parameters programmed into the treatment device (i.e., the PAP or CPAP device) that the treatment device uses during operation. Once the set of final treatment values is programmed into the treatment device, the treatment device will continue to operate according to the set of final treatment values. For example, if the treatment device were a CPAP device, the set of final treatment values would be the gas pressure delivered to the patient. Similarly, if the treatment device were a bi-PAP device, the set of final treatment values would be the inspiration gas pressure and the expiration gas pressure. The set of final treatment values depends on both the type of treatment device and the results of the titration process. Essentially, the titration process is designed to determine the set of final treatment values for a given treatment device and a given subject. If the treatment interface device contains a processor, it is preferably capable of using a variety of techniques to conduct the titration, including but not limited to lookup tables, relationship algorithms, neural networks, wavelets, fast Fourier transforms, and the like.

Various embodiments of the present invention include a treatment interface device capable of automatically conducting titration of the treatment device. In this case, the treatment interface device preferably uses a processor to enable closed-loop control for executing the titration adjustments, determining the set of final treatment values, and programming the treatment device to deliver the set of final treatment values. Optionally, the set of final treatment values is sent to a clinician for approval. Various other embodiments of the present invention include a treatment interface device capable of using closed-loop control to run the titration and determine the set of final treatment values, but where the treatment interface device is not capable of independently programming the treatment device. In this case, a clinician must review the set of final treatment values, approve or adjust them, and then program the treatment device.
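One titration step of the kind described above can be sketched as a simple rule, echoing the "+2 cm H2O" example. The thresholds, event criteria, and pressure limits below are illustrative assumptions for exposition, not clinical guidance or parameters of the present invention.

def next_pressure(current_cm_h2o, spo2_drop_pct, thoracic_effort_present,
                  airflow_reduction_pct, max_cm_h2o=20.0):
    """One illustrative closed-loop titration step: raise pressure on
    an apparent obstructive event, back off gently when breathing is
    stable, and never exceed the device maximum."""
    obstructive = (spo2_drop_pct >= 3.0 and thoracic_effort_present
                   and airflow_reduction_pct >= 30.0)
    if obstructive:
        return min(current_cm_h2o + 2.0, max_cm_h2o)  # e.g., +2 cm H2O
    if spo2_drop_pct < 1.0 and airflow_reduction_pct < 10.0:
        return max(current_cm_h2o - 0.5, 4.0)         # gentle back-off
    return current_cm_h2o                             # hold steady

print(next_pressure(8.0, spo2_drop_pct=4.0, thoracic_effort_present=True,
                    airflow_reduction_pct=50.0))      # -> 10.0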
Various other embodiments of the present invention include a treatment interface device that is only capable of conducting an open-loop titration. In this case, a clinician must conduct the titration, determine the set of final treatment values, and program the treatment device. Thus, the treatment interface device only provides the clinician with a means to control the treatment device and obtain information from it.

Various embodiments of the present invention include a PAP or CPAP device comprising an electrical connection or component for receiving a retransmitted or transmitted signal. The PAP or CPAP device can be any device known in the art that is capable of delivering a flow of gas to the subject and is capable of being titrated or adjusted. The PAP or CPAP device may receive a signal containing command information only. For example, the PAP or CPAP device could receive a command signal to increase the pressure of gas delivered to the subject, to decrease the gas pressure, or to cease operations. Optionally, the device may receive a signal containing data requiring further processing by the PAP or CPAP device. For example, the data acquisition system could transmit a signal containing pulse oximetry and respiration characteristics, which the PAP or CPAP device further processes to relate to an internal airflow signal and determine the next appropriate gas pressure setting.

The PAP or CPAP device could contain any component known in the art to receive the signals sent from the data acquisition system. For example, if the data acquisition system provides a signal transmitted via USB, the PAP or CPAP device could contain a receiver component for obtaining the transmitted USB signals. Optionally, the PAP or CPAP device may include a wireless receiver. In this case, for example, the PAP or CPAP device wirelessly receives the signals from a transmitting component of the data acquisition system (the patient interface box, the base station, or other component capable of wirelessly transmitting signals), optionally processes the signal, and makes an adjustment to the flow of gas provided to the subject. Further optionally, the PAP or CPAP device contains a component for receiving a signal transmitted from a remote monitoring station. In this case, for example, a remote monitor receives data from the data acquisition system, determines the next appropriate gas pressure setting, and transmits the setting to the PAP or CPAP device.

Various embodiments of the present invention include a PAP or CPAP device capable of processing the received signal. Such processing can be used to relate the received signals to each other and to any additional signals collected by the PAP or CPAP device itself. For example, the PAP or CPAP processor could receive a pulse oximetry signal from a data acquisition system, an airflow signal from the PAP or CPAP device itself, and a signal to increase the gas pressure from a remote monitor. The PAP or CPAP processor would then relate the signals to each other, thereby creating a lookup table of values or a more sophisticated relationship algorithm. The PAP or CPAP processor is optionally capable of creating a neural network and training the network with data collected from an individual subject over several nights. Such a neural network could "teach" the PAP or CPAP device to accurately predict apnea events (confirmed with physiological sensors) based only on gas flow characteristics. In this way, the PAP or CPAP device can continue to operate correctly based on gas flow characteristics alone, and the physiological sensors become redundant.
Various embodiments of the present invention include the step of processing or pre-processing the signals received from the sensors attached to the subject. The processor or preprocessor of various embodiments of the present invention can be independent or combined with any other component. For example, a processor or preprocessor could be a part of the patient interface box, base station, treatment interface, or PAP or CPAP device. Optionally, the processor or preprocessor could be distributed between two or more components of the device. Optionally, preprocessing can correct artifacts, derive a total sleep time, derive a snore signal, filter a signal, or compress and/or encrypt the data for transmission as described above. Preferably, the preprocessing step corrects for artifacts present in the sensor signals. Optionally, a step of more powerful processing can perform one or more of the preprocessing functions. Further optionally, more powerful processing can determine the appropriate pressure to be delivered by the PAP or CPAP to the subject. Further optionally, more powerful processing can determine whether the patient has central or obstructive sleep apnea. For example, in the case of CSA, the processing can provide a recommendation to stop CPAP treatment and use another treatment specific to CSA.

Various embodiments of the present invention include a system capable of determining the location of obstructions in the airways of subjects. This feature is helpful because one OSA treatment modality involves surgical procedures that rely on excising part of the tissue causing the obstruction. In these embodiments, the system detects an obstructive apnea event and then determines the location of the obstruction using acoustic reflectance methods. A sound wave created by an oscillating piston (tuning fork, membrane, loudspeaker, and the like), an aperture (whistle, reed, and the like), or any other method of producing a sound of known frequency is introduced into the flow of pressurized gas delivered to the patient. The sound wave can be generated inside the PAP or CPAP device, inside the mask, or at any other suitable location. Preferably, the sound is outside the audible frequency range to minimize disturbance to the subject. A pressure transducer in the system will then receive the pressure signals generated by the echo waves bouncing back from the obstructed wall inside the airways. Effectively, the data acquisition system will "listen" to the echoes coming back from inside the subject's airways. Using the delay between the known time of the original sound wave and the detected echo, the system can calculate the location of the obstruction. Measuring the pressure of the reflected sound wave can allow the system to distinguish between the obstruction and other anatomical features of the airway. It is expected that the obstruction site will generate the biggest pressure amplitude, thereby differentiating it from other nearby structures. Also, the system could be configured to "listen" for changes in frequencies or detect an echo signature to determine the density of the tissue the sound waves are reflected from, thus allowing the system to distinguish between hard and soft tissue.
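The delay-to-distance computation at the heart of this acoustic reflectance approach is a simple round-trip calculation, sketched below. The speed-of-sound constant and the example delay are illustrative assumptions (air in a warm, humid airway is somewhat faster than dry room-temperature air).

SPEED_OF_SOUND_AIR = 343.0  # m/s at roughly 20 C; treat as approximate

def obstruction_distance(emit_time_s, echo_time_s, c=SPEED_OF_SOUND_AIR):
    """Distance from the sound source to the reflecting obstruction.
    The echo travels down the airway and back, so the one-way
    distance is half the round-trip path."""
    return c * (echo_time_s - emit_time_s) / 2.0

# An echo detected 1.2 ms after emission places the reflector about
# 0.21 m down the acoustic path from the source.
print(f"{obstruction_distance(0.0, 1.2e-3):.3f} m")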
Signal quality from all the sensors can be affected by the posture and movement of the subject. For methods of the present invention, it is important to reduce motion artifacts arising from the sensor placement. Errors in the form of noise can occur when biopotential data acquisition is performed on a subject. For example, a motion artifact is noise that is introduced to a biopotential signal via motion of an electrode placed on the skin of a subject. A motion artifact can also be caused by bending of the electrical leads connected to any sensor. The presence of motion artifacts can result in misdiagnosis, prolonged procedure duration, and delayed or inappropriate treatment decisions. Thus, it is imperative to remove motion artifacts from the biopotential signal to prevent these problems from occurring during treatment.

The present method of collecting signals from a subject includes a means of reducing motion artifacts. When physiological electrodes are used, preferably they are used with conductive gels or adhesives. More preferably, dry electrodes are used with or without conductive gels or adhesives. Still more preferably, the device's firmware and/or software uses body motion information for artifact correction. Most preferably, a combination of the above methods is used.

The most common methods for reducing the effects of motion artifacts in sensors such as electrodes have focused on skin deformation. These methods include removing the upper epidermal layer of the skin by abrasion, puncturing the skin near the electrode, or measuring skin stretch at the electrode site. The methods for skin abrasion ensure good electrical contact between the electrode and the subject's skin. In this method, an abrasive pad is mechanically rotated on the skin to abrade the skin surface before electrode placement. Similarly, medical electrodes have been used with an abrading member to prepare the skin after application of the electrode, whereby an applicator gun rotates the abrading member. Methods of skin preparation that abrade the skin with a bundle of fibers have also been disclosed. These methods provide a light abrasion of the skin to reduce the electrical potential and minimize the impedance of the skin, thereby reducing motion artifacts. Skin abrasion methods can cause unnecessary subject discomfort, prolong procedure preparation time, and can vary based on operator experience. Furthermore, skin abrasion methods can lead to infection and do not provide an effective solution for long-term monitoring.

Dry physiological recording electrodes, such as those of the type described in U.S. Pat. No. 7,032,301, which is herein incorporated by reference, could be used as an alternative to gel electrodes. Dry physiological electrodes do not require any of the skin abrasion techniques mentioned above and are less likely to produce motion artifacts in general. Although the above-mentioned methods reduce motion artifacts, they do not completely eliminate them, and they are less effective for sensors that do not measure a biopotential signal, such as respiratory effort belts, flow meters, environmental sensors, and the like. The invention preferably incorporates a step to more completely remove motion and other artifacts by firmware and/or software correction that utilizes information collected preferably from a sensor or device that detects body motion, and more preferably from an accelerometer. In certain embodiments of the present invention, a 3-D accelerometer is directly connected to the data acquisition system. The data acquisition system receives signal inputs from the accelerometer and at least one set of other physiological or kinetic signals. The microprocessor applies particular tests and algorithms comparing the two signal sets to correct any motion artifacts that have occurred.
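One plausible form such firmware or software correction could take (an assumption for illustration; the passage does not specify the algorithm) is adaptive noise cancellation, where the accelerometer channel serves as a motion reference and whatever portion of the biopotential signal it can linearly predict is subtracted out. A least-mean-squares (LMS) sketch follows; the tap count, step size, and synthetic signals are illustrative.

import numpy as np

def lms_motion_cancel(primary, accel_ref, taps=8, mu=0.01):
    """LMS adaptive cancellation: estimate the motion artifact from
    recent accelerometer samples and subtract it, keeping the
    residual as the artifact-corrected signal."""
    w = np.zeros(taps)
    cleaned = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = accel_ref[n - taps:n][::-1]   # most recent reference samples
        est = w @ x                       # predicted motion artifact
        e = primary[n] - est              # artifact-free estimate
        w += 2 * mu * e * x               # LMS weight update
        cleaned[n] = e
    return cleaned

# Synthetic test: a slow biopotential tone plus motion-correlated noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
motion = np.sin(2 * np.pi * 0.7 * t)              # accelerometer reference
biopotential = 0.5 * np.sin(2 * np.pi * 1.1 * t)  # signal of interest
cleaned = lms_motion_cancel(biopotential + 0.8 * motion, motion)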
The processor in one embodiment applies a time synchronization test, which compares the at least one set of physiological or kinetic signal data to the accelerometer signal data synchronized in time to detect motion artifacts and then remove those artifacts. Alternatively, the processor may apply a more complicated frequency analysis. Frequency analysis, preferably in the form of wavelet analysis, can be applied to the accelerometer signal and at least one set of physiological or kinetic signals to yield artifact detection. Yet another alternative is to create a neural net model to improve artifact detection and rejection. This allows the system to be taught over time to detect and correct motion artifacts that typically occur during a test study. The above illustrations are only examples of possible embodiments of the present invention and are not limitations. The accelerometer data need not be analyzed before wireless transmission; it could be analyzed by a base station, computer, or the like after transmission. As should be obvious to those skilled in the art, a 2-D accelerometer or an appropriate array of accelerometers could also be used. Gyroscopes could be used as well for these purposes. Sensors can be used to detect motion of the subject's body or a portion of the subject's body. The motion information can then be used to detect the posture and movement of the subject. This motion information may indicate that the subject has a sleeping disorder unrelated to breathing, such as restless legs syndrome (RLS) or other parasomnia. The motion information can be used to correct for error in the form of noise or motion artifact in the other sensor channels. To detect motion, various embodiments of the present invention include sensors, devices, and methods of determining the posture and movement of the subject. This information can be used when analyzing the physiological signals. The posture and movement of the subject are preferably determined by signals received from an accelerometer or an array of two or more accelerometers. Accelerometers are known in the art and are suitable for use as motion-monitoring units. Various other types of sensors can be additionally or alternatively used to sense the criteria (e.g., vibration, force, speed, and direction) used in determining motion. For particularly low power designs, the one or more sensors used can be largely mechanical. Body movement of the subject will result in a high amplitude signal from the accelerometer. The data acquisition system can also monitor the sensor signals for any indication that the subject has moved, for example from a supine position to an upright position. For example, the integrated velocity signal computed from the vertical acceleration component of the sensor data can be used to determine that the subject has just stood up from a chair or sat up in bed. A sudden change in the vertical signal, particularly following a prolonged period with little activity while the subject is sleeping or resting, confirms that a posture-changing event occurred. The data acquisition system can also monitor the sleep-wake cycle of the patient. Sleep-wake data will be used to determine the total sleep time for the calculation of the apnea/hypopnea index and other sleep-related indices. In addition, a video camera can be used to detect subject movement and position, and the information then used to correct any artifacts that may have arisen from such movement. Preferably, the camera is a digital camera.
More preferably, the camera is a wireless digital camera. Still more preferably, the camera is a wireless digital infrared camera. Preferably, the video acquired from the camera is processed so that the subject's movement and position are isolated from other information in the video. The movement and position data that are acquired from the video are then preferably analyzed by software algorithms. This analysis will yield the information needed to make artifact corrections of the physiological signals. Optionally, alternative analysis of the video signal can indicate additional sleeping disorders, such as restless legs syndrome (RLS), sleepwalking, or other parasomnia. One specific embodiment of the present invention using video subject movement detection involves the use of specially marked electrodes. The electrodes can be any appropriate electrode known in the art. The only change to the electrodes is that they preferably have predetermined high contrast marks on them to make them more visible to the video camera. These markings could be manufactured into the electrodes or simply be a sticker that is placed on the back of the electrodes. These markings enable the video system to accurately distinguish the electrodes from the rest of the video image. Using the markers on each visible electrode, the system can calculate the movement of each individual electrode, thus allowing for more accurate artifact correction. In another specific embodiment of the invention, the system can detect subject movement by monitoring the actual movement of the subject's body. Software is applied to the video that first isolates the position of the subject's body, including limbs, and then continues to monitor the motion of the subject. There are numerous advantages to using video over other means of artifact detection and correction. Foremost, video allows for the calculation of movement artifacts from each individual electrode without the need for accelerometers. This makes the use of video very cost effective in relation to other available methods. The video also can be used in conjunction with the accelerometer data to correct for motion artifacts, thus increasing the precision and accuracy of the system's motion artifact correction capabilities. Current auto-titrating machines adjust PAP or CPAP pressure based on airflow/pressure alone. The advantage of using multiple parameters over just an airflow or pressure parameter is that the PAP or CPAP can now confirm events and differentiate between central apneas and hypopneas and obstructive apneas and hypopneas. For example, when airflow drops, existing commercial systems will "assume" the event is obstructive; however, the current invention will "know" whether it is obstructive or central by further investigating two other parameters. If pulse ox drops by 3% and thoracic effort persists, then the apnea/hypopnea is obstructive. If pulse ox drops by 3% and the thoracic effort ceases, then the apnea/hypopnea is central. If pulse ox does not drop by 3%, then the event cannot be considered a hypopnea at all.
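The three-parameter rule just described maps directly onto a small classifier. The sketch below restates the stated thresholds; the type and function names are invented for illustration.

#include <stdio.h>

typedef enum { EVENT_NONE, EVENT_OBSTRUCTIVE, EVENT_CENTRAL } apnea_type_t;

/* Classify a detected airflow reduction using the two confirming
 * parameters described above: a pulse-oximetry desaturation of at
 * least 3% and the presence or absence of thoracic effort. */
static apnea_type_t classify_event(double spo2_drop_pct, int thoracic_effort)
{
    if (spo2_drop_pct < 3.0)
        return EVENT_NONE;  /* no 3% desaturation: not a hypopnea at all */
    return thoracic_effort ? EVENT_OBSTRUCTIVE : EVENT_CENTRAL;
}

int main(void)
{
    /* Desaturation with persisting effort: obstructive. */
    printf("%d\n", classify_event(4.0, 1));  /* EVENT_OBSTRUCTIVE */
    /* Desaturation with effort ceased: central. */
    printf("%d\n", classify_event(4.0, 0));  /* EVENT_CENTRAL */
    return 0;
}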
It is a benefit for auto-titrating machines to confirm or to know central apneas vs. obstructive apneas, since central events are indicative of more serious cardiovascular problems and, more often than not, they cannot be properly treated with conventional CPAP treatment. Additional treatment such as oxygen is needed. It is suspected that up to 15% of patients develop central events once they are placed on PAP or CPAP. The development of central events after PAP or CPAP administration is thought to reflect a newly recognized condition called Complex Sleep Apnea (CompSA), which requires a very different treatment than traditional CPAP. A method to automatically titrate PAP or CPAP pressure is based on the above physiological parameters that are specific to the subject. The shapes of physiological signals often differ between subjects, especially during events such as hypopneas, apneas, upper airway resistance, central apneas, and others. The ability to wear a portable data acquisition system or device for a few days will allow the PAP or CPAP to be trained on the physiological signals that are specific to that subject. This "subject specific" information can then be used to better optimize auto-titration, since the system can now better detect hypopneas, apneas, etc. Preferably, the titration method of the present invention includes a step whereby the titration analysis runs over a minimum period of time, preferably at least 15 minutes, before pressure adjustment occurs. This period is needed to make sure pressure is not titrated up or down without sufficient confirmation of the event. For example, if a subject holds their breath for whatever reason, perhaps during a bad dream, a system that adjusts quickly may unnecessarily increase pressure because it has falsely detected an apnea. This is why it is necessary to wait a sufficient period of time, so that non-pathogenic irregular breathing does not affect the PAP or CPAP titration. Various embodiments of the present invention include the step of conducting PAP or CPAP titration that is attended from a remote location. Such remote attendance can be accomplished in several ways, for example by an individual in a remote location (a remote monitor) periodically or continuously viewing the data transmitted from the data acquisition system, including signals from the sensors and a preprocessed signal or signals based at least in part on at least one of the sensors. Remote monitoring can be achieved at various levels, including, but not limited to, post-titration approval, titration approval, and active titration. Further, each level of monitoring can be either periodic or continuous, and can incorporate automatic alerts. Several illustrative examples of monitoring are described below. In a post-titration approval monitoring scheme, the remote monitor receives a report of the titration process after completion. In this example, the subject receives a completely automated titration system that independently determines the appropriate pressure of the delivered gas. While the subject sleeps or attempts to sleep, the system automatically adjusts to find the set of final treatment values (i.e., the optimal gas pressure). Then the PAP or CPAP device programs itself to continue delivering the set of final treatment values. The system also sends the collected data and set of final treatment values to the remote monitor, who reviews the collected data and approves or rejects the system's set of final treatment values. If the monitor approves the system's set of final treatment values, the subject can return all the equipment other than the PAP or CPAP device and then use the PAP or CPAP device for ongoing treatment. In this scenario, the remote monitor is not actively engaged in the titration process.
This type of monitoring is typically periodic, with the remote monitor reviewing the data at a single point (after the end of the titration), or at multiple points, for example at the end of each night during a multi-night titration. This type of monitoring could also be continuous, in that the remote monitor continuously receives data from the titration system. Post-titration approval monitoring is generally suited to subjects with relatively simple apnea and few complicating factors. Preferably, the review portion of the post-titration approval monitoring takes place within a few weeks of the titration night(s). More preferably, the review takes place within one week; more preferably within three days; still more preferably within one day; still more preferably within six hours; still more preferably within one hour of the end of the titration nights. In a titration approval monitoring scheme, the remote monitor receives a report of the titration process after completion. In contrast to the post-titration approval, however, the remote monitor must approve the set of final treatment values before the PAP or CPAP device is programmed to continue delivering that pressure. In this example, the subject could receive an automated titration system that independently determines the set of final treatment values by automatically adjusting while the subject sleeps. The system then sends the collected data and set of final treatment values to the remote monitor, who reviews the collected data and approves or modifies the system's set of final treatment values. If the monitor approves the system's set of final treatment values, the remote monitor programs the PAP or CPAP device to continue using the set of final treatment values. Optionally, the subject could receive a semi-automated titration system that periodically changes treatment values. The system sends the collected data and corresponding treatment values to the remote monitor, who reviews all the data and determines the set of final treatment values. After the remote monitor determines the set of final treatment values, the PAP or CPAP device is programmed to deliver it. This type of monitoring is typically periodic, with the remote monitor reviewing the data at a single point (after the end of the titration), or at multiple points, for example at the end of each night during a multi-night titration, or several times during the titration nights. This type of monitoring could also be continuous, in that the remote monitor continuously receives data from the titration system. Preferably, the remote monitor determines the optimal gas pressure within one day; still more preferably within six hours; still more preferably within one hour of the end of the titration nights; and most preferably within 20 minutes of the end of the titration. In an active titration monitoring scheme, the remote monitor receives signals from the system during the titration phase. Preferably, the remote monitor receives data every hour; more preferably the remote monitor receives data every twenty minutes; more preferably every five minutes; and most preferably the remote monitor receives continuous streaming data during the titration phase. In contrast to the post-titration approval and titration approval, the remote monitor is actively engaged in the titration process. In this example of monitoring, the subject could receive a titration system that collects and transmits data to the remote monitor.
The remote monitor then reviews the data and determines the next level of gas pressure for the titration. The remote monitor transmits the appropriate command to the PAP or CPAP device (e.g., to increase or decrease the gas pressure), and data collection continues until the treatment value requires adjustment. After the remote monitor has completed the titration and determined a set of final treatment values, the PAP or CPAP device is programmed to continue using the set of final treatment values. This type of monitoring can be periodic, with the remote monitor reviewing the data at multiple points, for example just before each change in PAP or CPAP gas pressure. This type of monitoring could also be continuous, with the remote monitor continuously receiving and reviewing data. Other types of remote monitoring can include only monitoring at the beginning of the titration to assess the quality of the collected signals. For example, the subject can set up the titration system, and the remote monitor can view preliminary data for adequacy. If a sensor has been improperly placed or incorrectly connected, the remote monitor can instruct the subject to take remedial action. In this way, the remote monitor can ensure receipt of sufficient and adequate data to perform the titration correctly. Each level of monitoring can include an alert function wherein the monitor receives alerts of predetermined events. For example, the monitor could be alerted when the subject's oxygen saturation drops below a predetermined threshold, when the PAP or CPAP device is instructed to deliver a gas pressure over a safety threshold, every time the system changes the pressure, when an electrode's impedance increases, if a sensor malfunctions, or for any other event. The system can also be programmed to alert the remote monitor of more complex events, such as detection of an apnea event after the PAP or CPAP has reached a defined gas pressure setting, a drop in oxygen concentration combined with cessation of thoracic breathing activity, or movement of a sensor when no back-up sensors are available. Preferably, the alert function is provided in all of the monitoring schemes described above.
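The alert function described above can be modeled as a set of predicates evaluated against the most recent readings. In the sketch below, the structure fields and threshold values are illustrative placeholders, not values from the original disclosure.

#include <stdbool.h>
#include <stdio.h>

/* Snapshot of the values the alert rules consult; field names are
 * assumptions made for this sketch. */
struct readings {
    double spo2_pct;                 /* current oxygen saturation */
    double pressure_cmh2o;           /* delivered gas pressure */
    double electrode_impedance_kohm; /* electrode contact quality */
    bool   thoracic_effort;          /* respiratory effort present? */
};

#define SPO2_ALERT_PCT        88.0
#define PRESSURE_SAFETY_CMH2O 20.0
#define IMPEDANCE_ALERT_KOHM  50.0

static void evaluate_alerts(const struct readings *r)
{
    if (r->spo2_pct < SPO2_ALERT_PCT)
        puts("ALERT: oxygen saturation below threshold");
    if (r->pressure_cmh2o > PRESSURE_SAFETY_CMH2O)
        puts("ALERT: delivered pressure over safety threshold");
    if (r->electrode_impedance_kohm > IMPEDANCE_ALERT_KOHM)
        puts("ALERT: electrode impedance increased");
    /* Complex event from the text: desaturation combined with
     * cessation of thoracic breathing activity. */
    if (r->spo2_pct < SPO2_ALERT_PCT && !r->thoracic_effort)
        puts("ALERT: desaturation with no thoracic effort");
}

int main(void)
{
    struct readings r = { 86.0, 12.0, 20.0, false };
    evaluate_alerts(&r);  /* prints the saturation and complex-event alerts */
    return 0;
}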
Preferably, the remote monitor is capable of communicating with the subject, subject's assistant, or other individual near the subject. Such communication allows the remote monitor to provide instructions to the subject, subject's assistant, or other individual near the subject, for example, to adjust a sensor, close window blinds, remove a source of noise, turn off any equipment, or wake the subject. More preferably, the remote monitor is capable of two-way communication with the subject, subject's assistant, or other individual near the subject. Such communication allows the subject, subject's assistant, or other individual close to the subject to ask the remote monitor questions, for example, to clarify instructions. Various embodiments of the present invention include a step of monitoring a subject from a separate monitoring location. Data transmitted in a remote monitoring application may include, but are not limited to, physiological data, kinetic data, environmental data, PAP or CPAP device data, audio, and video recording. It is preferable that both audio and video communications be components of the envisioned system in order to provide interaction between the subject and remote monitor. Preferably, the data is transmitted from a base station to a database or remote monitoring location with a wireless module or card through a cellular service provider. The envisioned remote monitoring application may allow for multiple remote monitoring locations anywhere in the world. Remote data collection to monitoring station configurations may include, but are not limited to, one-to-one, one-to-many, many-to-one, or many-to-many. The envisioned system may include a central server, or group of servers, that can collect data from one or more remote sites and offer delivery to multiple viewing clients. It is preferable that the remote monitoring application employ a wireless network link between the subject and caregiver, such as a cellular wireless network. Other wireless techniques include but are not limited to satellite communications, direct radio, infrared links, and the like. Data transmission through a wired network such as dial-up modem, digital subscriber line (DSL), or fiber-optic, while less preferable, can also be used. Bandwidth management facilities will be employed to facilitate remote monitoring in low-speed communication networks. Several data compression techniques are envisioned to maximize system utilization in low-bandwidth environments. The envisioned remote monitoring step will require data processing, storage, and transmission. This step may be accomplished in one or more modules of the data acquisition system. The preferred embodiment realizes the remote system as two separate components, with a patient interface module that can collect, digitize, store, and transmit data to a base station module that can store, process, compress, encrypt, and transmit data to a remote monitoring location. The preferred embodiment of the remote monitoring system will consist of several system modules. A patient interface module will collect physiological and kinetic data from the subject and transmit the signals to a base station module. The base station module will receive the physiological and kinetic data from the patient interface module, and will also preferably directly connect to any environmental sensors and any PAP or CPAP sensors. The base station module will preferably consist of an embedded computer equipped with a cellular wireless data/voice card and a night-vision video acquisition system. The embedded computer will collect, analyze, compress, and encrypt the data and relay them to one or more viewing caregivers. The remote monitoring systems will broadcast their dynamically assigned IP addresses to a dedicated address server, which will be used for lookup by the viewing caregivers. Computer software used by caregivers will enumerate each remote monitoring system in the field using the aforementioned address server and allow caregivers to select one or more for monitoring. The software will have the ability to control data acquisition, including start and stop of acquisition, as well as system reconfiguration. The software preferably will also provide real-time control over the display of data, including page width, amplitude, color, montage, and the like. The software will also provide both real-time video and audio communication with the subject using dual services from the cellular card. Video will preferably be transmitted through the data connection, and audio will preferably be transmitted through the voice connection.
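The address-server lookup described above amounts to a registry that maps each remote system's identifier to its most recently broadcast IP address. The sketch below models only that bookkeeping; the identifiers, table size, and function names are invented for illustration and say nothing about the actual server implementation.

#include <stdio.h>
#include <string.h>

#define MAX_SYSTEMS 64

/* One entry per remote monitoring system that has broadcast its
 * dynamically assigned IP address to the address server. */
struct registry_entry {
    char system_id[32];
    char ip_address[16]; /* dotted-quad IPv4, e.g. "10.0.0.12" */
};

static struct registry_entry registry[MAX_SYSTEMS];
static int registry_count;

/* Record, or refresh, a system's current address. */
static void address_server_register(const char *id, const char *ip)
{
    for (int i = 0; i < registry_count; i++) {
        if (strcmp(registry[i].system_id, id) == 0) {
            snprintf(registry[i].ip_address, sizeof registry[i].ip_address, "%s", ip);
            return;
        }
    }
    if (registry_count < MAX_SYSTEMS) {
        snprintf(registry[registry_count].system_id, 32, "%s", id);
        snprintf(registry[registry_count].ip_address, 16, "%s", ip);
        registry_count++;
    }
}

/* Lookup used by the caregiver's viewing software to enumerate remote
 * systems and select one for monitoring; returns NULL if unknown. */
static const char *address_server_lookup(const char *id)
{
    for (int i = 0; i < registry_count; i++)
        if (strcmp(registry[i].system_id, id) == 0)
            return registry[i].ip_address;
    return NULL;
}

int main(void)
{
    address_server_register("bedside-07", "10.0.0.12");
    printf("bedside-07 -> %s\n", address_server_lookup("bedside-07"));
    return 0;
}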
While the equipment and methods used in the various embodiments of the present invention can be used in rooms or buildings adjacent to the subject's sleeping location, due to the equipment's robust nature these methods are preferably performed over greater distances. Preferably, the subject's sleeping location and the remote locations, for example the location of the remote monitor, are separate buildings. Preferably, the subject's sleeping location is at least 1 mile from the remote location(s) receiving the data; more preferably, the subject's sleeping location is at least 5 miles from the remote location(s) receiving the data; even more preferably, the subject's sleeping location is at least 20 miles from the remote location(s) receiving the data; still more preferably, the subject's sleeping location is at least 50 miles from the remote location(s) receiving the data; still even more preferably, the subject's sleeping location is at least 250 miles from the remote location(s) receiving the data; more preferably, the subject's sleeping location is in a different state from the remote location(s) receiving the data; and most preferably, the subject's sleeping location is in a different country from the remote location(s) receiving the data. Various embodiments of the present invention include the step of evaluating the received signals to determine if they are adequate for later analysis. This step can be performed or accomplished in a number of ways. In the simplest form, the signal can be evaluated once just prior to the start of the sleep study. In another form, the signal is evaluated periodically during the study to determine its quality. Preferably, the signal(s) are evaluated both at the start of the study and periodically during the study. Most preferably, the signals are evaluated at the beginning of the study and continuously during the study. If the signals are evaluated for adequacy, preferably the subject can be contacted to adjust the sensor as necessary. In this way, corrective action can adjust an inadequate signal to increase the value of the sleep study data and enable later analysis. The data collected for the sleep analysis conducted under the various methods of the present invention can be viewed by any number of medical personnel and the subject themselves, if appropriate. Preferably, the data is available to a sleep technician, to a doctor making the analysis/diagnosis based on the data, and others involved in these methods. This data can be reviewed at multiple locations, including but not limited to the doctor's home or office, or anywhere else the doctor or other individuals associated with the analysis/diagnosis have access to the internet or an intranet. Referring now to the drawings and, in particular, to FIG. 1, there is shown a block diagram of the present invention. An external input 12 from sensor 14 is input to signal processing module 16. Although one sensor 14 and one external input 12 are shown, the signal processing module 16 is capable of accepting multiple external inputs 12 from multiple sensors 14. The signal processing module 16 generates a signal 18 encoded with data corresponding to the external input 12. The signal processing module 16 transmits the signal 18 by wireless means to a base station 40. In FIG. 1, the wireless means is shown as radio frequency (RF). In this case, the signal processing module generates a radio frequency signal 18 by frequency modulating a carrier frequency and transmits the radio frequency signal through module antenna 20.
The base station 40 receives the radio frequency signal 18 through base antenna 42, demodulates the radio frequency signal 18, and decodes the data. It is understood that other wireless means can be utilized with the present invention, such as infrared and optical, for example. Although one module antenna 20 and one base antenna 42 are shown in this embodiment, it is understood that two or more diversity antennas can be used and are included in the present invention. An external programming means 60, shown in FIG. 1 as a personal computer, contains software which is used to program the signal processing module 16 and the base station 40 through data interface cable 62. The data interface cable 62 is connected to the base station 40 and signal processing module 16 by respective connectors 64. The same data interface cable 62 or two different interface cables 62 can be used, one for the base station 40 and one for the signal processing module 16. The signal processing module 16 and the base station 40 can be programmed by connecting a data interface cable 62 between them and an external programming means 60, or by radio frequency (or other) signals transmitted from a base station 40 to the signal processing module 16 or to another base station 40. RF signals, therefore, can be both transmitted and received by both signal processing module 16 and base station 40. In this event the signal processing module 16 also includes a module receiver 29 while the base station 40 also includes a base transmitter 84, in effect making both the signal processing module 16 and the base station 40 into transceivers. In addition, the data interface cable 62 also can be used to convey data from the base station 40 to the external programming means 60. If a personal computer is the external programming means 60, it can monitor, analyze, and display the data in addition to its programming functions. The base receiver 80 and module receiver 29 can be any appropriate receivers, such as direct or single conversion types. The base receiver 80 preferably is a double conversion superheterodyne receiver while the module receiver 29 preferably is a single conversion receiver. Advantageously, the receiver employed will have automatic frequency control to facilitate accurate and consistent tuning of the radio frequency signal 18 received thereby. Referring now to FIG. 2, there is shown a block diagram of the signal processing module 16 with the sensor 14 and the module antenna 20. The signal processing module 16 comprises input means 22, analog-to-digital (A/D) means 24, a module microcontroller 26 with a nonvolatile memory, advantageously an EEPROM 261, a module transmitter 28, a module receiver 29, and a module power supply 30. Although the module antenna 20 is shown externally located from the signal processing module 16, it can also be incorporated therein. The module antenna 20 may be a printed spiral antenna printed on a circuit board or on the case of the signal processing module 16, or another type of antenna. A module power supply 30 provides electrical power to the signal processing module 16, which includes the input means 22, A/D means 24, module microcontroller 26, module transmitter 28, and module receiver 29. The input means 22 is adjustable either under control of the module microcontroller 26 or by means of individually populatable components based upon the specific external input 12 characteristics and range, enabling the input means 22 to accept that specific external input 12.
For example, if the input is a 4-20 mA analog signal, the input means 22 is programmed by the module microcontroller 26 and/or populated with the components needed to accept that range and characteristic of signals. If the input characteristics change, the programming and/or components change accordingly, but the same platform circuit board design is utilized. In other words, the same platform design is utilized notwithstanding the character, range, or quantity (number of external inputs 12, up to a predetermined limit) of the input. For example, bioelectric signals such as EEG, EMG, EKG, and EOG have typical amplitudes of a few microvolts up to a few tens of millivolts. For a given application, a specific frequency band of interest might be from 0.1 Hz to 100 Hz, whereas another application may require measurement of signals from 20 Hz to 10 kHz. Alternatively, measurement of vital signs such as body temperature and respiration rate may deal with signals in a range of +/−5 volts, with a frequency content from DC (0 Hz) to 20 Hz. For other applications such as industrial process monitoring, the information of interest may be contained in the signal as a current, such as a 4 to 20 mA current loop sensor, or it may take the form of resistance, impedance, capacitance, inductance, conductivity, or some other parameter. The present invention provides a single device for measuring such widely disparate signal types and presents distinct economic advantages, especially to small enterprises such as a medical clinic located in a rural area, which would be empowered by this invention to conduct tests which would otherwise have required subject travel to a large medical center, with all the attendant cost thereof. This is possible due to the selectively adaptable input means 22 and A/D means 24, the frequency agile module transmitter 28 and base transmitter 84, and the programmability of the module microcontroller 26 and EEPROM 261. One universal platform design then can be utilized for all applications. In addition, the signal processing module can comprise multiple copies of the input means 22 and the A/D means 24. Cost savings can be achieved by multiplexing at several different points in the input means 22 and the A/D means 24, allowing hardware to be shared among external inputs 12. After receipt by the input means 22, the external input 12 is inputted to the A/D means 24. The A/D means 24 converts the input to a digital signal 32 and conditions it. The A/D means 24 utilizes at least one programmable A/D converter. This programmable A/D converter may be an AD7714 as manufactured by Analog Devices or similar. Depending upon the application, the input means 22 may also include at least one low noise differential preamp. This preamp may be an INA126 as manufactured by Burr-Brown or similar. The module microcontroller 26 can be programmed to control the input means 22 and the A/D means 24 to provide a specific number of external inputs 12, sampling rate, filtering, and gain. These parameters are initially configured by programming the module microcontroller 26 to control the input means 22 and the A/D means 24 via input communications line 35 and A/D communications line 36 based upon the input characteristics and the particular application. If the application changes, the A/D converter is reconfigured by reprogramming the module microcontroller 26. In this manner, the input means 22 and the A/D means 24 can be configured to accept analog inputs of 4-20 mA, +/−5 volts, +/−15 volts, or a range from +/−microvolts to millivolts.
They also can be configured to accept digital inputs, for detection of contact closure, for example. The module microcontroller 26 controls the operation of the signal processing module 16. In the present invention, the module microcontroller 26 includes a serial EEPROM 261, but any nonvolatile memory (or volatile memory if the signal processing module remains powered) can be used. The EEPROM 261 can also be a separate component external to the module microcontroller 26. Advantageously, the module microcontroller 26 may be a PIC16C74A, PIC16C74B, or PIC16C77, all manufactured by Microchip, or an Atmel AT90S8515, or similar. The module microcontroller 26 is programmed by the external programming means 60 through the connector 64 or through a radio frequency signal from the base station 40. The same module microcontroller 26, therefore, can be utilized for all applications and inputs by programming it for those applications and inputs. If the application or inputs change, the module microcontroller 26 is modified by merely reprogramming. The digital signal 32 is inputted to the module microcontroller 26. The module microcontroller 26 formats the digital signal 32 into a digital data stream 34 encoded with the data from the digital signal 32. The digital data stream 34 is composed of data bytes corresponding to the encoded data and additional data bytes to provide error correction and housekeeping functions. Advantageously, the digital data stream 34 is organized in data packets with the appropriate error correction data bytes coordinated on a per data packet basis. These packets can incorporate data from a single input channel or from several input channels in a single packet, or for some applications may advantageously include several temporally differing measurements of one or a plurality of input channels in a single packet. The digital data stream 34 is used to modulate the carrier frequency generated by the transmitter 28. The module transmitter 28 is under module microcontroller 26 control. The module transmitter 28 employs frequency synthesis to generate the carrier frequency. In the preferred embodiment, this frequency synthesis is accomplished by a voltage controlled crystal reference oscillator and a voltage controlled oscillator in a phase lock loop circuit. The digital data stream 34 is used to frequency modulate the carrier frequency, resulting in the radio frequency signal 18, which is then transmitted through the module antenna 20. The generation of the carrier frequency is controlled by the module microcontroller 26 through programming in the EEPROM 261, making the module transmitter 28 frequency agile over a broad frequency spectrum. In the United States and Canada a preferred operating band for the carrier frequency is 902 to 928 MHz. The EEPROM 261 can be programmed such that the module microcontroller 26 can instruct the module transmitter 28 to generate a carrier frequency between 902 and 928 MHz in increments as small as about 5 to 10 kHz. In the US and other countries of the world, the carrier frequency may be in the 2400 to 2483.5 MHz band, the 5.725 to 5.875 GHz band, or the 24.0 to 24.25 GHz band, or other authorized band. This allows the system to be usable in non-North American applications and provides additional flexibility.
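Because the module transmitter 28 is frequency agile in steps of roughly 5 to 10 kHz across the 902 to 928 MHz band, a stored channel index suffices to define the carrier. A minimal sketch of that mapping follows; the 10 kHz step constant and the function name are illustrative assumptions within the range stated above.

#include <stdint.h>
#include <stdio.h>

#define BAND_BASE_HZ    902000000ULL /* bottom of the 902-928 MHz band */
#define BAND_TOP_HZ     928000000ULL
#define CHANNEL_STEP_HZ     10000ULL /* within the ~5-10 kHz range cited above */

/* Carrier frequency for a given channel index, clamped to the band. */
static uint64_t carrier_frequency_hz(uint32_t channel)
{
    uint64_t f = BAND_BASE_HZ + (uint64_t)channel * CHANNEL_STEP_HZ;
    return f > BAND_TOP_HZ ? BAND_TOP_HZ : f;
}

int main(void)
{
    /* Channel 100 lands at 903 MHz; the EEPROM need only store the index. */
    printf("channel 100 -> %llu Hz\n",
           (unsigned long long)carrier_frequency_hz(100));
    return 0;
}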
The voltage controlled crystal oscillator (not shown) in the module transmitter 28 not only provides the reference frequency for the module transmitter 28 but, advantageously, also provides the clock function 38 for the module microcontroller 26 and the A/D means 24, assuring that all components of the signal processing module 16 are synchronized. An alternate design can use a plurality of reference frequency sources, where this arrangement can provide certain advantages, such as size or power consumption, in the implementation. The module receiver 29 in the signal processing module 16 receives RF signals from the base station 40. The signals from the base station 40 can be used to operate and control the signal processing module 16 by programming and reprogramming the module microcontroller 26 and EEPROM 261 therein. The base station 40 has a base antenna 42 through which RF signals 18 are received. Base microcontroller 86 controls the operation of the base station 40, including base receiver 80, base transmitter 84, and base power supply 88. Base receiver 80 receives the RF signal 18 from base antenna 42. The base receiver 80 demodulates the RF signal 18, and the base microcontroller 86 removes any error correction and performs other housekeeping tasks. The data is then downloaded through connector 64 to the external programming means 60 or other personal computer (PC) or data storage/viewing device for viewing in real time, storage, or analysis. Referring now to FIG. 4, there is shown a block diagram of the input means 22 and A/D means 24 of the signal processing module 16, which provides for the data acquisition function of the present invention. The signal processing module 16 is variably configurable through software programming initiated by the external programming means 60 to the EEPROM 261 of the microcontroller 26. The variable configurability enables the signal processing module 16 to receive external inputs 12 having different characteristics and ranges and to provide variable sampling rate, filtering, and gain of the external inputs 12 based upon such characteristics and range and/or the specific application. For example, if the present invention is utilized in a biomedical environment, EEG diagnosis and monitoring for instance, the sampling rate will need to be much higher than it would be for an industrial setting measuring thermocouple readings. The ability to reconfigure the system for varying signal characteristics arises at three separate levels in the present invention. For maximum flexibility, such reconfiguration can be carried out during a series of measurements by means of the wireless link, which is understood in this context to be bidirectional. Depending on the characteristics of the received signal 18, the base station 40 can command the signal processing module 16 to reconfigure the input means 22 and/or A/D means 24 to accept an external input 12 of larger amplitude, or a different frequency range, where signal characteristics change significantly during the course of a series of measurements. Alternatively, for cost, size, and power advantages, this adjustment could be carried out prior to a series of measurements, with the configuration information stored in memory in the signal processing module 16, where this memory is advantageously implemented in a nonvolatile form such as EEPROM 261, allowing the configuration information to be retained, for instance, across power outages and obviating the need for module receiver 29 and base transmitter 84, saving cost.
A third alternative, which provides advantages in certain technical parameters, is to arrange the implementation of the signal processing module 16 such that minor changes in component values or parameters can reconfigure the same basic hardware to accept widely divergent external input 12 types. This reconfiguration could take place at the factory, providing cost and inventory advantages to the manufacturer, or it could be performed by the end user, providing similar cost advantages to the user in allowing one piece of equipment to perform multiple tasks. A number of configurable components are shown in FIG. 4. Any given component of this arrangement, though, may be omitted, and, in some cases, the order of the components may be changed to gain certain advantages such as physical size, power consumption, or cost, without changing the basic spirit of the invention. Components in this FIG. 4 may be combined, either by having a single component carry out the function of two or more of the components shown or by combining functions within a single package such as an integrated circuit or hybrid module. Certain components may also operate with a fixed configuration, limiting the flexibility of certain parameters while retaining the advantages of configurability in other components. The external input 12 is input to the input protection network 221, which protects the signal processing module 16 against damage caused by faults or unanticipated conditions encountered at the external inputs 12. Depending on the rigors expected to be encountered in any given application and the tolerance to size and weight, the input protection network 221 may be omitted, may consist of a simple resistor network, or may include more elaborate protection such as diodes, Zener diodes, transorbs, gas discharge tubes, and other components commonly known to those of ordinary skill in the art. Typically, the input protection network 221 is not configurable, but its configurability in the present invention provides advantages in certain applications. Configuration options can include adjustable limits on input voltage and/or current as well as rates of change of those parameters, and other electrical parameters as well. These configuration changes can be achieved by changes to component values on a common platform for smallest size, or can be changed under processor control by means of various switches such as relays. A signal within normally expected ranges passes essentially unchanged to the measurement type means 222. The measurement type means 222 allows selection of the external input 12 configuration. The measurement type means 222 may be used to configure the input circuitry to accept external inputs 12 which are single-ended voltage (a voltage with respect to a common reference shared between several signals), differential voltage (voltage between two defined conductors), differential current (current flowing through a conductor), single-ended current (current flowing to a common reference), frequency, capacitance, inductance, resistance, impedance, conductivity, or any other electrical parameter. The measurement type means 222 converts the external input 12 to a common parameter such as voltage or current, which can be interpreted by the succeeding blocks regardless of the original type of external signal 12 measured. One input channel can be built with several different measurement type means, which can be selectively enabled by means of an analog switch, such as that found in the AD7714 chip in the present invention.
It is understood that the AD7714 chip can provide many of the functions of the A/D means 24 and the input means 22, thus reducing the overall size of the signal processing module 16. In the preferred embodiment, the output of the measurement type means 222 is a varying voltage carrying the information which was present in the original signal, or in certain cases, a series of voltage measurements, which are then conveyed to the prefilter 223. The prefilter 223 allows rejection of large external input 12 signals which are outside the frequency band of interest, so that such signals do not saturate the low-noise preamplifier 224. The prefilter 223 can be advantageously arranged to be a relatively simple filter to provide cost, size, and power advantages, because it need only reject out-of-band signals to the extent necessary to protect the low-noise preamplifier 224. A typical application might use a simple "R-C" filter to reject offset voltages in an AC-coupled application, or to reject extremely high frequencies which fall well beyond the frequency band of interest, or a combination of the two. Configurability of this section can be limited to simply enabling or bypassing the prefilter 223, or may be more elaborate in allowing selection of cutoff frequencies. In the preferred embodiment this prefilter consists of a simple RC filter which can be bypassed under firmware control, to minimize noise injection; however, an alternate embodiment could incorporate electrically adjustable components such as electronic potentiometers or varactors to provide even more flexibility at the expense of size and noise injection. The prefiltered signal is then passed to the low-noise preamplifier 224. The low-noise preamplifier 224 is advantageous in certain applications to allow application of gain to the external input 12 early in the signal chain, before significant noise is introduced by the inherent characteristics of certain components, such as thermal noise. Configurability of the gain applied at this step provides an advantage in allowing the present invention to accept larger external inputs 12 using a low gain (unity gain or lower), or alternatively to accurately measure very small external inputs 12 with minimal noise by using higher gain. This gain can be selectively chosen to be either a fixed value or unity gain under processor control by means of the signal selector built into the AD7714 used in the preferred embodiment, or can be designed to allow a selection of one of several gains by means of analog switches combined with a plurality of gain setting resistors. Gain applied at this stage has the net effect of dividing any downstream noise by the gain factor applied here. This more robust signal output by the preamplifier 224 is then passed to the AC coupling filter 225. The AC coupling filter 225 is a highpass filter used to allow the system to reject the DC offset or steady state value of an external input 12 wherein the offset is not of interest, allowing additional gain to be applied to the changes in the external input 12. For instance, bioelectric signals such as EEG, EMG, or ECG are normally of interest only for the changes in those signals, and the absolute offset level is not of interest for diagnostic purposes. The cutoff frequency may be configured to allow adjustment of various parameters such as settling time, or may be adjusted to zero to effectively bypass the AC coupling filter 225.
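For a first-order R-C section such as the prefilter 223 described above, the cutoff frequency follows the standard relation f = 1/(2πRC). The short sketch below works through that arithmetic; the component values are chosen purely for illustration and are not from the original disclosure.

#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Cutoff frequency, in Hz, of a first-order R-C filter section. */
static double rc_cutoff_hz(double r_ohms, double c_farads)
{
    return 1.0 / (2.0 * M_PI * r_ohms * c_farads);
}

int main(void)
{
    /* Illustrative values only: 160 kOhm with 10 nF gives roughly a
     * 100 Hz cutoff, on the order needed to pass an EEG band while
     * rejecting far higher frequencies. */
    printf("cutoff = %.1f Hz\n", rc_cutoff_hz(160e3, 10e-9));
    return 0;
}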
In the preferred embodiment, the AC coupling filter 225 may be bypassed by use of the signal selector switch in the AD7714; however, the use of adjustable components such as electronic potentiometers or varactors would allow more flexibility in choosing the cutoff frequency, at the expense of size and power consumption. The resulting signal, now stripped of any interfering DC offset if so configured, is then passed to the antialias filter 226. The antialias filter 226 is a lowpass filter required to guard against false signals caused by aliasing between external input 12 content and the sampling rate of downstream sampling functions such as multiplexing or analog-to-digital conversion. The Nyquist sampling theorem shows that any frequency content in the sampled signal which is higher than one-half the sampling rate of the sampling function will cause aliasing, which results in false signals. In practice the antialias filter 226 is more commonly set to a smaller fraction of the sampling rate, usually between 1/4 and 1/10 the sampling rate. Regardless of the rate or ratio used, the cutoff frequency of the antialias filter 226 must change when the sampling rate changes significantly, to retain the most advantageous ratio of the sampling rate to the filter passband. The programmable cutoff frequency of the antialias filter 226 is thus required to allow for variable sampling rates. In the preferred embodiment, the high sampling rate of the delta sigma modulator in the AD7714 permits the use of a simple fixed RC type filter, with the antialias filtering being provided as an inherent digital filter in the AD7714; however, an alternate embodiment might use a switched capacitor filter such as the MAX7409 or other filter with a programmable cutoff frequency. The resulting filtered signal is then conveyed to the programmable gain amplifier 241 in the A/D means 24. The programmable gain amplifier 241 adjusts the external input 12 amplitude to match the amplitude accepted by the A/D converter 242. In the preferred embodiment this programmable gain amplifier is included in the AD7714 integrated circuit, but this function could also be provided with a dedicated programmable gain amplifier, or alternatively through the use of analog switches or adjustable components such as potentiometers or DACs. If too much gain is applied, the programmable gain amplifier 241 itself or downstream components will saturate, introducing severe distortion and usually rendering the external input 12 immeasurable. If, on the other hand, insufficient gain is applied here, the quantization noise of the analog-to-digital conversion process comes to dominate the external input 12, causing a severe degradation in the signal-to-noise ratio. For instance, a typical 16-bit A/D converter 242 can distinguish between 2^16, or 65,536, distinct levels. With an A/D converter 242 input range of +/−3 volts, each level represents 92 μV. If insufficient gain is applied to the external input 12 such that the total signal swing is only 200 μV, the A/D converter 242 will convert at most three distinct levels, rendering fine features of the external input 12 totally illegible. The module microcontroller 26 therefore adjusts the gain applied in the programmable gain amplifier 241 such that the expected external input 12, as processed and filtered by the preceding elements as described above, is amplified to cover as much of the A/D converter 242 input range as practical, or some other gain which optimizes signal features of interest.
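The 16-bit example above is straightforward arithmetic: the size of one quantization level is the full-scale range divided by 2^bits. The sketch below reproduces the figures quoted in the text; the function name is an illustrative assumption.

#include <stdio.h>

/* Size of one quantization level for an A/D converter with the given
 * resolution (bits) and full-scale input range (volts). */
static double lsb_volts(int bits, double range_volts)
{
    return range_volts / (double)(1UL << bits);
}

int main(void)
{
    /* 16 bits over +/-3 V (a 6 V span) gives roughly 92 microvolts per
     * level, matching the figure quoted above.  A 200 microvolt signal
     * would therefore span only two or three levels unless gain is
     * applied ahead of the converter. */
    printf("LSB = %.1f uV\n", lsb_volts(16, 6.0) * 1e6);
    return 0;
}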
Additionally, in some applications it is advantageous to have the module microcontroller 26 adjust this gain dynamically depending upon the actual measured external input 12. For instance, the module microcontroller 26 might increase the programmable gain amplifier 241 gain when a measured external input 12 is very small, and then decrease the gain to avoid saturation when the external input 12 amplitude increases. This automatic gain control provides an increase in the total dynamic range achievable by the system without requiring expensive, large, and power-hungry components such as very high resolution A/D converters 242. The signal resulting from application of the specified gain is then passed to the A/D converter 242. At least two parameters of a typical A/D converter 242 can be readily adjusted to achieve various goals as the situation dictates. First, the sampling rate may be adjusted to balance the conflicting goals of high fidelity measurements and low digital data rate. Where a signal has no high frequency content of interest, the sampling rate may be adjusted to a very low rate to minimize the demands on downstream processes such as digital filtering or telemetering of the data. On the other hand, sampling an external signal 12 with significant high-frequency content of interest demands a higher sampling rate. In the preferred embodiment, the sampling rate is programmable via the AD7714; in other implementations the sampling rate can be made adjustable by means of an externally applied sampling clock to an A/D converter. The adjustable sampling rate allows the controller to adapt the A/D converter 242 to best meet the system demands of the moment. In a similar fashion, selection of the resolution provided by the A/D converter 242 must balance faithful reproduction of the external input 12 against total digital data rate. Depending on the particular A/D converter 242 used, there may also be a tradeoff of the maximum achievable sampling rate against the selected resolution, wherein selection of a higher resolution lowers the maximum attainable sampling rate. Again the module microcontroller 26 can adjust this parameter to best meet the system requirements, selecting higher resolution when smaller changes in the measured signal amplitude must be reported, and lower resolution when the lack of such a requirement allows advantages in the form of either a higher sampling rate or a lower digital data rate. In the preferred embodiment, the AD7714 can be programmed to either 16-bit or 24-bit resolution, and the firmware running in the microcontroller can selectively transmit 8, 12, 16, or 24 bits of the acquired data. The digital filter 243, the module microcontroller 26, or other downstream process can also reject certain portions of the digital data stream to provide an effective decrease in resolution where this decrease is advantageous, especially when the data must later cross a bandwidth-limited link such as an RF, IR, or optical link. The A/D converter 242 passes the signal, now in the form of a succession of digital values, to the digital filter 243 for further processing. The digital filter 243 extracts external input 12 parameters of interest while rejecting other signals, commonly referred to as noise. Implementation of the digital filter 243 could alternatively be in the form of analog filters applied anywhere in the signal chain prior to the A/D converter 242, but implementation as a digital filter 243 provides advantages as to programmability, calibration, drift, and accuracy.
The digital filter 243 could be implemented in many forms, depending upon the demands of the particular application. In the preferred embodiment, the digital filter is inherent in the analog to digital conversion process inside the AD7714, but it is understood that the digital filter 243 could be implemented as firmware inside the module microcontroller 26 itself, or as a digital signal processor, or as a specialized integrated circuit, or by some other means. Regardless of implementation, the programmability of the digital filter 243 allows the system to readily adapt to changing measurement requirements, whether those changes are brought about by changes in the environment, changes in the external input 12 itself, or changes in the focus of the overall system. The resulting output from the digital filter 243 is a stream of digital values, ready for further processing such as assembly into the desired format for transmission by the firmware. Referring now to FIG. 5, there is shown a block diagram of the firmware of the present invention. The signal processing module 16 firmware defines several modes of operation 100. There are several "test" modes which are used during factory calibration of the device. In addition, there are several operation modes which have mode-specific configuration. For example, the signal processing module 16 can be programmed to operate in a first operational mode in which it transmits calibration data (used to properly zero the analog inputs) for the first three seconds of operation (or for some other predetermined time), and then switches to a second operational mode which transmits analog signal information as collected from the A/D converters 242. The configuration for each mode of operation is programmed in the non-volatile memory EEPROM 261. Once power is first applied to the signal processing module 16, the module microcontroller 26 performs the basic device initialization, including proper configuration of the I/O ports and internal variables 102. Next, the module microcontroller 26 reads the initial device configuration 104 from the EEPROM 261. This configuration controls the input means 22 of the signal processing module 16, including the number of external inputs (also herein referred to as channels), the resolution of the A/D converter 242, and the sampling rate of each individual input channel. This configuration also controls the operation of the module transmitter 28 in the signal processing module 16, including the carrier frequency, modulation type, output power control, and the length in bytes of each transmitted RF message packet. This configuration also describes the initial mode of operation for the signal processing module 16. Once the initial configuration has been read, the module microcontroller 26 enters the first mode of operation described in the configuration. It reads the mode-specific configuration 106, which includes the state of the module transmitter 28 and the analog inputs as used in the mode. This configuration can reside in EEPROM 261 or in module microcontroller 26 memory. The module microcontroller 26 then initializes all the peripheral devices according to this mode configuration 108. In the special case that this is the "shutdown" mode, the module microcontroller 26 will perform a software power-down 110. Once the mode has been initialized, the module microcontroller 26 begins execution of the interrupt service routine (ISR) 112, which is responsible for transmitting the data in the form of messages along the modulated RF carrier.
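The power-up sequence just described maps onto a short top-level routine. The following skeleton mirrors the flowchart steps; all structures, field names, and function bodies are stubs invented for illustration, not actual firmware.

#include <stdio.h>

#define MODE_SHUTDOWN  0
#define MODE_CALIBRATE 1

/* Initial configuration read from EEPROM 261: channel count, A/D
 * resolution, sampling rates, carrier frequency, packet length, and
 * the initial operating mode (fields abbreviated in this sketch). */
struct device_config { int initial_mode; };
struct mode_config   { int transmitter_enabled; };

/* Stubs standing in for the routines the text describes. */
static void init_io_and_variables(void)                     { puts("init I/O and variables (102)"); }
static void read_device_config(struct device_config *c)     { c->initial_mode = MODE_CALIBRATE; }
static void read_mode_config(int m, struct mode_config *mc) { (void)m; mc->transmitter_enabled = 1; }
static void init_peripherals(const struct mode_config *mc)  { (void)mc; puts("init peripherals (108)"); }
static void software_power_down(void)                       { puts("software power-down (110)"); }
static void start_transmit_isr(void)                        { puts("start transmit ISR (112)"); }
static void execute_mode_opcodes(int m)                     { (void)m; puts("execute mode opcodes (114)"); }

int main(void)
{
    struct device_config cfg;
    struct mode_config mc;

    init_io_and_variables();                 /* step 102 */
    read_device_config(&cfg);                /* step 104 */

    read_mode_config(cfg.initial_mode, &mc); /* step 106 */
    init_peripherals(&mc);                   /* step 108 */

    if (cfg.initial_mode == MODE_SHUTDOWN) {
        software_power_down();               /* step 110 */
        return 0;
    }
    start_transmit_isr();                    /* step 112 */
    execute_mode_opcodes(cfg.initial_mode);  /* step 114 onward */
    return 0;
}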
Operation of the interrupt service routine is asynchronous and distinct from the mainline code, and is described later. The module microcontroller 26 begins execution of the mode-specific "opcodes" 114, which are a sequence of instructions contained either in EEPROM 261 or in the module microcontroller 26 memory. These opcodes are performed for each operational mode. The module microcontroller 26 reads the first operational code from the EEPROM 261 and interprets the opcode, performing an appropriate action: if the opcode instructs the module microcontroller 26 to change modes 116, the module microcontroller 26 terminates the ISR 118 and returns to the mode initialization, and begins execution of a new operational mode; if the opcode instructs the module microcontroller 26 to begin a loop construct 120, the module microcontroller 26 begins the loop by initializing a loop counter variable 122; if the opcode instructs the module microcontroller 26 to end a loop construct, the module microcontroller 26 increments the loop counter variable and determines if the loop is complete 124. If not, the module microcontroller 26 resets the index of the current opcode to the beginning of the loop; otherwise it sets the index of the next opcode to after the loop. If the opcode instructs the module microcontroller 26 to initialize a single A/D converter 242, the module microcontroller 26 will perform the specified calibration 126; if the opcode instructs the module microcontroller 26 to read a single A/D converter 242, the module microcontroller 26 will take the reading and insert the data into the current message to be transmitted over the RF carrier 128; if the opcode instructs the module microcontroller 26 to insert a special byte of data into the RF message, the module microcontroller 26 will insert this data into the message 130. This special message byte may include an identifier to uniquely identify the signal processing module 16, an error check field such as a cyclic redundancy check, or some data representing the internal state of the signal processing module 16 such as the RF frequency, measured temperature, etc. After each opcode has been read and interpreted, the module microcontroller 26 determines if the RF message has been completely filled and is ready to be transmitted over the RF carrier 132. If it has, the module microcontroller 26 marks a flag variable for the interrupt service routine to begin transmitting the RF message 134. Next, the module microcontroller 26 performs any housekeeping tasks, such as updating the RF tuning parameters based on changes in temperature, updating timers, etc. 136. Finally, the module microcontroller 26 returns to execute the next opcode in the sequence 114. Referring now to FIG. 6, there is shown a block diagram of the software programming function of the ISR 150. The ISR is responsible for transmitting the individual message bytes over the RF carrier. The ISR is executed by a hardware interrupt which occurs immediately before every byte to be transmitted over the RF carrier. The ISR detects whether an RF message is completely filled 152. If the ISR detects (based on the flag variable) that an RF message is not yet completely filled by the main code, the ISR transmits a "filler" byte, or a byte with an even number of "1" and "0" bits 154. This acts to maintain an even (50%) modulation duty cycle on the carrier frequency. Once the ISR detects that the main code has filled an RF message to be transmitted, it transmits the RF sync bytes 156.
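The opcode dispatch described above is, in effect, a small interpreter loop. The sketch below mirrors its branch structure; the opcode encoding, argument layout, and helper behavior are invented for illustration, since the actual encoding resides in EEPROM 261 and is not specified at this level of detail.

#include <stdio.h>

/* Illustrative opcode values standing in for the EEPROM encoding. */
enum opcode {
    OP_CHANGE_MODE, /* terminate the ISR and enter a new operational mode */
    OP_LOOP_BEGIN,  /* initialize a loop counter */
    OP_LOOP_END,    /* count down and branch back until the loop is done */
    OP_ADC_INIT,    /* calibrate a single A/D converter */
    OP_ADC_READ,    /* read a converter and insert data into the message */
    OP_INSERT_BYTE, /* insert an ID, CRC, or status byte into the message */
    OP_HALT
};

struct op { enum opcode code; int arg; };

static int interpret(const struct op *prog)
{
    int pc = 0, loop_start = 0, loop_count = 0;
    for (;;) {
        struct op o = prog[pc];
        switch (o.code) {
        case OP_CHANGE_MODE: return o.arg; /* next mode to initialize */
        case OP_LOOP_BEGIN:  loop_start = pc + 1; loop_count = o.arg; break;
        case OP_LOOP_END:
            if (--loop_count > 0) { pc = loop_start; continue; }
            break;
        case OP_ADC_INIT:    printf("calibrate A/D %d\n", o.arg); break;
        case OP_ADC_READ:    printf("read A/D %d into message\n", o.arg); break;
        case OP_INSERT_BYTE: printf("insert special byte 0x%02x\n", (unsigned)o.arg); break;
        case OP_HALT:        return -1;
        }
        pc++;
        /* After each opcode the mainline code would also check whether
         * the RF message is full and, if so, flag it for the ISR. */
    }
}

int main(void)
{
    const struct op prog[] = {
        { OP_ADC_INIT, 0 },
        { OP_LOOP_BEGIN, 3 },
        { OP_ADC_READ, 0 },
        { OP_LOOP_END, 0 },
        { OP_INSERT_BYTE, 0x5A },
        { OP_HALT, 0 }
    };
    (void)interpret(prog);
    return 0;
}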
These sync bytes are two unique bytes transmitted at the beginning of every RF message which are easily identified by the base station40as the start of a message. Once the RF sync bytes have been transmitted, the ISR transmits each message byte of the RF message, in sequence158. Once the RF message has been completely transmitted160, the ISR resumes transmitting filler bytes until the next RF message is filled by the main code. Because of the phase locked loop based frequency synthesizer used in the present invention, the module transmitter28and base transmitter84are frequency agile over the frequency range. Since the module receiver29and the base receiver80employ automatic frequency control, the present invention consumes relatively low power as the module transmitter28and base transmitter84can be intermittently powered down without losing reception due to drift or sacrificing data transmission accuracy. The utilization of programmable firmware allows inexpensive and flexible operation for the inputting, conditioning and processing of any type, character and range of the external inputs. This also allows the module microcontroller26, in response to variation of the external inputs12or to instructions received by RF signal through the module receiver29, to adapt the signal processing module16based upon the variations, allowing the signal processing means16to input, condition, process and transmit said external input notwithstanding said variation. The present invention performs this adaptation without the need to modify or alter hardware or select or use different hardware already present in the device. In other words, all adaptation can be accomplished entirely by software programming. One or more sensors are used to develop the data or signals used in the present invention for determining a quantitative level of severity of a subject's sleeping disorder and/or symptoms. In various embodiments, preferably at least two EEG electrodes are used to develop this data. In other embodiments, preferably, at least two ECG electrodes are used. In still other embodiments, preferably a pulse oximeter is used. In still even other embodiments, preferably, either an O2or CO2blood gas monitor is used. The signals from the one or more sensors used in various embodiments of the present invention are preferably analyzed using a processor and software that can quantitatively estimate or determine the severity of the subject's sleeping disorder or symptoms. Using either the microcontroller26of a data acquisition system, a separate computer, base station or processor, a PDA, a processor on a device for treating the subject's sleeping disorder or a combination of these processors, the severity of the subject's sleeping disorder and/or symptoms is determined and is used at least in part to regulate the physical or chemical treatment of the subject. Also optionally, the one or more sensors used in the system of the present invention can also be tethered to a computer, base station, cell phone, a PDA or some other form of processor or microprocessor. The processor or microprocessor of various embodiments of the present invention can be part of a remote communication station or base station. The remote communication station or base station can also be used only to relay a pre- or post-processed signal. Preferably, the remote communication station or base station can be any device known to receive RF transmissions such as those transmitted by the wireless data acquisition system described herein.
The remote communication station or base station by way of example but not limitation can include a communications device for relaying the transmission, a communications device for re-processing the transmission, a communications device for re-processing the transmission then relaying it to another remote communication station, a computer with wireless capabilities, a PDA with wireless capabilities, a processor, a processor with display capabilities, and combinations of these devices. Optionally, the remote communication station can further transmit data to another device, including the subject's treatment device. Further optionally, two different remote communication stations can be used, one for receiving transmitted data and another for sending data. For example, with the sleep diagnosis and treatment system of the present invention, the remote communication system of the present invention can be a wireless router, which establishes a broadband internet connection and transmits the physiological signal to a remote internet site for analysis, preferably for further input by the subject's physician or another clinician. Another example is where the remote communication system is a PDA, computer or cell phone, which receives the physiological data transmission, optionally re-processes the information, and re-transmits the information via cell towers, land phone lines, satellite, radio frequencies or cable to a remote site for analysis. Another example is where the remote communication system is a computer or processor, which receives the data transmission and displays the data or records it on some recording medium, which can be displayed or transferred for analysis at a later time. The quantitative method for estimating or determining the severity of the subject's sleeping disorder or symptoms is preferably accomplished by using signals or data from the one or more sensors described herein. More preferably, this quantitative method is accomplished in real-time, allowing the subject's symptoms to be treated as they occur. By real-time it is meant that the quantitative diagnosis step is accomplished predictively or within a short period of time after symptoms occur, which allows for immediate treatment, thereby more effectively reducing the health effects of such disorder while at the same time also minimizing side effects of the treatment chosen. By real-time, preferably the diagnosis is accomplished within 24 hours of receiving the signals from the one or more sensors on the subject, more preferably within 8 hours, even more preferably within 4 hours, still even more preferably within 1 hour, still even more preferably within 20 minutes, still even more preferably within 5 minutes, still even more preferably within 1 minute, still even more preferably within 10 seconds, still even more preferably within 1 second, still even more preferably within 0.1 seconds and most preferably within 0.01 seconds. FIG.7shows a flow diagram of one example titration algorithm that adjusts the pressure of a CPAP device based on at least the measured airflow, respiratory effort, and blood oxygen concentration. The algorithm inFIG.7consists of a counting phase and an adjustment phase. The adjustment phase operates in titration mode, during which large pressure adjustments are made, and tuning mode, during which fine pressure adjustments are made to establish the optimum air pressure. First, a time interval and all event counters are reset202.
The system then establishes a baseline206, which is used for comparisons throughout the time interval. If the time interval has elapsed210, the system evaluates the event counters and makes adjustments as necessary. The time interval210inFIG.7is shown as 15 minutes, but any time period may be suitable. For example, early in the titration phase, smaller intervals of 5 minutes may be more appropriate, while later in the titration phase intervals of 30 minutes or more may be used. If the time interval has not elapsed210, the subject's airflow is compared to the baseline214. If the subject's current airflow drops below 70% of the baseline for 10 seconds or more214, the system evaluates the effects of a severe reduction in airflow. If the airflow drops to below 10% of the baseline218, the decrease in airflow may indicate an instance of apnea. In this situation, the subject's oxygen saturation is compared to the baseline222. If the subject's oxygen saturation has not decreased more than 3%222, the decrease in airflow is not an event at all, and the system returns to monitoring the subject's airflow210. If the subject's oxygen saturation does decrease more than 3%222, the system checks for a breathing effort226. If the subject is attempting to breathe, the event is considered an obstructive sleep apnea (OSA), and the OSAncount is increased by one230. The system then returns to monitoring the subject's airflow210. If, however, the subject is not attempting to breathe, the system continues to look for breathing effort226. If the subject does not attempt to breathe for more than 4 seconds234, the event is considered a central sleep apnea (CSA), and the CSAncount is increased by one238. The system then returns to monitoring the subject's airflow210. In contrast, if the subject does attempt to breathe within 4 seconds234, the event is considered a mixed sleep apnea, and the Mixedncount is increased by one242. The system then returns to monitoring the subject's airflow210. Returning to the airflow comparison218, if the subject's airflow is reduced below 70% of the baseline for 10 seconds or more, but the airflow does not drop to 10% of the baseline, the system evaluates the effects of a mild reduction in airflow. If the airflow does not drop to below 10% of the baseline218, the mild decrease in airflow may indicate an instance of hypopnea. In this situation, the subject's oxygen saturation is compared to the baseline246. If the subject's oxygen saturation has not decreased more than 4%246, the decrease in airflow is not an event at all, and the system returns to monitoring the subject's airflow210. If the subject's oxygen saturation does decrease more than 4%246, the system checks for a breathing effort250. If the subject is attempting to breathe, the event is considered an obstructive sleep hypopnea (OSH), and the OSHncount is increased by one258. The system then returns to monitoring the subject's airflow210. If, however, the subject is not attempting to breathe, the event is considered a central sleep hypopnea (CSH), and the CSHncount is increased by one254. The system then returns to monitoring the subject's airflow210. The system continues to monitor the subject throughout the time interval210. After the time interval is over, the system evaluates the subject's condition and calculates the next change in pressure. If the pressure should be adjusted260, an adjustment algorithm is applied.
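The counting-phase decision tree ofFIG.7condenses to a short classification routine. The sketch below is a simplified rendering under the thresholds recited above (70% and 10% of baseline airflow, 3% and 4% desaturation, and the 4-second effort window); all function and counter names are illustrative:

```python
def classify_event(flow_frac, spo2_drop_pct, effort_now, effort_within_4s):
    """Classify one sustained (>= 10 s) airflow reduction per FIG.7.
    flow_frac is current airflow as a fraction of the baseline.
    Returns a counter name, or None if the reduction is not scored."""
    if flow_frac >= 0.70:
        return None                     # 214: no qualifying reduction
    if flow_frac < 0.10:                # 218: severe reduction (apnea branch)
        if spo2_drop_pct <= 3.0:
            return None                 # 222: desaturation too small
        if effort_now:
            return "OSA"                # 226/230: obstructive sleep apnea
        return "Mixed" if effort_within_4s else "CSA"   # 234/242/238
    else:                               # mild reduction (hypopnea branch)
        if spo2_drop_pct <= 4.0:
            return None                 # 246: desaturation too small
        return "OSH" if effort_now else "CSH"           # 250/258/254

counters = {"OSA": 0, "CSA": 0, "Mixed": 0, "OSH": 0, "CSH": 0}
event = classify_event(0.05, 4.5, effort_now=False, effort_within_4s=True)
if event:
    counters[event] += 1                # scores a mixed sleep apnea
```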
InFIG.7, the system looks for any event260, but in other embodiments of the present invention the system could evaluate only a few variables, for example the number of CSA events. Optionally, the system could evaluate the ratio of counted events, changes in the number of events between time periods, or any other condition capable of being recorded or calculated by the system. InFIG.7, if no events have been detected (i.e., all event counters OSAn, CSAn, OSHn, CSHn, and Mixednare 0)260, the subject's condition is acceptable, and no treatment changes are required. The system then returns and resets the time interval and all event counters OSAn, CSAn, OSHn, CSHnand Mixedn202. At this point, the system is also capable of recording the previous counter values, recording the total number of events, and the like. In this way, the system can compare the subject's status between intervals. For example, the subject's status during the first time interval can be compared to the status during the current or final interval, or the subject's status can be evaluated over consecutive intervals. Such comparisons can provide information on, for example, trends and the overall effectiveness of the treatment. If an adjustment is appropriate260, the system determines if any central or central-based events have occurred262. If the subject has experienced a central sleep apnea, central sleep hypopnea, or mixed apnea event, the system sets the current pressure as a maximum threshold, and sets a flag to initiate the tuning phase of the titration266. The new maximum pressure Pmaxis the highest value of pressure that the system can now attain. Under no circumstances will the system automatically increase the air pressure beyond Pmax, although in some embodiments the pressure could be manually adjusted above the maximum value. After setting the maximum pressure and initiating the tuning mode, the system decreases the CPAP pressure by 2 cm H2O270. The system then returns and resets the time interval and all event counters OSAn, CSAn, OSHn, CSHnand Mixedn202. If no central or central-based events have occurred, the system checks to see if it is in tuning mode274. If the system is in tuning mode, and the subject has experienced an obstructive event (but not a central or central-based event)274, the system compares the result of the next pressure change to the maximum pressure290(established previously at266). If the next pressure increase of 1 cm H2O will be less than the maximum allowable pressure Pmax266, the system increases the pressure by 1 cm H2O. After making the adjustment, the system then returns to the counting phase and resets the time interval and all event counters OSAn, CSAn, OSHn, CSHnand Mixedn202. In contrast, if the next pressure increase will be greater than or equal to the maximum allowable pressure Pmax266, the titration is complete. The system no longer adjusts the gas pressure, although it may continue to count the subject's events and record other data through the remainder of the night. If the system is not in tuning mode274, the system evaluates if the obstructive events were apneas or hypopneas278. If the events were apneas, the system increases the gas pressure by 2 cm H2O282. If the events were hypopneas, the system increases the gas pressure by only 1 cm H2O286. In either case, after adjusting the pressure accordingly, the system then returns and resets the time interval and all event counters OSAn, CSAn, OSHn, CSHnand Mixedn202. The adjustment algorithm shown inFIG.7is relatively simple.
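For concreteness, the adjustment phase can likewise be sketched as a single function; it assumes the counters produced by the counting phase, and the returned tuple (new pressure, Pmax, tuning flag, completion flag) is an illustrative packaging rather than the patent's control flow:

```python
def adjust_pressure(p, counters, pmax, tuning):
    """One pass of the FIG.7 adjustment phase; pressures in cm H2O."""
    if not any(counters.values()):                 # 260: no events at all
        return p, pmax, tuning, False
    if counters["CSA"] or counters["CSH"] or counters["Mixed"]:
        pmax, tuning = p, True                     # 262/266: cap, start tuning
        return p - 2.0, pmax, tuning, False        # 270: back off 2 cm H2O
    if tuning:                                     # 274: fine adjustments only
        if pmax is not None and p + 1.0 >= pmax:   # 290: ceiling reached
            return p, pmax, tuning, True           # titration complete
        return p + 1.0, pmax, tuning, False
    if counters["OSA"]:                            # 278/282: apneas, +2
        return p + 2.0, pmax, tuning, False
    return p + 1.0, pmax, tuning, False            # 286: hypopneas, +1

pressure, pmax, tuning, done = adjust_pressure(
    8.0, {"OSA": 2, "CSA": 0, "Mixed": 0, "OSH": 1, "CSH": 0}, None, False)
```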
More sophisticated calculations and decisions can also be used. For example, the system can evaluate the trends occurring across time periods by considering how the number of detected events changes, or the system can use the ratio of central-type events to obstructive-type events to refine the changes in pressure. The system could also, for example, consider the number and type of adjustments previously made. Such a step would prevent system oscillation that can occur near the end of titration as the system attempts to refine the optimal pressure. The system could use a variety of analysis and calculation techniques, including lookup tables, fast-Fourier transforms, wavelet analysis, neural networks, and the like. AlthoughFIG.7depicts control of a CPAP machine, any appropriate treatment device may be used. In this situation, the treatment device control algorithm would be adjusted to consider the capabilities of the treatment device. For example, if the treatment device is a more advanced bi-level PAP machine, the treatment device control algorithm could adjust the inspiration air pressure only. The action taken can also vary. For example, the pressure can be increased by differing amounts depending on the phase of titration, or the number of prior adjustments, or the severity of the breathing events. As further illustration, if the system determines that the subject has a sleep-related breathing disorder that is untreatable with the current treatment device (for example, a CPAP device cannot deliver a sufficiently high pressure, or the treatment device is inappropriate for the subject's condition), the system can shut down or provide a safe pressure for the remainder of the night before recommending another treatment method. Although the titration phase ofFIG.7is triggered only by the end of the time interval210, various other conditions could also require pressure adjustments. For example, if the system detects a severe central apnea event, the gas pressure could be immediately reduced. Similarly, if the system detects several severe apneas in a single time period, the titration phase could begin before the end of the time period. Other safety mechanisms can be programmed into the system as well. For example, the system can be programmed to ignore the sensor signals if the data becomes corrupted (for example, if the sensor becomes disconnected). FIG.8shows a schematic view of one embodiment of the sleep disorder treatment system of the present invention. InFIG.8, a number of sensors420,424,418, and426are connected to a subject410. The subject410in this case is a human shown with a respiratory mask412, which is connected by an air hose or subject circuit416to a continuous positive air pressure device428. In this embodiment, the signal or data from one or more of these sensors is collected by a diagnostic device441, which comprises a radio436; an antenna434; and a microprocessor438for processing the data or signals to determine a level of severity of the subject's sleeping disorder or symptoms. The diagnostic device441calculates a level of severity for the subject's symptoms and physiological condition. The diagnostic device441then transmits a signal based on this level of severity by either a tether444or radio signal (not shown) to an actuator (not shown) in the CPAP device428, which controls the flow of air or gas provided to the subject by the air hose or subject circuit416.
The CPAP device428optionally connects to an oxygen tank430, which can be used to increase the concentration of oxygen in the air being delivered to the subject. Further optionally, the CPAP device428connects to a carbon dioxide tank431, which can be used to increase the concentration of carbon dioxide in the air being delivered to the subject. The CPAP device428could connect to both the oxygen tank430and the carbon dioxide tank431, only one of the tanks, or neither. In addition, optionally the treatment device428has a sensor in the air hose416, which can measure the differential pressure414and thereby accurately measure air flow provided to the subject. Also optionally, the device can have a nebulizer (not shown) with a reservoir and pump for injecting medication into the nebulizer. FIG.9shows a diagram outlining the treatment titration system in more detail. InFIG.9, a patient interface box16receives signals (not shown) from a respiratory belt500and a pulse oximeter504placed on the subject. The sensors500and504can be any of the sensors described herein or known in the art. In a simple embodiment of the present invention, the patient interface box16generates a wireless signal18encoded with data corresponding to the signals from the respiratory belt500and a pulse oximeter504. The patient interface box16transmits the wireless signal18to the base station40. InFIG.9, the wireless signal18is shown as radio frequency (RF). In this case, the patient interface box16generates a radio frequency signal18by frequency modulating a frequency carrier and transmits the radio frequency signal through the module antenna20. The base station40receives the radio frequency signal18through base antenna42, demodulates the radio frequency signal18, and decodes the data. It is understood that other wireless means can be utilized with the present invention, such as infrared and optical, for example. RF wireless transmission is preferred. Although one module antenna20and one base antenna42are shown in this embodiment, it is understood that two or more types of antennas can be used and are included in the present invention. An external programming means60, shown inFIG.9as a personal computer, contains software that is used to program the patient interface box16and the base station40through data interface cable62. The data interface cable62is connected to the base station40by connector64. Instead of a data interface cable62, the patient interface box16and the base station40can be programmed by radio frequency (or other types of) signals transmitted between an external programming means and the base station40and the patient interface box16, or to another base station40. RF signals, therefore, can be both transmitted and received by both patient interface box16and base station40. In this event, the patient interface box16also includes a module receiver29(shown inFIG.2) while the base station40also includes a base transmitter84(shown inFIG.3), in effect making both the patient interface box16and the base station40into transceivers. In addition, the data interface cable62also can be used to convey data from the base station40to the external programming means60. If a personal computer is the external programming means, it can monitor, analyze, and display the data in addition to its programming functions. The base receiver80and module receiver29(shown inFIG.3andFIG.2, respectively) can be any appropriate receivers, such as direct or single conversion types.
The base receiver80preferably is a double conversion superheterodyne receiver while the module receiver29preferably is a single conversion receiver. Advantageously, the receiver employed will have automatic frequency control to facilitate accurate and consistent tuning of the radio frequency signal18received thereby. The external programming means60also contains a processor used to calculate the next appropriate gas flow level to be delivered to the subject. In the illustrated embodiment, the external programming means60uses data originally collected from the respiratory belt500and the pulse oximeter504to calculate the appropriate flow level. The external programming means is capable of performing a variety of analysis and calculation techniques, including lookup tables, fast-Fourier transforms, wavelet analysis, use of a neural network, and the like. Optionally, the data processing and calculation can be performed by the base station40. Further optionally, the processing and calculation can be distributed between the patient interface box16, the base station40, and the external programming means60. After the appropriate flow level has been calculated, the external programming means60transmits a command signal to the treatment device interface518, which then relays the command signal to the treatment device522via a connection518. InFIG.9, the external programming means60transmits the command signal to the treatment device interface518via wireless RF signal512. The RF signal512is received by an RF antenna514on the treatment device interface518. Optionally, the command signal can be transmitted by any other wireless means. Although the command signal transmission512is shown inFIG.9to be of the same type as the sensor signal transmission18, this is not necessary. Optionally, the two wireless transmissions can be of different types. Optionally, the command signal can be transmitted from the external programming means60by a wired connection to the treatment device interface518. The treatment device interface518connects to the treatment device522with a connector518. InFIG.9, the treatment device522is shown as a CPAP device, but the treatment device522may be any device known in the art for the treatment of sleep-related breathing disorders, including but not limited to a bi-level PAP device, an auto-PAP or auto-CPAP device, an ASV device, and the like.FIG.9also shows the connector518as a USB connection. Optionally, the treatment device interface518can be completely enclosed within the treatment device522itself. In this case, the treatment device would be essentially modified to directly receive the command signal from the external programming means60. Once the treatment device522receives the command signal the treatment device performs the command and changes the treatment provided to the subject. InFIG.9, the treatment device522is a CPAP device, which increases or decreases the pressure of the gas delivered to the subject via conduit526and mask530. Optionally, the treatment device522may contain additional sensors. For example, if the treatment device522is a CPAP device, it may contain an air flow or air pressure sensor (not shown). If the treatment device522contains a sensor, that sensor information would be integrated into the command calculation process. This integration can take place in any of the system components. 
For example, the treatment device sensor information could be included in a processing step performed within the treatment device522, at the external programming means or at the base station40. FIG.10is a schematic of the remote data acquisition device and system of the present invention. InFIG.10, a wireless data acquisition system50is used to receive, filter, and optionally analyze signals27from sensors (not shown) on a subject (not shown). The wireless data acquisition system50transmits a signal based, at least in part, on one or more of the signals from the sensors on the subject. The data acquisition system50transmits a signal55preferably in real time from the subject's home52to a server70for analysis. The signal55is transmitted over the internet or other communication system58. Such other communication systems include satellites, cellular networks, local area networks (LAN), other wide area networks (WAN), or other telecommunications systems. If the signal55is transmitted over the internet58, preferably the signal55is transmitted using a cellular card provided by cellular providers such as, for example, Sprint, Cingular, AT&T, T-Mobile, Alltel, Verizon or the like. The signal55that is transmitted over the internet or other communication system58can be compressed to provide better resolution or greater efficiency. The server70performs data analysis (not shown). The analyzed data73is then entered into a database76. The analyzed data73in the database76is then accessible and can be requested79and sent to multiple review stations82anywhere in the world via the internet or other communications system58for further analysis and review by clinicians, technicians, researchers, doctors and the like. Signal84is a command signal for adjusting a parameter of the treatment device. For example, the signal84could instruct the PAP device to increase the pressure delivered to the subject. The communications systems used for data transmission need not be the same at all stages. For example, a cellular network can be used to transmit data between the subject's home52and the remote analysis server70. Then the internet can be used to transmit data between the remote analysis server70and the database76. Finally, in this example, a LAN can be used to transmit data between the database76and a review station82. FIG.9shows a diagram outlining the wireless data acquisition system in more detail. InFIG.9, a patient interface box85receives a signal (not shown) from a sensor91. This sensor91can be an EEG electrode (as shown) or any of the other sensors described herein or known in the art. Although one type of sensor91is shown, the patient interface box85is capable of accepting multiple signals from multiple sensors91. In a very simple embodiment of the present invention, the patient interface box85generates a wireless signal94encoded with data corresponding to the signal from the sensor91. The patient interface box85transmits the wireless signal94to the base station97. InFIG.9, the wireless signal94is shown as radio frequency (RF). In this case, the patient interface box85generates a radio frequency signal94by frequency modulating a frequency carrier and transmits the radio frequency signal through module antenna100. The base station97receives the radio frequency signal94through base antenna103, demodulates the radio frequency signal94, and decodes the data. It is understood that other wireless means can be utilized with the present invention, such as infrared and optical, for example.
RF wireless transmission is preferred. Although one module antenna100and one base antenna103are shown in this embodiment, it is understood that two or more types of antennas can be used and are included in the present invention. An external programming means106, shown inFIG.9as a personal computer, contains software that is used to program the patient interface box85and the base station97through data interface cable109. The data interface cable109is connected to the base station97by connector112. Instead of a data interface cable109, the patient interface box85and the base station97can be programmed by radio frequency (or other types of) signals transmitted between an external programming means106and the base station97and the patient interface box85, or to another base station97. RF signals, therefore, can be both transmitted and received by both patient interface box85and base station97. In this event, the patient interface box85also includes a module receiver133(shown inFIG.2) while the base station97also includes a base transmitter84, in effect making both the patient interface box85and the base station97into transceivers. In addition, the data interface cable109also can be used to convey data from the base station97to the external programming means106. If a personal computer is the external programming means106, it can monitor, analyze, and display the data in addition to its programming functions. The base receiver80and module receiver133(shown inFIG.5) can be any appropriate receivers, such as direct or single conversion types. The base receiver80preferably is a double conversion superheterodyne receiver while the module receiver133(shown inFIG.5) preferably is a single conversion receiver. Advantageously, the receiver employed will have automatic frequency control to facilitate accurate and consistent tuning of the radio frequency signal94received thereby. FIG.11is a diagram of an artifact rejection module750that can be used in either the data acquisition system (not shown) or a computer or processor (not shown) linked to the data acquisition unit of the present invention. InFIG.11, a subject's EEG signal752is preferably continuously fed754into artifact rejection algorithms within the data acquisition unit processor. Simultaneously, sensor signals760from the subject's movement or motion are also fed into the artifact rejection processor so the EEG signal can be corrected762for effects of abnormal or prejudicial motion by the subject. The sensors for determining the subject's motion are described above, but the most preferred is an accelerometer that is incorporated into the EEG data acquisition unit itself. A method for the detection and treatment of disordered breathing during sleep employing wavelet analysis is provided in which data related to respiratory patterns are analyzed with wavelet analysis, thus allowing for automatic continuous titration and adjustment of PAP and other treatment module therapy.
More specifically, this method according to one embodiment of the present invention comprises the following steps: placing a mask with a tube over a subject's airway, the mask being in communication with a source of a pressurized breathing gas controlled by a PAP, thereby establishing a respiratory circuit; periodically sampling the gas flow in the circuit; periodically sampling one or several other parameters related to the subject's physiological state; periodically calculating values for one or several parameters distinctive of a physiological pattern; periodically feeding the parameter values to a processing unit programmed to recognize physiological patterns characteristic of sleep disorders; analyzing the parameter values with wavelet analysis; controlling pressurized breathing gas supply and other treatment modules or devices in response to the output from the processing unit utilizing wavelet analysis. Each sensor and/or transducer may generate an analog signal representative of variables being monitored. The monitoring means may include means for amplifying and/or performing analog processing on the analog signal. The latter may perform filtering and/or other wave shaping functions. The processed signal may be fed to an analog to digital converter to convert each analog signal to a corresponding digital signal. Each digital signal may be fed to a digital processor such as a microprocessor or microcomputer. The digital processor includes software for deriving the subject's respiratory state. The software may include means such as an algorithm for determining from the data a gas pressure value which substantially prevents a deterioration of the respiratory state. Preferably, the algorithm utilizes wavelet analysis to detect and correct the respiratory event by changing one or several treatment parameters. The result may be used to control delivery of gas to the subject to cancel out or substantially compensate the effects of a sleeping or breathing disorder. In the event that the disorder is not substantially corrected, the software may be adapted to activate delivery of a drug such as albuterol, or ipratropium bromide, or the like. This may circumvent what may otherwise be a fatal or severe asthma attack. Other drugs or substances may be used depending on the subject's special needs. For example, oxygen (O2) or carbon dioxide (CO2) gas could be delivered to the subject. As mentioned earlier, these gases can be used to aid in respiration: oxygen can mitigate or relieve the effects of many apneas, while a dose of carbon dioxide gas can be used to trigger respiratory effort in central and complex apneas. The software may additionally be adapted to determine quantity requirements of the drug, gas or other therapeutic agent. The latter may be based on the subject's history and the extent to which the disorder fails to respond to traditional gas pressure treatment. These drugs and therapeutic agents could be delivered by any means known in the art, but could include nebulizers, pressurized gas delivery, intravenous auto-injection, or simply allowing the air to flow over a piece of dry ice to sublimate carbon dioxide into the subject's breathing air. For a better understanding of the detailed description of the invention, it is necessary to present an overview of the wavelet analysis of the present invention.
The wavelet analysis of the present invention preferably represents a signal as a weighted sum of shifted and scaled versions of the original mother wavelet, without any loss of information. A single wavelet coefficient is obtained by computing the correlation between the scaled and time shifted version of the mother wavelet and the analyzed part of a signal. For efficient analysis, scales and shifts take discrete values based on powers of two (i.e., the dyadic decomposition). For implementation, filter bank and quadrature mirror filters are utilized for a hierarchical signal decomposition, in which a given signal is decomposed by a series of low- and high-pass filters followed by downsampling at each stage, seeFIG.3. This analysis is referred to as Discrete Wavelet Transform (DWT). The particular structure of the filters is determined by the particular wavelet family used for data analysis and by the conditions imposed for a perfect reconstruction of the original signal. The approximation is the output of the low-pass filter, while the detail is the output of the high-pass filter. In a dyadic multiresolution analysis, the decomposition process is iterated such that the approximations are successively decomposed. The original signal can be reconstructed from its details and approximation at each stage (e.g., for a 3-level signal decomposition, a signal S can be written as S=A3+D3+D2+D1), seeFIG.13. The decomposition may proceed until the individual details consist of a single sample. The nature of the process generates a set of vectors (for instance a3, d3, d2, and d1 in the three-level signal decomposition), containing the corresponding coefficients. These vectors are of different lengths, based on powers of two, seeFIG.14. These coefficients are the projections of the signal onto the mother wavelet at a given scale. They contain signal information at different frequency bands (e.g., a3, d3, d2, and d1) determined by the filter bank frequency response. DWT leads to an octave band signal decomposition that divides the frequency space into the bands of unequal widths based on powers of two, seeFIG.15. The Stationary Wavelet Transform (SWT) is obtained in a similar fashion; however, the downsampling step is not performed. This leads to a redundant signal decomposition with better potential for statistical analysis. The frequency space division is the same as for DWT, seeFIG.6. Despite its high efficiency for signal analysis, DWT and SWT decompositions do not provide sufficient flexibility for a narrow frequency bandwidth data analysis (FIG.13). Wavelet packets, as a generalization of standard DWT, alleviate this problem. At each stage, details as well as approximations are further decomposed into low and high frequency signal components.FIG.13shows the wavelet packet decomposition tree. Accordingly, a given signal can be written in a more flexible way than provided by the DWT or SWT decomposition (e.g., at level 3 we have S=A1+AD2+ADD3+DDD3, where DDD3 is the signal component of the narrow high frequency band ddd3). Wavelet packet analysis results in signal decomposition with equal frequency bandwidths at each level of decomposition. This also leads to an equal number of approximation and detail coefficients, a desirable feature for data analysis and information extraction.FIG.15illustrates frequency bands for the 3-level wavelet packet decomposition.
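The decompositions summarized above are available in standard software. The following sketch uses the PyWavelets package; the 'db4' wavelet and the 1024-sample test signal are arbitrary illustrative choices:

```python
import numpy as np
import pywt

s = np.random.randn(1024)                 # a finite signal of dyadic length

# DWT: filter bank with downsampling -> a3, d3, d2, d1 of unequal lengths
a3, d3, d2, d1 = pywt.wavedec(s, "db4", level=3)

# Perfect reconstruction S = A3 + D3 + D2 + D1
recon = pywt.waverec([a3, d3, d2, d1], "db4")
assert np.allclose(s, recon[: len(s)])

# SWT: same filter bank without downsampling -> redundant decomposition
swt_coeffs = pywt.swt(s, "db4", level=3)  # every band has len(s) samples

# Wavelet packets: details are decomposed too (e.g. the ddd3 band at level 3)
wp = pywt.WaveletPacket(s, "db4", maxlevel=3)
ddd3 = wp["ddd"].data
```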
Specifically, in our application, wavelets were adopted due to their suitability for the analysis of non-stationary or transitory features, which characterize most signals found in biomedical applications. Wavelet analysis uses wavelets as basis functions for signal decomposition. In the present invention, the use of the wavelet transform significantly reduces the computational complexity when performing the task of assessing the subjects' physiological state based on the acquired signal or signals. Neither a large number of reference signals nor an extensive amount of clinical data is needed to produce the index disclosed herewith. This invention involves an observed data set acquired in real-time from a subject. This data set is further compared, in real time, with one or more reference data sets which characterize distinct physiological states. The comparison yields an index that is later referred to as the WAVelet index (abbreviated WAV). The WAVelet index can then be used to assist in distinguishing among the various physiological states, in distinguishing increasing and decreasing rates of respiration, in distinguishing increasing and decreasing levels of both obstructive and central airway apneas, and in distinguishing increasing and decreasing respiratory flow rates and the like. The observed and reference data sets are statistical representations of the wavelet coefficients obtained by applying a wavelet transform onto corresponding observed and reference signals. These coefficients may be obtained through a wavelet transform of the signal such as standard dyadic discrete wavelet transform (DWT), discrete stationary wavelet transform (SWT), or wavelet packet transform. In this respect, filters yielding coefficients in a frequency band, chosen such that their statistical representation differentiates between respiratory states, can be used for this type of analysis. The choice of this transformation determines the computational complexity of the method and the resolution of the final index. The observed and reference data sets are obtained by calculating a statistical representation of the transformation coefficients. The reference data sets represent distinct physiological states taken from the continuum from normal (i.e. no irregularities) to full apnea (i.e. complete lack of ventilation). They can be extracted off-line from a group of subjects. They are then stored for real-time implementation. The transformation selected maximizes the dissimilarity between each of the reference data sets. The comparison of the observed data set against the reference data sets can be based on the computation of the correlation between these functions. However, a computationally less demanding solution is to quantify the similarity between these functions by computing the L1 (Manhattan), L2 (Euclidean), or any other distance metric. In the preferred embodiment, where two reference data sets are used, the result of this comparison yields two values, each expressing the likelihood that the subject's physiological state is normal or irregular, and to what degree. These two values are further combined into a single value corresponding to a univariate index of normal/irregular physiological state, the WAVelet index. This value corresponds to the type and level of the condition, which is used to create a proper control signal to the gas flow generator, or turbine, or to other treatment modules.
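A compact sketch of this index computation follows; the wavelet family ('db2'), the fixed histogram range, and the synthetic stand-ins for the reference data sets are assumptions made purely for illustration:

```python
import numpy as np
import pywt

def coeff_pdf(x, bins=100, hist_range=(-5.0, 5.0)):
    """PDF (normalized histogram) of the d1 SWT coefficients of signal x."""
    a1, d1 = pywt.swt(np.asarray(x, dtype=float), "db2", level=1)[0]
    hist, _ = np.histogram(d1, bins=bins, range=hist_range)
    return hist / d1.size

def wav_index(x, pdf_w, pdf_a, scale=100.0, offset=0.0):
    pdf = coeff_pdf(x)
    i_w = np.abs(pdf - pdf_w).sum()       # L1 distance to non-apneic reference
    i_a = np.abs(pdf - pdf_a).sum()       # L1 distance to apneic reference
    i = i_a - i_w                         # equation (12)
    return i * scale + offset             # equation (13): WAV_unfilt

# Reference data sets would be derived off-line (population or self-norming);
# random signals stand in for recorded non-apneic and apneic data here.
gen = np.random.default_rng(0)
pdf_w = coeff_pdf(gen.standard_normal(1024))
pdf_a = coeff_pdf(0.1 * gen.standard_normal(1024))
wav_unfilt = wav_index(gen.standard_normal(1024), pdf_w, pdf_a)
```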
Most any variant of PAP or CPAP therapy, such as bi-level CPAP therapy or therapy in which the mask pressure is modulated within a breath, can also be monitored and/or controlled using the methods described herein. Less complex variants of PAP or CPAP therapy could be used, but the benefits would be much less apparent. The following figures give a more detailed description of example control algorithms of the present invention. This example deals specifically with control of respiratory gas flow using high noise physiological signals, although other treatment parameters can be modified using the same method with proper modification. Also, the respiratory gas flow control and the other treatment methods can be used concurrently to correct the subject's physiological state. Some parts of this example embodiment may not be needed in some applications depending on the level of noise associated with a particular physiological signal.FIG.16gives an overview of the wavelet analysis functions of the present invention in its preferred embodiment. The invention is based on the wavelet decomposition of the sensor signals in the wavelet analyzer unit862. This unit862applies the wavelet transform onto the finite signal delivered by the preprocessing unit860, and then extracts the observed data set872correlated to the respiratory state from the corresponding wavelet coefficients. This feature function is further delivered to the comparator unit864, where it is compared with two reference data sets874,876corresponding to the respiratory state. These reference data sets are calculated off-line and stored in66for the real time comparison in the comparator864. The result of comparison is further integrated into an index of respiration, which is the input of the scaling868and filtering870units. Parts of the processing unit878contained in the controller38that involve signal analysis are detailed in the following. Pre-Processing Unit The basic function of the pre-processing unit860is to further "clean up" the signal being analyzed and to reject finite signals that contain artifacts or are corrupted. The exact operation of the preprocessing unit will heavily depend on the type of sensor and physiological parameter being monitored. The following description is supplied to give a simple overview of the basic function of the preprocessing unit and a possible method of implementation. Once a finite signal has been acquired, it is sent to the pre-processing unit, seeFIG.12. It is first stored as a vector x882of length N. The mean value x̄ = (1/N)·Σk=1..N xk is removed884. The root mean square amplitude886of the finite signal is then calculated as: rms = sqrt((1/N)·Σk=1..N xk²) (9). Finite signals with amplitudes greater than some maximum value or less than some minimum value are then rejected. It is assumed that they either contain artifacts or the data is corrupted, possibly due to disconnection of a sensor. If the amplitude is within the two bounds892, a flag894indicating that the finite signal is not corrupted takes the value 1. In this case, the finite signal is normalized896as: xk = xk/rms, k = 1, . . . , N (10). The amplitude normalization allows better focus on the phase and frequency content of the finite signal, rather than its amplitude. So amplitude normalization is especially well suited for bio-potential measurements such as EEG, EMG, or ECG. If an artifact is present888, the flag is put to 0 and the algorithm proceeds to the scaling unit811.
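The pre-processing computations just described, equations (9) and (10), reduce to a few lines of Python; the amplitude bounds below are placeholders, since the actual rejection thresholds depend on the sensor being monitored:

```python
import numpy as np

def preprocess(x, rms_min=0.05, rms_max=50.0):
    """Mean removal, RMS amplitude check (eq. 9), and normalization (eq. 10).
    Returns (flag, normalized signal); flag 0 marks a corrupted epoch."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the mean value (884)
    rms = np.sqrt(np.mean(x ** 2))        # RMS amplitude (886), equation (9)
    if not (rms_min < rms < rms_max):     # artifact or corrupted data (888)
        return 0, None                    # flag = 0, skip further analysis
    return 1, x / rms                     # flag = 1 (894), equation (10)

flag, x_norm = preprocess(np.sin(np.linspace(0.0, 6.28, 256)))
```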
Returning to the flow ofFIG.12, if normal breathing is detected890, the flag takes the value 1 and the variable WAV_unfilt898takes the value of 0. The apparatus then proceeds to send the signal to the filtering unit870. The apparatus then proceeds to the next stage (i.e. the wavelet analyzer unit denoted by862inFIG.12andFIG.16). Wavelet Analyzer Unit The wavelet analyzer unit862first calculates the wavelet coefficients by applying the SWT and the wavelet filter to the pre-processed finite signal. The coefficients are obtained by convolution of the signal with the wavelet filter. The coefficients corresponding to the band selected in the off-line analysis as the most discriminating (in this embodiment: d1) are then stored in a vector C. The probability density function is then obtained by calculating the histogram of the coefficients in vector C. The histogram vector contains b elements, where b is the chosen number of bins (e.g. 100). Each element of this vector is then divided by the total number of coefficients in the d1 band, i.e. by the length of the vector C. The result is a vector pdf of length b, which represents the probability density function of wavelet coefficients in the d1 band obtained by the wavelet decomposition of the finite signal x. Comparator Unit The resulting pdf vector is input into the comparator unit864, seeFIG.17. This unit compares the pdf vector of a current signal872with two reference vectors pdfw and pdfa, representing two known respiratory states: non-apneic874and apneic876. The non-apneic reference data set874is derived from a combination of signals obtained from a group of healthy subjects (population norming). This reference data set can then be stored on a mass storage device for future real time comparison. Another possibility is to record the subject's respiratory signals while the subject is in a non-apneic state, and then derive the reference data set (self-norming). The apneic reference data set876is the PDF of the wavelet coefficients of an apneic signal, which is either derived or recorded from an actual subject and which mimics the most severe level of apneas. The comparison900between the pdf872calculated in the wavelet analyzer unit862and the two reference data sets pdfw874and pdfa876is achieved using the L1 distance metric. This comparison yields two values iw902and ia906. An index i908is then generated by calculating904the difference between iw902and ia906: i = ia − iw (12). The output of the comparator unit is then input to the scaling unit868. Scaling Unit The index i908is scaled in order to take values between 0% (corresponding to an apneic signal) and 100% (corresponding to the non-apneic baseline), with higher values indicating a higher level of respiratory function: i = i·scale + offset (13), where scale and offset are two fixed values calculated in the off-line analysis. The result of the scaling is further stored in the variable WAV_unfilt898. Filtering Unit The variable WAV_unfilt898contains the unfiltered version of the final WAVelet index. The random character of some signals dictates that in order to extract a more representative trend of the subject's respiratory state it may be necessary to smooth this variable using a filter. A new value WAV_unfilt is delivered by the scaling unit868for every finite signal (i.e. every second or every fraction of a second in the preferred embodiment).
However, note that if the current epoch is corrupted with an artifact, the variable WAV_unfilt can take an arbitrary value, as it will not be used to derive the final value of the index. The result of the averaging filter is stored in the variable WAV. However, when calculating the average, only uncorrupted finite signals are taken into account by investigating the corresponding flag variable. The WAV variable is finally sent to the controller838which then produces the appropriate command signal. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit and scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents. | 181,205 |
11857334 | DETAILED DESCRIPTION Almost 80 years ago, Gibbs, Gibbs and Lenox demonstrated that systematic changes can occur in electroencephalogram ("EEG") and patient arousal measurements with increasing doses of administered ether or pentobarbital. They recognized the practical application of these observations to be used as measures of depth-of-anesthesia. Several subsequent studies reported on the relationship between electroencephalogram activity and the behavioral states of general anesthesia. Faulconer showed in 1949 that a regular progression of the electroencephalogram patterns correlated with the concentration of ether in arterial blood. Linde and colleagues used the spectrum—the decomposition of the electroencephalogram signal into the power in its frequency components—to show that under general anesthesia the electroencephalogram was organized into distinct oscillations at particular frequencies. Bickford and colleagues introduced the compressed spectral array or spectrogram to display the electroencephalogram activity of anesthetized patients over time as a three-dimensional plot (power by frequency versus time). Fleming and Smith devised the density-modulated or density spectral array, the two-dimensional plot of the spectrogram for this same purpose. Levy later suggested using multiple electroencephalogram features to track anesthetic effects. Since the 1990s, depth-of-anesthesia resulting from administration of anesthetic agents has been tracked using EEG recordings and behavioral responses. Herein, the inventors recognize that narcotic or pharmaceutical drugs often responsible for overdose can be either identical to routinely-used anesthetic drugs (e.g. opioids), or in the same drug class (e.g. clonidine and dexmedetomidine as alpha-2 adrenergic agonists), or work through similar mechanisms (e.g., gamma-aminobutyric acid or GABA). Therefore, it is a discovery of the present disclosure that EEGs and other physiological measurements may be utilized to identify or characterize drugs affecting a subject. In particular, based on pre-determined signatures, a drug profile identifying the different drugs or drug doses acting on the subject can then be determined. It is also recognized herein that different levels and types of intoxication can affect the amplitude and duration of the evoked brain-wave response to sensory stimuli. Therefore, in some aspects, sensory stimulations can also be administered using systems and methods herein to elucidate higher or lower-order cognitive processing involvement. For instance, time-locked sensory stimuli, such as visual, auditory, tactile, olfactory, and other stimuli, may be provided. Evoked patient-specific responses, detected using measurements of EEG activity, for instance, could then be compared with pre-determined baseline responses. In addition, the present systems and methods may also be used to assess mental status. For instance, questions from standardized clinical tests, such as the "miniature mental status exam," or reaction time or memory tests may be provided, and performance on the tests may be compared to established or derived baseline ranges to indicate differences or deviations from normal. The results may then be used to determine mental status, level of intoxication, or patient response to therapy, for instance.
In some aspects, a combination of physiological measurements, stimulation and performance measurements may be utilized to identify drugs affecting a patient as well as determine mental states or physiological stability of a patient. In this manner, drug levels, intoxication or overdose, as well as a patient's condition or response to therapy, may be identified. As will become apparent from the description below, systems and methods provided herein afford a number of advantages not previously achievable. This is because patients under the influence often do not know, or are unable to inform on, the type(s) and amounts of drugs taken, thereby making treatment challenging. Also, the time course of drug effects and appropriateness and efficacy of reversal medications can be difficult to ascertain. Moreover, standard toxicology tests available in the clinic cannot indicate the patient's individual state or likely drug response due to patient variability. To solve these problems, the present disclosure provides systems and methods that can determine a subject's specific drug profile or mental state, thereby providing valuable information that allows clinicians, or other professionals, to act accordingly. Therefore, the present disclosure provides a significant technological improvement in a variety of fields, including patient monitoring for clinical applications, law-enforcement, as well as commercial and industrial applications. Turning now toFIGS.1A and1B, block diagrams of an example system100, in accordance with aspects of the present disclosure, are shown. In some embodiments, the system100may be any general-purpose computing system or device, such as a personal computer, workstation, cellular phone, smartphone, laptop, tablet, and the like. In this regard, the system100may be a system designed to integrate a variety of software, hardware, capabilities and functionalities. Alternatively, and by way of particular configurations and programming, the system100may be a special-purpose system or device, such as a dedicated monitoring or drug detection/overdose system or device. In some implementations, the system100can be hand-held, portable or wearable. Also, the system100may operate autonomously or semi-autonomously based on user feedback or other input or instructions. Furthermore, the system100may operate as part of, or in collaboration with, various computers, systems, devices, machines, mainframes, networks, and servers. As shown inFIGS.1A and1B, the system100may generally include a sensor assembly10and a monitoring unit15including an input102, at least one processor104, a memory106, an output108, and a communication network110. The system100may include one or more housings containing the various elements of the system100. In particular, the sensor assembly10may include one or more sensors for detecting physiological signals from a subject. By way of example, the sensor assembly10may include various EEG sensors, galvanic skin response (GSR) sensors, electrocardiographic sensors, heart rate sensors, blood pressure sensors, oxygenation sensors, oxygen saturation (SpO2) sensors, ocular microtremor sensors, and others. As such, the sensor assembly10may also include various elements or components, such as harnesses, headbands, caps, straps, belts and the like, configured to secure the sensors to the subject.
In some configurations, the sensor assembly10may also include various electronic and hardware components connected, or connectable, to the sensors that are configured for pre-processing the physiological signals detected by the sensors (e.g. amplifiers, filters, integrated circuits, microchips, analog-to-digital and digital-to-analog converters, and so on). In one embodiment, the sensor assembly10includes a number of electrodes. The electrodes may include scalp electrodes, intracranial electrodes, cutaneous electrodes, subdural electrodes, subcutaneous electrodes, and others, configured to detect brain activity in a subject. The electrodes may be arranged in an array, and constructed using various materials including conductors, semiconductors, insulators, adhesives, plastics, and other materials. For example, the electrodes may be formed using gold, silver, copper, aluminum, stainless steel, tin, and other materials. Movement of the subject may also be helpful in identifying the specific drug(s) affecting a subject, or in determining the efficacy of a treatment. Therefore, the sensor assembly10may also include position, orientation, and/or movement sensors (e.g. accelerometers, GPS sensors, and others) for detecting the position, orientation or movement of the subject. Furthermore, in some configurations, the sensor assembly10may also include various elements or devices that are configured to stimulate a subject by way of visual, auditory, tactile, olfactory, and other stimuli. In some configurations, position/orientation/movement sensors, stimulation elements/devices, and other sensors of the sensor assembly10, may be included in the housing of the monitor unit15, or in a separate housing or assembly, as shown inFIG.1. The input102may include various input elements configured for receiving selections and operational instructions from a user or subject. For example, the input102may include a mouse, keyboard, touchpad, touch screen, microphone, buttons, and the like. The input102may also include various drives and receptacles for receiving various data and information, such as flash-memory drives, USB drives, CD/DVD drives, and other receptacles. Generally, the processor104may be configured to carry out various steps for operating the system100. As such, the processor104may include one or more general-purpose processors, such as computer processing units (CPUs), graphical processing units (GPUs), and so on. In some implementations, the processor104may also be configured to perform methods, or steps thereof, in accordance with aspects of the present disclosure. To do so, the processor104may be programmed to execute instructions corresponding to the methods or steps. As shown inFIGS.1A and1B, such instructions may be stored in the memory106, in the form of non-transitory computer-readable media112, or alternatively elsewhere in a data storage location. Alternatively, or additionally, the processor104may also include various processing units or modules that are hard-wired or pre-programmed to execute such instructions. As such, the processor104may be an application-specific processor, or include one or more application-specific processing units or modules. The memory106may store a variety of information and data, including instructions executable by the processor104, as described. In some implementations, the memory106may have stored therein various pre-determined markers, indicators or signatures associated with different drugs, drug classes, drug combinations, and drug doses.
Alternatively, or additionally, these may be stored in another accessible data storage location, such as a database or server. The markers, indicators or signatures may be stored in the form of specific numerical values, numerical patterns, or other data representations, tabulated using various categories, including drug type, drug class, drug dose, drug combination, patient characteristics (e.g. age, size, weight, medical condition), and others. Pre-determined markers, indicators or signatures may be generated using measurements obtained from one or more individuals. Such measurements include neural or EEG measurements, GSR measurements, electrocardiographic measurements, heart rate measurements, blood pressure measurements, oxygenation measurements, SpO2 measurements, ocular microtremor measurements, and so forth. Additionally, or alternatively, the markers, indicators or signatures may be generated based on movement, position, and orientation measurements. In some aspects, pre-determined markers, indicators or signatures may be generated using various deep learning algorithms or other machine learning techniques. In particular, EEG markers, indicators or signatures can be specific to certain drugs, drug classes, drug combinations, drug doses and patient characteristics. These may be reflected in EEG signal amplitudes, EEG spectral power or power distribution, EEG spatio-temporal correlations, EEG phase-amplitude couplings, EEG bursts (i.e. high-amplitude EEG activity) or burst suppression (i.e. isoelectric silence), EEG coherence, EEG synchrony, as well as trends or changes therein. For purposes of illustration,FIGS.3A-Bshow non-limiting examples of EEG markers, indicators or signatures, in accordance with aspects of the present disclosure. Specifically,FIG.3Ashows features in spectrograms302that are distinct for dexmedetomidine (304), fentanyl (306) and propofol (308). As apparent from the figure, the spectrograms302exhibit different spectral power distributions depending on the drug being administered. Drug-specific distinctions may also be apparent in the time-series data304, as shown. In addition,FIG.3Billustrates a correlation between the dose of an administered drug (e.g. fentanyl) and spectral power distribution. As shown in the spectrogram ofFIG.3B, power distribution can change based on drug dose. During anesthesia, auditory stimuli can elicit evoked potentials across various electrodes positioned about a subject. By way of example,FIG.8shows a graph of mean evoked response potential (ERP) amplitudes obtained from eight subjects undergoing induction and emergence from anesthesia using propofol. As shown in the figure, ERP amplitudes are generally small at baseline, sedation and post-loss-of-consciousness (LOC). By contrast, ERP amplitudes are large after propofol is turned off and before return-of-consciousness (ROC). Therefore, it is envisioned that ERP signatures, as ascertained from amplitude, duration, spectral power, and other features, may be used to determine a state of a subject, in accordance with aspects of the present disclosure. Referring again toFIGS.1A and1B, the output108may be configured to provide a report to a user. In addition, in some configurations, the output108may also include various elements or devices to stimulate a subject, for instance, using visual, auditory, tactile, olfactory, and other stimuli.
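The spectral signatures illustrated inFIG.3Asuggest band power as one concrete form an EEG marker could take. The sketch below computes such band powers with a standard Welch estimate; the band definitions are conventional EEG bands and are assumptions made here for illustration, not values taken from the disclosure.

import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(eeg: np.ndarray, fs: float = 250.0) -> dict:
    # Welch periodogram, then integrate the power over each band.
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
    df = f[1] - f[0]
    return {name: float(pxx[(f >= lo) & (f < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}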
As such, the output108may include various output or stimulation elements, including displays, screens, speakers, LCDs, LEDs, vibration elements, tactile or textured elements, olfactory elements, temperature elements, scent elements, and so on. Although the output108is shown inFIGS.1A and1Bas a single element, as mentioned, the output108may include multiple output elements, not all of which need to be contained in the same housing or be part of the same device. For example, as shown inFIG.6, a visual stimulus may be provided to a subject via a separate visual device (e.g. wearable glasses or goggles). In some configurations, output or stimulation elements may be alternatively, or additionally, included in the sensor assembly10, as described. The system100may be configured to communicate wirelessly with various external computers, systems, devices, machines, cellular towers, cellular/mobile networks, mainframes, and servers. As such, the system100may also include a number of communication modules114configured for transmitting and receiving signals, data and information wirelessly. In some implementations, as shown inFIG.1B, the communication module114includes a monitor telemetry unit116, configured to communicate wirelessly (e.g. using WiFi or Bluetooth) with a sensor telemetry unit118on the sensor assembly10. The communication module114may also be capable of wired communication, as shown inFIG.1A. To this end, the communication module114may include various communication hardware and ports to facilitate communication. Example ports include serial ports, parallel ports, Digital Visual Interfaces, DisplayPorts, eSATA ports, SCSI ports, PS/2 ports, USB ports, Ethernet ports, and others. In some implementations, the processor104may be configured to control the sensor assembly10via the communication module114to acquire physiological signals, and other information associated with a subject. Specifically, the processor104may control the operation of sensors in the sensor assembly10, as well as other sensors or hardware configured in the sensor assembly10or monitoring unit15, such as various positional/movement sensors, digitizers, samplers, filters, data acquisition cards, and so on. The processor104may then generate or receive data corresponding to the acquired signals, and analyze the data to identify data features or signatures corresponding to one or more drugs, or drug doses. In the analysis, the processor104may assemble and process the data in any number of ways. In particular, the processor104may assemble a time-series or a time-frequency representation of the data. For example, the processor104may generate power and coherence spectra or spectrograms, using the EEG signals, by performing an analytic decomposition or applying a multi-taper technique. The processor104may carry out a number of other processing or pre-processing steps to generate markers, indicators or signatures indicating an influence of drugs on the subject. As described, the processor104may also take into consideration patient characteristics in the analysis. By correlating the generated markers, indicators or signatures with pre-determined information, the processor104may then determine the drug profile of the subject. In doing so, the processor104may compare individual signal/data features or patterns to pre-determined ones stored in a memory or database, as described.
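One way to realize the multi-taper time-frequency analysis mentioned above is with DPSS (Slepian) tapers. The following is a minimal sketch, with the window length, step size, time-bandwidth product and taper count all assumed for illustration rather than specified by the disclosure.

import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrogram(x, fs=250.0, win_s=2.0, step_s=0.25, nw=3, k=5):
    n, step = int(win_s * fs), int(step_s * fs)
    tapers = dpss(n, NW=nw, Kmax=k)            # k orthogonal Slepian tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = []
    for start in range(0, len(x) - n + 1, step):
        seg = x[start:start + n]
        # Average the spectra of the k tapered copies to reduce variance.
        p = np.mean(np.abs(np.fft.rfft(tapers * seg, axis=1)) ** 2, axis=0)
        spec.append(p)
    return freqs, np.array(spec)               # power indexed by (time, frequency)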
For instance, the processor104may compare individual measurements to pre-determined thresholds, values or ranges corresponding with specific drugs, drug types or classes, drug combinations, or drug doses. In addition, the processor104may also compare assembled waveforms or time-series data, spectra, spectrograms, and features or patterns therein, with pre-determined ones. The drug profile determined by the processor104may characterize the various drugs affecting the subject. For example, the drug profile may identify the drug (e.g. by drug type or drug class), or combinations of drugs present in a subject's body, as well as the respective drug doses (e.g. absolute or relative dose). Non-limiting examples of drugs in a drug profile may include opioids, stimulants, depressants, benzodiazepines, tetrahydrocannabinols, alpha-2 agonists, ketamines, clonidines, alcohol, and so on. In some implementations, the processor104may also be configured to control the output108, or other stimulation elements or devices in the sensor assembly10or monitoring unit15, to provide various stimuli and performance tests to the subject. Based on the response to the stimulus, input by the subject to the performance test, as well as physiological measurements, the processor104may determine a drug profile, or the mental state or mental status of the subject. To do so, the processor104may apply various algorithms taking into consideration markers, indicators or signatures based on physiological measurements, movement/position/orientation information, stimulus measurements, and performance measurements, as well as the strength of correlation of these measurements with specific drugs or mental status. The processor104may then generate a report and provide it to a user via the output108, for instance, in substantially real time. The report may have any form and include a variety of information. For instance, as shown in the non-limiting example ofFIG.4, the report may provide an indication of measured brain activity and acquired physiological measurements, as well as a subject's drug profile, among other information. The report may also indicate performance or stimulus results, as shown in the non-limiting examples ofFIGS.6and7. In addition, the report may provide additional information to a user, for instance, in the form of instructions for a performance test, patient information/characteristics, and so forth, as shown inFIG.7. Turning now toFIG.2, a flowchart setting forth steps of a process200, in accordance with aspects of the present disclosure, is shown. The process200, or various steps therein, may be carried out using any suitable device, apparatus or system, such as the system100described with reference toFIG.1. Steps of the process200may be implemented as a program, firmware, software, or instructions that may be stored in non-transitory computer readable media and executed by a general-purpose, programmable computer, processor or other suitable computing device. In some implementations, steps of the process200may also be hardwired in an application-specific computer, processor or dedicated module. The process200may begin at process block202with controlling one or more sensors of a monitoring device to acquire physiological signals from a subject that is under, or suspected to be under, the influence of one or more drugs. As described, this may include acquiring EEG signals, EMG signals, heart rate signals, and others.
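As a hedged illustration of comparing measurements to pre-determined thresholds, values or ranges, the sketch below checks extracted band powers against stored per-drug ranges; the stored entries and numeric values are invented placeholders, not actual signatures from the disclosure.

# Invented example signatures: band-power ranges per drug (not real values).
SIGNATURES = {
    "propofol":        {"alpha": (5.0, 9.0), "beta": (0.0, 2.0)},
    "dexmedetomidine": {"alpha": (0.0, 2.0), "delta": (4.0, 8.0)},
}

def match_drug_profile(markers: dict) -> list:
    # A drug matches when every stored range contains the observed marker;
    # a missing marker compares as NaN and therefore never matches.
    hits = []
    for drug, ranges in SIGNATURES.items():
        if all(lo <= markers.get(band, float("nan")) <= hi
               for band, (lo, hi) in ranges.items()):
            hits.append(drug)
    return hits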
In some aspects, the physiological signals may be acquired before, during or after a stimulus or performance test is provided to the subject. Alternatively, data corresponding to the physiological signals may be retrieved from a memory or other data storage location. As described, in addition to physiological signals, other measurements may be acquired at process block202, including position, orientation, or movement measurements. Then, a processing of the acquired physiological signals, and other measurements, may be carried out. For example, the physiological signals and other measurements may be pre-processed (e.g. sampled, filtered, scaled, digitized, integrated, and so forth). In some aspects, as indicated by process block204, the physiological signals may be assembled into a set of physiological data that reflects a specific form or data representation, such as waveform or time-series, spectral or time-frequency representations, and others. To this end, various data processing or transformation techniques may be applied at process block204. For example, an analytic decomposition of the EEG signals (e.g. multitaper) may be used to generate a time-frequency representation of the EEG signals. The set of physiological data, and other acquired measurement data, may then be analyzed at process block206to generate various markers, indicators or signatures characteristic of the influence of the drug(s) on the subject. As described, this may include identifying specific features or patterns in the data. This analysis may be carried out using various algorithms configured to identify such features or patterns. For example, one algorithm might identify whether measurements, or quantities derived therefrom, exceed one or more pre-determined thresholds or ranges. Another algorithm might identify a trend or change from a baseline or reference. Yet another algorithm might identify a spatio-temporal pattern, or a change thereof. Then, at process block208, a drug profile of the subject may be determined by correlating the physiological markers, and other indicators or signatures, with a drug profile characterizing the drug(s) affecting the subject. As described, the drug profile may indicate various drugs, drug classes, doses and combinations of drugs, such as opioids, benzodiazepines, alpha-2 agonists, ketamine, alcohol and others, affecting a subject. As described, responses of the subject to a stimulus, queries or a performance test may be used to aid in determining the drugs affecting the subject. In addition, such responses may be indicative of underlying mental states. Such mental states can reflect various cognition or brain conditions, mental status, mental capacity, memory, likelihood of response to therapy, stability of the subject's physiological status, and so on. Therefore, in conjunction with, or separate from, the analysis at process blocks206and208, an analysis may also be performed to identify a mental state of the subject. To this end, the analysis may be performed using various physiological markers, indicators or signatures, generated based on measurements corresponding to an administered stimulus or performance test. A report may then be generated and provided, as indicated by process block210. The report may include a variety of information associated with the determined drug profile. For example, the report may indicate drugs and drug doses affecting a subject, an intoxication level or overdose, and so on.
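The "trend or change from a baseline" algorithm mentioned for process block206could, for example, take the following minimal form; the z-score cut-off is an assumption made for illustration.

import numpy as np

def trend_from_baseline(series: np.ndarray, baseline: np.ndarray, z_cut: float = 2.0) -> np.ndarray:
    # Flag samples deviating more than z_cut standard deviations from the baseline.
    mu, sigma = baseline.mean(), baseline.std() + 1e-12
    return np.abs((series - mu) / sigma) > z_cut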
In addition, the report may indicate past, current and/or future mental states or stability of the subject. The report may also include other information about the subject, as well as indications related to other physiological measurements. For example, the report may indicate heart rate, blood pressure, oxygenation, brain activity, and so on. Turning now toFIG.5, another flowchart setting forth steps of a process500, in accordance with aspects of the present disclosure, is shown. As described, the process500, or various steps therein, may be carried out using any suitable device, apparatus or system, such as the system100described with reference toFIG.1. As above, steps of the process500may be implemented as a program, firmware, software, or instructions that may be stored in non-transitory computer readable media and executed by a general-purpose, programmable computer, processor or other suitable computing device. In some implementations, steps of the process500may also be hardwired in an application-specific computer, processor or dedicated module. The process500may begin with process block502, where a stimulus or performance test is provided to a subject. The stimulus may include a visual stimulus, an auditory stimulus, a tactile stimulus, or an olfactory stimulus, or a combination thereof. The performance test may include one or more standardized clinical tests used to determine mental status, such as the "mini-mental status exam," reaction time tests, or memory tests, for example. Stimulation and/or performance measurements, corresponding to evoked subject-specific physiological reactions, input or responses, may then be acquired from the subject at process block504. An analysis of these measurements may then be performed, as indicated by process block506. The analysis may include processing the measurements and generating various markers, indicators or signatures associated with the acquired measurements, as described. For example, various time-series, waveforms, power spectra or spectrograms may be assembled using a number of techniques, including analytic decomposition and multi-taper techniques, and analyzed to identify specific markers, indicators or signatures indicative of the subject's state. The identified markers, indicators or signatures may then be compared with pre-determined information to determine the subject's mental state, as indicated by process block508. For example, features or patterns in EEG activity (e.g. amplitudes, power, spectral power distribution, and so forth) may be compared to a baseline (e.g. historical, subject-specific activity) or reference (e.g. population activity) to determine levels, changes or trends indicative of the subject's mental state, or declines thereof. Similarly, performance on the tests may be compared to established or derived baseline ranges to indicate differences or deviations from normal. The results may then be used to determine the mental state of the subject. In some aspects, the mental state of the subject may also be determined at process block508using a combination of physiological measurements, stimulation and performance measurements, as well as movement information, position information, and orientation information. As described, this may include applying an algorithm that takes into consideration the strength of correlation between various mental states of the subject and the various measurements and information. A report, in any form, may then be generated and provided to a user, as indicated by process block510.
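For the stimulus-locked comparison at process block508, averaging over stimulus-aligned epochs is a standard way to obtain the ERP features discussed with reference toFIG.8. A minimal sketch follows, assuming known stimulus sample indices and at least one in-bounds epoch; the epoch window is an illustrative assumption.

import numpy as np

def mean_erp(eeg: np.ndarray, stim_samples: list, fs: float = 250.0,
             pre_s: float = 0.1, post_s: float = 0.5) -> np.ndarray:
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = [eeg[s - pre:s + post] for s in stim_samples
              if s - pre >= 0 and s + post <= len(eeg)]
    # Averaging cancels ongoing background EEG, leaving the stimulus-locked ERP.
    return np.mean(epochs, axis=0)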
The report may include a variety of information. For instance, the report may indicate past, current and/or future mental states of the patient. Such indications may identify, for example, whether a subject's physiological status was, is, or will likely be, "stable" or "unstable." Other designations may also be used. In some implementations, as illustrated in the examples ofFIGS.6and7, the report may indicate current brain activity, or changes thereof due to the applied stimuli, as well as stability. The report may further indicate scoring related to mental status, reaction time, memory, and other indicators. Features suitable for such combinations and sub-combinations would be readily apparent to persons skilled in the art upon review of the present application as a whole. The subject matter described herein and in the recited claims intends to cover and embrace all suitable changes in technology. | 28,424 |
11857335 | DESCRIPTION OF AT LEAST SOME EMBODIMENTS

FIG.1Ashows a non-limiting example of a system according to at least some embodiments of the present disclosure. As shown, a system100features a camera102, a depth sensor104and optionally an audio sensor106. Optionally an additional sensor120is also included. Optionally camera102and depth sensor104are combined in a single product (e.g., Kinect® product of Microsoft®, and/or as described in U.S. Pat. No. 8,379,101).FIG.1Bshows an exemplary implementation for camera102and depth sensor104. Optionally, camera102and depth sensor104can be implemented with the LYRA camera of Mindmaze SA. The integrated product (i.e., camera102and depth sensor104) enables, according to some embodiments, the orientation of camera102to be determined with respect to a canonical reference frame. Optionally, three or all four sensors (e.g., a plurality of sensors) are combined in a single product. The sensor data, in some embodiments, relates to physical actions of a user (not shown), which are accessible to the sensors. For example, camera102can collect video data of one or more movements of the user, while depth sensor104may provide data to determine the three dimensional location of the user in space according to the distance of the user from depth sensor104(or more specifically, the plurality of distances that represent the three dimensional volume of the user in space). Depth sensor104can provide TOF (time of flight) data regarding the position of the user, which, when combined with video data from camera102, allows a three dimensional map of the user in the environment to be determined. As described in greater detail below, such a map enables the physical actions of the user to be accurately determined, for example, with regard to gestures made by the user. Audio sensor106preferably collects audio data regarding any sounds made by the user, optionally including, but not limited to, speech. Additional sensor120can collect biological signals about the user and/or may collect additional information to assist the depth sensor104. Sensor signals are collected by a device abstraction layer108, which preferably converts the sensor signals into data which is sensor-agnostic. Device abstraction layer108preferably handles the necessary preprocessing such that, if different sensors are substituted, only changes to device abstraction layer108would be required; the remainder of system100can continue functioning without changes (or, in some embodiments, at least without substantive changes). Device abstraction layer108preferably also cleans signals, for example, to remove or at least reduce noise as necessary, and can also be used to normalize the signals. Device abstraction layer108may be operated by a computational device (not shown), and any method steps may be performed by a computational device (note: modules and interfaces disclosed herein are assumed to incorporate, or to be operated by, a computational device, even if not shown). The preprocessed signal data from the sensors can then be passed to a data analysis layer110, which preferably performs data analysis on the sensor data for consumption by an application layer116(according to some embodiments, "application" means any type of interaction with a user).
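By way of a non-limiting sketch, the sensor-agnostic behavior of device abstraction layer108could be organized in Python as below; the Frame container and driver protocol are hypothetical names introduced here for illustration, not the actual interface of the disclosed layer.

from dataclasses import dataclass
from typing import Optional, Protocol
import numpy as np

@dataclass
class Frame:
    rgb: Optional[np.ndarray] = None    # HxWx3 image from camera102
    depth: Optional[np.ndarray] = None  # HxW time-of-flight map from depth sensor104
    audio: Optional[np.ndarray] = None  # samples from audio sensor106

class SensorDriver(Protocol):
    def read(self) -> Frame: ...

def acquire(driver: SensorDriver) -> Frame:
    # Upper layers consume Frame only; swapping sensors means swapping the
    # driver, so the tracking and application layers stay unchanged.
    frame = driver.read()
    # Cleaning and normalization (noise reduction, scaling) would happen here.
    return frame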
Preferably, such analysis includes tracking analysis, performed by a tracking engine112, which can track the position of the user's body and also can track the position of one or more body parts of the user, including, but not limited to, one or more of arms, legs, hands, feet, head and so forth. Tracking engine112can decompose physical actions made by the user into a series of gestures. A "gesture" in this case may include an action taken by a plurality of body parts of the user, such as taking a step while swinging an arm, lifting an arm while bending forward, moving both arms, and so forth. Such decomposition and gesture recognition can also be done separately, for example, by a classifier trained on information provided by tracking engine112with regard to tracking the various body parts. It is noted that while the term "classifier" is used throughout, this term is also intended to encompass "regressor". For machine learning, the difference between the two terms is that for classifiers, the output or target variable takes class labels (that is, is categorical), while for regressors, the output variable assumes continuous values (see, for example, http://scottge.net/2015/06/14/ml101-regression-vs-classification-vs-clustering-problems/). The tracking of the user's body and/or body parts, optionally decomposed into a series of gestures, can then be provided to application layer116, which translates the actions of the user into a type of reaction and/or analyzes these actions to determine one or more action parameters. For example, and without limitation, a physical action taken by the user to lift an arm is a gesture which could translate to application layer116as lifting a virtual object. Alternatively or additionally, such a physical action could be analyzed by application layer116to determine the user's range of motion or ability to perform the action. To assist in the tracking process, optionally, one or more markers118can be placed on the body of the user. Markers118optionally feature a characteristic that can be detected by one or more of the sensors, such as by camera102, depth sensor104, audio sensor106or additional sensor120. Markers118can be detectable by camera102, for example, as optical markers. While such optical markers may be passive or active, preferably, markers118are active optical markers, for example featuring an LED light. More preferably, each of markers118, or alternatively each pair of markers118, can comprise an LED light of a specific color which is then placed on a specific location of the body of the user. The different colors of the LED lights, placed at specific locations, convey a significant amount of information to the system through camera102; as described in greater detail below, such information can be used to make the tracking process efficient and accurate. Additionally, or alternatively, one or more inertial sensors can be added to the hands of the user as a type of marker118, enabled with Bluetooth or other wireless communication, such that the information would be sent to device abstraction layer108. The inertial sensors can also be integrated with an optical component, in at least the markers118related to the hands, or even for more such markers118. The information can then optionally be integrated into the tracking process, for example, to provide an estimate of orientation and location for a particular body part, for example as a prior constraint. Data analysis layer110, in some embodiments, includes a system calibration module114.
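As one hedged illustration of a classifier trained on tracking-engine output, the sketch below builds simple joint-distance features and fits an off-the-shelf scikit-learn model; the feature choice, gesture labels and model are assumptions, and the training data is a random stand-in rather than real tracking output.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def joint_features(joints: np.ndarray) -> np.ndarray:
    # joints: (n_joints, 3) positions; features are distances to the first joint.
    return np.linalg.norm(joints - joints[0], axis=1)

X = np.random.rand(100, 15, 3)  # stand-in tracked poses (random placeholders)
y = np.random.choice(["step_and_swing", "lift_and_bend"], size=100)
clf = RandomForestClassifier().fit([joint_features(j) for j in X], y)
gesture = clf.predict([joint_features(X[0])])[0]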
As described in greater detail below, system calibration module114is configured to calibrate the system with respect to the position of the user, in order for the system to track the user effectively. System calibration module114can perform calibration of the sensors with respect to the requirements of the operation of application layer116(although, in some embodiments, including this embodiment, device abstraction layer108is configured to perform sensor-specific calibration). Optionally, the sensors may be packaged in a device (e.g., Microsoft® Kinect), which performs its own sensor-specific calibration. FIG.1Bshows a non-limiting example of the implementation of the camera and depth sensor, according to at least some embodiments of the present disclosure (components with the same or similar function from earlier figures are labeled with the same component numbers). Here, a camera140includes a plurality of different sensors incorporated therein, including, without limitation, a left RGB (red green blue) sensor142, a right RGB sensor144, depth sensor104, audio sensor106and an orientation sensor146. Orientation sensor146is configured to provide information on the orientation of the camera. The markers ofFIG.1Aare now shown in more detail, as markers152. Markers152preferably comprise an inertial sensor148and an active marker150. Active marker150can comprise any type of marker which issues a detectable signal, including but not limited to an optical signal such as from an LED light as previously described. A plurality of different markers152can be provided; active marker150can be adjusted for the plurality of markers152, for example to show LED lights of different colors as previously described. FIG.1Cshows a variation on the above systems, in a non-limiting, illustrative, exemplary system160. System160includes various components as previously described, which have the same reference numbers as these previously described components, and the same or similar function. System160also includes a motion analysis module162, a functional assessment module164and a cognitive assessment module166. Motion analysis module162is preferably in communication with tracking engine112, to receive information on tracking of the patient's movements. Motion analysis module162optionally and preferably provides feedback on the patient's movements using motivating content. Motion analysis module162optionally and preferably provides visualization of the kinematic parameters during a movement customized by the therapist, or a movement from a list of predefined movements. Motion analysis module162optionally and preferably analyzes the quality and confidence of the tracking data. Kinematic parameters for each joint are optionally extracted from the patient motor calibration procedure or from a dedicated patient assessment activity, optionally by tracking engine112but alternatively by motion analysis module162. In the latter instance, motion analysis module162is optionally combined with tracking engine112. The kinematic parameters to be assessed may optionally include, but are not limited to, range of motion for each joint, reaction time, accuracy and speed. Optionally, the kinematic parameters are compiled in a graphical report available through a user display (not shown). Functional assessment module164preferably records movement while operating standard assessment processes.
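For illustration, the kinematic parameters listed above (range of motion, reaction time, speed) could be derived from a tracked joint-angle time series roughly as follows; the sampling rate, movement threshold and go-cue handling are assumptions, and the sketch presumes a movement actually occurs after the cue.

import numpy as np

def kinematic_report(angle_deg: np.ndarray, fs: float = 30.0,
                     go_cue_idx: int = 0, move_thresh: float = 2.0) -> dict:
    vel = np.gradient(angle_deg) * fs                  # angular velocity, deg/s
    moving = np.abs(vel) > move_thresh                 # crude movement detector
    onset = int(np.argmax(moving[go_cue_idx:])) + go_cue_idx
    return {
        "range_of_motion_deg": float(angle_deg.max() - angle_deg.min()),
        "reaction_time_s": (onset - go_cue_idx) / fs,  # first movement after the cue
        "peak_speed_deg_s": float(np.abs(vel).max()),
    }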
Functional assessment module164preferably includes but is not limited to the following assessments: ARAT (action research arm test), FMA (Fugl-Meyer assessment) and WMFT (Wolf Motor Function Test). The results of the data collected with the assessments are optionally compiled in a graphical report available through a user display (not shown). Cognitive assessment module166preferably provides at least a neglect analysis, including but not limited to the following settings: standard defined number of targets/distractors; standard defined type of targets/distractors; standard defined spatial extension; and standard defined target distribution. The performance of the patients triggers default parametrization for the patient activity content regarding the cognitive aspect. FIGS.2A-2Dshow an exemplary, illustrative, non-limiting configuration of a mobile, table based system for supporting rehabilitation according to at least some embodiments of the present disclosure. As shown, a mobile table based system200features a display202for manipulation by a subject, who may for example be in a wheelchair, as shown with regard to a subject212. Display202is mounted on the frame of system200so that visual information on display202is visible to subject212. Subject212is seated, whether in a chair or wheelchair, in front of a table208. Table208sets a minimum plane, above which subject212performs one or more gestures. Optionally subject212may rest his or her hands and/or arms on table208. Table208is also attached to the frame. Subject212performs one or more gestures, which are detected by a camera206. Camera206is optionally attached to the frame of system200or alternatively may be attached directly to display202or to a holder for display202. Such an attachment optionally enables the base pillar to be shorter, such that, without wishing to be limited by a closed list, system200would be easier to transport and would have greater stability. Preferably camera206features an image based camera and a depth sensor, such as a TOF (time of flight) sensor. The image based camera preferably features an RGB (red, green, blue) camera. The data from camera206is then communicated to a computer214, which detects the gestures of subject212and which changes the visual information shown on display202accordingly. For example and without limitation, the gestures of subject212may optionally be used to play a game; the state of the game and the effects of the gestures of subject212are determined by computer214, which adjusts the displayed information on display202accordingly. A therapist or other individual may optionally adjust one or more aspects of the therapy, or gameplay, or otherwise control one or more operations of computer214, through a controller204. Controller204is optionally a touch screen display, for example, such that information about the therapy and/or operation of computer214may be displayed on controller204. Controller204is optionally attached to the frame of system200. Optionally, controller204is attached to the same pillar support as display202. Optionally camera206is attached to the same pillar support. System200is movable due to rollers or wheels210, which are mounted on the frame. Wheels210optionally have brakes to prevent unwanted movement. Optionally the electronics of system200are powered through a UPS and battery, but alternatively such power is provided by an isolation transformer.
FIGS.3A-3Dshow an exemplary, illustrative, non-limiting configuration of another table based system for supporting rehabilitation according to at least some embodiments of the present disclosure. Items with the same number as forFIGS.2A-2Dplus 100 have the same or similar function as the corresponding item inFIGS.2A-2D. For example, reference number "300" indicates a system300inFIGS.3A-3D. System300is similar to system200, except that, instead of wheels, a plurality of fixed feet320are present. FIGS.4A-4Dshow an exemplary, illustrative, non-limiting configuration of a system that is suitable for a subject in a bed, for supporting rehabilitation according to at least some embodiments of the present disclosure. Items with the same number as forFIGS.2A-2Dplus 200 have the same or similar function as the corresponding item inFIGS.2A-2D. For example, reference number "400" indicates a system400inFIGS.4A-4D. System400is similar to system200, except that the frame is adjusted so that system400is suitable for a subject422who is in a bed. FIG.5shows an exemplary, illustrative non-limiting method for tracking the user, optionally performed with the system ofFIG.1, according to at least some embodiments of the present disclosure. As shown, at502, the system initiates activity, for example, by being powered up (i.e., turned on). The system can be implemented as described inFIG.1but may also optionally be implemented in other ways. At504, the system performs system calibration, which can include determining license and/or privacy features. System calibration may also optionally include calibration of one or more functions of a sensor. At506, an initial user position is determined, which (in some embodiments) is the location and orientation of the user relative to the sensors (optionally at least with respect to the camera and depth sensors). For example, the user may be asked to, or be placed such that, the user is in front of the camera and depth sensors. Optionally, the user may be asked to perform a specific pose, such as the "T" pose for example, in which the user stands straight with arms outstretched, facing the camera. The term "pose" relates to the position and orientation of the body of the user. Preferably the gesture(s) of the user are calibrated in order to determine the range of motion and capabilities of the user, for example as described with regard to U.S. patent application Ser. No. 15/849,744, filed on 21 Dec. 2017, owned in common with the present application and incorporated by reference as if fully set forth herein. Optionally, user calibration may comprise determining compensatory actions. Such actions occur due to motor deficit, causing the patient to involve a portion of the body in a movement which would not normally be involved in that movement. For example, a study by Aprile et al. ("Kinematic analysis of the upper limb motor strategies in stroke patients as a tool towards advanced neuro-rehabilitation strategies: A preliminary study", 2014, Biomed Res Int. 2014; 2014:636123) found that when reaching for an object, some patients showed reduced arm elongation and trunk axial rotation due to motor deficit. For this reason, as observed, the patients carried out compensatory strategies which included trunk forward displacement and head movements. Table 1 below provides a non-exhaustive list of a few examples of such movements.
TABLE 1
Compensatory Movements

Compensation | Happening during | Measurement
Trunk forward displacement | forward hand movement | angle of forward flexion of the trunk, or forward displacement of both shoulders
Trunk lateral displacement | hand movement to the side | angle of lateral flexion of the trunk, or lateral displacement of both shoulders
Trunk rotation | hand movement to the side | angle of axial rotation of the trunk, or frontal distance between both shoulders
Shoulder elevation | hand elevation | displacement of the shoulder (rotation center) to the top
Shoulder abduction instead of flexion | forward hand movement | shoulder elevation angle in the lateral plane instead of the frontal plane
Elbow flexion (absence of elbow extension) | hand movement away from the body (reaching) | elbow flexion

Compensatory movement tracking and feedback is discussed further below in relation toFIGS.11A-16C. At508, a model is initialized. This model features a model of a human body, configured as only a plurality of parameters and features, such as a skeleton, joints and so forth, which are used to assist in tracking of the user's movements. At510, sensor data is received, such as, for example, one or more of depth sensor data and/or camera data. At512, the game is started and the user begins to interact with the game, for example by performing one or more movements. As previously described, the range of motion and capabilities of the user are preferably determined in advance, so that the movements performed by the user can be correctly assessed. At514, the state of the user is determined with regard to the user's movements. Optionally, the sensor data can be mapped onto the previously described body model, e.g., the body model features an articulated structure of joints and a skin defined by a mesh of vertices that are soft-assigned to the joints of the model with blending weights. In this way, the skin can deform accordingly with the body pose to simulate a realistic human shape, and the user's movements can be correctly analyzed. Optionally, such analysis is performed as described with regard to PCT Application No. IB2018/000171, filed on 7 Feb. 2018, owned in common with the present application and incorporated by reference as if fully set forth herein. The state of the user may optionally relate to the ability of the user to perform one or more movements, and/or any improvements in such an ability as compared to a previous session. Such an ability may optionally also be compared to an "ideal" model of normal human function, for example to determine whether the user has any functional deficits. Alternatively, for example with regard to training, such an ability may optionally be compared to a desired future state of the user. As a non-limiting example, such a desired future state may optionally relate to an improvement in one or more functions, or to a model of an "ideal" improved human functional state. At516, the game play is preferably adjusted according to the state of the user. For example, if the user has one or more functional deficits, then game play is optionally adjusted to be rehabilitative and useful with these deficits. On the other hand, for training purposes, game play may optionally be adjusted to induce the user to move in the direction of the desired improved state. FIG.6Ashows an exemplary, illustrative, non-limiting method for patient table position calibration with the calibration pattern or with the ToF camera, according to at least some embodiments of the present disclosure;FIG.6Bshows an exemplary calibration pattern.
In stage602, a calibration pattern device is provided, as shown with regard toFIG.6Bin a non-limiting example. The calibration pattern corresponds in this example to a large black square with a shape, in an A4 format, detectable by the camera, and 2 HandTracker images to correctly position the HandTrackers during the calibration. The HandTrackers are markers attached to or associated with the hands of the patient. In stage604, the Calibration Pattern is placed flat on a horizontal Table between the Patient and the Camera, inside the field of view of the Camera. In stage606, in the case of HandTracker utilization, the HandTrackers are placed on the Calibration Pattern spots, respecting the color codes and labels. In stage608, calibration is initiated, for example by selecting the calibration function through the user interface. Optionally the camera and the HandTrackers can be calibrated separately. In stage610, a calibration image is displayed, optionally in a Body Tracking window with an overview of a Live Calibration Image displayed, which shows the image with the detected Markers captured by the Camera. The Body Tracking algorithm will automatically detect the body Markers associated with the joints of the person in front of the Camera. In stage612, the position of the patient is adjusted for correct calibration, so that the joints of the "skeleton" (figure abstraction) are in a correct position. In stage614, when all the skeleton's joints are associated appropriately with the detected Markers from the Camera Live Feed, they are displayed in green. Objects similar to the Markers are detected and highlighted as blue circles. These objects are preferably removed from the field of view of the Camera to avoid confusion in the Markers detection. In stage616, one or more warnings are provided if the patient is not in the correct position. Non-limiting examples of such warnings are given below in Table 2.

TABLE 2
Warnings

Warning | Description | Calibration
Markers Detection: x/6 markers detected | The x represents the number of detected Markers. Please check that the markers are switched on and attached to the patient and that the patient is in the field of view of the tracking camera. Ensure that the markers are well visible to the camera, and not occluded by the patient. | Blocked
Markers Position: Check Markers Placement | Please check that all the markers are attached to the patient's joints in the correct locations, as shown. Some markers might be exchanged between the right and left side of the patient. | OK to proceed
Patient Position: Patient not properly positioned | Please check that the patient is in the field of view of the tracking camera. Ensure that the patient is in the middle of the calibration image. | Blocked
Patient Position: Turn camera right | The patient is too far on the right. Please turn the tracking camera to the left to have the patient in the middle of the calibration picture. | Blocked
Patient Position: Turn camera left | The patient is too far on the left. Please turn the tracking camera to the right to have the patient in the middle of the calibration picture. | Blocked
Patient Position: Turn camera down | The patient is too far up in the image. Please move the tracking camera down to have the patient in the middle of the calibration picture. | Blocked
Patient Position: Turn camera up | The patient is too low, the shoulders may disappear in the image. Please turn the tracking camera up to have the patient in the middle of the calibration picture. | Blocked
Camera Position: Camera not properly positioned | Please check that the camera is well placed to have the patient in the middle of the calibration picture. | Blocked
Camera Position: Camera too close | Please move the camera away from the patient or move the patient away from the camera. | Blocked
Camera Position: Camera too far | Please move the camera closer to the patient or move the patient closer to the camera. | Blocked
Camera Position: Camera too high | Please move the camera to have the patient centered in the calibration picture. | Blocked
Camera Position: Camera too low | Please move the camera to have the patient centered in the calibration picture. | Blocked

In stage618, after the patient is in the correct position, validation is indicated, for example by selecting validation through the user interface. Optionally, validation of the calibration pattern occurs automatically. If not, a warning is issued (for example, if the pattern isn't visible to the camera). FIG.7shows a portion of an exemplary, illustrative, non-limiting system for motion tracking according to at least some embodiments of the present invention. Additional components from any of the systems shown inFIGS.1A-1Cmay also be incorporated into the system, even if not explicitly shown. As shown in a system700, one or more motion tracker(s)702are provided. Preferably these motion tracker(s)702are wireless and are also preferably inertial sensors and/or incorporate such inertial sensors. They may, for example, be attached to a subject with a strap or other attachment, and/or may be provided in clothing or other wearables. Each motion tracker702is in communication with a motion tracker base station704, which is able to receive motion tracking information from motion tracker702and to provide this information to device abstraction layer108. Communication between motion tracker base station704and computational device130may optionally be wired or wireless. Motion tracker(s)702and motion tracker base station704may optionally be implemented according to the Xsens "ATWINDA" system, for example. Motion tracker(s)702may be used for body tracking and/or tracking of specific parts of the body, such as for hand tracking, for example. Motion tracker(s)702may be used alone or in conjunction with the previously described markers and/or with markerless tracking. Referring now toFIGS.8A-8E, in some preferred embodiments, system calibration, for example corresponding to a portion of system calibration of step504, some other step, or as a separate step, can include calibration of a workspace area, or 2-dimensional region within the virtual environment, that lies on a plane on which elements (e.g., targets, distractors, and the like) are placed. Calibration can be done using adjustment of the location of the vertices of the area and the shape of the edges, including the curve of the edges. In some embodiments, the workspace is dissected by a "magnetic" axis. Each end of the axis is adjustable so that the workspace can be dissected at any angle. In some embodiments, the workspace can include a "magnetic" target center. In therapeutic virtual reality systems, working areas are normally determined by the patient's range of motion. If a region is outside the patient's range of motion, the region falls outside the working area. A typical calibration process will include determining a maximal radial distance at a number of points of a patient's range of motion from a resting position and setting a workspace region having vertices based on those points.
Motion tracking information can be used to determine the location of points as the patient moves a body part. The typical range-of-motion calibration suffers from a few drawbacks. It does not account for compensatory movement of the patient and, therefore, requires concurrent intervention to prevent any compensation by the patient. Without intervention, the calibration is not reliable. Further, it relies on a patient extending to a range of motion suitable for the rehabilitation of that patient at the beginning of the therapy. At the beginning of therapy, the patient often is unable or not motivated to extend to a suitable range of motion. Conversely, a proper range of motion reached by a patient may require expending energy that may be needed for the therapy itself, thus undermining therapeutic goals. Embodiments of the present invention solve these problems by providing configuration of the size, location, and target distribution probability of the workspace area. Benefits include that patients need not expend energy on pre-activity calibration, and that the workspace can be customized based on the progress of the patient from exercise to exercise without recalibrating the entire VR system or a greater portion of the VR system. Embodiments of the present invention also allow for a faster calibration process. Referring toFIG.8A, an exemplary illustration of a display800for calibrating a workspace area802is shown. In preferred embodiments, display800is presented on a first display for a therapist or other person to view a workspace area calibration interface. An exemplary workspace calibration interface can include four vertices804marking a workspace area802. In some preferred embodiments, three vertices, or more than four vertices, can be included. The therapist user can adjust the position of the vertices to define an active area in which interactive elements of an activity can appear. Such interactive elements could include, for example, a target for a reach exercise, a path for the user to follow for a reach exercise, or distractors. The vertices are preferably adjustable in two dimensions so that the area can take various sizes and account for a patient's range of motion and therapeutic needs. The displays can be a monitor, head-mounted display, or other type of display. Workspace area802includes four sides, a bottom and left and right sides806, and a top side or edge808. The number of sides in a workspace area is dictated by the number of vertices. In preferred embodiments having a workspace area with four sides, the bottom and connected left and right sides806are defined by straight lines and the top edge808is defined by a curve. The curve path defining the top edge808is preferably based on a quadratic or some other curvilinear equation, with the peak and ends of the curve adjustable. For example, the two vertices804that intersect the curve are preferably adjustable in the y- and x-axes to adjust the location of the right and left ends of the curve. Also, the peak of the curve is preferably adjustable in both the x-axis and y-axis. In this way, the workspace area can be adjusted to accommodate the patient's reach throughout the patient's range of motion and to allow for the placement of targets or other virtual environment elements to appropriately exercise the user. In preferred embodiments, the system includes two displays. A first display, as illustrated inFIG.8A, can be used by a user to configure the workspace.
A second display820, as illustrated inFIG.8B, can be used by the patient to assist in configuring the workspace. For example, the tracking system can be used to determine the location of the patient's hand or other body part and overlay a hand avatar810, or an avatar of another body part, on the workspace area calibration or configuration interface. Preferably, vertices and top edge curvature are initially located based on patient hand (or other body part) tracking. For example, the top edge curvature can be fitted to a path of movement of the user's hand (or other part). The second display820can include an element representing the workspace area822as the workspace area is being configured or calibrated. In the example shown inFIG.8B, the second display as seen by the patient also includes a user avatar824showing the patient's movements to help set the vertices of the workspace area. Preferably, the first display allows the user to adjust the workspace area vertices and other dimensions as the patient motion and location is tracked, to allow for configuration by the user with the tracking animation displayed. In preferred embodiments, the workspace area is defined in two stages. First, the patient's range of motion is used to determine vertices and, second, the workspace area is modified by another user. Motion tracking data of the placement of the user's body at a vertex location is received and used to define each vertex. For embodiments with a workspace area defined with four vertices for training of upper limbs, the user can move a hand to close-left, close-right, far-left, and far-right locations, and the motion tracking data received at the time of the hand placement at those locations is used to define the vertices. During calibration, a distribution probability of targets can be determined. For example,FIG.8Cillustrates an exemplary workspace area configuration interface in accordance with embodiments. A resting location or starting position830for an activity for the patient is shown. The current position of the patient's hand is animated in an avatar810over a vertex804of the workspace area. A distribution probability is illustrated by potential location markers832. In the illustration shown inFIG.8C, the probability distribution is limited to the workspace area. Preferably, the x and y coordinates of a potential location are modulated to fit within the bounds of the vertices and edges, including the top edge curve808. In accordance with preferred embodiments, the number of potential target locations can also be adjusted. In some preferred embodiments, potential target locations are distributed according to a bell curve distribution around an axis that intersects the workspace area, or around a point in the workspace area. For example, as illustrated inFIG.8D, the potential target locations are determined using the magnetic axis840to weight the distribution and are used as input to different types of activities. Preferably the distribution is limited to the defined 2-dimensional area. The distribution can be weighted with different standard deviations to keep the distribution closer to the axis or more diffuse. For some activities that require a single target at any given time (e.g., a reach exercise in which the patient is directed to reach toward a target), one of the potential target locations can be selected at random from the distribution.
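A minimal sketch of this "magnetic axis" sampling follows, assuming a horizontal axis, a quadratic top edge y = a*x^2 + b*x + c as described forFIG.8A, and rejection sampling to keep targets inside the workspace area; all numeric values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng()

def top_edge(x, a=-0.5, b=0.5, c=0.45):
    # Adjustable quadratic top edge of the workspace area (illustrative values).
    return a * x**2 + b * x + c

def sample_targets(n, sigma, axis_y=0.2, x_min=0.0, x_max=1.0, y_min=0.0):
    targets = []
    while len(targets) < n:
        x = rng.uniform(x_min, x_max)            # position along the magnetic axis
        y = rng.normal(loc=axis_y, scale=sigma)  # normally distributed offset from the axis
        if y_min <= y <= top_edge(x):            # reject points outside the area
            targets.append((x, y))
    return targets

# A higher "magnetic" slider value maps to a smaller sigma, pulling the sampled
# targets toward the axis, matching the behavior shown in FIG. 8E.
print(sample_targets(5, sigma=0.05))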
For activities that require more than one target, a plurality of targets can be selected, including selecting one or more of the potential target locations as targets and one or more of the potential target locations as distractors or placements for other interface elements. For some activities that require multiple targets, multiple locations can be selected at random from the distribution, or the probability distribution can be determined multiple times with one or more targets selected from each distribution. Final target locations are selected at random or by the therapist. In accordance with preferred embodiments, the interface includes a slider844along a linear element842to adjust a standard deviation for configuring the distribution. Skilled artisans understand that other user interface elements can be used to implement a distribution adjuster that can be used to set a standard deviation for a distribution. As shown inFIG.8E, a lower "magnetic" value (i.e., slider to the left of the adjuster in the image on the left) results in a larger standard deviation and more distributed potential locations, and a higher "magnetic" value (in the image on the right) results in a smaller standard deviation and a distribution closer to the axis. In accordance with preferred embodiments, the axis can be rotated such that the distribution falls along both the x-axis and y-axis, or just the y-axis, as opposed to along just the x-axis as illustrated inFIG.8D. As illustrated inFIG.8E, in some embodiments, potential target locations can be distributed around a single point850in the workspace area. For a single point, the potential target locations preferably are randomly located around the point, with the radial distance of the potential target from the point weighted according to an adjustable standard deviation. For an axis, the potential targets are randomly located along the axis, with the distance from the axis weighted according to an adjustable standard deviation. Preferably, a potential target is located within the workspace area, and the maximum distance from the bottom axis of the workspace area is y(x) of the curvilinear equation defining the top edge of the workspace area. Referring now toFIG.9, an exemplary interface900of a first display used to configure or calibrate the system prior to an activity or exercise is illustrated. A rest position902can be set by a user for the user to reach, or can be based on the patient's position. InFIG.10, an exemplary interface1000for initializing other gameplay (or activity or exercise) parameters is illustrated. In accordance with preferred embodiments, such parameters include number of trials1002, motor level1004, cognitive level1006, environment load1008, mirror mode1010, and compensation feedback1012. Skilled artisans can appreciate that other parameters can be included, and interfaces to set parameters can be further combined or separated. In some preferred embodiments, the method ofFIG.5also includes receiving parameter data to initially configure gameplay. In preferred embodiments, compensatory movements of a patient are determined and feedback is provided to the patient through one or more interfaces of the system (e.g., visual, audio, haptic, and the like). Such compensatory movement tracking and feedback can be done in, for example, step514of the method ofFIG.5, or as the user is tracked during a game or activity.
In current devices and systems that track and provide feedback of compensatory movement, the feedback is provided outside the scope of gameplay or takes the form of changing the scope of gameplay. In the former case, current systems will display an avatar or other signaling image to the user indicating compensatory movement. Such an indication takes the user's attention away from the therapy or activity and, thus, reduces the efficacy of the activity and frustrates and confuses the user. In the latter case, current systems will change the rules or goals of the activity, which similarly distracts or frustrates the user. Additionally, therapeutic goals and objectives, and whether the user meets them, become somewhat less clear. For example, U.S. Publ. Serial No. 20170231529A1 describes a system which provides flashing lights and screens, audio, or a vibration as biofeedback. The inventors have found that this type of extraneous stimulation distracts patients from the gameplay and reduces the effectiveness of the therapy. Other systems control compensatory movement using harnesses or through therapist interdiction. Embodiments can include visual feedback as well as audio feedback or haptic feedback, depending on the activity and the available hardware. In preferred embodiments, compensatory movement feedback parameters are determined by patient-specific data and through workspace configuration. Parameters determined by workspace configuration are independent of patient-specific data. Feedback thresholds are determined through a combination of patient-specific parameters and workspace configuration. In preferred embodiments, compensatory movement feedback is provided at two or more threshold levels. More preferably, three or more levels of compensatory feedback are provided. At each level, another form of feedback is added. In some preferred embodiments, the particular type or level of feedback given about compensatory movements used during the activity can be set by the therapist. As described further below, there are at least three levels of feedback:

Level 1: No feedback; the patient should be able to complete the given task in spite of a certain level of compensation below a minimum threshold.

Level 2: Integrated visual/auditory feedback, non-blocking the patient's experience of the reach (does not affect the game play). The patient should be able to complete the task in spite of a certain level of compensation, but feedback is provided as an integrated part of the activity. The feedback does not interfere with the movement controller or the game logic. Examples include graduated change of transparency or hue of an activity element (e.g., avatar shadow, target, target path, and the like) or graduated visual blocking of an activity element using another activity element (e.g., gradual blocking of a target or target path with the avatar). In preferred embodiments, the graduated change begins at the threshold of compensation for the level.

Level 3: Integrated visual/auditory feedback, blocking the patient's experience of the reach, which affects the game play. The patient is not allowed to complete the given task. Feedback is provided and interferes with the movement controller (e.g., a hand orientation only reacting to the wrist rotation for a wrist movement activity) or the game logic (e.g., a target disappears or becomes transparent in a reach activity) such that the task cannot be completed.
Feedback for Upper Limb Multiple Joint Exercises/Activities

Trunk forward flexion is a typical compensatory mechanism used during reaching activities to avoid arm use. Thus, in preferred embodiments that track forward trunk flexion, flexion can be measured by calculating the forward angle of the vector created by the L5 bone base and the skull bone base in the body model with respect to the vertical. Skilled artisans can appreciate that a body model used in tracking identifies bones of the vertebrae that can be used to determine forward trunk flexion and other bones, for example in the arm, to detect other types of compensatory movement. For instance, the inventors were able to determine forward trunk flexion to provide effective integrated feedback using a basic body model that includes the following skeleton portions:

1. 3 vertebrae bones: i. sacrum; ii. L5; iii. T1
2. neck bone: skull
3. 2 scapular bones: i. l_clavicle; ii. r_clavicle
4. 2 upperarm bones: i. l_upperarm; ii. r_upperarm
5. 2 forearm bones: i. l_forearm; ii. r_forearm

As a reference, the location of each vertebra in accordance with the above skeleton, with respect to its anatomical bone correspondences, is as follows:

1. sacrum: From the sacrum to L1
2. L5: From L1 to T6
3. T1: From T6 to C3
4. Skull: From C3 to a terminal point in the middle of the head.

The bones used to calculate the flexion were found by performing tests on healthy participants using the following protocol. The subject reaches in 5 directions (center, right, left and between) one after another. The reaching targets were placed at a distance equivalent to 90% (Valdés et al. 2016) of the arm length. This was performed at three different heights (table height, shoulder height, eye level height). The eye height was estimated as 24.5 cm above shoulder height (50th percentile human shoulder-to-eye length). The inventors found from the results that preferable degrees of trunk flexion for compensation feedback were 7° and 20°. In instances where trunk flexion compensation is tracked, level thresholds of compensation are preferably at 7° and 20° from the patient's rest position. In such an embodiment, from 0° to 7°, feedback indicates no compensation; at 7° up to 20°, an initial feedback threshold begins; and at 20°, a terminal threshold begins. In preferred embodiments, the initial feedback is provided at a gradient and is proportional to the degree or magnitude of compensation. At the initial threshold the activity goal is still achievable, while at the terminal threshold the activity goal is no longer achievable. Exemplary feedback thresholds are described in the following Table 3.

TABLE 3 - Feedback Thresholds (degrees)
Rest Trunk Forward Flexion Position | Start of Feedback Threshold | Maximum Feedback Threshold
<=0° | 7° | 20°
>0° | Rest Position + 7° | Rest Position + 20°

Referring toFIGS.11A-11C, exemplary screenshots of a preferred embodiment with shadow or visual trunk feedback are illustrated. InFIG.11A, no head or trunk avatar is shown to indicate no or minimal compensation. InFIG.11B, a portion of a head avatar1102is shown which begins to block the target and target path for the user. InFIG.11C, a further portion of the head avatar1102with visible shoulders obscures more of the target and target path to indicate further trunk flexion. In preferred embodiments, movement of the blocking avatar is smooth and tracks the movement of the user as, for example, trunk flexion increases or decreases.
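A minimal sketch of the trunk flexion measurement and the Table 3 thresholds described above, assuming tracked 3D positions for the L5 bone base and skull bone base and a y-up coordinate convention (both assumptions made for illustration only):

```python
import math

def trunk_flexion_deg(l5_base, skull_base):
    """Forward angle of the L5-base -> skull-base vector from vertical.

    Points are (x, y, z) tuples with the y axis pointing up; the
    coordinate convention is an assumption for illustration.
    """
    vx, vy, vz = (s - l for s, l in zip(skull_base, l5_base))
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    if norm == 0.0:
        return 0.0
    cos_theta = max(-1.0, min(1.0, vy / norm))  # angle from (0, 1, 0)
    return math.degrees(math.acos(cos_theta))

def feedback_thresholds(rest_flexion_deg):
    """Table 3: thresholds shift when the rest position is above 0 degrees."""
    offset = rest_flexion_deg if rest_flexion_deg > 0.0 else 0.0
    return offset + 7.0, offset + 20.0  # start, maximum (degrees)
```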
As noted above, the blocking avatar becomes visible when a minimum threshold of compensation is reached. When a maximum threshold of compensation is detected in the user, the avatar blocks the target and the user is prevented from reaching or achieving the target. In some embodiments, the avatar can appear as a shadow and can change color, preferably from light to dark, as flexion increases. In some embodiments, the avatar can appear with a level of transparency that decreases, so as to obscure the target and target path as flexion increases. Preferably, the level of transparency is 0 when the maximum threshold is reached. Referring toFIGS.12A-12C, exemplary screenshots of a preferred embodiment with target blocking are illustrated. InFIG.12A, the target1202is shown with solid color to indicate no or minimal compensation, along with a target path1204for the user to follow. InFIG.12B, the target1206is shown partially transparent to indicate compensation above a minimal threshold level. In accordance with some preferred embodiments, path1204can also be shown partially transparent. InFIG.12C, the target1208is shown completely or near-completely transparent to indicate compensation beyond a maximum threshold. At this stage the target is preferably unachievable by the patient. In some preferred embodiments, the target path1204can also be shown as completely or near-completely transparent. Preferably, the degree of transparency is smooth and changes as θ changes, as exemplified inFIG.14A. FIGS.13A-13Cillustrate exemplary screenshots and corresponding compensatory movement ranges of a preferred embodiment that includes integrated feedback in the form of an avatar shadow. InFIG.13A, the user1300exhibits no or minimal compensatory movement during a reach activity that asks the user to reach out to the target1302along a particular path1304. On the display1306seen by the user during the activity, the avatar shadow1308of the avatar arms1310is a shade of gray to indicate compensatory movement within acceptable levels. InFIG.13B, the user1300exhibits compensatory movement beyond a minimal threshold but not a maximum threshold. On the display1306, the avatar shadow1308changes hue to reflect the level of compensation that corresponds to the degree θ. Preferably, the degree of hue change is smooth and changes as θ changes, as exemplified inFIG.14B. InFIG.13C, the user1300exhibits compensatory movement at or beyond a maximum threshold. On the display1306, the avatar shadow1308correspondingly reaches a maximum change in hue. In some preferred embodiments, the target1302also becomes blocked so that the user is not credited with the goal of reaching it, even if the target is actually reached. In some preferred embodiments, successfully reaching the target will not result in completion of the activity, and no indication that the target is reached or the goal of the game is reached is provided to the user. During testing, the inventors discovered that tracking the trunk flexion for very high trunk flexion values is not guaranteed. This is probably because during high trunk flexion, in some embodiments, the tracking camera cannot properly track the base of the trunk, as it is hidden by the upper flexing trunk. This causes the base of the trunk to be estimated by aligning it to the top of the trunk, which in turn causes the flexion value to be tracked as if the body were straight and not flexed.
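Returning to the graduated target transparency ofFIGS.12A-12C, a minimal sketch of mapping trunk flexion to the target's opacity, assuming a linear ramp between the start and maximum thresholds (the linear form is an illustrative assumption):

```python
def target_alpha(flexion_deg, start_deg=7.0, max_deg=20.0):
    """Opacity of the target as trunk flexion increases.

    Fully opaque below the start threshold, fully transparent at or
    beyond the maximum threshold, with an assumed linear ramp between.
    """
    if flexion_deg <= start_deg:
        return 1.0
    if flexion_deg >= max_deg:
        return 0.0
    return 1.0 - (flexion_deg - start_deg) / (max_deg - start_deg)
```

At 7° or less the target is fully opaque; at 13.5° it is half transparent; at 20° or more it is fully transparent and the reach goal becomes unachievable.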
Because of this tracking limitation, the inventors found it preferable to also measure trunk displacement, so that when the tracked trunk flexion suddenly decreases but the displacement continues to increase, compensation feedback is still provided. For instances in which trunk flexion is tracked, it is preferable that the lower trunk position at the last maximum tracked trunk flexion is saved, and the flexion used for feedback is then calculated from this saved position. Referring toFIG.15, an exemplary visual of motion tracking with forward trunk flexion components identified is illustrated, where flexion decreases but displacement continues. In the top portion of the figure, the base of the trunk position1502is tracked at a first position (not necessarily an initial, resting position) and the upper trunk portion position1504is tracked to create a forward flexion vector1506. In the bottom portion of the figure, the trunk is flexed more than in the top portion of the figure, as evidenced by the upper trunk position1508. As a result, the body may be considered to have moved forward in the tracking, as evidenced by the different tracked lower trunk position1510and a different vector1512. In preferred embodiments, compensation feedback is still provided in spite of the tracking resulting in a smaller or even 0 degree of flexion. In such cases, the lower trunk position1502is used rather than lower trunk position1510in determining flexion for determining compensation feedback.

Feedback for Single Joint Exercises/Activities

The single joint activity should enforce the use of wrist rotation. Wrist rotation refers to the change of hand orientation versus forearm orientation. Any movement other than wrist rotation is perceived as a compensatory movement. More specifically, this includes (a) displacement of the wrist (due to elbow, shoulder or trunk movement); and (b) displacement of the elbow (due to shoulder or trunk movement). A tolerance threshold is allowed for each movement. Referring now toFIGS.16A-16C, illustrations of single joint compensation are shown. InFIG.16A, wrist flexion/extension and/or radial/ulnar deviation movements are illustrated. In accordance with preferred embodiments, up to 3 cm displacement of the wrist is allowed from the start of trial wrist position. Blocking feedback is provided from a wrist displacement of 10 cm. Up to 3 cm displacement of the elbow is allowed from the start of trial elbow position. Blocking feedback is provided from an elbow displacement of 10 cm. InFIGS.16B and16C, pronation/supination movements are illustrated. In accordance with preferred embodiments, up to 3 cm displacement of the elbow is allowed from the start of trial angle, and blocking feedback is provided from an elbow displacement of 10 cm.

TABLE 4 - Feedback Thresholds (cm)
Compensation Type | Start of Feedback Threshold | Maximum Feedback Threshold
Elbow displacement | 3 cm | 10 cm
Wrist displacement | 3 cm | 10 cm

For this activity, directly tracking the wrist and elbow markers is sufficient to detect compensatory movements. Because of this, the marker positions can be tracked directly, as opposed to using the hybrid tracking values of the wrist and elbows. If a marker can no longer be seen by the camera, for example if the user rotates his/her forearm so that the palm is facing up, the last tracked position will be used until the marker is seen by the camera again. In some preferred embodiments, the compensatory threshold values can be adjusted either manually or automatically.
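A minimal sketch of the single joint compensation tracking described above and summarized in Table 4, assuming 3D marker positions in centimeters and including the last-tracked-position fallback for occluded markers (class and method names are illustrative assumptions):

```python
def displacement_cm(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

class SingleJointCompensation:
    """Wrist/elbow displacement from the start-of-trial positions.

    Thresholds follow Table 4 (feedback from 3 cm, blocking from
    10 cm). If a marker is occluded, e.g., when the palm faces up,
    the last tracked position is reused until it is seen again.
    """
    START_CM, BLOCK_CM = 3.0, 10.0

    def __init__(self, wrist_start, elbow_start):
        self.start = {"wrist": wrist_start, "elbow": elbow_start}
        self.last_seen = dict(self.start)

    def update(self, wrist=None, elbow=None):
        for name, pos in (("wrist", wrist), ("elbow", elbow)):
            if pos is not None:  # marker visible in this frame
                self.last_seen[name] = pos
        worst = max(displacement_cm(self.last_seen[n], self.start[n])
                    for n in ("wrist", "elbow"))
        if worst >= self.BLOCK_CM:
            return "blocking"
        return "feedback" if worst >= self.START_CM else "none"
```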
For example, in some cases, threshold values can be received from user input. In some cases, threshold values can be calibrated according to the patient. For example, a patient can exhibit less trunk flexion or other compensatory movement over the course of therapy, and the threshold levels can be adjusted to allow less or more compensation as the patient advances. Thus, data representing the performance of a patient in activities and the amount of compensation (e.g., degree of trunk flexion, degree of elbow displacement, and the like) during activities can be used to determine threshold levels. It should be understood that the compensatory movement feedback is integrated with the activity or exercise such that the user need not move attention away from the activity or exercise to receive it. Thus, feedback is incorporated into the elements of the activity that otherwise enhance the reality of the virtual reality environment. For example, as discussed above, feedback can be incorporated into a shadow of an avatar, the shadow included for rendering the avatar more realistic. Thus, feedback is provided visually without an element specifically dedicated to providing feedback, or audibly without an audio element specifically dedicated to providing feedback, and the like. It is possible in some embodiments to provide such integrated feedback in combination with a different type of feedback that is not integrated (e.g., integrated visual feedback and non-integrated audio feedback). In some preferred embodiments, upper body compensation feedback can include other types of compensatory movements, including lateral trunk displacement, trunk axial rotation, and shoulder elevation (i.e., shrugging). Skilled artisans can appreciate that the particular types of compensation that are measured and provided feedback for can depend on the particular activity or exercise. FIG.17Ashows an exemplary, non-limiting, illustrative system that incorporates a device abstraction layer, such as for example and without limitation device abstraction layer108ofFIG.1A. As shown, a device abstraction system1700features a plurality of user applications1702A and1702B, communicating with a manager server1710through a local socket1708A or1708B, respectively. Each user application1702is assumed to be operated by a computational device (not shown for simplicity). Each of user applications1702A and1702B is in communication with a client library1704A or1704B, respectively, and a manager client1706A or1706B, respectively. Manager server1710features a plurality of backends1712A and1712B, each of which is in communication with a physical device1714A or1714B. Manager server1710also operates a device abstraction process1716, which receives data from physical devices1714through their respective backends1712. Device abstraction process1716then communicates this data in a standardized manner to the appropriate manager client(s)1706, which in turn pass the data to user application(s)1702. User application(s)1702are able to consume abstracted data, such that they do not need to be aware of the specific features or functions of each physical device1714, through client library1704, which may be implemented as a shared instance of such a library. Preferably, user applications1702receive the data through a shared memory, which may for example be implemented as described with regard toFIG.17B. FIG.17Bshows an exemplary, non-limiting, illustrative shared memory buffer.
In an embodiment of the system of the present invention, the system uses a shared memory architecture where the provider (running in a server process in the system) writes data to a shared memory buffer that the client (running in the user process) can access. This is a single-producer, multiple-consumer model. To do this, each capability provider will allocate the following: a shared memory segment; a shared memory mutex; and a shared memory condition variable. The mutex and condition variable are used to allow a consumer to wait for the next frame. An example of such a layout is given inFIG.17B. As shown, various items are shared through the shared memory segment1720, including the frame descriptor (e.g., decaf_monorgb_frame_desc)1730, a base frame1742and data1746. To maintain alignment, the layout includes alignment padding regions1722,1728,1736,1740. Canary values1726,1732,1744and1748are used for buffer overflow protection. FIG.18shows an exemplary, non-limiting, illustrative data type description. As shown, a data type description1800features a plurality of layers, shown as three layers, for describing capabilities. A capability description1802features two tiers, a capability class1804and a capability format1806. The top tier (Capability class) describes the broad category of the data—that is, from which type or category of sensor the data has been obtained. Example capability classes are "Image", "Range", "IMU" (inertial motion unit), and "BioSig" (biosignals). The second tier (Capability format) describes the format of the data. For image data, for example, this corresponds to the pixel format, e.g., "24 bit BGR", "RGBA32" and so forth. The two top tiers form the capability for capability description1802. It is what client software would request when querying for a capability. The lowest tier (Description1808) describes additional details about the data. An example description for "24 bit BGR" images is "1080×720 with 3240 stride". All tiers taken together form a full description of the data format, which allows unambiguous interpretation of a data buffer. FIG.19shows an exemplary, non-limiting, illustrative frame class description. A frame class description1900features an overall frame1902that includes pointers to the lowest tier information (description1808) fromFIG.18and to the actual data. Each instance of the data and of the description is provided through a base frame1904. Helpers, such as an image frame describer1906and a biosig frame describer1908, are provided to assist in interpreting the data. FIG.20shows an exemplary, non-limiting, illustrative API (application programming interface) abstraction. An API abstraction2000features a camera API2002and a sensor back-end2004. Camera API2002features a CameraSystem abstraction2006. CameraSystem2006dynamically loads (at run time) a list of pre-compiled back-ends (dynamic libraries). This architecture generalizes the interface to support any arbitrary type of hardware sensor. For example, inFIG.20, the MMScanner back-end implements support for an RGBD (Color+Depth) camera. However, other back-ends could provide support for inertial sensors, EEG, or any other type of acquisition device that generates a data stream. In this example, a camera is an abstraction that contains several sensors, such that Camera objects2008may connect through a sensor class2010, which may for example feature an RGB sensor abstraction2012or a depth sensor abstraction2014. Abstraction2016is an abstraction of the connection bus with the hardware device.
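Returning to the shared memory buffer ofFIG.17B, the following is a minimal sketch of the single-producer, multiple-consumer pattern with a mutex, a condition variable, and canary values guarding the frame data. Threading primitives stand in here for true shared memory between processes, and the layout, sentinel value, and names are illustrative assumptions:

```python
import struct
import threading

CANARY = 0xDECAFBAD  # assumed sentinel value, not from the specification

class SharedFrameBuffer:
    """Single-producer, multiple-consumer frame buffer sketch.

    A real implementation would place the buffer, mutex, and condition
    variable in shared memory between processes; threading primitives
    are used here only to illustrate the synchronization pattern.
    """
    def __init__(self, size=4096):
        self.buf = bytearray(size)
        self.lock = threading.Lock()
        self.frame_ready = threading.Condition(self.lock)
        self.seq = 0  # frame sequence number

    def write_frame(self, payload: bytes):
        with self.lock:
            # canary | length | payload | canary, mirroring the guarded layout
            struct.pack_into("<II", self.buf, 0, CANARY, len(payload))
            self.buf[8:8 + len(payload)] = payload
            struct.pack_into("<I", self.buf, 8 + len(payload), CANARY)
            self.seq += 1
            self.frame_ready.notify_all()  # wake every waiting consumer

    def read_frame(self, last_seq=0):
        with self.lock:
            while self.seq == last_seq:  # wait for the next frame
                self.frame_ready.wait()
            head, length = struct.unpack_from("<II", self.buf, 0)
            (tail,) = struct.unpack_from("<I", self.buf, 8 + length)
            assert head == CANARY and tail == CANARY, "overflow detected"
            return self.seq, bytes(self.buf[8:8 + length])
```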
Connection bus abstraction2016makes the API agnostic to the physical connection of the device (Sensor) with the host. The non-limiting example shows the USB and CameraLink connections, but the generalization could apply to any type of physical connection or communications protocol (Ethernet, FireWire, Bluetooth, WiFi, etc.). In this example, abstraction2016connects to various specific connectors, such as a camera link connector2018and/or a USB connector2020for example. The abstraction2022represents another generalization of the different data types provided by the back-ends. In the example, data types, including but not limited to one or more of RGB data2024, depth data2026or point cloud data2028, may be provided by a Camera back-end with Color and Depth sensors. Camera objects2008instantiate the corresponding Frame (2022) data types depending on which back-end has been dynamically loaded. So, for instance, if the MMScannerBackend is dynamically loaded (in this example, loading data from an RGBD (red, green, blue, depth) camera device), the back-end modules will expose to the Camera module which type of data the device is able to provide. The Camera (2008) will then generate the corresponding data types and expose them to the API user. This datatype-agnostic strategy is also used with other parameters that are specific for the configuration of the device, such as the frequency, light-exposure, internal inclinometer sensors, calibration parameters, and so forth, so that the usability of the hardware is not limited by the generalization ability of the API. Optionally, in each iteration of the camera loop, the camera checks if there is data in the device, asks each sensor for a free buffer, fills it, asks the sensor to process it, and pushes it back into the sensor "processed" buffer ring (and, if callbacks are registered, calls them). The operation of pushing back is done atomically for all the sensors of a camera (this means that all sensors push a processed buffer at once) with a shared mutex. When the buffer pool is empty, new data coming from a connection is skipped. The buffer ring with processed data should be released after use by the client and brought back to the free buffer pool, or should be automatically released after a certain time. The back-end code should only implement the Camera initialization (adding its sensors), a virtual method in the camera that transforms raw data from the device into Frames, and virtual methods in the sensors that perform the processing of Frames. A client can at any time ask a Camera (or a sensor) for its most recent data (which pops a processed frame from the buffer ring). FIG.21shows an exemplary, non-limiting Unified Modeling Language (UML) diagram of the components of the API from the system to the backend, as a flow2100. Flow2100moves between a user API2102, which exposes capabilities of the device abstraction layer to a user application; a standard client library2104for the device abstraction layer; and a backend2106for a particular device, which in this non-limiting example is called "MMScanner". A device is a software representation of a logical input device. It can provide several types of data, by exposing capabilities. A device will often correspond to a physical device, but this does not have to be the case. Multiple physical devices may be combined or otherwise abstracted.
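Returning to the camera acquisition loop described above, a minimal sketch of the free buffer pool and processed buffer ring, with the push back done atomically for all sensors of a camera under a shared mutex (the structure and names are illustrative assumptions):

```python
from collections import deque
import threading

class Sensor:
    """Holds a free buffer pool and a ring of processed frames."""
    def __init__(self, n_buffers=4, buf_size=1024):
        self.free = deque(bytearray(buf_size) for _ in range(n_buffers))
        self.processed = deque()

    def process(self, buf):
        return buf  # back-end specific Frame processing would happen here

class Camera:
    def __init__(self, sensors):
        self.sensors = sensors
        self.push_mutex = threading.Lock()  # shared across all sensors

    def loop_once(self, device_has_data, read_raw):
        if not device_has_data():
            return
        if any(not sensor.free for sensor in self.sensors):
            return  # a buffer pool is empty: skip the incoming data
        filled = []
        for sensor in self.sensors:
            buf = sensor.free.popleft()  # ask the sensor for a free buffer
            read_raw(buf)                # fill it with raw device data
            filled.append((sensor, sensor.process(buf)))
        with self.push_mutex:            # push back atomically for all sensors
            for sensor, frame in filled:
                sensor.processed.append(frame)

    def most_recent(self, sensor):
        """Pop the most recent processed frame, as a client would."""
        with self.push_mutex:
            return sensor.processed.pop() if sensor.processed else None
```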
For example, to create a new device that represents multiple physical devices as described above, one would then write a new "composition" backend, providing a device that wraps the drivers to the multiple devices and provides the relevant capabilities. A device is preferably defined in terms of its capabilities. Various non-limiting examples are as follows: mmscanner camera (provides RGB, RGB stereo and depth); kinect (provides bodypose+pointcloud); Colibri inertial sensors (provides data from N inertial sensors); EEG acquisition device (provides eeg channels). A Capability represents the capability of a device to provide certain types of data. The data itself will be represented by a sequence of Frame objects, either queried directly through a getNextFrame( ) method, or by means of registering a callback function. Capabilities do not necessarily match exactly the sensors of a device. Take for example a depth camera; it might expose RawDepth and PointCloud capabilities that rely on data from the same sensor but provide it in different forms. In order for the user application to be able to receive the data, capabilities preferably communicate in terms of frame types. Data is communicated according to frames, so that the data type and format is clear. Each frame includes a device ID and a timestamp. Non-limiting examples of data types include RawDepthFrame, PointCloudFrame, colorImageFrame, StereoColorFrame, BodyPoseFrame, and EEGChannelsFrame. Capabilities exposed by a single device are assumed to be synchronized, i.e., the Frame timestamps are assumed to be coherent among the different Frame objects returned by the Capabilities of a single device. A user is of course free to use several devices (and thus extend the range of Capabilities), but the timestamps might not be coherent in that case, and synchronization is up to the user. Non-limiting examples of capabilities include: ColorImage; StereoColorImages; RawDepthMap; PointCloud; BodyPose; InertialSensorOrientation; InertialSensorAcceleration; EEGChannels. As shown with regard to user API2102, a DeviceProvider provides the entry point into client library2104for clients (client applications). The client asks the DeviceProvider for available devices that match a set of required capabilities. Underneath, the DeviceProvider loads all the available backends (preferably dynamically), and asks each backend (such as backend2106) whether it supports the relevant capabilities and, if that is the case, asks for the devices it can currently find. The backends are responsible for returning AbstractDevice instances for consumption by Decaf end users. Their only obligation is to provide a Backend instance that can be dynamically loaded by the DeviceProvider, and to correctly inherit AbstractDevice to provide Capabilities. User API2102interacts with an abstracted device, in order to obtain the data required according to the requested capabilities of the device. Client library2104provides the data according to the abstracted device model, such that user API2102does not need to implement details of any device drivers or other device-specific features. Client library2104receives the data from the backend of the device, such as backend2106, which does include any wrappers for device drivers and other device-specific features. Optionally, various sensor types and data sources are used to obtain the necessary data for further analysis as described above.
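A minimal sketch of the DeviceProvider flow just described, in which backends are asked whether they support a set of required capabilities and, if so, for their currently available devices. The Python rendering and any method names beyond those named in the text (such as getNextFrame) are illustrative assumptions:

```python
class AbstractDevice:
    """Software representation of a logical input device."""
    def capabilities(self):
        raise NotImplementedError

    def get_next_frame(self, capability):
        raise NotImplementedError  # mirrors the getNextFrame( ) query

class Backend:
    def provided_capabilities(self):
        raise NotImplementedError

    def supports(self, required):
        return set(required) <= self.provided_capabilities()

    def find_devices(self):
        raise NotImplementedError

class DeviceProvider:
    """Entry point: asks each backend for devices matching capabilities."""
    def __init__(self, backends):
        self.backends = backends  # in practice, loaded dynamically

    def devices_with(self, required_capabilities):
        found = []
        for backend in self.backends:
            if backend.supports(required_capabilities):
                found.extend(backend.find_devices())
        return found

# e.g., provider.devices_with({"ColorImage", "RawDepthMap"})
```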
Table 5 shows a non-limiting list of such sensor devices and data sources, the capabilities of each device (according to the data that can be provided), the raw data format and some exemplary user formats.

TABLE 5 - Devices and Resultant Data
Device | Capabilities (several per device) | Raw data | Typical user formats
Depth camera | Depth map | Amplitude image, Phase image | Depth image, Point Cloud
Kinect2 | Color, Depth map | IR Image (amplitude), Depth map, Color image | Depth image, Point Cloud, Color
Color Camera such as SimpleWebcam | Color | Color image (BGR) | Color
Lyra | Depth map, Color, IMU | Amplitude image, Phase image, Color image, IMU | Depth image, Point Cloud, Color, IMU
Elvira | Depth map, Color, IMU, Biosignals | Phases, Bayered stereo, IMU, int24 biosignal data | Depth image, Point Cloud, Color, IMU, int24 or scaled float biosignal data
Mindleap | IMU, Biosignals | IMU, int24 biosignal data | IMU, int24 or scaled float biosignals data

SimpleWebcam is a backend made with OpenCV that retrieves the first available standard camera on a local computer. The LYRA device is described for example in U.S. patent application Ser. No. 15/891,235, filed on 7 Feb. 2018, owned in common with the present application and hereby set forth as if fully incorporated herein. The ELVIRA device is described for example in U.S. patent application Ser. No. 15/555,561, filed on 5 Sep. 2017, owned in common with the present application and hereby set forth as if fully incorporated herein. FIG.22shows an exemplary, non-limiting, illustrative system according to at least some embodiments. As shown, a system2200features a GUI2202, a control2204and an interactive content platform2206. Interactive content platform2206serves one or more games, handles device abstraction and provides data. The previously described device backends, client libraries, etc. ofFIGS.17-21may optionally be implemented at interactive content platform2206. Optionally, interactive content platform2206also provides various reports and analyses of activities, including but not limited to activities provided with regard to therapeutic activities, calibration activities and assessment activities. Control2204supports interactive content platform2206by passing parameters to interactive content platform2206(from GUI2202) and receiving events from interactive content platform2206(which are then sent to GUI2202). Optionally, transmission of information across system2200is performed according to a remote object protocol, such as GRPC (Google remote procedure call) for example. Control2204may include two servers for the remote protocol, shown as two GRPC servers, for supporting remote object protocol communication with each of interactive content platform2206and GUI2202. The games are managed through control2204, through an activity manager and a session manager. Each session preferably includes a plurality of games and/or activities, so the session manager manages the overall session. Control2204also preferably includes an object model, which is a data model. This data model is able to receive (load) data from the database, manipulate it and push the data back to the database. The data model includes information necessary for operation of system2200, including but not limited to data about the patient and therapist: credentials, parameters, type of illness, other necessary definitions and so forth. GUI2202also includes an object model, which it uses to exchange objects, to display data and to receive commands; as well as state controllers and view controllers. FIG.23shows an exemplary, non-limiting, illustrative flow for operating the exemplary system ofFIG.22.
As shown in a flow2300, the process begins with loading an activity into the system in2302. Next, parameters and other options are provided from the control to the interactive game platform in2304. Tracking calibration is performed and tracking begins in2306; this stage may repeat until tracking has been established. Tracking is performed through the interactive game platform; the control may indicate to the user or therapist, through the GUI, whether further calibration is required. In2308, the system indicates that it is ready for gameplay to begin, after tracking has been adapted and is ready. A message to this effect may be displayed through the GUI. During gameplay in2310, the activity may be paused through the GUI by the user or the therapist, and may then be restarted. Once the command to stop has been provided through the GUI in2312, tracking and other processes shut down, and gameplay stops. Any and all references to publications or other documents, including but not limited to, patents, patent applications, articles, webpages, books, etc., presented in the present application, are herein incorporated by reference in their entirety. Example embodiments of the devices, systems and methods have been described herein. As noted elsewhere, these embodiments have been described for illustrative purposes only and are not limiting. Other embodiments are possible and are covered by the disclosure, which will be apparent from the teachings contained herein. Thus, the breadth and scope of the disclosure should not be limited by any of the above-described embodiments but should be defined only in accordance with claims supported by the present disclosure and their equivalents. Moreover, embodiments of the subject disclosure may include methods, systems and devices which may further include any and all elements from any other disclosed methods, systems, and devices, including any and all elements corresponding to systems, methods and apparatuses/devices for tracking a body or portions thereof. In other words, elements from one or another disclosed embodiment may be interchangeable with elements from other disclosed embodiments. In addition, one or more features/elements of disclosed embodiments may be removed and still result in patentable subject matter (and thus, resulting in yet more embodiments of the subject disclosure). Correspondingly, some embodiments of the present disclosure may be patentably distinct from one and/or another reference by specifically lacking one or more elements/features. In other words, claims to certain embodiments may contain negative limitations to specifically exclude one or more elements/features resulting in embodiments which are patentably distinct from the prior art which include such features/elements. | 71,726 |
11857336 | DETAILED DESCRIPTION In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. Systems and methods in accordance with various embodiments of the present disclosure may utilize one or more wearable devices to detect arousal responses (e.g., activations, sympathetic nervous system responses, etc.) from electrodermal activity (EDA) and present the information to the user in order to track and/or manage their arousal responses. Embodiments may incorporate EDA measurement within one or more wearable devices, and may include information from other sensors, in order to detect changes in a user's arousal response. These changes may be compared against a baseline that may be determined over a period of time, and once responses exceeding a threshold level are detected, the user may receive a prompt or notification providing information to the user regarding their responses to certain stimuli. In this manner, the user may use the information to track how he or she responds to different stimuli and/or to manage certain activities in his or her life. Thus, the present disclosure is directed to a technical solution/benefit to the technical problem relating to users accurately and timely receiving information regarding their arousal responses due to activation of their sympathetic nervous system. In various embodiments, the user's sympathetic responses may be presented for visual inspection, for example, as a graph showing the user's responses over time. These responses may then be correlated with a user's activities in order to identify activities that activate the user's sympathetic nervous system, which may be detectable via the EDA measurements by measuring skin conductance responses to sweating or the like. In certain embodiments, the user may be prompted to interact with one or more wearable devices to provide data for the EDA measurements. In other embodiments, continuous EDA measurements may be obtained. In embodiments, the user may be prompted to provide the information. By way of example, the user may receive a message on their wearable device to interact with the device in order to obtain an EDA measurement. This may be in accordance with another activity the user is undergoing, such as completing a workout, meditating, or the like. The measurements may provide information to the user regarding these activities. For example, the user's response to a workout may be different if the workout is more challenging or if the user exerted a particularly large amount of energy. Determining how the user is responding to the stimulus may be useful for developing training and/or recovery routines. In another example, during a meditation session, arousal responses may be indicative of the user losing focus or having his or her mind drift, which may enable a prompt to alert the user to facilitate changes or improvements to their meditation session. Various other activities may also be monitored and/or evaluated when using the EDA measurements, such as lie detection, stress evaluation, mental health screenings, women's health screening, and the like.
In various embodiments, EDA measurements may be obtained using a user's fingers, which may provide more accurate information than, for example, a user's arms or chest. In various embodiments, a single lead or multi-lead portion of the wearable device may provide a region where the user may position their fingers (or other body parts), measure a skin conductance, and determine a value associated with a user's arousal response associated with the sympathetic nervous system. The leads may be arranged to provide a comfortable, ergonomic position for the user. Accordingly, if the user is comfortable during the measurement or the measurement is not onerous for the user, the user is more likely to utilize the functionality of the wearable device. Referring now to the drawings,FIG.1illustrates an example embodiment of a user100wearing a user monitoring device102around a wrist104of the user100. The user monitoring device102may also be referred to herein as a wearable or a fitness tracker, and may also include devices that are worn around the chest, legs, head, or other body part, or a device to be clipped or otherwise attached onto an article of clothing worn by the user100. The user monitoring device102may collectively or respectively capture data related to any one or more of caloric energy expenditure, floors climbed or descended, heart rate, heart rate variability, heart rate recovery, location and/or heading (e.g., through GPS), elevation, ambulatory speed and/or distance traveled, swimming lap count, bicycle distance and/or speed, blood pressure, blood glucose, skin conduction, skin and/or body temperature, electromyography data, electroencephalographic data, weight, body fat, respiration rate and patterns, various body movements, among others. Additional data may be provided from an external source, e.g., the user may input their height, weight, age, stride, or other data in a user profile on a fitness-tracking website or application, and such information may be used in combination with some of the above-described data to make certain evaluations or to determine user behaviors, such as the distance traveled or calories burned by the user. The user monitoring device102may also measure or calculate metrics related to the environment around the user such as barometric pressure, weather conditions, light exposure, noise exposure, and magnetic field. In some embodiments, the user monitoring device102may be connected to a network directly, or via an intermediary device. For example, the user monitoring device102may be connected to the intermediary device via a BLUETOOTH® connection, and the intermediary device may be connected to the network via an Internet connection. In various embodiments, a user may be associated with a user account, and the user account may be associated with (i.e., signed onto) a plurality of different networked devices. In some embodiments, additional devices may provide any of the abovementioned data among other data, and/or receive the data for various processing or analysis. The additional devices may include a computer, a server, a handheld device, a temperature regulation device, or a vehicle, among others. In the illustrated embodiment, the user monitoring device102may include a conductive bottom plate that is positioned against a wrist of the user100, e.g., where the user monitoring device102is worn on the wrist. In such embodiments, the conductive bottom plate may serve as a first lead (e.g., first electrode) for obtaining various measurement data, such as for ECG.
Additionally, in certain embodiments, one or more additional conductive areas (e.g., leads, electrodes) may be integrated into other areas of the user monitoring device102. A location of the various additional leads may be particularly selected to enable certain types of measurements (e.g., ECG, EDA, etc.) and/or provide an ergonomic position for the user100while the data is collected. For example, it would be uncomfortable for the user to place a bottom of their foot on the user monitoring device102. However, placing their opposite hand along a top of the user monitoring device102may be easy, and as a result, the user100may be more likely to utilize the features of the user monitoring device102. FIG.2illustrates an example wearable device200that can be utilized in accordance with various embodiments. In this example, the wearable device200is a smart watch, although fitness trackers and other types of devices can be utilized as well. Further, although the wearable device200is shown to be worn on a user's wrist, similar to the example ofFIG.1, there can be other types of devices worn on, or proximate to, other portions of a user's body as well, such as on a finger, in an ear, around a chest, etc. For many of these devices there will be at least some amount of wireless connectivity, enabling data transfer between a networked device or computing device and the wearable device. This might take the form of a BLUETOOTH® connection enabling specified data to be synchronized between a user computing device and the wearable device, or a cellular or Wi-Fi connection enabling data to be transmitted across at least one network such as the Internet or a cellular network, among other such options. Still referring toFIG.2, the wearable device200includes a housing210having a display screen208. More specifically, as shown, the housing210may be a multi-part component, such that the housing210includes a first part212and a second part214. However, it should be appreciated that there may be additional parts. Moreover, in embodiments, additional components may be utilized to form one or more parts. For example, the wearable device200includes a conductive ring206that may form a portion of a bezel of the housing210. Moreover, in an embodiment, the housing210may enclose one or more electronic components, which may be utilized to collect and/or analyze data, as described herein. For example, the housing210may enclose appropriate circuitry for ECG and/or EDA measurements. By way of example, a combination electrode may be utilized, such as the electrode described in U.S. patent application Ser. No. 16/457,363, which is hereby incorporated by reference in its entirety. The combination electrode of the '363 application may include an electrode that makes contact with the user, such as at the wrist, and a second electrode may be embedded into the bezel or surface of the wearable device, such as the configuration illustrated in U.S. patent application Ser. No. 16/935,583, which is hereby incorporated by reference in its entirety. Furthermore, as shown in the '583 application, multiple electrodes may be embedded into the wearable device face or at another location. Additionally, circuitry for performing measurements, such as ECG and/or EDA measurements, may be utilized in embodiments of the present disclosure. An example circuitry arrangement is shown in U.S. patent application Ser. No. 16/457,337, which is hereby incorporated by reference in its entirety.
In various embodiments, ergonomics and user comfort are emphasized in order to decrease the likelihood of user error and/or encourage users to utilize the functionality of the wearable device200. For example, increasing the surface area of the electrodes may prevent shorts across both electrodes, because it will be easier for the user to identify a region associated with one of the two electrodes. As mentioned, there can be various types of functionality offered by such a wearable device, as may relate to the health of a person wearing the device. One such type of functionality relates to electrocardiography (ECG). ECG is a process that can be used to determine and/or track the activity of the heart of a person over a period of time. In order to obtain ECG data, a conductive electrode is often brought into contact with the skin of the person to be monitored. In the example embodiment ofFIG.2, the user is wearing the wearable device200on his or her arm202, and can bring one or more fingers204(or palm, etc.) into contact with an exposed electrode of the wearable device200. In this example, the electrode is at least a portion of the conductive ring206that is part of the housing210around the display screen208of the wearable device200, although other types and forms of electrodes can be used as well within the scope of the various embodiments. In further embodiments, the housing may also be referred to as a bezel that forms an outline around the display screen208. The electrode can be connected to an ECG circuit that can detect small changes in electrical charge on the skin that vary with the user's heartbeat. ECG data can be monitored over time to attempt to determine irregularities in heartbeat that might indicate serious cardiac issues. Conventional ECG measurements are obtained by measuring the electrical potential of the heart over a period of time, typically corresponding to multiple cardiac cycles. By a user placing his or her fingers on the exposed electrode for a minimum period of time, during which ECG measurements are taken, an application executing on the wearable device200can collect and analyze the ECG data and provide feedback to the user. As mentioned, ECG measurements are taken across opposite extremities. For example, with reference toFIG.2, a first point may be along the arm202(e.g., via a conductor on the underside of the wearable device200), and a second point at the fingers204of the opposite arm contacting the conductor ring206. As a result, the signal evaluates a circuit including the heart. Because the ECG is incorporated in the wearable device200, both electrodes that form a single lead ECG sensor are incorporated into the wearable device200, unlike traditional methods that may utilize two or more separate sensors. In various embodiments, the electrodes are electrically isolated from the device to facilitate appropriate functionality. A user's skin impedance may decrease the reliability of data captured for the ECG measurement. As a result, reducing skin impedance is desirable. Accordingly, increasing the contact surface area for each electrode is desirable. For example, forming the first electrode from substantially all of the bottom face of the wearable device200may increase the surface area in contact with the arm202, while increasing a size of the conductive ring206may also decrease skin impedance.
Moreover, as noted above, in various embodiments the second electrode may include one or more plated electrodes or other conductive elements that are integrated into the display screen208, thereby increasing the conductive surface area for the second electrode. Additionally, to further prevent user error during electrical measurements, locations for the electrodes may be particularly selected to provide comfort for users to maintain a stationary pose. For example, measurement data may be acquired over a period of time, such as 60 seconds, or longer. Movement may disrupt the measurements, and therefore, the location of the electrodes may be selected such that the user can maintain position to acquire the data. The locations may be particularly selected with user comfort in mind, as well as to provide flexibility to enable the user to interact with the wearable device in a variety of ways. For example, different users may have ailments that make interaction with the devices difficult (e.g., arthritis, carpal tunnel, amputations, etc.), so offering a wide variety of potential interaction methods provides a greater range of use across a wider group of users. As noted above, embodiments of the present disclosure may include a system that includes at least two independent electrodes, electrically isolated within the wearable device200. For example, a first electrode may utilize a bottom surface area of the wearable device200(not pictured inFIG.2). The bottom surface area, or a portion thereof, may make contact with the wrist202. As will be appreciated, the bottom surface area may have one of the largest continuous surface areas for the wearable device200, thereby achieving a goal described above to increase surface area and reduce skin impedance. In various embodiments, the first electrode is formed from a conductive electrode material and may be electrically isolated from the remainder of the wearable device200, for example, by incorporating insulating material into the wearable device200, such as plastics and the like. A second electrode may utilize a top surface area, or a portion thereof, of the wearable device200. This area may be positioned such that a user can easily access the area and intuitively interact with the area. In a variety of embodiments, the display screen208may occupy a large portion of the top surface area, as users may prefer large displays. Accordingly, the second electrode may be incorporated into the bezel surrounding the display screen208, as illustrated by the conductive ring206. However, it should be appreciated that, in various embodiments, at least a portion of the display screen208may be utilized as the second electrode using methods that would not occlude the display, for example, by coating the display screen208in a conductive material (e.g., indium tin oxide), local extension of the sensor to not occlude the display, and the like. Moreover, in various embodiments, the display screen208may be omitted from the wearable device200. As a result, the top surface could be substantially identical to the bottom surface. It should be appreciated that the second electrode may further comprise two separate, electrically isolated electrodes. For example, in various embodiments, a portion of the conductive ring206may be segmented and isolated from a different portion of the ring.
As noted above, embodiments of the present disclosure may go beyond configurations that include a single top electrode and a single bottom electrode to include multiple leads along the wearable device (e.g., more than one lead on the top, more than one lead on the bottom, more than one lead on both the top and bottom). Adding an electrode to the top of the wearable device200, as described below, increases the number of available ECG leads and provides additional wearer configurations for obtaining measurement information. By way of example, configurations that include two electrodes along the top of the wearable device enable multiple different positions to obtain information, such as right arm to left leg and left arm to left leg, as well as augmented limb leads (e.g., aVR, aVL, and aVF). These additional leads may enable screening of a broader range of non-rhythm-based conditions, and could ergonomically work by users holding the top of the device with two thumbs and pressing the bottom of the device into their leg, as an example. While single-lead ECG can provide accurate information with regard to beat timing (also called RR interval), which can be sufficient for diagnosing many arrhythmias, multiple leads can provide additional information to more accurately diagnose conditions which rely on ECG morphology (shape). For example, sinus tachycardia is a regular rhythm that is faster than normal, and can be diagnosed from a single lead. Several conditions can cause a deviation of the electrical axis or an abnormal R-wave amplitude, which is best observed using multiple leads. Embodiments described herein may also use multi-lead ECG to examine other morphologies, such as ST-elevation or depression. Moreover, as noted above, including at least two sensors on the top may also enable EDA measurements. As described, embodiments of the present disclosure enable multiple different user configurations for obtaining measurements using two or more leads, such as for ECG or EDA. EDA is a measurement of skin electrical resistance or conductance, which reflects the sympathetic activation in the secretory activity of sweat glands. It has been used in psychological research to understand autonomic nervous system activity and identify acute stress events induced by physical, mental, or cognitive stimuli. The skin conductance/resistance can be measured by injecting a small current between two electrodes in contact with the skin. In many instances, EDA is measured at the fingers, palm, or feet. However, in certain embodiments, wrist measurements may also be utilized for EDA. Utilizing configurations having two electrodes at the top surface of the wearable device, EDA measurements may be obtained from users in a simple, compact, and comfortable form factor.
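As a simple illustration of the skin conductance measurement just described, in which a small known current is injected between the two electrodes, conductance follows from Ohm's law (the function name and units handling are illustrative assumptions):

```python
def skin_conductance_uS(injected_current_uA, measured_voltage_V):
    """Ohm's law: conductance = current / voltage.

    With a small known excitation current (microamperes) and the
    voltage measured across the two skin electrodes (volts),
    G = I / V gives conductance in microsiemens; resistance is 1 / G.
    """
    if measured_voltage_V <= 0.0:
        raise ValueError("measured voltage must be positive")
    return injected_current_uA / measured_voltage_V  # µA / V = µS
```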
The user may also want the smartwatch302to be able to communicate with a service provider308, or other such entity, that is able to obtain and process data from the smartwatch and provide functionality that may not otherwise be available on the smartwatch or the applications installed on the individual devices. The smartwatch may be able to communicate with the service provider308through at least one network310, such as the Internet or a cellular network, or may communicate over a wireless connection such as Bluetooth® to one of the individual devices, which can then communicate over the at least one network. There may be a number of other types of, or reasons for, communications in various embodiments. In addition to simply being able to communicate, a user may also want the devices to be able to communicate in a number of ways or with certain aspects. For example, the user may want communications between the devices to be secure, particularly where the data may include personal health data or other such communications. The device or application providers may also be required to secure this information in at least some situations. The user may want the devices to be able to communicate with each other concurrently, rather than sequentially. This may be particularly true where pairing may be required, as the user may prefer that each device be paired at most once, or that no manual pairing is required. The user may also desire the communications to be as standards-based as possible, not only so that little manual intervention is required on the part of the user but also so that the devices can communicate with as many other types of devices as possible, which is often not the case for various proprietary formats. A user may thus desire to be able to walk into a room with one device and have the device automatically be able to communicate with another target device with little to no effort on the part of the user. In various conventional approaches, a device will utilize a communication technology such as Wi-Fi to communicate with other devices using wireless local area networking (WLAN). Smaller or lower capacity devices, such as many Internet of Things (IoT) devices, instead utilize a communication technology such as Bluetooth®, and in particular Bluetooth Low Energy (BLE), which has very low power consumption. An environment300such as that illustrated inFIG.3enables data to be captured, processed, and displayed in a number of different ways. For example, data may be captured using sensors on a smartwatch302, but due to limited resources on that smartwatch the data may be transferred to a smart phone304or service provider system308(or a cloud resource) for processing, and results of that processing may then be presented back to that user on the smartwatch302, smart phone304, or another such device associated with that user, such as a tablet computer306. In at least some embodiments, a user may also be able to provide input such as health data using an interface on any of these devices, which can then be considered when making that determination. In at least one embodiment, data determined for a user can be used to determine state information, such as may relate to a current arousal level or state of that user. At least some of this data can be determined using sensors or components able to measure or detect aspects of a user, while other data may be manually input by that user or otherwise obtained.
In at least one embodiment, an arousal determination algorithm can be utilized that takes as input a number of different inputs, where different inputs can be obtained manually, automatically, or otherwise. In at least one embodiment, such an algorithm can take various types of factors to identify events or activations related to arousal or "stress" events that activate a sympathetic nervous system response. FIG.4illustrates a graphical representation400of an activation402(e.g., arousal event, arousal activation, response, stressor, etc.) provided on a display404of a user device406to the user. In this embodiment, the graphical representation400is provided from EDA information, which may be acquired by user device406, as described above. In this instance, a peak detection algorithm may be utilized to determine a peak or spike408that is above a baseline level410. In various embodiments, the peak or spike may be determined by evaluating a percentage difference from the baseline or may be evaluated in terms of a threshold, among other possible determinations (a minimal illustrative sketch of such peak detection is provided at the end of this passage). In various embodiments, the information may be sampled over time to determine a user's response to a stimulus and during the time subsequent to the stimulus. By way of example only, different "bins" of time may be captured and averaged or normalized in order to provide the EDA information to the user. It should be appreciated that information may not be provided as a line graph, as illustrated inFIG.4, but in various embodiments may be provided in various other graphical representations in order to provide information to the user regarding an elevated arousal level responsive to a stimulus. In various embodiments, the information is provided to the user to illustrate their response to an event, which as noted above may be described as an arousal event, an activation, a sympathetic arousal, or the like. The information may be EDA information, which provides information regarding a skin conductance responsive to sweat or moisture on the skin. Accordingly, the illustrated embodiment may provide information to the user to inform them of a particular response to a stimulus. By way of example only, the user may notice that they have an activation or peak prior to a meeting with their boss, and as a result, the user may learn that performing a deep breathing exercise or other calming activity may be beneficial prior to the meeting. Embodiments of the present disclosure may incorporate various information in order to generate the graphical representation400, which may include additional information other than EDA information. For example, the user device406may include other sensors, which may provide context to the information. The user may have a first baseline when working and a second baseline while exercising. Accordingly, the user device406may be used to determine the user is exercising (e.g., elevated heart rate, set to exercise mode, GPS information, etc.) and may compare the user's response differently from when the user is resting, because the stimulus response may be known and expected while the user is exercising or doing another known strenuous activity. As will be described below, in various embodiments the user device406may prompt the user to provide the EDA information. FIG.5includes a representation500of the display404of the user device406providing a prompt502to the user to begin a session for recording EDA information. The prompt may be responsive to the user selecting or completing a certain mode.
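The following is a minimal sketch of the kind of peak detection described above with reference toFIG.4, flagging samples that exceed a percentage difference from the baseline level; the 10% default and the names are illustrative assumptions rather than values from this disclosure:

```python
def detect_activations(eda_uS, baseline_uS, pct_threshold=0.10):
    """Indices where skin conductance rises above the baseline level.

    `eda_uS` is a sequence of EDA samples in microsiemens and
    `baseline_uS` the user's baseline; a new activation is flagged on
    each rising edge past the percentage criterion.
    """
    if baseline_uS <= 0.0:
        raise ValueError("baseline must be positive")
    events, in_peak = [], False
    for i, sample in enumerate(eda_uS):
        elevated = (sample - baseline_uS) / baseline_uS > pct_threshold
        if elevated and not in_peak:
            events.append(i)  # rising edge marks a new activation
        in_peak = elevated
    return events
```

A threshold-based variant would simply compare each sample against a fixed level rather than a percentage difference from the baseline.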
FIG.5includes a representation500of the display404of the user device406providing a prompt502to the user to begin a session for recording EDA information. The prompt may be responsive to the user selecting or completing a certain mode. For example, after the user completes an exercise activity, the user device406may prompt the user to provide the information in order to analyze the user's response to the exercise. In various embodiments, as noted above, this could determine whether the user was particularly worn out by the exercise event, which may prompt the user to obtain additional recovery in order to maximize performance. In other embodiments, the prompt502may be provided before beginning an event. For example, the user may begin a meditation session and may be prompted to provide EDA information to obtain a baseline measurement of their arousal level. The user may continue to provide the information during the session in order to track their arousal levels throughout the session, which may be indicative of the user's focus or a quality of the session. For example, spikes or peaks may be indicative of distractions. FIG.6Aincludes a representation600of the display404of the user device406providing a prompt602to the user to begin a calming exercise after detecting an activation (e.g., arousal) that exceeds a threshold. In this embodiment, the user may have provided EDA information prior to receiving the prompt or the information may be obtained from a continuous measurement and/or from a combination of measurements received from one or more sensors. In this example, the prompt602recommends a breathing exercise for the user. FIG.6Bincludes a representation650of the display404of the user device406providing a graphical representation652and a message654to the user. The graphical representation652includes the EDA information in a visual format so that the user can see how his or her response decreases over time, which may be based at least in part on a guided breathing session provided by the user device406. Additionally, the message654may provide information to the user throughout the session, for example, by including instructions. In this example, the user has received an affirmative message indicating that the breathing exercise has reduced their arousal levels, which may provide an incentive for the user to continue using the feature. In various embodiments, the user device406may include one or more features or sensors that enable detection of whether the user has properly positioned themselves to provide the EDA information. For example, the user device406may include a pressure sensor that determines whether the user has sufficiently engaged the display404to provide the information. Additionally, other sensors and components may also be utilized in embodiments, such as a timer to alert the user that measurements have been obtained or provide a countdown, haptic feedback to provide instructions, and the like. FIGS.7A and7Bprovide representations700,750of a meditation session that utilizes embodiments of the present disclosure. In this example, as shown inFIG.7A, the display404presents a prompt702for the user to activate an EDA measurement using the user device406. In various embodiments, the prompt702may be associated with a selected activity, which in this case is a meditation session. During the session, the user's arousal responses may be monitored, which may be indicative of distractions during the session. In certain embodiments, the user will maintain contact with the user device406during the session to provide the EDA information.
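The session monitoring just described might be summarized as in the following sketch, which counts distinct EDA elevations during a session as candidate distractions. The baseline convention, threshold, and scoring are illustrative assumptions.

```python
import numpy as np

def session_quality(eda, fs=4.0, threshold=0.15):
    """Summarize a meditation session by counting EDA spikes over baseline.

    The first quarter of the session serves as the baseline period; any later
    sample rising more than `threshold` (fractional) above that baseline is
    treated as a candidate distraction. All parameters are illustrative.
    """
    eda = np.asarray(eda, dtype=float)
    baseline = eda[: len(eda) // 4].mean()
    elevated = (eda - baseline) / baseline > threshold
    # Count rising edges so one sustained spike is one event, not many samples.
    events = int(np.sum(elevated[1:] & ~elevated[:-1]) + elevated[0])
    minutes = len(eda) / fs / 60.0
    return {"events": events, "events_per_min": events / minutes}

eda = np.full(2400, 2.0)     # a 10-minute session sampled at 4 Hz
eda[800:840] += 0.6          # one distraction spike mid-session
print(session_quality(eda))  # -> {'events': 1, 'events_per_min': 0.1}
```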
FIG.7Bincludes the representation750in which the user device406provides feedback to the user in the form of an auditory sound752and/or vibration754indicative of an alert for the user regarding the elevated arousal levels. In this example, the user's EDA information may be processed to identify one or more arousal events (e.g., peaks). Upon identification, the user may be provided with feedback to make adjustments to reduce the arousal levels, thereby providing an improved meditation session. It should be appreciated that messages or the like may be provided on the display404at this time, but it may be advantageous to black out or otherwise provide a blank screen during the meditation session. Embodiments of the present disclosure may provide, directly within a consumer product such as a wearable, useful EDA information that users may evaluate and then respond to. For example, the user may receive an acute measurement or a continuous measurement to provide information regarding how the user responds to certain events, such as stressful or arousing events, in order to identify steps or techniques for controlling or anticipating the response, among other benefits. Furthermore, the information may enable self-discovery for the user to monitor their arousal levels at different points in the day in order to identify triggering events or situations where arousal may spike, which may provide information to the user to make changes in their lifestyle to control these events. Various embodiments of the present disclosure may be utilized in order to provide various levels of functionality within the wearable device described herein. By way of example, in various embodiments, the wearable device may be utilized for lie detection. For example, when an individual is nervous, he or she may exhibit a nervous system response, which may be detectable via EDA. When paired with one or more other sensors, such as a heart rate sensor, information similar to that utilized in a polygraph machine may be obtained in a smaller form factor, which may provide additional use cases. Various embodiments may also be utilized to calculate or determine a stress metric, as described in Application Ser. No. 62/062,818 filed Aug. 7, 2020. Accordingly, the EDA information may provide a piece of information to calculate additional scores or tracking information that may improve a user's day-to-day life or provide additional information that may facilitate improvements or changes to a user's lifestyle. Embodiments, as noted above, may enable self-discovery for the user to identify one or more stressful or arousing events to enable a user to prepare and potentially utilize tactics to overcome the response. For example, a user may be preparing to give a speech and the anticipation may cause an arousal event detectable by the wearable device. While the user may feel confident about the speech, the information provided by the wearable may enable the user to perform one or more calming exercises prior to the speech in order to improve performance. Additionally, providing the information to the user may be indicative that the user should practice their speech again to improve their confidence.
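As a hedged sketch of pairing EDA with another sensor such as the heart rate sensor mentioned above, the snippet below blends the two streams into a single arousal-style score. The z-score fusion and the weights are assumptions for illustration and are not the stress metric of the application referenced above.

```python
import numpy as np

def combined_arousal_score(eda, heart_rate, w_eda=0.6, w_hr=0.4):
    """Blend EDA and heart-rate readings into a single arousal-style score.

    Each stream is reduced to a z-score of its latest sample against its own
    mean and standard deviation, then the two are mixed with illustrative
    weights. The fusion scheme and weights are assumptions for illustration.
    """
    def latest_zscore(x):
        x = np.asarray(x, dtype=float)
        sd = x.std()
        return (x[-1] - x.mean()) / sd if sd > 0 else 0.0

    return w_eda * latest_zscore(eda) + w_hr * latest_zscore(heart_rate)

# A recent EDA rise paired with an elevated heart rate yields a high score.
eda = [2.0, 2.0, 2.1, 2.0, 2.9]   # microsiemens
hr = [62, 64, 63, 65, 88]         # beats per minute
print(round(combined_arousal_score(eda, hr), 2))
```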
Various embodiments may incorporate the EDA information, and potentially one or more other components of sensor information, for mental health screenings or diagnosis. As an example, a muted sensory response or an elevated sensory response may be indicative of one or more conditions, when paired with additional information that may be provided by a licensed mental healthcare practitioner. Furthermore, the EDA information may also provide information to the practitioner if the user provides consent to share that information with the practitioner, such as helping the user identify anxiety-causing events. Accordingly, the information may be utilized to detect an arousal response, which may be higher or lower than expected, in order to facilitate diagnosis. In certain embodiments, user health and wellness may also benefit from the readings obtained by the EDA information to determine arousal responses. For example, with respect to women's health, hot flashes may be detected based on a user's response (e.g., increased sweating), which may facilitate diagnosis of the condition. In certain embodiments, a wearable may provide information to the user to predict or otherwise explain the occurrence, which may reduce the anxiety felt by the user. In the example of the hot flash, the display may provide a message informing the user they are having a hot flash, provide techniques for controlling it, and the like, which may help calm the user. As described herein, in various embodiments cross-correlations with other sensors may also be provided and utilized. By way of example, a sleep or stress score may be improved by incorporating arousal responses, which may be indicative of disturbances during sleep and/or high stress events during the day. Additionally, other information may further be utilized to inform or improve the detection and classification of arousal events. For example, heart rate, heart rate variation, respiratory rate, and the like may be indicators of an arousal event. However, as noted above, the additional information may also be used to disregard an arousal event, such as an elevated response during known strenuous exercise. In this manner, the arousal responses may have their thresholds and/or baselines adjusted based on information from the other sensors. Additionally, various sensors and sensor information may provide an indication of when to begin recording data. For example, an accelerometer within the wearable may indicate that the user is not sitting still, which may lead to noisy or unreliable information. Embodiments of the present disclosure may also be particularly selected to perform high frequency measurements (e.g., approximately 125 Hz), compared to traditional techniques that utilized lower frequencies. As a result, existing on-board power supplies and systems may be utilized, which decreases the weight and complexity of the circuit design for the wearable. For example, in various embodiments, high frequency measurements may provide reduced quality signals; however, the presence of the other components of the wearable device may drive design of the circuitry for conducting EDA measurements. Various embodiments may include one or more switching circuits to improve data acquisition and/or increase a current intensity in order to improve signal quality.
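The threshold and baseline adjustments described above might look like the following sketch, which scales an arousal threshold using context flags from other sensors and gates recording on accelerometer stillness. The multipliers, flags, and stillness limit are illustrative assumptions.

```python
import numpy as np

def effective_threshold(base_threshold, context):
    """Raise or relax the arousal threshold using context from other sensors.

    `context` is a dict of illustrative flags; the multipliers are assumptions
    chosen only to show the adjustment idea, not values from this disclosure.
    """
    threshold = base_threshold
    if context.get("exercising"):   # strenuous activity: elevation is expected
        threshold *= 2.0
    if context.get("sleeping"):     # quiet period: be more sensitive
        threshold *= 0.5
    return threshold

def still_enough_to_record(accel_magnitude, still_limit=0.05):
    """Gate EDA recording on accelerometer stillness to avoid noisy data."""
    return float(np.std(accel_magnitude)) < still_limit

accel = np.random.default_rng(0).normal(1.0, 0.2, 100)   # jittery wrist
print(effective_threshold(0.10, {"exercising": True}))   # -> 0.2
print(still_enough_to_record(accel))                     # -> False (too much motion)
```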
Referring now toFIG.8, a flow chart of an embodiment of a method800for identifying an arousal event according to the present disclosure is illustrated. It should be understood that, for any process discussed herein, there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments. In this example, as shown at (802), the method800includes activating an arousal monitoring service for a user device. For example, the user may load a program stored on the wearable device that receives sensor information to evaluate or determine a user's arousal levels responsive to events in their life. As shown at (804), the method800includes evaluating whether data acquisition is sufficient, such as the user having a proper connection with a lead or electrode. If not, as shown at (806), the method800includes displaying instructions for the user to adjust contact for acquisition. If yes, as shown at (808), the method800includes collecting data indicative of arousal events for a period of time. In various embodiments, the data collection may be EDA information that may be utilized to determine arousal events, which may be indicated by an increased skin conductivity. As shown at (810), the method800includes transforming the data, for example, by evaluating the information continuously for a given time period and/or over certain periods of time that are then averaged or normalized to enable smoothing of the data. Additional transformations may be applied to the data, such as changing a format to enable interaction with one or more other devices. In certain embodiments, the transformation is a derivative. As shown at (812), the method800includes evaluating the data to determine whether one or more segments or bins exceeds a threshold. For example, the threshold may be a minimum threshold value that is indicative of an arousal event. Additionally, the threshold may be a percentage increase over a calculated baseline event for the user. Various other methods may also be utilized to identify the threshold. For example, the threshold may be related to a sudden change that exceeds a certain percentage of the data for a time period preceding it. Additionally, the threshold may also be evaluated in terms of how many different peaks or changes there are over a period of time. Thereafter, as shown at (814), the method800includes providing a notification to the user regarding the arousal event. For example, the user may be informed of the arousal event and provided with a suggestion to perform one or more exercises to relax.
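A minimal sketch of the FIG.8 flow, under the assumption that device I/O is available through simple callables, might read as follows; the smoothing, derivative transform, and threshold are illustrative stand-ins for steps (804) through (814).

```python
import numpy as np

def run_arousal_monitor(read_contact_ok, read_eda, notify,
                        fs=4.0, window_s=60.0, threshold=0.05):
    """One pass of a FIG. 8-style loop: check contact, collect, transform, test.

    `read_contact_ok`, `read_eda`, and `notify` are hypothetical callables
    standing in for device I/O; the smoothing and derivative transform follow
    the steps described above, with illustrative parameters.
    """
    if not read_contact_ok():                       # steps (804)/(806)
        notify("Adjust the band so the electrodes contact your skin.")
        return

    n = int(window_s * fs)                          # step (808): collect data
    samples = np.array([read_eda() for _ in range(n)], dtype=float)

    # Step (810): smooth into one-second bins, then take a discrete derivative.
    step = int(fs)
    bins = samples[: len(samples) // step * step].reshape(-1, step).mean(axis=1)
    derivative = np.diff(bins)

    if np.any(derivative > threshold):              # step (812): threshold test
        notify("Elevated arousal detected - consider a breathing exercise.")  # (814)

# Simulated device: good contact, flat EDA trace with one abrupt rise.
trace = iter(np.concatenate([np.full(120, 2.0), np.full(120, 2.4)]))
run_arousal_monitor(lambda: True, lambda: next(trace), print)
```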
FIG.9is a flow chart of an embodiment of a method900for determining a baseline level for a user according to the present disclosure. The baseline level for the user may refer to a baseline EDA response and/or baseline arousal level. The baseline may correspond to a value of the response and/or a number of response elevations over a period of time. In this example, as shown at (902), the method900includes activating an arousal event determination for a user associated with a wearable device. For example, the user may selectively load an application that records EDA information to determine an arousal level. As shown at (904), the method900includes receiving data from one or more sensors of the wearable device. The data may be correlated to a state, such as “not aroused” and “aroused.” As shown at (906), the method900includes determining, based at least in part on the data, a baseline value for the user. In various embodiments, the user may receive instructions for providing the information to determine the baseline. For example, the user may be instructed to sit quietly for a short period of time prior to providing the information. The baseline, as described above, may be correlated to a response level or to a number of elevated responses over a period of time. In various embodiments, the baseline may be updated over time. For example, an average arousal may be determined over different periods of the day and then averaged to generate a baseline arousal. In this manner, the user's baseline may be adjusted over time to accommodate different events or changes in the user's life. As shown at (908), the method900includes storing the baseline value for the user. In certain embodiments, different values may be stored for different activities, such as a baseline for working, a baseline for exercising, etc. In certain embodiments, as shown at (910), the method900may include collecting baseline information from a plurality of other users, e.g., who have provided permission to have their information collected and anonymized. As shown at (912), the method900may include classifying the user. For example, the classifications may be based on a user's demographic information, location, job, or the like. In this manner, an initial baseline may be provided and then the user's information may be adjusted from the baseline that may be predicted based on other similar users.
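The baseline storage and updating of method900might be sketched as below, where an exponential moving average stands in for averaging over different periods of the day and a cohort-derived default seeds a new user's baseline. The class name, smoothing factor, and default are illustrative assumptions.

```python
class BaselineStore:
    """Keep a per-activity EDA baseline that adapts slowly over time.

    An exponential moving average stands in for the "averaged over different
    periods of the day" update described above; the smoothing factor and the
    cohort-seeded default are illustrative assumptions.
    """

    def __init__(self, cohort_default=2.0, alpha=0.1):
        self.baselines = {}                   # e.g., {"working": 2.1, "exercising": 3.4}
        self.cohort_default = cohort_default  # seed predicted from similar users
        self.alpha = alpha                    # how quickly new readings move the baseline

    def update(self, activity, measured_level):
        old = self.baselines.get(activity, self.cohort_default)
        self.baselines[activity] = (1 - self.alpha) * old + self.alpha * measured_level
        return self.baselines[activity]

store = BaselineStore()
for level in [2.2, 2.3, 2.1]:
    store.update("working", level)   # baseline drifts slowly toward readings
store.update("exercising", 3.5)      # a separate baseline per activity
print(store.baselines)
```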
FIG.10is a flow chart of an embodiment of a method1000for providing instructions to a user to complete an event responsive to the user's arousal state. In this example, a message is provided to a user indicative of an elevated arousal state1002. For example, a wearable device may provide a message or alert indicative of a detected elevated arousal state, which may be obtained from EDA information acquired via the device. The user may provide an input requesting participation in an event1004. By way of example, the event may be a guided meditation application that provides the user with breathing exercises that may be particularly selected to reduce the user's present elevated arousal state. Information may be received related to an arousal state of the user1006. For example, the user may be instructed to position a portion of their body on the wearable to enable data collection. The device may then provide instructions for completing the event1008. In the example of a guided meditation application, the instructions may relate to breathing exercises. During the event, the user's information and arousal state may be monitored. In various embodiments, the event may be a timed event or the event may continue until the user's arousal state reaches a determined level. In this manner, a user in a heightened state of arousal may be notified and then instructed to take action to reduce their arousal state. FIG.11illustrates a set of basic components1100of one or more devices according to the present disclosure, in accordance with various embodiments of the present disclosure. In this example, the components1100include at least one processor1102for executing instructions that can be stored in a memory device or element1104. As would be apparent to one of ordinary skill in the art, the device can include many types of memory, data storage or computer-readable media: a first data storage for program instructions for execution by the processor(s)1102; the same or separate storage that can be used for images or data; a removable memory that can be available for sharing information with other devices; and any number of communication approaches that can be available for sharing with other devices. The components1100also include at least one type of display1106, such as a touch screen, electronic ink (e-ink), organic light emitting diode (OLED) or liquid crystal display (LCD), although devices such as servers might convey information via other means, such as through a system of lights and data transmissions. Further, the components1100include one or more networking devices1108, such as a port, network interface card, or wireless transceiver that enables communication over at least one network. Moreover, as shown, the components1100include at least one input/output element1110able to receive conventional input from a user. The input/output element1110can include, for example, a push button, touch pad, touch screen, wheel, joystick, keyboard, mouse, trackball, keypad or any other such device or element whereby a user can input a command to the device. Further, the input/output element(s)1110may also be connected by a wireless, infrared, Bluetooth, or other link in some embodiments. In some embodiments, however, such a device might not include any buttons at all and might be controlled only through a combination of visual and audio commands such that a user can control the device without having to be in contact with the device. As discussed, different approaches can be implemented in various environments in accordance with the described embodiments. As will be appreciated, although a Web-based environment is used for purposes of explanation in several examples presented herein, different environments may be used, as appropriate, to implement various embodiments. The components1100may also be used with an electronic client device, which can include any appropriate device operable to send and receive requests, messages or information over an appropriate network and convey information back to a user of the device. Examples of such client devices include personal computers, cell phones, handheld messaging devices, laptop computers, set-top boxes, personal data assistants, electronic book readers and the like. The network can include any appropriate network, including an intranet, the Internet, a cellular network, a local area network or any other such network or combination thereof. Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed herein in detail. Communication over the network can be enabled via wired or wireless connections and combinations thereof. In this example, the network includes the Internet, as the environment includes a Web server for receiving requests and serving content in response thereto, although for other networks, an alternative device serving a similar purpose could be used, as would be apparent to one of ordinary skill in the art. The illustrative environment includes at least one application server and a data store. It should be understood that there can be several application servers, layers or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. As used herein, the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed or clustered environment.
The application server can include any appropriate hardware and software for integrating with the data store as needed to execute aspects of one or more applications for the client device and handling a majority of the data access and business logic for an application. The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML or another appropriate structured language in this example. The handling of all requests and responses, as well as the delivery of content between the client device and the application server, can be handled by the Web server. It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein. The data store can include several separate data tables, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store illustrated includes mechanisms for storing content (e.g., production data) and user information, which can be used to serve content for the production side. The data store is also shown to include a mechanism for storing log or session data. It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store. The data store is operable, through logic associated therewith, to receive instructions from the application server and obtain, update, or otherwise process data in response thereto. In one example, a user might submit a search request for a certain type of item. In this case, the data store might access the user information to verify the identity of the user and can access the catalog detail information to obtain information about items of that type. The information can then be returned to the user, such as in a results listing on a Web page that the user is able to view via a browser on the user device. Information for a particular item of interest can be viewed in a dedicated page or window of the browser. Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server and typically will include a computer-readable medium storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein. The environment in one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated.
Thus, the depiction of the systems herein should be taken as being illustrative in nature and not limiting to the scope of the disclosure. The various embodiments can be further implemented in a wide variety of operating environments, which in some cases can include one or more user computers or computing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or notebook computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Devices capable of generating events or requests can also include wearable computers (e.g., smart watches or glasses), VR headsets, Internet of Things (IoT) devices, voice command recognition systems, and the like. Such a system can also include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices can also include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network. Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, FTP, UPnP, NFS, and CIFS. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network and any combination thereof. In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including HTTP servers, FTP servers, CGI servers, data servers, Java servers and business application servers. The server(s) may also be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++ or any scripting language, such as Perl, Python or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase® and IBM® as well as open-source servers such as MySQL, Postgres, SQLite, MongoDB, and any other server capable of storing, retrieving, and accessing structured or unstructured data. Database servers may include table-based servers, document-based servers, unstructured servers, relational servers, non-relational servers or combinations of these and/or other database servers. The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In certain embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate.
Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch-sensitive display element or keypad) and at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc. Such devices can also include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device) and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium representing remote, local, fixed and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. Further, connection to other computing devices such as network input/output devices may be employed. Storage media and other non-transitory computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. While various embodiments of the invention have been described above, it should be understood that they have been presented by way of example only, and not by way of limitation. Likewise, the various diagrams may depict an example architectural or other configuration for the disclosure, which is done to aid in understanding the features and functionality that can be included in the disclosure. The disclosure is not restricted to the illustrated example architectures or configurations, but can be implemented using a variety of alternative architectures and configurations. 
Additionally, although the disclosure is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described. They instead can be applied, alone or in some combination, to one or more of the other embodiments of the disclosure, whether or not such embodiments are described, and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments. Unless otherwise defined, all terms (including technical and scientific terms) are to be given their ordinary and customary meaning to a person of ordinary skill in the art, and are not to be limited to a special or customized meaning unless expressly so defined herein. It should be noted that the use of particular terminology when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being re-defined herein to be restricted to include any specific characteristics of the features or aspects of the disclosure with which that terminology is associated. Terms and phrases used in this application, and variations thereof, especially in the appended claims, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing, the term ‘including’ should be read to mean ‘including, without limitation,’ ‘including but not limited to,’ or the like; the term ‘comprising’ as used herein is synonymous with ‘including,’ ‘containing,’ or ‘characterized by,’ and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps; the term ‘having’ should be interpreted as ‘having at least;’ the term ‘includes’ should be interpreted as ‘includes but is not limited to;’ the term ‘example’ is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; adjectives such as ‘known’, ‘normal’, ‘standard’, and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass known, normal, or standard technologies that may be available or known now or at any time in the future; and use of terms like ‘preferably,’ ‘preferred,’ ‘desired,’ or ‘desirable,’ and words of similar meaning should not be understood as implying that certain features are critical, essential, or even important to the structure or function of the invention, but instead as merely intended to highlight alternative or additional features that may or may not be utilized in a particular embodiment of the invention. Likewise, a group of items linked with the conjunction ‘and’ should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as ‘and/or’ unless expressly stated otherwise. Similarly, a group of items linked with the conjunction ‘or’ should not be read as requiring mutual exclusivity among that group, but rather should be read as ‘and/or’ unless expressly stated otherwise. Where a range of values is provided, it is understood that the upper and lower limits, and each intervening value between the upper and lower limits of the range, are encompassed within the embodiments.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity. The indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope. It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. 
For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” All numbers expressing quantities of ingredients, reaction conditions, and so forth used in the specification are to be understood as being modified in all instances by the term ‘about.’ Accordingly, unless indicated to the contrary, the numerical parameters set forth herein are approximations that may vary depending upon the desired properties sought to be obtained. At the very least, and not as an attempt to limit the application of the doctrine of equivalents to the scope of any claims in any application claiming priority to the present application, each numerical parameter should be construed in light of the number of significant digits and ordinary rounding approaches. All of the features disclosed in this specification (including any accompanying exhibits, claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The disclosure is not restricted to the details of any foregoing embodiments. The disclosure extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. Referring now toFIG.12, a schematic diagram of one embodiment of a circuit1200for conducting EDA measurements according to the present disclosure is illustrated. It should be appreciated that the circuit1200is provided for illustrative purposes only and in various embodiments different configurations may be used. Additionally, various features have been omitted for clarity, such as resistors and ground connections. As shown, the illustrated circuit1200includes a power supply1202, which may be provided by a battery of a wearable device. As will be appreciated, the power supply may be a DC power supply and may also provide electrical energy to other components within the wearable device, and as a result, operation of the circuit1200may be regulated by how the power supply1202interacts with various other components of the wearable device. The circuit1200may further include a resistance circuit1204that receives power from the power supply1202to measure skin conductance, as an example. In various embodiments, the resistance circuit1204may include one or more electrodes that the user may contact, for example, with the user's fingers, with the user's palm and wrist, and/or any combination, as described herein. Accordingly, the resultant resistance is provided as input to various operational amplifiers (op amps)1206to provide an increased output potential. An output circuit1208may receive the information from the series of op amps1206and may, in various embodiments, transmit the output to one or more controllers for further computation.
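By way of illustration only, the sketch below shows how a controller might convert the output of such a measurement chain back into skin conductance, assuming a transimpedance-style front end; the drive voltage, feedback resistance, and gain are hypothetical component values, not taken from the circuit1200.

```python
def skin_conductance_us(v_out, v_drive=0.5, r_feedback=1.0e6, gain=10.0):
    """Recover skin conductance (microsiemens) from a measured output voltage.

    Assumes a transimpedance-style front end: a small drive voltage across the
    electrodes produces a skin current i = G * v_drive, a feedback resistor
    converts that current to i * r_feedback volts, and later op-amp stages
    multiply by `gain`. All component values here are hypothetical.
    """
    i_skin = v_out / (gain * r_feedback)   # amps flowing through the skin
    g_siemens = i_skin / v_drive           # conductance G = I / V
    return g_siemens * 1e6                 # express in microsiemens

# With these assumed values, a 1.0 V reading maps to 0.2 uS of conductance.
print(skin_conductance_us(1.0))
```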
Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the disclosure is not intended to be limited to the implementations shown herein, but is to be accorded the widest scope consistent with the principles and features disclosed herein. Certain embodiments of the disclosure are encompassed in the claim set listed below or presented in the future. | 69,248 |
11857337 | Reference will now be made to the exemplary embodiments illustrated, and specific language will be used herein to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. DETAILED DESCRIPTION As data becomes increasingly easier to access, individuals can increasingly desire to monitor, collect, and/or analyze various aspects of their environment and/or physiology. For example, a sport or fitness enthusiast may desire to monitor, collect, and/or analyze various aspects of the fitness routine (such as their heart rate, workout intensity, workout duration, and so forth) to determine how to improve and adjust their fitness routine to increase its efficacy. In another example, an asthmatic may desire to monitor, collect, and/or analyze environmental condition information (such as air quality, pollen count, and so forth) to determine and avoid conditions that may aggravate their condition. Described herein are apparatuses and methods for power adjustments of a user measurement device. In one apparatus, a processing element is coupled to a first sensor interface and a second sensor interface. The processing element measures a physiological measurement via the first sensor interface and measures an amount of activity of the apparatus via the second sensor interface. A physiological measurement may be any measurement related to a living body, such as a human's body or an animal's body. The physiological measurement is a measurement made to assess body functions. Physiological measurements may be very simple, such as the measurement of body or ambient temperature, or they may be more complicated, for example measuring how well the heart is functioning by taking an ECG (electrocardiogram). Physiological measurements may also include motion and/or movement of the body, including measures of speed, acceleration, position, absolute or relative location, or the like. In some cases, these physiological measurements may be taken to determine an activity level for power management, as described herein. The physiological measurements can be medical measurements, such as heart rate measurement data, hydration level measurement data, blood pressure measurement data, oxygenation level, and so forth; a representation of a set of environmental measurements for the individual, such as an ambient temperature of a location proximate the individual and a location of the individual; a representation of a trending of physiological and/or environmental information, such as increases or decreases in the physiological and/or environmental information; a representation of a performance of an individual; or a representation of the individual compared to the group overall. The processing element performs a power adjustment activity in view of the amount of activity. For example, the power adjustment activity may be to perform any one or more of the following: adjust a number of different physiological measurements to take; adjust a frequency of taking physiological measurements; turn off one or more systems; or adjust a type, frequency, data rate, number of channels, and/or power at which to communicate data. In another embodiment, the power management system described herein may adjust one or more sensors in view of a rate of change in the measurements of the one or more sensors.
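As a rough sketch of a power adjustment activity performed in view of an amount of activity, the snippet below scales the physiological sampling rate with accelerometer motion. The activity metric and the rate tiers are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

def choose_sample_rate(accel_window, low=0.02, high=0.3):
    """Pick a physiological sampling rate from recent accelerometer activity.

    Low motion suggests rest, so the device samples slowly to save power; high
    motion suggests exercise, so it samples quickly. The activity metric (the
    standard deviation of acceleration magnitude) and the rate tiers are
    illustrative assumptions.
    """
    activity = float(np.std(accel_window))
    if activity < low:
        return 0.2    # Hz: near-idle, one reading every five seconds
    if activity < high:
        return 1.0    # Hz: light activity
    return 25.0       # Hz: workout-level activity

rng = np.random.default_rng(1)
print(choose_sample_rate(rng.normal(1.0, 0.01, 50)))   # resting  -> 0.2
print(choose_sample_rate(rng.normal(1.0, 0.5, 50)))    # exercise -> 25.0
```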
Power management of portable or mobile electronic devices can be used to extend the useful operation of the devices by reducing the duty cycle of the “on” time period for the device. Rechargeable batteries often employ chemistries such as nickel cadmium (NiCd), nickel metal hydride (NiMH) and various lithium-based chemistries. Conventionally, the rechargeable batteries are recharged by supplying electrical energy (e.g., current) through wires that are connected via the electronic device to the battery, such as through a battery management system (BMS). The electronic device may have external electrical contacts for receiving electrical energy from an external power supply to recharge the batteries. The external electrical contacts, however, may be prone to poor performance, or even failure, due to becoming dirty or corroded. Further, electrical contacts are undesirable for use with electronic devices where a possibility of electrical shorting may occur, such as when the device may be exposed to water. The exposed electrical contacts make the electronic device difficult to waterproof. Alternatively, contact-less charging using induction has been used in electronic devices, such as motorized toothbrushes and cordless phones. An inductively-rechargeable electronic device may be placed on an inductive charger. The inductive charger includes a primary coil and the electronic device includes a secondary coil. Alternating current flows through the primary coil of the inductive charger, causing a varying magnetic field that is intersected by the secondary coil in the electronic device to receive energy. The energy received by the secondary coil can be used to charge the battery in the electronic device. Although inductive charging obviates the need for contacts, inductive charging may not be practical for use with smaller electronic devices because the smaller devices do not have the volumetric space to accommodate a coil large enough to efficiently transfer energy to charge a battery. A smaller electronic device may be a device that is suitable to be worn by, or carried by, a person, such as a wearable device. Wearable devices may be attached directly to the person or may be attached to an article that can be attached, worn, or otherwise disposed on a person or equipment being used by a person. Further, the coil may interfere with the ability of the electronic device to communicate using radio frequency. Power management of portable or mobile electronic devices can be used to extend the use period of the devices. While the operational time of portable or mobile electronic devices may be limited by the volumetric size or power density of a battery coupled to the devices, power management can be used to extend the period of time a user can use the portable electronic device or the amount of time the portable electronic device is powered on and available for use.
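A worked example of the duty-cycle point, under hypothetical current and capacity figures, is sketched below: runtime follows from the duty-weighted average current.

```python
def runtime_hours(capacity_mah, i_on_ma, i_sleep_ma, duty_cycle):
    """Estimate battery life from a simple duty-cycle model.

    Average current is the duty-weighted mix of the active and sleep currents;
    runtime is capacity divided by that average. The capacity and currents in
    the example are hypothetical, not specifications of any device herein.
    """
    i_avg = duty_cycle * i_on_ma + (1 - duty_cycle) * i_sleep_ma
    return capacity_mah / i_avg

# Cutting the "on" duty cycle from 10% to 1% extends runtime roughly eightfold.
print(round(runtime_hours(200, 20.0, 0.05, 0.10), 1))   # ~97.8 hours
print(round(runtime_hours(200, 20.0, 0.05, 0.01), 1))   # ~801.6 hours
```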
FIG.1Aillustrates a bottom view of the UMD110. In one example, the UMD110can be a wearable UMD, such as a wearable wristband, that can be used to take selected measurements using one or more sensors120according to one embodiment. In one embodiment, the one or more sensors120can be a bio-impedance sensor, an accelerometer, a three dimensional (3D) accelerometer, a gyroscope, a light sensor, an optical sensor, a spectroscopy sensor, a heart rate monitor, a blood pressure sensor, a pulse oximeter sensor, and so forth. In one example, the UMD110can include a sensor module130to receive measurement information from the one or more sensors120and analyze the measurement information to determine selected physiological information and/or medical information, such as a hydration level of the user, cardiac information of the user (e.g., blood pressure or heart rate), a blood oxygen level of the user, and so forth. In another example, the UMD110can include a power management system140to perform power management for the UMD110. The power management system140may be hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. The power management system140is described in more detail below, including a power management module described below with respect toFIG.11. FIG.1Billustrates a schematic view of the UMD150according to one embodiment. The UMD150can include a wireless transfer coil152to receive wireless power from another wireless transfer coil of another device, such as a wireless charger. A rectifier or impedance matching device154can convert the wireless power into direct current (DC) power and transfer the DC power to a battery management system (BMS)156. In one example, the BMS156can direct the DC power to a power storage device157, such as a rechargeable battery, to replenish power to the power storage device157(e.g., recharge the rechargeable battery). In another example, the BMS156can direct the DC power to a processing device158. In another example, the BMS156can direct power from the power storage device157to the processing device158. The processing device158can include a processor, a memory storage device, an analog-to-digital converter, and/or a digital-to-analog converter. In one example, the processing device158can be coupled to a communication module160to communicate data with other devices using an antenna162. The antenna162can be configured to communicate on a wireless network and/or a cellular network. In another example, the processing device158can be coupled to one or more external sensors164. The external sensors164can be sensors that take measurements external to the user, such as non-physiological measurements or non-direct engagement measurements of the user. The external sensors164can include a global positioning system (GPS) device, a triangulation device, a humidity sensor, an altimeter, and so forth. In another example, the processing device158can be coupled to a sensor array178. The sensor array178can include one or more sensors to engage a user of the UMD to take measurements. The sensor array178can include: a bio-impedance spectroscopy sensor168, an optical sensor170, an electrocardiogram (ECG) sensor172, a temperature sensor174(such as a thermistor or thermocouple), an accelerometer176, and so forth. The user monitoring system can also include an analysis tool that can analyze input data. In one example, the analysis tool can be integrated into the UMD or coupled to the UMD. In another example, the analysis tool can be integrated into or coupled to a cloud-based computing system that can communicate with the UMD. In one example, the analysis tool can receive input data from a memory of the UMD and/or the cloud-based computing system. In another example, the analysis tool can receive the input data in real-time from the UMD and/or the cloud-based computing system. The input data can include measurement data and/or user information.
The measurement data can include information collected using one or more sensors in a sensor array of the UMD, environmental sensors, Newtonian sensors, third-party sensors or devices, and so forth (as discussed in the preceding and proceeding paragraphs). The input data can include profile information, such as: user profile information; group profile information; and so forth. The analysis tool can determine a correlation between different data points or data sets of the input data (such as data collected from different sensors or devices). The analysis tool can determine different types of correlations of the data points or data sets. In one example, the analysis tool can use a Pearson product moment correlation coefficient algorithm to measure the extent to which two variables of input data may be related. In another example, the analysis tool can determine relations between variables of input data based on a similarity of rankings of different data points. In another example, the analysis tool can use a multiple regression algorithm to determine a correlation between a data set or a data point that may be defined as a dependent variable and one or more other data sets or other data points defined as independent variables. In another example, the analysis tool can determine a correlation between different categories or types of information in the input data. In one example, when the analysis tool determines a correlation between the different data points or data sets, the analysis tool can use the correlation information to predict when a first event or condition may occur based on a second event or condition occurring. In another example, when the analysis tool determines a correlation between the different data points or data sets, the analysis tool can use the correlation information to determine a diagnosis or result data. In another example, when the analysis tool determines a correlation between the different data points or data sets, the analysis tool can use the correlation information to determine a cause of a condition and/or event. In one example, the analysis tool can determine a correlation between physiological data and environmental data. For example, the input data can include hydration level data (physiological data) and ambient temperature data (environmental data). In this example, the analysis tool may identify a correlation between an increase in the ambient temperature and a decrease in a hydration level of a user. The analysis tool may identify the correlation between the ambient temperature and the hydration level by using a regression algorithm with the ambient temperature as an independent variable and the hydration level as a dependent variable. When the analysis tool has identified the correlation between the ambient temperature and the hydration level, the analysis tool can predict a change in a hydration level of a user or a rate of change of a hydration level of a user based on the ambient temperature. In another example, the analysis tool can determine a correlation between an altitude level and an oxygenation level of a user. For example, the analysis tool can determine a correlation between an increase in the altitude level and a decrease in the oxygenation level of the user. When the analysis tool determines the correlation between the altitude level and the oxygenation level, the analysis tool can predict a change in the oxygenation level of the user based on the altitude level the user may be at.
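The Pearson correlation and multiple regression analyses described above might be carried out as in the following sketch, using synthetic data shaped like the temperature-and-hydration example; the coefficients and noise level are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic data mimicking the example above: hydration falls as ambient
# temperature and heart rate rise. The coefficients are invented for the demo.
temperature = rng.uniform(15, 40, n)     # deg C (environmental data)
heart_rate = rng.uniform(55, 160, n)     # bpm (physiological data)
hydration = 100 - 0.8 * temperature - 0.1 * heart_rate + rng.normal(0, 2, n)

# Pearson product-moment correlation between temperature and hydration.
r = np.corrcoef(temperature, hydration)[0, 1]
print(f"temperature vs hydration: r = {r:.2f}")   # strongly negative

# Multiple regression: hydration (dependent variable) on temperature and
# heart rate (independent variables), via least squares with an intercept.
X = np.column_stack([np.ones(n), temperature, heart_rate])
coef, *_ = np.linalg.lstsq(X, hydration, rcond=None)
print(f"intercept={coef[0]:.1f}, temp={coef[1]:.2f}, hr={coef[2]:.2f}")
```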
The preceding examples are intended for purposes of illustration and are not intended to be limiting. The analysis tool can identify a correlation between various data points, data sets, and/or data types. In one example, the analysis tool can identify a correlation between location information and physiological data of a user. For example, the analysis tool can determine a location of a user for a period of time, such as by using GPS sensor data or triangulation sensor data. In this example, the analysis tool can receive physiological measurement data (such as heart rate measurement data, hydration level measurement data, blood pressure measurement data, and so forth). The analysis tool can correlate the location of the user with the physiological measurement data to increase an accuracy of data analysis, a diagnosis, or result data and/or provide additional details regarding a cause of physiological measurements. In one example, the analysis tool can determine that a user is at work in an office location. When the analysis tool detects an increase in a heart rate or a blood pressure of a user, the analysis tool can correlate heart rate or blood pressure data with the location information to determine a cause of the increase in heart rate or blood pressure. For example, when a heart rate or blood pressure of an individual increases while at work in an office, the analysis tool may determine that the heart rate or blood pressure increase may be due to psychological causes (such as stress) rather than physiological causes (such as exercising or working out) because the user is at a location where an individual is not likely to physically exert himself or herself. In one example, the analysis tool can use the multiple regression algorithm to determine a correlation between multiple physiological and/or environmental data points or data sets. For example, the analysis tool may receive heart rate data, skin temperature data, and hydration level data of a user. In this example, the analysis tool can determine a correlation between both a heart rate and skin temperature of an individual and a hydration level of the individual. For example, the analysis tool may determine that as the heart rate and the skin temperature of an individual increase, the hydration level of the individual may decrease. In one example, the analysis tool can filter out a correlation determination (e.g., a determination that data points or data sets may be correlated) when the correlation level is below a threshold level. For example, when the analysis tool determines that there may be a 30 percent correlation between a skin temperature of an individual and a hydration level of an individual, the analysis tool may filter out the correlation information when determining a cause of a condition or event, a result of the data, or a diagnosis. In another example, the analysis tool can discount or weight a correlation determination based on the correlation level of the correlation determination. For example, when the analysis tool determines that there may only be a 30 percent correlation between a skin temperature of an individual and a hydration level of an individual, the analysis tool may discount or assign a lower weight to the correlation determination (relative to a higher correlation percentage such as 90 percent) when determining a cause of a condition or event, a result of the data, or a diagnosis.
In one example, the analysis tool can assign weights to different factors, such as: physiological data, environmental data, time of day, and so forth. In one example, the analysis tool can assign a first weight to hydration level data of an individual and a second weight to heart rate data of an individual when determining a performance level of an individual, as discussed in the proceeding paragraphs. In this example, when determining a performance level, the analysis tool may assign a higher weight to the hydration level data relative to the heart rate data. In one example, the analysis tool can use predetermined weights for the different physiological and/or environmental data. In another example, the analysis tool can receive user-defined or predefined weights from an input device indicating the weights for the different physiological and/or environmental data. In another example, the analysis tool can determine the weights to assign to the different physiological and/or environmental data based on correlation levels of the different physiological and/or environmental data. For example, when a correlation level between a humidity level and a heart rate of an individual may be relatively low over a threshold period of time and/or under a threshold number of different conditions, the analysis tool may assign a low weight to humidity level data when determining a cause of a change in heart rate of a user. In one example, the analysis tool can assign different weights to physiological measurements based on environmental data. For example, based on a location of an individual, the analysis tool can assign a first weight to a heart rate measurement and a second weight to a respiration sensor measurement. In another example, the analysis tool can assign weights to different causes, diagnoses, or results, such as: an exertion level (e.g., working out or sleeping), a stress level, an amount of time a user sleeps each day, and so forth. In another example, the analysis tool can use environmental data to determine a cause of a physiological diagnosis. For example, when the user is located at a fitness facility, the analysis tool can increase a weight for a physical exertion (e.g., working out) diagnosis as a cause of physiological measurements (such as an increase in a heart rate or decrease in a hydration level of a user). In another example, when a user is located at home in bed, the analysis tool can correlate a location of the user with physiological measurements of the user. In this example, the analysis tool can determine that a decrease in heart rate may be due to an individual going to sleep when a user is located in their bedroom for a threshold period of time. The analysis tool can track, sort, and/or filter input data. The input data can include: user schedule information, such as a daily schedule of the user; survey information, such as information received from surveys of individuals; research information, such as clinical research information or academic research information associated with one or more measurements of the UMD; and so forth. The analysis tool can use scheduling information of the user in determining an expected or probable activity of a user. For example, when a user is a member of a sports team, the user's schedule may include practice schedule information and/or game schedule information.
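The weighted combination of factors described earlier in this paragraph might look like the following sketch; the factor names, normalized readings, and weights are hypothetical.

```python
# Minimal sketch (illustrative only): combining weighted factors, with
# hydration level data weighted more heavily than heart rate data.
def weighted_score(factors, weights):
    """factors and weights are dicts keyed by factor name; values in 0..1."""
    total_weight = sum(weights[name] for name in factors)
    return sum(factors[name] * weights[name] for name in factors) / total_weight

factors = {"hydration_level": 0.8, "heart_rate": 0.6}  # normalized readings
weights = {"hydration_level": 0.7, "heart_rate": 0.3}  # hydration weighted higher

print(f"performance level = {weighted_score(factors, weights):.2f}")  # 0.74
```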
Continuing the sports-team schedule example, the analysis tool can use the schedule information to anticipate that the user may be participating in physical activity and provide recommendations to the user based on the physical activity. For example, the analysis tool can determine that the user may be practicing in 2 hours, can determine a current hydration level of the user, and can communicate a recommendation (such as via a sensory indicator of the UMD) to increase the hydration level of the user. A sensory indicator can include: a visual indication device, such as a display; an auditory indication device, such as a speaker; and/or a touch indication device, such as a vibrator. In another example, the analysis tool can use the scheduling information in correlation with a location of the user to determine an expected or probable activity. For example, the scheduling information may indicate that the user may be scheduled to attend a lecture at a physical fitness facility and the analysis tool can adjust a location-based recommendation in view of the scheduling information. In this example, while typically the analysis tool may recommend increasing a hydration level of the user in anticipation of physical activity based on the location information (e.g., the physical fitness facility), the analysis tool can adjust the recommendation in view of the scheduling information that the user may be attending a lecture rather than working out. The analysis tool can store historical or previous input data of the user (as discussed in the proceeding and preceding paragraphs). In one example, the analysis tool can be integrated into the UMD and can store the historical information on a memory device of the UMD. In another example, the analysis tool can be integrated into the UMD and can use a communication module of the UMD to store the information on a memory device coupled to the UMD, such as a cloud-based storage device or a memory device of another computing device. In another example, the analysis tool can be part of a cloud-based system or the other computing device. The analysis tool may filter and/or sort input data. In one example, the analysis tool can receive a filter or sort command from the UMD or an input device to filter and/or sort the input data. In another example, the filter or sort command can include filter parameters and/or sort parameters. The filter parameters and/or sort parameters can include: a time of day, a day of the week, group information, individual information, a measurement type, measurement duration, an activity type, profile information, injury information, performance level information, and so forth. In another example, the analysis tool can sort and/or filter the input data based on a trending of input data. For example, the analysis tool can identify input data that may be trending in an increasing direction or a decreasing direction and can sort the input data based on the trending. In this example, different measurements for an individual may be trending in different directions, such as a hydration level of an individual trending towards a dehydrated level while an activity level of the individual may be stable or stagnant. The analysis tool can sort the input data to display hydration level trending because the individual may be trending towards dehydration while filtering out the activity level information. In one example, the analysis tool can sort or filter the input data on a group level. In another example, the analysis tool can sort or filter the input data on an individual level.
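The trend-based sorting and filtering just described might be sketched as below; the slope heuristic, stream names, and stability band are assumptions.

```python
# Minimal sketch (illustrative only): sort measurement streams by trend
# magnitude and filter out stable (stagnant) streams.
def slope(series):
    """Average per-sample change; positive = increasing, negative = decreasing."""
    return (series[-1] - series[0]) / (len(series) - 1)

streams = {
    "hydration_level": [3.9, 3.5, 3.1, 2.7],     # trending toward dehydration
    "activity_level": [0.52, 0.51, 0.52, 0.51],  # stable/stagnant
}

STABLE_BAND = 0.05  # assumed band below which a trend is treated as flat
trending = {k: slope(v) for k, v in streams.items() if abs(slope(v)) > STABLE_BAND}
for name, s in sorted(trending.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(name, "trend per sample =", round(s, 2))  # only hydration is shown
```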
The analysis tool may receive survey information and/or research information from an input device. For example, the analysis tool may receive survey information that includes: gender information, age information, physical weight information, general health information, family information, fitness level information, and so forth. In one example, the analysis tool can determine a correlation between the survey information and the input data. For example, the analysis tool can correlate the age, weight, fitness level, and general health level of a user with survey information from other individuals to determine a correlation between the survey information for the individual and the other individuals. In this example, the analysis tool can set a baseline for a measurement of the UMD for the individual based on baselines for the other individuals with the same or similar survey information. In another example, the analysis tool can correlate the user information with research information (such as research papers, clinical studies, and so forth). In one example, the analysis tool can communicate information to a display of the UMD. The information can include diagnosis information, recommended actions, trending information, raw data, and so forth. In another example, the analysis tool can communicate the information to another computing device or cloud-based server to display the information using a graphical user interface (GUI). In one example, the UMD, the other computing device, the cloud-based server, and/or the analysis tool can aggregate data received from one or more UMD users. For example, members of a sports team can each use a UMD to collect information. Analysis tools of the UMDs can analyze and communicate information for each of the members of the sports team to the cloud-based server. The GUI can provide a user with different representations of the data or information for display. In one example, the GUI can receive representation information indicating information to display via the GUI. In one example, the representation information can include a group indicator, an individual indicator, and/or a detailed indicator. The group indicator can indicate to display the representation information in a group format, such as a team format for a sports team. The individual indicator can indicate to display the representation information in an individual format for one or more individuals, such as displaying individual team member information for a sports team. The detailed indicator can indicate to display the representation information in a detailed format for an individual, such as displaying greater detail in representation information for an individual team member. In one example, the GUI can layer or overlay different levels of representation information in view of the group indicators, the individual indicators, and the detailed indicators. For example, the GUI can initially display the representation information in a group format (e.g., a first layer of information) such as displaying physiological information for a team (e.g., a group). When the GUI receives an individual indicator requesting to display information for an individual of the team, the GUI can display the individual's information in a layer above the group layer (e.g., a second layer of information). In one example, the second layer of information can be displayed in a box or field that is overlaid on top of at least part of the first layer of information. 
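One possible shape for the layered group/individual/detailed display is sketched below; the data layout and indicator handling are assumptions, not the disclosed GUI.

```python
# Minimal sketch (illustrative only): resolving which representation layers
# to display from a group, individual, or detailed indicator.
team = {
    "group": {"team": "Team A", "overall_health": 0.82},
    "individuals": {
        "player1": {"heart_rate": 71, "hydration": 0.7,
                    "detail": {"hr_trend": [68, 70, 71], "position": "guard"}},
    },
}

def layers_to_display(indicator, member=None):
    """Return the stack of layers, bottom-up, for the selected indicator."""
    stack = [("group", team["group"])]  # first layer: the whole team
    if indicator in ("individual", "detailed") and member:
        stack.append(("individual", team["individuals"][member]))  # second layer
    if indicator == "detailed" and member:
        stack.append(("detailed", team["individuals"][member]["detail"]))
    return stack

for name, data in layers_to_display("individual", "player1"):
    print(name, data)  # group layer, then the overlaid individual layer
```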
Continuing the layered-display example, the second layer of information can be partially transparent or semi-transparent, e.g., a viewer can still see at least part of the first layer of information beneath the second layer of information. In another example, the GUI can receive the group indicators, the individual indicators, and the detailed indicators from an input device, such as a touch screen, a mouse, a stylus, a keyboard, and so forth. In one example, the input device can send the group indicators, the individual indicators, and/or the detailed indicators when a user clicks on or selects a selected area of the screen, such as selecting an individual of the team in the group layer. In another example, the input device can send the group indicators, the individual indicators, and/or the detailed indicators when a user hovers a pointer or selector (such as a cursor, an arrow, or a hand icon) over an individual of the group in the group layer. In this example, when the pointer hovers over the individual, the GUI can display the information of the individual in a second layer, and when the pointer is moved (e.g., no longer hovers over the individual) the second layer may no longer be displayed. An advantage of displaying the second layer (or other layers such as a third layer of detailed information) can be to enable the user to quickly and efficiently move between viewing different layers of information for one or more individuals or groups. The group layer of information can be displayed when the group indicator may be selected and the GUI can display a representation of group information including: a representation of different types of individuals in a group or different locations of individuals in the group, such as locations or positions of different members on a team; a representation of information (such as input data) for the group as a whole; a representation of an aggregated health level of the group, such as an overall health score of the team; a representation of trending information for the group, such as trending information of the different measurements by the UMDs of the group; a representation of a threshold increase or decrease in measurement data, such as when an aggregation of measurements by the UMDs of the group exceeds or decreases below defined threshold levels; a representation of a group performance indicator; a representation of multiple groups, such as a representation of a same type of measurement data for different teams of a sports association; and a representation of one or more individuals in the group versus one or more other individuals in the group, such as different team members of a group or team members that play the same position on the team. The input data can include user profile data received at the UMD from an input device. The user profile data can include: dietary information of the user, ethnic or race information of the user, weight information of the user, gender information of the user, and so forth. In one example, the analysis tool can determine a correlation between a hydration level of a user and physical performance of a user. In this example, the analysis tool can set the physical performance as a dependent variable and the hydration level of the user as an independent variable. The analysis tool can use the correlation between the hydration level and the physical performance to determine a physical performance of the user based on different hydration levels of the user.
In one example, the analysis tool can aggregate input data from different users and determine correlations between input data of the different users. In another example, the analysis tool can determine a correlation between current or real-time input data of a user and previous input data of a user. The analysis tool can determine a baseline measurement for a user for one or more physiological measurements taken using the sensor array. In one example, the analysis tool can determine a baseline measurement for an individual by iteratively determining a median of a measurement, such as a heart rate measurement, over a period of time. For example, the UMD may measure a hydration level of a user using a bio-impedance sensor that measures bio-impedance in ohms (Ω). A peak hydration level (e.g., over-hydrated) of the user may be at 5 Ω and a minimum hydration level may be at 1 Ω (e.g., dehydrated). In this example, the average hydration level of the user may be determined by taking a median or average of the measurements of the individual over a period of time. For example, while the user may have a peak hydration level at 5 Ω and a minimum hydration level at 1 Ω, the user may have an average hydration level at 3.7 Ω. As the user continues to use the UMD over a period of time, the analysis tool can monitor the hydration level of the individual to determine the average hydration level of the individual over the period of time. In this example, the average hydration level of the user may be at 3.7 Ω even though the bio-impedance hydration level range spans from 1 Ω to 5 Ω. In one example, the analysis tool can determine a baseline range measurement of a user by determining a reoccurring or repetitive range of measurements of the user over a period of time. In this example, the bio-impedance measurement range of an individual may be between 1 Ω and 5 Ω. Over a period of time, the bio-impedance measurements may range between 1.7 Ω and 2.2 Ω. The analysis tool can determine that the baseline range indicating that the user may be hydrated can be measurements that are within the range of 1.7 Ω to 2.2 Ω. In another example, the analysis tool can correlate different measurement ranges for different measurements and/or different diagnoses. For example, the analysis tool can determine that a first range can be when a user may be hydrated, a second range when a user may be dehydrated, and a third range when a user may be over-hydrated. In this example, the first range can be an average or median range of measurement points over time, the second range can be a minimum range of measurement points over time, and the third range can be a maximum range of measurement points over time. In another example, different ranges can be associated with different diagnoses. For example, a first range for bio-impedance may be associated with a hydration level of a user and a second range of bio-impedance may be associated with a glucose level. In another example, the first range for bio-impedance may be associated with a hydration level of the user and a second range of optical spectroscopy may be associated with a glucose level of the user. The analysis tool can use different analysis algorithms when analyzing measurement data based on different activities of a user. In one example, the different activities can be different sports activities a user may be participating in.
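The baseline and recurring-range determination described earlier in this paragraph might be sketched as follows; the trimming heuristic and sample values are assumptions chosen to reproduce the 1.7-2.2 Ω example.

```python
# Minimal sketch (illustrative only): deriving a baseline and a recurring
# "hydrated" range from bio-impedance samples measured in ohms.
import statistics

samples_ohm = [1.8, 2.0, 1.7, 2.1, 1.9, 2.2, 5.0, 1.0, 2.0, 1.8]

baseline = statistics.median(samples_ohm)  # robust to the 1.0 and 5.0 extremes

# Recurring range: the band most samples repeatedly fall within, here
# approximated by trimming the extreme values (an assumed heuristic).
ordered = sorted(samples_ohm)
low = ordered[len(ordered) // 10]
high = ordered[-1 - len(ordered) // 10]
print(f"baseline = {baseline:.2f} ohm, hydrated range = {low}-{high} ohm")
```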
As an example of activity-specific analysis, the analysis tool can use a first analysis algorithm when a user is playing basketball and a second analysis algorithm when the user is playing football. In another example, the different activities can be different types of actions. The different types of actions can include: sleeping, sitting, walking, jogging, running, climbing, laying, standing, stepping, and so forth. In one example, the UMD can use one or more of the sensors of the sensor array to determine the different activities. In another example, the UMD can receive user input, such as from a touch screen or an input device (e.g., a smartphone, tablet, computer, stylus, and so forth). In another example, the different activities can be associated with different criteria, such as a time of day, location, and so forth. The analysis tool can use multiple measurements from different sensors to determine different diagnoses. The different measurements can include: hydration, skin temperature (Temp), heart rate (HR), blood pressure (BP), oxygen saturation level (O2), steps or mileage, sleep tracking, recovery tracking, and so forth. The analysis tool can determine a performance metric of a user based on one or more measurements and/or one or more diagnoses of a user. In one example, the analysis tool can determine a diagnosis in one or more of multiple categories, including: hydration, skin temperature, heart rate, blood pressure, oxygen saturation, activity level, sleep activity, recovery activity, and so forth. In one example, the diagnosis can have predetermined ranges for a user. In one example, the predetermined ranges can be defined by the user. In another example, the predefined ranges can be determined based on a user profile. For example, a hydration level can have multiple ranges such as a dehydrated range, a hydrated range, and an overhydrated range. The heart rate can have multiple ranges such as a slow range, a normal range, and a fast range. The oxygen saturation level can have multiple ranges such as a low range, a medium range, and a high range. In this example, the analysis tool can determine a fitness level of a user when a threshold number of the different diagnoses may be within the predetermined ranges. For example, when a hydration diagnosis may be at a hydrated range, a heart rate diagnosis may be at a normal range, and an oxygen saturation diagnosis may be at a medium range, a user of the UMD may be at peak performance (such as a 100% performance level). In this example, when a hydration diagnosis may be at a dehydrated range, a heart rate diagnosis may be at a slow range, and an oxygen saturation diagnosis may be at a low range, a user of the UMD may be at poor performance (such as a 65% performance level). In one example, the analysis tool can determine a recovery rate of a user for an activity or event (e.g., an amount of time it may take the user to recover from an activity or event) based on measurements from the sensors in the sensor array. In one example, the analysis tool can determine a recovery rate of a user based on a heart rate of the user. In this example, the analysis tool can identify a target or baseline heart rate of a user for an activity or event. The target or baseline heart rate can be a stable or steady heart rate of the individual after a period of time while the event or activity may be occurring. The analysis tool can monitor the heart rate of the user during the event and determine when the event or activity has finished.
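The range-based performance determination described above can be sketched as follows; the range boundaries, the 65-100% scale, and the classification helper are assumed values for illustration.

```python
# Minimal sketch (illustrative only): mapping several diagnoses onto a
# performance level from predetermined ranges.
def classify(value, ranges):
    """ranges: list of (label, low, high); return the label containing value."""
    for label, low, high in ranges:
        if low <= value <= high:
            return label
    return "unknown"

hydration_ranges = [("dehydrated", 0.0, 0.3), ("hydrated", 0.3, 0.8),
                    ("overhydrated", 0.8, 1.0)]
heart_rate_ranges = [("slow", 0, 55), ("normal", 55, 100), ("fast", 100, 220)]
oxygen_ranges = [("low", 0, 90), ("medium", 90, 97), ("high", 97, 100)]

readings = {"hydration": 0.6, "heart_rate": 72, "oxygen": 95}
diagnoses = (classify(readings["hydration"], hydration_ranges),
             classify(readings["heart_rate"], heart_rate_ranges),
             classify(readings["oxygen"], oxygen_ranges))

targets = ("hydrated", "normal", "medium")  # ranges associated with peak form
hits = sum(d == t for d, t in zip(diagnoses, targets))
performance = 65 + 35 * hits / len(targets)  # 65%..100%, an assumed scale
print(diagnoses, f"performance = {performance:.0f}%")
```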
Returning to the recovery rate determination, upon completion of the monitored activity or event, the analysis tool can take a first heart rate measurement. After a threshold period of time has passed, the analysis tool can take a second heart rate measurement. The analysis tool can then determine a difference between the first heart rate measurement and the second heart rate measurement. The analysis tool can then determine a rate at which the heart rate of the individual is slowing down or increasing to determine a recovery rate of the individual. For example, when the activity may be a run, the analysis tool can take the first measurement when the user finishes the run and a second measurement 3 minutes after the first measurement. The analysis tool can determine a rate that the heart rate of the user is decreasing after the run to determine a recovery rate of the user. In this example, the greater the rate that the heart rate decreases, the greater the recovery rate of the user. In another example, the analysis tool can take multiple different measurements at different times after the activity to increase an accuracy level of the recovery rate determination. In another example, the analysis tool can use multiple sensors to take different measurements when determining the recovery rate. In one example, the analysis tool can determine a sleep rate of a user based on one or more measurements of the user. For example, the analysis tool can receive a heart rate measurement, an activity level measurement, a body temperature, a blood pressure, and/or a blood oxygenation measurement. In this example, the analysis tool can determine that a user may be going to sleep based on a decrease in activity level and/or a decrease in heart rate. When the analysis tool has determined the user may be going to sleep, the analysis tool can switch to a sleep analysis mode. In the sleep analysis mode, the analysis tool can monitor the heart rate measurements, the activity level measurements, the body temperature measurements, the blood pressure measurements, and/or the blood oxygenation measurements of the user to determine different stages of sleep of the user and the period of time the user is in each stage of sleep. The different stages of sleep can include a non-rapid eye movement (NREM) sleep stage and a rapid eye movement (REM) sleep stage. The analysis tool can determine when the user is in the NREM sleep stage based on the heart rate measurements, the activity level measurements, the body temperature measurements, the blood pressure measurements, and/or the blood oxygenation measurements being within a first range or at a first level. The analysis tool can also determine when the user is in the REM sleep stage based on the heart rate measurements, the activity level measurements, the body temperature measurements, the blood pressure measurements, and/or the blood oxygenation measurements being within a second range or at a second level. For example, when a user switches from wakefulness to non-REM sleep, the heart rate, blood pressure, activity level, and body temperature may decrease and the blood oxygenation level may increase. When the user then transitions from NREM sleep to REM sleep, the blood pressure, activity level, and heart rate may increase above an NREM level but below a wakeful level while the body temperature and blood oxygenation level may decrease below the NREM level. In this example, the analysis tool can use predefined ranges or levels to determine when the user enters each sleep stage.
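The recovery rate determination at the start of this paragraph reduces to a simple rate-of-change computation; the 165/120 bpm figures are hypothetical, and the 3-minute interval follows the running example above.

```python
# Minimal sketch (illustrative only): recovery rate from two heart rate
# measurements taken after an activity ends.
def recovery_rate(hr_at_finish, hr_later, minutes_between):
    """Beats-per-minute decrease per minute; larger = faster recovery."""
    return (hr_at_finish - hr_later) / minutes_between

# Hypothetical run: 165 bpm at the finish, 120 bpm three minutes later.
print(f"recovery rate = {recovery_rate(165, 120, 3):.1f} bpm per minute")  # 15.0
```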
As another approach to sleep stage determination, the analysis tool can monitor trending information or a change in measurement data to determine when the user enters each sleep stage. In another example, the analysis tool can set a range or level for each sleep stage (such as a predefined range or level) and can iteratively update the ranges or levels for each sleep stage as the analysis tool monitors the user over a period of time. In another example, the analysis tool can determine when the user may be experiencing periods of sleep apnea. For example, the analysis tool can monitor for irregular or sudden changes in one or more of the measurements taken during the sleep stages. When there is a sudden change in one or more of the measurements, the analysis tool may determine that the user is experiencing a sleep apnea episode. For example, when the oxygenation level of the user decreases and the activity level, blood pressure, and heart rate increase, these changes can indicate that the user may be experiencing sleep apnea because the user may be more active and consume more oxygen during a sleep apnea episode. Prolonged or extended high-stress levels of an individual or heavy physical training or exercise without adequate recovery can increase a risk of an injury for the individual. The analysis tool can forecast or estimate when a user may experience an injury and preventatively alert the user. In one example, the analysis tool can monitor a heart rate, a change in heart rate, and/or heart rate variability (HRV) measurements to determine stress states of the user, such as when the body of a user is in a mental stress state, a recovering or relaxation state, or a physical exercise state. HRV can be a difference in times between successive heartbeats. In one example, the analysis tool can determine an amount of time the user is in one or more stress states to forecast or estimate when an injury may occur. For example, when a user may be in a physical exercise state and/or a mental stress state for an extended period of time and has minimal time in a recovering or relaxing state, a risk of an injury occurring can increase as the body of the individual may not have adequate time to recover and regenerate. The analysis tool can forecast or estimate when an injury may occur based on an amount of time a user may be in one or more of the stress states. In another example, the analysis tool can monitor the stress states in combination with monitoring other states of the user, such as sleep states. When a user's body does not receive a threshold amount of sleep and/or a threshold amount of a type of sleep (such as REM sleep), the body may have insufficient time to secrete hormones to build up an immune system of the user, increase muscle mass of the user, increase bone strength of the user, and increase an energy level of the user. Additionally, when the amount of sleep or stage of sleep the user enters is below a threshold amount, a user can experience muscle atrophy and a decrease in an ability to build and repair muscles. The analysis tool can also determine sleep latencies, sleep fragmentation, decreased sleep efficiency, and/or a frequency of sleep arousals. The analysis tool can monitor an amount of time the user spends in a sleep state and/or the amount of time the user spends in different stages of sleep to determine an amount of sleep and/or stages of sleep of the user.
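One way to turn the time-in-state monitoring above into an injury risk estimate is sketched below; the state labels, hour counts, risk formula, and alert threshold are all assumptions.

```python
# Minimal sketch (illustrative only): injury risk from time spent in each
# stress state; risk grows with load and shrinks with recovery and sleep.
def injury_risk(hours_by_state):
    load = hours_by_state.get("exercise", 0) + hours_by_state.get("mental_stress", 0)
    recovery = hours_by_state.get("recovery", 0) + hours_by_state.get("sleep", 0)
    return load / (load + recovery) if (load + recovery) else 0.0

week = {"exercise": 20, "mental_stress": 30, "recovery": 6, "sleep": 40}
ALERT_THRESHOLD = 0.6  # assumed level above which the user is alerted
risk = injury_risk(week)
print(f"risk = {risk:.2f}", "ALERT" if risk > ALERT_THRESHOLD else "ok")
```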
Based on the amount of sleep and/or stages of sleep of the user, the analysis tool can forecast or estimate an increase in injury based on an amount of time the user's body has to recover. In one example, when the analysis tool determines that an injury risk level exceeds a threshold level, the analysis tool can send an alert to the user, via the UMD or another display device. In another example, the analysis tool can forecast or estimate when an injury may occur in the future or may currently be occurring based on a change in intensity of physical activity and/or an amount of physical activity the user performs. For example, when the user historically performed a physical activity at an intensity level for a period of time and then decreases the intensity of the physical activity and/or the amount of time the physical activity may be performed, the analysis tool can determine that the user may be experiencing an injury or that a probability that an injury may occur can be increasing. In one example, the activity intensity can be measured based on acceleration during a run, deceleration during the run, speed of the run, jumping height, lateral movement or agility, heart rate, recovery rate, oxygenation level, and so on. In one example, the analysis tool can predict when an injury may occur based on measurements that occurred during a previous injury. In this example, the analysis tool can maintain a database of measurements taken prior to the previous injury occurring, during the previous injury, and/or after the injury occurred. The analysis tool can compare current measurements of the user with the previous measurements to forecast or predict when another injury may occur based on a correlation or similarity of the measurements. For example, a user may begin favoring one leg over the other when jumping, similarly to when a previous injury occurred. The analysis tool can use a sensor, such as an accelerometer or a gyroscope, to determine when a jumping movement may be similar to a previous injury and can forecast or predict when an injury may occur. In another example, the analysis tool can weigh different measurements and/or user profile information when forecasting or predicting an injury. For example, the analysis tool can weight measurement data for a first individual differently than for a second individual based on an age, height, weight, injury history, nutrition level, and gender of the two individuals. In another example, the analysis tool can determine a recovery of an individual from an injury based on similar measurements and/or analysis as discussed in the preceding paragraphs. The analysis tool can determine a change in the activity (e.g., a suggested or recommended course of action) of the user based on the measurements discussed in the preceding and proceeding paragraphs. In one example, when the analysis tool determines that an injury risk level has increased above a threshold level, the analysis tool can determine a change in user activities to decrease the injury risk level. For example, when the analysis tool determines an increased injury risk level, the analysis tool can determine that the user can decrease an amount of physical activity and increase an amount of time the user sleeps or recovers. In another example, when the analysis tool determines that a heart rate or blood pressure of an individual exceeds a threshold amount, the analysis tool may determine a change in diet of the user to decrease the heart rate or blood pressure of the individual.
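The comparison of current measurements to stored pre-injury measurements might be sketched as follows; the feature names, inverse-distance similarity, and warning threshold are invented for illustration.

```python
# Minimal sketch (illustrative only): comparing current movement features
# to features recorded before a previous injury.
import math

def similarity(current, previous):
    """Inverse-distance similarity in (0, 1] over shared measurement keys."""
    dist = math.sqrt(sum((current[k] - previous[k]) ** 2 for k in previous))
    return 1.0 / (1.0 + dist)

# Hypothetical accelerometer/gyroscope features, e.g. jump-landing asymmetry.
pre_injury = {"jump_asymmetry": 0.35, "deceleration": 0.80}
current = {"jump_asymmetry": 0.33, "deceleration": 0.78}

WARN_SIMILARITY = 0.9  # assumed threshold
s = similarity(current, pre_injury)
if s > WARN_SIMILARITY:
    print(f"movement resembles pre-injury pattern (similarity {s:.2f})")
```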
The analysis tool may use a database or lookup table to determine a recommended course of action based on one or more measurements of the sensors of the UMD. The analysis tool can recommend activities based on previous or historical measurement data and/or current measurement data. For example, the analysis tool can determine different measurements (a heart rate, a blood pressure, a hydration level change, and so forth) of a user for different activities. The analysis tool can compare the different measurements associated with the different activities to determine activities to increase a desired measurement of a user. For example, the analysis tool can identify an activity from multiple activities that may increase a heart rate of a user while not increasing a blood pressure level and dehydration level of the user. The analysis tool can communicate the identified activity to the user via a sensory indicator of the UMD or another computing device (such as a display). The analysis tool can estimate or forecast measurement data of the user for different environments. In one example, a member of a sports team may have previously performed in a first environment (such as a relatively hot environment at a relatively high altitude) and may now be performing in a second environment (such as a relatively cold environment at a relatively low altitude). In this example, the analysis tool can determine a difference in the performance of the individual for the first and second environments. The analysis tool can then determine an adjustment value to adjust measurement data between the two environments. When the measurement data of the individual changes in the second environment, the analysis tool can convert the measurement data to a corresponding value for the first environment. In another example, the analysis tool can use the survey information or the profile information (as discussed in preceding and proceeding paragraphs) to determine the adjustment value. For example, a user may desire to estimate or forecast how the user's measurement data would change in an environment that the user has not taken measurement data at previously. In this example, the analysis tool can identify measurement data of another user that may have similar survey information or user profile information to the user. The analysis tool can then determine an adjustment value to adjust the user's measurement data based on the other user's measurement data. In another example, the analysis tool can use environmental information to identify an environment similar to the environment selected by the user and convert the measurement data of the user based on an adjustment value for the similar environment. In one example, the analysis tool can forecast or estimate an amount of fluid a user may intake, and at what rate, for the user to return to a hydrated level. The analysis tool can monitor a trending of a user's hydration level over a period of time as the hydration level of the individual decreases and increases. The analysis tool can determine the change in hydration level over time to determine a rate that the user may dehydrate at (e.g., a dehydration rate) and/or a rate that a user may rehydrate at (e.g., a rehydration rate).
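The environment adjustment value described earlier in this paragraph might be derived as a simple ratio, as in this sketch; the metric values and the ratio-based conversion are assumptions.

```python
# Minimal sketch (illustrative only): deriving an adjustment value from a
# user's performance in two environments and converting new measurements.
perf_env_a = 9.2   # average performance metric in a hot, high-altitude environment
perf_env_b = 10.1  # the same metric in a cold, low-altitude environment

adjustment = perf_env_a / perf_env_b  # ratio-based conversion (an assumption)

new_measurement_env_b = 9.8
estimated_env_a = new_measurement_env_b * adjustment
print(f"estimated value in environment A = {estimated_env_a:.2f}")
```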
As an example of these rates, the analysis tool can determine that the user may transition from a hydrated state to a dehydrated state over a period of 20 minutes when performing an activity (such as running) based on the trending information and may transition from a dehydrated state to a hydrated state over a period of 30 minutes when resting and drinking fluid. In one example, the analysis tool can iteratively determine the dehydration rate and/or the rehydration rate. For example, the analysis tool can use initial trending information of the transition from the hydrated state to a dehydrated state to set an initial dehydration rate. The analysis tool can then continue to monitor the user's transitions between hydrated and dehydrated states to iteratively update the dehydration rate. In one example, the analysis tool can take an average of the trending data and update a current dehydration rate with the current trending information. In another example, the analysis tool can assign weights to the trending information, with the current trending information having a larger weight and the older trending data having successively smaller weights. In one example, the analysis tool can track trending information of one or more measurements or diagnoses. In this example, based on the trending information, the analysis tool can determine when measurements or diagnoses of a user indicate that a user is trending from one diagnosis level to another diagnosis level. For example, the analysis tool can monitor a user's hydration level measurements to determine when a user is trending from a hydrated level to a dehydrated level. In another example, when the analysis tool determines a user is trending in an undesired direction (such as from hydrated to dehydrated), the analysis tool can determine a treatment regimen. For example, when a user is trending from a hydrated state to a dehydrated state, the analysis tool can forecast when the user may switch to a dehydrated state and determine a rehydration treatment regimen for a user. In this example, the UMD can indicate to a user an amount of fluid to intake. The analysis tool can monitor the trending information of a user to determine when the treatment regimen may be completed. For example, when the trending information indicates that the user has rehydrated to his or her original hydration level, the analysis tool can determine that the user has been rehydrated. The analysis tool can determine a metabolic rate of the user based on profile information and/or measurement data of a user. In one example, the analysis tool can determine a resting metabolic rate (RMR). The analysis tool can determine the RMR of a user using a look-up table at a storage device coupled to the analysis tool. The look-up table can include RMR information associated with different profiles of individuals (such as height, weight, age, and gender). The analysis tool can identify a profile from the different profiles of individuals that matches the profile of the user (either an exact match or a match within a matching range). The analysis tool can then set the RMR of the user based on the RMR of the matching individual. In one example, the analysis tool can adjust the RMR of an individual in view of additional information.
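The weighted update of the dehydration rate described earlier in this paragraph resembles an exponential moving average; the rates and the 0.3 recency weight below are assumed values.

```python
# Minimal sketch (illustrative only): iteratively updating a dehydration
# rate so recent trending data carries more weight than older data.
def update_rate(current_rate, observed_rate, recent_weight=0.3):
    """Exponential moving average: newer observations weigh more."""
    return (1 - recent_weight) * current_rate + recent_weight * observed_rate

rate = 0.05  # initial dehydration rate (hydration units per minute, assumed)
for observed in [0.060, 0.055, 0.070]:  # rates from later activities
    rate = update_rate(rate, observed)
print(f"current dehydration rate = {rate:.4f} units/min")
```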
The additional information used to adjust the RMR can include: weather information, e.g., living in a cold environment can increase the RMR; frequency of meals consumed, e.g., small regular meals can increase the RMR; pregnancy information, e.g., a pregnancy can increase the RMR; diet change, e.g., a crash-diet or fad diet may decrease the RMR; and dietary or nutritional supplement usage, e.g., dietary or nutritional supplements can raise the RMR. When the analysis tool has set the RMR for the user, the analysis tool can determine an activity level of the user. For example, the analysis tool can determine a frequency and duration that an individual may physically exert himself or herself. When the analysis tool has determined the RMR and the user activity level, the analysis tool can then use an algorithm, such as a Harris-Benedict equation, to determine a metabolic rate (e.g., a rate that an individual burns calories) of the user. For example, when the analysis tool determines that a user has a low activity level, the analysis tool can determine the metabolic rate using RMR×1.2=metabolic rate. In another example, when the analysis tool determines that a user has a light activity level, the analysis tool can determine the metabolic rate using RMR×1.375=metabolic rate. When the analysis tool determines that a user has a moderate activity level, the analysis tool can determine the metabolic rate using RMR×1.55=metabolic rate. When the analysis tool determines that a user has a high activity level, the analysis tool can determine the metabolic rate using RMR×1.725=metabolic rate. When the analysis tool determines that a user has a very high activity level, the analysis tool can determine the metabolic rate using RMR×1.9=metabolic rate. The analysis tool can associate the input data with a user. In one example, the analysis tool can create a user identification (ID) for a user of the UMD. The analysis tool can then associate the user ID with input data (such as measurement data and/or user data) for the user, such as by tagging or appending the user ID to the input data. For example, the input data can be stored in data fields and the user ID tag can be appended to the data fields. In another example, a separate data field can be created for the user ID. When the user may be associated with a group of individuals, a group ID can also be associated with the input data. In one example, the user ID and the group ID can be included in a combined data field. In another example, the user ID and the group ID can use separate data fields. In one example, the user can be a member of a first sports team. A first group ID can be associated with the first sports team. While the member may be part of the first sports team, the measurement data and/or user data can be tagged with the first group ID for the first sports team. When the member switches or moves to a second sports team, the user ID can remain the same and the first group ID can be switched to a second group ID. An advantage of using different group IDs for different groups of individuals (such as different sports teams) can be to provide data mobility. For example, regardless of which group the user may be part of at a given point in time, the input data can be associated with the user and when the user leaves a group or switches to another group the input data can continue to be associated with the individual (e.g., the input data can follow the user).
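The activity multipliers recited above map directly to a small lookup, as in this sketch; the 1600 kcal/day RMR is an assumed lookup result, while the multipliers come from the text.

```python
# Minimal sketch (illustrative only): applying the stated activity
# multipliers to a resting metabolic rate (RMR) looked up for the user.
ACTIVITY_MULTIPLIERS = {
    "low": 1.2,
    "light": 1.375,
    "moderate": 1.55,
    "high": 1.725,
    "very_high": 1.9,
}

def metabolic_rate(rmr, activity_level):
    return rmr * ACTIVITY_MULTIPLIERS[activity_level]

rmr = 1600  # kcal/day, assumed lookup result for the user's profile
print(f"moderate activity: {metabolic_rate(rmr, 'moderate'):.0f} kcal/day")  # 2480
```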
In another example of group identification, when a user switches groups, the input data collected while the user was part of a first group can continue to be tagged with the first group ID and new data can be tagged with the second group ID. An advantage of maintaining the association of the group ID with the group when the input data was collected can be to enable sorting of the input data based on what group the user was a part of when the input data was collected. For example, when a first set of input data was taken when the user was a member of the Utah Jazz basketball team, the first set of input data can continue to be associated with the Utah Jazz when the user moves to the Chicago Bulls basketball team. In this example, an individual (such as a coach or trainer) can use the group IDs for the different teams to sort the input data based on when the member played for each team. FIGS.2A-2Dshow various exemplary embodiments of a base station.FIG.2Ashows a base station configured as a hook210according to one embodiment. In one example, a UMD can be hung from the hook.FIG.2Bshows a base station configured as a hanger220according to one embodiment. In one example, a UMD can be hung from the hanger.FIG.2Cshows a base station configured as a holder230with a UMD240coupled or attached to the holder230according to one embodiment.FIG.2Dshows a base station configured as a plate250according to one embodiment. In one example, multiple UMDs can communicate with a base station or hub device. For example, multiple members on a sports team can use different UMDs to take measurements. The UMDs of each member can communicate the input data back to the base station or hub device. The base station or hub device can store the information and/or relay the information to an application (such as a cloud-based application) to display to a user. In one example, each UMD can communicate the input data to the base station or hub device in real-time using different communications channels or frequencies. An advantage of communicating the input data in real-time can be to provide a user with the input data as measurements may be taken or other input information may be received. In another example, the multiple UMDs can communicate using a same channel or frequency and can stagger the communications. For example, each UMD can have an internal clock and different designated times to communicate the measurement data to the base station or hub device. In another example, the UMDs can communicate with each other and can coordinate when to communicate input data to the hub device. The UMDs can coordinate or synchronize an order to communicate the measurement data based on one or more criteria. The criteria can include: a priority level of the input data, a period since the last time a UMD communicated input data to the hub device, an amount of input data the UMD may communicate to the hub device, a bandwidth rate of the UMD to communicate the input data, a location of the UMD in relation to the hub device, a number of UMDs requesting to communicate with the hub device, a type of the UMD requesting to communicate with the hub device, and so forth. In another example, the UMDs can communicate to the hub device in a defined order, such as a first in first out (FIFO) order. In one example, the multiple UMDs can communicate with the hub device in a staggered order.
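The criteria-based ordering of UMD transmissions just listed might be sketched as below; the field names and the specific ranking rule (priority first, then waiting time, then payload size) are assumptions.

```python
# Minimal sketch (illustrative only): ordering pending UMD transmissions
# to a hub device by priority, time since last sync, and payload size.
pending = [
    {"umd": "A", "priority": 1, "secs_since_sync": 30, "bytes": 4096},
    {"umd": "B", "priority": 3, "secs_since_sync": 300, "bytes": 512},  # critical
    {"umd": "C", "priority": 2, "secs_since_sync": 120, "bytes": 2048},
]

# Higher priority first; ties broken by longest wait, then smallest payload.
order = sorted(pending,
               key=lambda u: (-u["priority"], -u["secs_since_sync"], u["bytes"]))
print([u["umd"] for u in order])  # ['B', 'C', 'A']
```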
As an example of staggered communication, a first UMD can communicate to the hub device within a first period of time and a second UMD can communicate to the hub device within a second period of time, where the first period of time and the second period of time may be different (e.g., staggered periods of time). In another example, a UMD can communicate with the hub device based on a location of the UMD. For example, when the UMD comes within a proximity distance or threshold distance of the hub device, the UMD can communicate input data. In one example, the UMD can determine proximity using a location system such as GPS or triangulation. In another example, the UMD can determine the location using a pinging scheme or by sending a message to the hub device to determine its location. For example, when a player on a sports team may be using the UMD and the UMD determines that the player may be sitting on the bench (e.g., not currently in the game) or at a location for a water break, the UMD can communicate the measurement data. In another example, the UMD can receive a manual synchronization command. For example, the UMD can include a controller (such as a button, switch, or touch screen icon) on the UMD to receive a command from a user (e.g., the user presses the synchronization button) to communicate the input data. In another example, when the user places the UMD on a charging pad or connects the UMD to a charging cable, the UMD can be activated to communicate the input data. In another example, the UMD can communicate a portion of information in real time and a portion of information periodically. For example, a portion of measurement data (such as data indicating a user may be dehydrated) can be designated as critical information (e.g., high priority data) and can be communicated to the hub device in real-time. In this example, other non-critical information, such as clock information, user profile updates, and so forth can be communicated on a periodic basis (such as a periodic data dump of non-critical information). FIG.3Ashows another exemplary embodiment of a base or base station310configured to transfer data and/or power with a UMD320according to one embodiment. In one embodiment, the base station can be configured to communicate data, such as input data, with the UMD using a transfer module330. In another embodiment, the base station310can be configured to transfer power with the UMD320using the transfer module330. In one example, the base station310can transfer power using a physical electrical connection, such as a universal serial bus (USB) connection. In another example, the transfer module330and/or the base station310can include one or more wireless transfer coils and the base station310can be configured to wirelessly transfer power to the UMD320using the one or more wireless transfer coils. In one configuration, the base station310can be connected to a power outlet (such as a wall power outlet) and/or a communication port (such as an Ethernet port) using a transfer connector340. In one embodiment, the base station310can receive power from the power outlet using the transfer connector340. In another embodiment, the base station310can communicate data, such as sync data, with another base station or another computing device (such as a server or cloud storage device as discussed in the proceeding paragraphs) using the transfer connector340.
In another embodiment, the base station310can communicate data with one or more UMDs, one or more other base stations, and/or other devices using a communication module350. In one embodiment, the communications module350can communicate the data using a cellular network and/or a wireless network. In one example, the communications network can be a cellular network that may be a third generation partnership project (3GPP) release 8, 9, 10, 11, or 12 network or an Institute of Electrical and Electronics Engineers (IEEE) 802.16p, 802.16n, 802.16m-2011, 802.16h-2010, 802.16j-2009, or 802.16-2009 network. In another embodiment, the communications network can be a wireless network (such as a wireless local area network, e.g., a network using Wi-Fi® technology) that may follow a standard such as the IEEE 802.11-2012, IEEE 802.11ac, or IEEE 802.11ad standard. In another embodiment, the communications network can be a PAN connection (e.g., a connection using Bluetooth® technology) such as Bluetooth® v1.0, Bluetooth® v2.0, Bluetooth® v3.0, or Bluetooth® v4.0. In another embodiment, the communications network can be a PAN connection (e.g., a connection using Zigbee® technology), such as IEEE 802.15.4-2003 (Zigbee® 2003), IEEE 802.15.4-2006 (Zigbee® 2006), or IEEE 802.15.4-2007 (Zigbee® Pro). In one embodiment, the base station and the UMD can use near-field communication or induction communication to communicate information between the base station and the UMD. In one example, the UMD can communicate input data to the base station at selected times of the day. In another example, the UMD can communicate input data with the base station at a selected time when the user wakes up in the morning and at a selected time when the user goes to sleep at night. In another embodiment, the base station can communicate input data with other devices such as computers, phones, tablets, medical equipment, display devices, and so forth. FIG.3Billustrates a schematic view of the base station351according to one embodiment. The base station351can include a data transfer device362and a power management device364. In one example, the data transfer device362can include a communication device352and a processing device354. The communication device352can be coupled to an antenna365, where the antenna can be configured to communicate with another device such as a UMD via a wireless network and/or a cellular network. In one example, the communication device can be a transceiver, communicating data between a processing device354coupled to the communication device352and another device. In one embodiment, the processing device354can include a processor to analyze data received from another device. In another embodiment, the processing device354can be connected to the computing device and transfer data between the other device and the computing device, where the computing device can analyze the data. In one example, the computing device can be a server or a cloud-based device. The power management device364can include a power converter356that can be coupled to a power source366. In one example, the power source366can be an alternating current (AC) power source and the power converter356can convert the AC power to direct current (DC) power. In another embodiment, the power management device364can include a rectifier or oscillator358to receive the DC power from the power converter and transfer the power to a wireless transfer coil360for wireless power transfer.
In this example, the rectifier or oscillator358can be an impedance matching circuit to match an impedance level of the wireless transfer coil360with a wireless transfer coil of another device. FIGS.4A and4Bdepict a base station410and multiple UMDs420.FIG.4Adepicts one exemplary embodiment of the base station410sized and shaped as a cylindrical platform coupled to the multiple UMDs420according to one embodiment.FIG.4Bdepicts another exemplary embodiment of the base station430sized and shaped as a podium or pedestal platform coupled to the multiple UMDs420according to one embodiment. In one embodiment, the analysis tool can monitor one or more sensors of the UMD to determine when the sensors are no longer taking measurements of the user of the UMD. In another embodiment, the one or more sensors can include: an optical sensor, an impedance sensor, a bioimpedance sensor, an electrocardiogram (ECG) sensor, an accelerometer, an altimeter, a pulse oximeter sensor, a fluid level sensor, an oxygen saturation sensor, a body temperature sensor (e.g., a skin temperature sensor), a plethysmograph sensor, a respiration sensor, a breath sensor, a cardiac sensor, a hydration level sensor, a humidity sensor, an ambient temperature sensor, an altitude sensor, a barometer, a gyroscope sensor, a vibration sensor, an accelerometer sensor, a 3D accelerometer sensor, a force sensor, a pedometer, a strain gauge, and so forth. In one example, the sensors may no longer be taking measurements of the user when there is a sudden shift or change in measurement data of one or more sensors of the UMD. In another example, the sensors may no longer be taking measurements of the user when one or more measurements may be zero, near zero, or unknown. In another example, the sensors are no longer taking measurements of the user when one or more measurements are outside a selected threshold measurement range. When the sensors are no longer taking measurements of the user of the UMD, the UMD can determine that the UMD has been removed from the user and can communicate sync data to the base station or another UMD. The UMD can be further configured to establish a communication link, such as a cellular network communications link, a wireless network communications link, a device to device (D2D) communications link, a peer-to-peer (P2P) communications link, or a machine type communications link with the base station and/or the other UMD when the circuitry determines that the UMD has been removed from the body of the individual. The circuitry can be further configured to communicate sync data with the base station and/or the other UMD using the communications link. FIG.5depicts a base station and/or a UMD510configured to communicate data, such as input data, with one or more other devices520,530, and/or550according to one embodiment. In one embodiment, the other devices can be non-wearable and/or non-portable devices, such as a bathroom scale or a bed scale520, a medical device530, and/or a continuous positive airway pressure (CPAP) device550. In another embodiment, the base station and/or the UMD can store and/or analyze the data received from the one or more other devices separately from data of the base station and/or the UMD. In another embodiment, the base station and/or the UMD can aggregate the data received from the one or more other devices with the input data of the base station and/or the UMD.
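Returning to the removal determination described above, the three triggers (near-zero readings, out-of-range readings, sudden shifts) might be combined as in this sketch; the heart rate band and delta are assumed values.

```python
# Minimal sketch (illustrative only): deciding that the UMD has been
# removed, which would trigger communicating sync data to a base station.
VALID_RANGE = (40, 220)  # assumed valid heart rate band, bpm
SUDDEN_DELTA = 60        # assumed jump that counts as a sudden shift

def removed(previous_bpm, current_bpm):
    if current_bpm is None or current_bpm <= 1:  # zero, near zero, or unknown
        return True
    if not VALID_RANGE[0] <= current_bpm <= VALID_RANGE[1]:  # outside range
        return True
    return abs(current_bpm - previous_bpm) > SUDDEN_DELTA  # sudden shift

if removed(previous_bpm=72, current_bpm=0):
    print("UMD removed: establish link and communicate sync data")
```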
In another embodiment, the base station and/or the UMD can store, synchronize, and/or analyze the aggregated data of the one or more other devices and the base station and/or the UMD. FIG.6illustrates a UMD and/or a base station610according to one embodiment.FIG.6further illustrates that the UMD and/or the base station610can include a wireless transfer coil620and a management module630. In one example, the management module630of the UMD and/or the base station610can convert energy received at the wireless transfer coil620from an energy source, such as an alternating current (AC) energy outlet, to a selected current level, a selected voltage level, and/or a selected wattage level. In another embodiment, the UMD and/or the base station610can include one or more batteries640, such as rechargeable batteries. In one embodiment, the wireless transfer coil can be a transmitting coil and/or a receiving coil (e.g., a transfer coil). FIG.7illustrates an example of transferring energy or data between multiple wireless transfer coils710and720according to one embodiment.FIG.7further illustrates that a first wireless transfer coil710can be a transmitting coil and a second wireless transfer coil720can be a receiving coil. In one embodiment, energy and/or data can be transferred from the transmitting coil to the receiving coil by coupling the transmitting coil with the receiving coil to enable the energy or data to be transferred over a gap or distance. In one example, wireless energy can be transferred by generating a magnetic field730at the transmitting coil and positioning the receiving coil within the magnetic field to induce a current at the receiving coil. In one embodiment, the magnetic field can be an electromagnetic field. Inducing a current at the receiving coil can be a coupling of the receiving coil to the transmitting coil. In one embodiment, the wireless transfer coil coupling for wireless energy or data transfer can be an induction coupling. In another embodiment, the wireless transfer coil coupling for wireless energy transfer can be a resonant coupling. In one embodiment, the transmitting coil can be a transmitting induction coil and the receiving coil can be a receiving induction coil. The UMD and/or the base station can use a field (such as a magnetic field or a resonance field) to transfer energy between the transmitting coil coupled to a first object (such as a base station) and a receiving coil of a second object (such as a UMD) without any direct contact between the transmitting coil and the receiving coil, e.g., inductive coupling. In one example, when the transmitting coil and the receiving coil may be within a threshold proximity distance, the transmitting coil and the receiving coil can couple to form an electric transformer. In one embodiment, current from the receiving coil can be transferred to a battery of the UMD or the base station. In one embodiment, an impedance of the transmitting coil can be substantially matched with an impedance of the receiving coil. In one embodiment, the transmitting coil can be a transmitting resonant coil and the receiving coil can be a receiving resonant coil. A wireless resonant transfer can be a resonant transmission of energy or data between the transmitting coil and the receiving coil. In another embodiment, the transmitting coil and the receiving coil can be tuned to resonate at a same frequency or a substantially same frequency.
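The tuning condition just stated can be checked against the usual LC resonance relation f = 1/(2π√(LC)); the coil values and the 2% tolerance in this sketch are assumptions.

```python
# Minimal sketch (illustrative only): checking that two coils resonate at
# (approximately) the same frequency, f = 1 / (2*pi*sqrt(L*C)).
import math

def resonant_hz(inductance_h, capacitance_f):
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

tx = resonant_hz(24e-6, 100e-9)  # assumed transmitter coil: 24 uH, 100 nF
rx = resonant_hz(24e-6, 100e-9)  # receiver tuned to the same assumed values

TOLERANCE = 0.02  # within 2% treated as "substantially the same frequency"
matched = abs(tx - rx) / tx <= TOLERANCE
print(f"tx = {tx/1000:.1f} kHz, rx = {rx/1000:.1f} kHz, matched = {matched}")
```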
In one example, resonant transmission of wireless energy can occur when the transmitting coil and the receiving coil are constructed to resonate at the same frequency or approximately the same frequency. The transmitting coil can be configured to oscillate current at a resonant frequency of the receiving coil to transfer energy and/or data. The oscillating current of the transmitting coil can generate an oscillating field at the selected resonant frequency of the receiving coil. When the receiving coil is positioned adjacent to the oscillating field and constructed to operate at the same frequency or substantially the same frequency as the transmitting coil, the receiving coil can receive energy and/or data from the oscillating magnetic field. FIG.8Aillustrates a base station and/or a UMD810operable to communicate input data to a computing device830, such as a server, according to one embodiment. In one example, the base station and/or the UMD810can communicate input data directly to the computing device830using a communications connection850of a communications network. In another example, the base station and/or the UMD810can indirectly communicate the input data to the computing device830using another base station or another UMD820along communication connections840. FIG.8Afurther illustrates that the base station and/or a UMD810can receive selected data or information, such as input data or other information, from the computing device830. In one example, the base station and/or a UMD810can receive selected data or information for a user of the base station and/or a UMD810from a cloud-based server or a server in communication with a cloud-based server. In one embodiment, the input data can include setting information for the base station and/or a UMD810. In one example, the setting information can include: measurement data threshold ranges, measurement data threshold values, measurement event triggering values, and so forth. In another example, the input information can include: medical information of the user of a UMD, user condition information, medication regimen information, exercise regimen information, medical risk information, and so forth. In another embodiment, the base station and/or a UMD810can provide a sensory indication (such as a visual, auditory, and/or touch indication) communicating the selected data or information to the user. In one example, the base station and/or a UMD810can display a reminder for a user to exercise, take medication, rehydrate, and so forth. In one embodiment, the base station can analyze received input data and/or stored input data (such as measurement information) to determine selected states or conditions, such as medical conditions, physiological states, and so forth, of the user of the UMD. In another embodiment, the base station can aggregate input data received from multiple UMDs. In another embodiment, the base station can aggregate current input data received from one or more UMDs or other base stations with previous input data stored at the base station or a device in communication with the base station. In another embodiment, the base station can analyze the aggregated sync data. In one configuration, the base station can communicate other information to one or more UMDs. For example, the base station can receive software and/or firmware update information and relay the software and/or firmware update to the one or more UMDs.
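One possible realization of this update-relay behavior is sketched below. It assumes, as in the embodiment noted next, that the relay occurs while docked UMDs are receiving energy from the base station; the class and method names are hypothetical.

```python
# Minimal sketch of the update-relay behavior described above: the base
# station holds a received software/firmware update and pushes it to
# each docked UMD while that UMD is drawing energy from the station.

class UMD:
    def __init__(self, name: str):
        self.name = name

    def apply_update(self, blob: bytes) -> None:
        print(f"{self.name}: applying {len(blob)}-byte update")

class BaseStation:
    def __init__(self):
        self.pending_update = None
        self.docked_umds = []

    def receive_update(self, update_blob: bytes) -> None:
        # Update received from an upstream server; held until relay.
        self.pending_update = update_blob

    def on_energy_transfer(self) -> None:
        # Called while docked UMDs receive (wired or wireless) energy;
        # this is a convenient window to relay the pending update.
        if self.pending_update is None:
            return
        for umd in self.docked_umds:
            umd.apply_update(self.pending_update)

station = BaseStation()
station.docked_umds = [UMD("umd-1"), UMD("umd-2")]
station.receive_update(b"\x00" * 1024)
station.on_energy_transfer()
```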
In one embodiment, the base station can communicate the other information to the one or more UMDs when the one or more UMDs receive energy (such as wired energy or wireless energy) from the base station. FIG.8Billustrates a UMD860and a base station870according to one embodiment. The UMD860can include a wireless induction coil to transfer power and/or data. In one example, the UMD860can transfer power and/or data with a base station870. The base station870can be sized and shaped to receive the UMD860and align a transfer coil of the UMD860with a transfer coil of the base station870. For example, the UMD860can be sized and shaped to be a circular disk with a first radius872and the base station870can be a circular disk of a second radius874. In this example, the second radius874of the base device can be larger than the first radius872of the UMD860. The base station870can have a top surface with an indentation or groove876having a third radius that is approximately the same as the first radius872. The UMD860can be sized and shaped to fit into the indentation or groove876when placed on the top surface of the base station870. When the UMD860may be placed in the indentation876, the transfer coils of the UMD860and the base station870can be aligned to enable wireless transfer of power and/or data using induction. In one example, the UMD can communicate with external or third party equipment (e.g., equipment not integrated into the UMD). In another example, the UMD can receive information from the third party equipment (e.g., external information). In one example, the third party equipment can be a body weight scale that can determine a weight of an individual. In this example, the body weight scale can communicate weight information to the UMD. In another example, the third party equipment can be a smart water bottle that can monitor a fluid consumption of an individual. The UMD can aggregate the information from the third party equipment with the input data (such as measurement data) of the UMD. In one example, the UMD can store the external information. In another example, the UMD can communicate the external information and the input data to the hub device. In another example, the hub device or the UMD can communicate the external information and/or the measurement data to a cloud-based computing device, where the cloud-based computing device can store and/or analyze the external information and/or the input data. An advantage of storing and/or analyzing the external information and/or the measurement data at the cloud-based computing device can be that the external information and/or the measurement data is available to a user from any location. For example, the UMD can take measurements of an individual (such as a player on an athletic team) and communicate the information to the cloud-based computing device. In this example, a coach or trainer can access the information from the cloud-based computing device at another location and view the measurement data of the player. In one example, the UMD can be sealed to prevent fluid (such as water or sweat) or other materials from entering an inner cavity of the UMD. The inner cavity of the UMD can include circuitry, such as power management circuitry, processing circuitry, communication circuitry, and so forth. In one example, where the UMD includes a transfer coil to transfer data and/or power, the UMD may avoid traditional physical ports to transfer power and/or data (such as a USB port). In one example, the UMD can include two halves.
The two halves can each include inner cavities to receive the circuitry. When the circuitry is placed inside the inner cavities, the two halves of the UMD can be sealed together to provide a fluid-proof outer surface (e.g., preventing fluid from entering the inner cavities). In another example, when the UMD is sealed and transfer coils are used to transfer power and/or data, a durability of the UMD can be increased. For example, by using transfer coils to transfer power and/or data, switches and ports along an external surface of the UMD can be reduced or eliminated. Traditionally, a port or switch can be a weak point in a device. For example, a switch uses mechanical contacts to perform a function, such as turning a device on and off. As the switch is used, the mechanical contacts can wear out. Where the UMD reduces or eliminates the use of switches and ports, the mechanical contacts can be eliminated (e.g., the weak points of the device) and a durability of the device can be increased. In one example, the UMD can include sensory devices to provide sensory indications to a user. In one example, the sensory device can be a visual sensory device, such as a display. The UMD can display information to a user using the visual sensory device. In another example, the sensory device can be an auditory sensory device, such as a speaker. The UMD can communicate information to a user using the auditory sensory device, such as communicating information to the user via the speaker. In another example, the sensory device can be a touch sensory device, such as a vibrator. The UMD can communicate information to a user using the touch sensory device. For example, the vibrator can vibrate for different periods of time or at different intervals to indicate different information to a user. In one example, the UMD can be attached to different locations on a user's body using different UMD holders. For example, the UMD can be coupled with a wristband UMD holder to attach the UMD to a wrist position on the user. In another example, the UMD can be coupled with a headband UMD holder to attach the UMD to a head position (such as the forehead) on the user. The UMD can determine the location on the user where the UMD may be taking measurements and can adjust the measurements being taken based on the location, as sketched below. For example, when the UMD may be located at a wrist of the user, the UMD can use an impedance spectrometer to take a hydration level measurement. In another example, when the UMD may be located at a forehead of the user, the UMD can use an accelerometer to take impact measurements (such as concussion measurements). The UMD can receive user input from an input device or an input controller that can be part of the UMD indicating the location of the UMD on the user. An advantage of different UMD holders positioning the UMD at different locations on a user can be to enable optimal measurement taking for different types of measurements. For example, when a user desires to take a hydration measurement, the wrist may be an optimal location and the UMD can be coupled to the user with a wristband UMD holder. In this example, when a user desires to take an impact measurement, the forehead may be an optimal location and the UMD can be coupled to the user with a forehead UMD holder. The different UMD holders can position or align the UMD to engage the body differently based on the location of the UMD.
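A minimal sketch of this location-based measurement selection follows, assuming a hypothetical mapping from holder location to sensor and measurement type.

```python
# Minimal sketch of location-based measurement selection: the UMD holder
# (or user input) reports where the UMD is worn, and the UMD picks the
# sensor and measurement appropriate for that location. The mapping and
# all names are illustrative assumptions.

MEASUREMENTS_BY_LOCATION = {
    # wrist: hydration via the impedance spectrometer
    "wrist": ("impedance_spectrometer", "hydration_level"),
    # forehead: impact (e.g., concussion) via the accelerometer
    "forehead": ("accelerometer", "impact"),
}

def select_measurement(location: str) -> tuple:
    try:
        return MEASUREMENTS_BY_LOCATION[location]
    except KeyError:
        raise ValueError(f"no measurement profile for location: {location}")

sensor, measurement = select_measurement("wrist")
print(f"Using {sensor} to take a {measurement} measurement")
```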
For example, a wrist location may be relatively flat for the UMD to engage the wrist and the forehead position may be relatively curved for the UMD to engage the forehead. The UMD holder can align the UMD with the flat wrist or the curved forehead to enable the UMD to engage the body of the user and take measurements. FIGS.9A-9Eillustrate various embodiments of UMD holders according to various embodiments.FIG.9Aillustrates a band or harness900with a UMD holder902formed and shaped to receive a UMD904according to one embodiment. In one embodiment, the UMD904can snap into an opening or a pocket formed and shaped to receive the UMD. For example, the UMD904can be circularly shaped and the UMD holder902can have a circular opening to receive the UMD904. In one embodiment, the band or harness900can be a compression sleeve to attach to a body of a user. In another embodiment, the band or harness900can include fasteners906(such as Velcro®, snap fasteners, hooks, and so forth) to attach together to form a partial or complete band or harness900around a body part of a user. FIG.9Billustrates another band or harness920according to one embodiment. In one example, the band or harness920can be formed and shaped to receive a UMD (such as a circular puck) in a UMD holder. In another example, the UMD can be integrated into the band or harness920. FIG.9Cillustrates another band or harness930attached to an object932according to one embodiment.FIG.9Cfurther shows an exemplary embodiment where the band or harness930may be attached to an object932, where the object932may be a football helmet. In one example, the band or harness930can include a UMD coupled to or integrated into the band or harness930. In one example, the object932can be a helmet (such as a safety helmet or athletic helmet), a hat, clothing, and so forth. FIG.9Dillustrates a band or harness940with an integrated UMD942according to one embodiment.FIG.9Dfurther shows an exemplary embodiment where the band or harness940may be integrated into a wristband, compression sleeve, or sweatband. FIG.10Aillustrates a top view of a UMD1000with a display1010according to one embodiment. In one example, the UMD1000can be a circular shape, a square shape, a rectangular shape, a cylindrical or oval shape, and so forth. In another example, the display1010can be a liquid crystal display (LCD), a light emitting diode (LED) display, a touchscreen display, a backlight display, and so forth. FIG.10Billustrates a bottom view of the UMD1000with an optical sensor1030and electrodes1040according to one embodiment. In one example, the UMD1000can include light sources1020, such as LED lights or incandescent bulbs. When the UMD1000may be engaged to a body of a user, the light sources1020can illuminate one or more skin layers of the user. In one example, the optical sensor can measure an amount of light reflected by the one or more skin layers of the user. In another example, the optical sensor can measure an amount of light absorbed by the one or more skin layers of the user. When the UMD1000may be engaged to a body of a user, the electrodes1040can measure an impedance level of one or more skin layers of the user (e.g., measure bio-impedance levels). In one example, the UMDs may be swappable. For example, a first UMD may be taking measurements of a user at a wrist location using the wristband UMD holder until the UMD runs out of power. When the UMD runs out of power, a user can switch the first UMD with a second UMD.
In this example, the wristband UMD holder can maintain the same location to take measurements when the UMDs may be switched out. In this example, when the second UMD is coupled to the wristband UMD holder, the wristband UMD holder can align sensors in the sensor array of the UMD to take measurements at the same location or substantially the same location as the first UMD. An advantage of different UMDs taking measurements at the same location can be to reduce calibration errors and measurement errors. When a UMD engages the body and takes a measurement at different locations, the measurements may be calibrated for the different locations and may introduce errors or differences into measurement data. For example, when a first UMD takes measurements at a first location on the wrist and a second UMD takes measurements at a second location on the wrist, the first and second UMD may be calibrated differently based on the different locations and may provide different measurement information. When the UMD holder enables the first UMD and second UMD to engage the body at the same location or substantially the same location (e.g., a repeatable measurement location), the first and second UMD can use the same calibration information and reduce errors from different calibrations for different locations. In one example, the UMDs can include one or more electrodes to take impedance spectroscopy measurements. An electrode can be a conductor through which a current can enter or leave the body of the user. In one example, the one or more electrodes can be spring-loaded electrodes (e.g., electrodes with springs to adjust a height of the electrodes). An advantage of spring-loaded electrodes can be to enable the electrodes to engage the body of the user and maintain a comfortable electrode height. For example, when the user places the UMD in a UMD holder, the UMD can have electrodes on a bottom surface of the UMD. When the electrodes engage the body, a height of the electrodes can be automatically adjusted using the springs. Another advantage of the spring-loaded electrodes can be to secure the electrodes at a location of the user. For example, the springs can enable the electrodes to apply continuous pressure against the body of the user. In this example, the continuous pressure can secure the UMD at a location and reduce or eliminate movement of the UMD as the user may move around. In one example, the UMD can include multiple pairs of electrodes to take impedance spectroscopy measurements. A first electrode of an electrode pair can conduct a current to enter the body of a user. A second electrode of the electrode pair can conduct a current to leave the body of the user. The multiple pairs of electrodes can enable the UMD to take multiple impedance spectroscopy measurements using different electrode pairs. An advantage of taking multiple impedance spectroscopy measurements using different electrode pairs can be to take impedance spectroscopy measurements from the user when one or more electrode pairs may not be engaging the body of the user. For example, the UMD can have 4 electrode pairs at different locations on a bottom surface of the UMD. When the UMD engages the body of the user, one or more of the electrode pairs may not properly engage the body of the user. When at least one of the electrode pairs properly engages the body of the user, the UMD can use that electrode pair to take impedance spectroscopy measurements.
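A minimal sketch of this pair-selection logic follows, assuming a hypothetical contact-impedance range that distinguishes an engaged pair from a lifted one.

```python
# Minimal sketch of selecting an electrode pair that properly engages
# the body: each pair reports a contact impedance, and the UMD uses the
# first pair whose impedance falls inside a plausible skin-contact
# range. The range and all names are illustrative assumptions.

CONTACT_RANGE_OHM = (100.0, 50000.0)  # an open circuit reads far higher

def first_engaged_pair(contact_impedances: list) -> int:
    """Return the index of the first engaged pair, or -1 if none engage."""
    low, high = CONTACT_RANGE_OHM
    for index, z in enumerate(contact_impedances):
        if low <= z <= high:
            return index
    return -1

# Four pairs on the bottom surface; pairs 0 and 1 are lifted off the skin.
pairs = [9.9e6, 8.7e6, 1.2e3, 1.5e3]
chosen = first_engaged_pair(pairs)
print(f"Taking impedance spectroscopy measurements with pair {chosen}")
```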
In one example, the UMD can be sized and shaped to provide different distances between the first electrode and the second electrode of the electrode pair. In one example, as the distance between the first electrode and the second electrode may be increased or decreased, the amplitude of the current transmitted between the electrode pair can be adjusted. For example, as the distance between the first and second electrode may be decreased, the amplitude of the current between the first and second electrodes may be increased. In another example, as the distance between the first electrode and the second electrode is increased or decreased, a depth that the current may penetrate the body of the user may be adjusted. For example, as the distance between the first and second electrode is increased, the current penetration level between the first and second electrodes may decrease (e.g., the current does not penetrate as far into the body of the user as when the electrodes are closer together). The electrode pairs of the UMD can be arranged in various arrangements. In one example, the electrodes can be arranged in lines, such as the electrodes where electricity enters the body arranged in a first line and the electrodes where the electricity leaves the body arranged in a second line. In another example, the electrode pairs can be arranged in a concentric pattern or a circular pattern. In another example, the electrodes can be arranged in a rectangular pattern. Power management of portable or mobile electronic devices can be used to extend the useful engagement of the devices by reducing the duty cycle of the “on” time period for the device. The embodiments described herein may address the above-noted deficiency by using a user monitoring system. The user monitoring system can include a user measurement device (UMD) to monitor, collect, and/or analyze the desired environmental and/or physiological aspects of the user and the user's environment. The UMD can use sensors, stored data, real-time data, received data, and/or algorithms to monitor, collect, and/or analyze environmental and/or physiological information related to an individual, a group of individuals, or a business. FIG.11is a block diagram of a wearable UMD1100with a power management module1120according to one embodiment. The wearable UMD1100can include a sensor array with one or more sensors. In the depicted embodiment, the wearable UMD1100includes one or more physiological sensors1102and one or more activity sensors1104. In some instances, the activity sensors1104may be physiological sensors. That is, in some embodiments, the activity level can be determined from one or more physiological measurements. A physiological measurement may be any measurement related to a living body, such as a human's body or an animal's body. The physiological measurement is a measurement made to assess body functions. Physiological measurements may be very simple, such as the measurement of body or ambient temperature, or they may be more complicated, for example measuring how well the heart is functioning by taking an ECG (electrocardiogram). Physiological measurements may also include motion and/or movement of the body. In some cases, these physiological measurements may be taken to determine an activity level for power management, as described herein. In other instances, separate activity sensors may be used to take measurements that are specific to activity levels for power management.
The physiological sensors1102can include a pulse oximeter sensor, an electrocardiography (ECG) sensor, a fluid level sensor, an oxygen saturation sensor, a body temperature sensor (e.g., a skin temperature sensor), an ambient temperature sensor, a plethysmograph sensor, a respiration sensor, a breath sensor, a cardiac sensor (e.g., a blood pressure sensor, a heart rate sensor, a cardiac stress sensor, or the like), an impedance sensor (e.g., bioimpedance sensor), an optical sensor, a spectrographic sensor, a humidity sensor, an ambient temperature sensor, an altitude sensor, a barometer, a global positioning system (GPS) sensor, a triangulation sensor, a location sensor, a gyroscope sensor, a vibration sensor, an accelerometer sensor, a three dimensional (3D) accelerometer sensor, a force sensor, a pedometer, a strain gauge, a magnetometer, a geomagnetic field sensor, and the like. The activity sensors1104may be any of the physiological sensors described above, but in some cases, the activity sensors1104are Newtonian sensors, such as, for example, a gyroscope sensor, a vibration sensor, an accelerometer sensor (e.g., a sensor that measures acceleration and deceleration), a three dimensional (3D) accelerometer sensor (e.g., a sensor that measures the acceleration and deceleration and the direction of such), a force sensor, a pedometer, a strain gauge, a magnetometer, and a geomagnetic field sensor that can be used for activity level measurements; whereas the physiological sensors1102may be used for specific physiological measurements. In another embodiment, the physiological sensors1102and activity sensors1104can be categorized into physiological sensors, environmental sensors, and Newtonian sensors. The one or more physiological sensors may be a pulse oximeter sensor, an electrocardiography (ECG) sensor, a fluid level sensor, an oxygen saturation sensor, a body temperature sensor, an ambient temperature sensor, a plethysmograph sensor, a respiration sensor, a breath sensor, a cardiac sensor, a heart rate sensor, an impedance sensor, an optical sensor, a spectrographic sensor, or the like. The one or more environmental sensors may be, for example, a humidity sensor, an ambient temperature sensor, an altitude sensor, a barometer, a global positioning system (GPS) sensor, a triangulation sensor, a location sensor, or the like. The one or more Newtonian sensors may be, for example, a gyroscope sensor, a vibration sensor, an accelerometer sensor, a three dimensional (3D) accelerometer sensor, a force sensor, a pedometer, a strain gauge, a magnetometer, a geomagnetic field sensor, or the like. Alternatively, other types of sensors may be used to measure physiological measurements, including measurements to determine activity levels of the wearable UMD for power management. It should be noted that in some cases, power management activities may be performed to reduce the power consumption of the wearable UMD1100in response to the determination of the activity level. In other cases, the power management activities may be performed to increase measurement granularity or increase accuracy, precision, or resolution of the measurements by the wearable UMD1100. The wearable UMD1100includes a processor1106having a first sensor interface1105coupled to the one or more physiological sensors1102and a second sensor interface1109coupled to the one or more activity sensors1104.
The processor1106includes a processing element that is operable to execute one or more instructions stored in the memory device1108, which is operatively coupled to the processor1106. In some cases, the processing element and memory device1108may be located on a common substrate or on a same integrated circuit die. Alternatively, the components described herein may be integrated into one or more integrated circuits as would be appreciated by one having the benefit of this disclosure. The memory device1108may be any type of memory device, including non-volatile memory, volatile memory, or the like. Although not separately illustrated, the memory device may include one or more types of memory configured in various types of memory hierarchies. The memory device1108may store physiological data1126, such as current and past physiological measurements, as well as user profile data, bibliographic data, demographic data, or the like. The physiological data1126may also include processed data regarding the measurements, such as statistical information regarding the measurements, as well as data derived from the measurements. The memory device1108may also store activity data1124. The activity data1124may be current and past measurements, as well as predictive data for predictive modeling of the activity level. In one embodiment, the memory device1108may store instructions of the sensor module1122and power management module (PMM)1120, which perform various operations described herein. In particular, the sensor module1122can perform operations to control the physiological sensors1102and activity sensors1104, such as when to turn them on and off, when to take a measurement, how many measurements to take, how often to perform measurements, etc. For example, the sensor module1122can be programmed to measure a set of physiological measurements according to a default pattern. The default pattern may be the frequency, granularity, and power used for measurements by the physiological sensors. In another embodiment, the PMM1120may be implemented as processing logic in the processor1106. As described herein, the PMM1120can determine an activity level (e.g., an activity-based PMM) and can adjust the default pattern in various ways as described in more detail below. For example, the pattern may be adjusted by adjusting a sampling rate; adjusting a number of sensors to take physiological measurements; adjusting a number of different physiological measurements to take; adjusting a frequency or granularity of taking physiological measurements; turning off one or more systems of the apparatus; adjusting a type of communication channel to transmit or receive data; adjusting a frequency at which to transmit or receive data; adjusting a power level to transmit or receive data; adjusting a data rate to transmit or receive data; adjusting a number of different channels to transmit or receive data, or the like. In the depicted embodiment, the processing element1107(e.g., a processor core, a digital signal processor, or the like) executes the instructions of the sensor module1122and PMM1120, as well as possibly other modules or routines. Alternatively, the operations of the sensor module1122and PMM1120can be integrated into an operating system that is executed by the processor1106. In one embodiment, the processing element1107measures a physiological measurement via the first sensor interface1105.
The processing element1107measures an amount of activity of the wearable UMD1100via the second sensor interface1109. The amount of activity could be movement or motion of the wearable UMD1100, as well as other measurements indicative of the activity level of a user, such as heart rate, body temperature, or the like. The processing element1107performs a power adjustment activity in view of the amount of activity. As described above, the physiological measurements may be stored in the memory device1108as physiological data and the activity measurements (indicative of the activity level) may be stored in the memory device1108as the activity data1124. When determining the activity level, the processing element (or PMM1120) may process the activity data1124to determine an activity level and the appropriate power adjustment activity, as described herein. In one embodiment, the PMM1120measures a first set of physiological measurements using a first sensor at a first sampling rate. To perform the power adjustment activity when appropriate, the PMM1120adjusts the first sampling rate to a second sampling rate in view of the amount of activity and measures a second set of physiological measurements using the first sensor at the second sampling rate. In some cases, the second sampling rate is less than the first sampling rate. In other cases, the second sampling rate is greater than the first sampling rate. In another embodiment, the activity-based PMM calculates a rate of change of two or more physiological measurements in a period of time to determine an amount of activity of the wearable UMD. In another embodiment, a first set of sensors are coupled to the first sensor interface1105and a second sensor, designated as an activity sensor, is coupled to the second sensor interface1109. The PMM1120measures a first set of physiological measurements using the first set of sensors. To perform the power adjustment activity, the PMM1120determines a subset of less than all of the first set of sensors for a second set of physiological measurements and measures the second set using the subset, instead of the entire first set. In another embodiment, the PMM1120turns off at least one of the first set of sensors that is not in the subset to further reduce power consumption by the wearable UMD1100, for example. In another embodiment, the PMM1120adjusts a sampling rate of at least one of the first set of sensors that is not in the subset in order to reduce power consumption by the wearable UMD1100. In another embodiment, the PMM1120measures a first set of physiological measurements using a first set of sensors. To perform the power adjustment activity, the PMM1120determines a subset of less than all of the first set of sensors based on at least one of an activity type of the user or a selected physiological output and measures a second set of physiological measurements using the subset. In another embodiment, the PMM1120measures a first set of physiological measurements using a first set of sensors. To perform the power adjustment activity, the PMM1120adjusts at least one of the first set of sensors to a higher granularity to measure a first type of physiological measurements based on at least one of an activity type of the user or a selected physiological output. For example, when the user wants a hydration measurement, the device may only use a certain subset of the sensors or may adjust a sample rate of one sensor to have higher granularity when measuring hydration.
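A minimal sketch of the activity-based adjustments just described follows; the mapping from activity amount to sampling rate, the hydration-specific sensor subset, and all thresholds and names are hypothetical assumptions.

```python
# Minimal sketch of activity-based sampling adjustment: map an activity
# amount to a sampling rate, and narrow the sensor set for a specific
# requested output (e.g., hydration). All values are illustrative.

DEFAULT_RATE_HZ = 1.0

def adjusted_rate_hz(activity_amount: float) -> float:
    """Lower rate when activity is low; higher rate when activity is high."""
    if activity_amount < 0.2:      # resting
        return 0.1
    if activity_amount < 0.6:      # moderate activity
        return DEFAULT_RATE_HZ
    return 4.0                     # vigorous activity: higher granularity

def sensor_subset(requested_output: str, sensors: list) -> list:
    # For a hydration measurement, only the bioimpedance sensor is
    # needed; the remaining sensors can be turned off or sampled less.
    if requested_output == "hydration_level":
        return [s for s in sensors if s == "bioimpedance"]
    return sensors

sensors = ["bioimpedance", "heart_rate", "skin_temperature"]
print(adjusted_rate_hz(0.05), sensor_subset("hydration_level", sensors))
```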
In another embodiment, the PMM1120is able to turn off other components of the wearable UMD1100, such as an RF circuit1110used to communicate data via antenna1116or display1112. Alternatively, the PMM1120may activate, de-activate, turn on, turn off, enable, or disable other components of the wearable UMD1100according to the measurement pattern defined for the activity level. Alternatively, the PMM1120may perform one or more of the power adjustment activities described herein. The wearable UMD1100also includes one or more power sources1114, such as batteries, to supply power to the various components. The power consumed by the wearable UMD1100can be adjusted up or down based on the activity level according to some embodiments as described herein. In one embodiment, the PMM1120is considered an activity-based PMM where the activity-based PMM, when executed by the processor1106, identifies default sample rates at which the sensor module1122takes a first set of physiological measurements with multiple sensors (1102,1104). The activity-based PMM determines an amount of activity of the wearable UMD1100based on at least one of the first set of physiological measurements. For example, the first set of physiological measurements may include measurements from the activity sensor(s)1104that are primarily used to determine an activity level. The activity-based PMM1120may compare the determined activity level against one or more threshold levels to determine an activity level and corresponding power adjustment activity. In one embodiment, the activity-based PMM1120determines a second sample rate for at least one of the multiple sensors (1102,1104) using the determined amount of activity and instructs the sensor module1122to adjust the at least one of the multiple sensors to the second sample rate for a second set of physiological measurements. In some cases, the second sample rate is less than the corresponding one of the default sample rates. A lower sampling rate may cause the wearable UMD1100to consume less power when taking the second set of physiological measurements than when using the default sample rates to take the first set of physiological measurements. In other cases, the second sample rate is more than the corresponding one of the default sample rates. The higher sampling rate may cause the wearable UMD1100to measure the second set of physiological measurements at a higher fidelity than the first set of physiological measurements. It could be higher fidelity, as well as higher accuracy, higher precision, or higher resolution. In another embodiment, the default sample rates are a first combination of different rates for different ones of the multiple sensors and the activity-based PMM can instruct the sensor module1122to adjust the default sample rates to a second combination of different rates for the different ones of the multiple sensors. In one embodiment, the multiple sensors include a hardware motion sensor to measure at least one of movement or motion of the wearable UMD1100. The activity-based PMM1120can determine the amount of activity based at least in part on the at least one of the movement or motion of the wearable UMD1100. The hardware motion sensor may be an accelerometer sensor, a gyroscope sensor, a magnetometer, a GPS sensor, a location sensor, a vibration sensor, a 3D accelerometer sensor, a force sensor, a pedometer, a strain gauge, a magnetometer, and a geomagnetic field sensor.
The multiple sensors may be from the following types of sensors: a physiological sensor, an environmental sensor, and a Newtonian sensor. The hardware motion sensor may be considered the activity sensor and other sensors may be used for physiological measurements, such as one or more of the following types of sensors: a pulse oximeter sensor, an ECG sensor, a fluid level sensor, an oxygen saturation sensor, a body temperature sensor, an ambient temperature sensor, a plethysmograph sensor, a respiration sensor, a breath sensor, a cardiac sensor, a heart rate sensor, an impedance sensor, an optical sensor, a spectrographic sensor, a humidity sensor, an ambient temperature sensor, an altitude sensor, a barometer, a GPS sensor, a triangulation sensor, a location sensor, a gyroscope sensor, a vibration sensor, an accelerometer sensor, a three dimensional (3D) accelerometer sensor, a force sensor, a pedometer, a strain gauge, a magnetometer, and a geomagnetic field sensor. In other embodiments, the measurements may be any one or more of the following types of measurements: a hydration level measurement, a heart rate measurement, a blood pressure measurement, an oxygen level measurement, and a temperature measurement. In some cases, a sensor may be used to take a temperature measurement at an inner ear region, such as in connection with a headphone that can be placed in an ear cavity. Alternatively, other types of measurements can be taken. In another embodiment, a first sensor of multiple sensors in the UMD1100can take a first set of physiological measurements at a first sampling rate. The activity-based PMM calculates a rate of change of at least two of the first set of physiological measurements measured in a period of time and instructs the sensor module1122to adjust the first sensor to a second sampling rate for a second set of physiological measurements in view of the rate of change. In a further embodiment, the activity-based PMM1120calculates a second rate of change of at least two more of the first set of physiological measurements measured in the period of time and instructs the sensor module1122to adjust the first sensor to the second sampling rate for the second set of physiological measurements in view of the rate of change and the second rate of change. In some cases, the rate of change may be indicative of an amount of activity of the wearable UMD1100. In other cases, the amount of activity can be defined using the rate of change, as well as other physiological measurements or calculations made using the physiological measurements, like the rate of change. Although the embodiment illustrated inFIG.11illustrates and describes the power management being managed by the PMM1120as executable instructions executed by the processor1106, in other embodiments, the power management operations may be performed in a power management system comprising hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. In some embodiments, a separate controller could be coupled to the processor1106and the controller includes circuitry and/or instructions or microinstructions to perform the power management operations described herein.
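A minimal sketch of the rate-of-change logic described above follows; the thresholds and rate limits are hypothetical assumptions.

```python
# Minimal sketch of rate-of-change-driven sampling: the PMM compares
# the change between measurements over a period of time against
# thresholds and moves the sensor to a slower or faster sampling rate.
# All numeric values are illustrative assumptions.

def rate_of_change(samples: list, period_s: float) -> float:
    """Average change per second across at least two measurements."""
    if len(samples) < 2 or period_s <= 0:
        return 0.0
    return abs(samples[-1] - samples[0]) / period_s

def next_sampling_rate_hz(current_hz: float, roc: float) -> float:
    if roc < 0.01:           # signal nearly flat: sample less, save power
        return max(current_hz / 2.0, 0.05)
    if roc > 0.1:            # signal moving quickly: sample more
        return min(current_hz * 2.0, 8.0)
    return current_hz        # otherwise keep the current rate

heart_rates = [61.0, 61.2, 61.1, 61.0]        # slow drift while resting
roc = rate_of_change(heart_rates, period_s=60.0)
print(f"rate of change: {roc:.4f}/s -> {next_sampling_rate_hz(1.0, roc)} Hz")
```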
In another embodiment, a portable electronic device (e.g., wearable UMD1100) includes one or more sensors integrated into or coupled to the portable electronic device. In one example, the portable electronic device or a power management system of the portable electronic device can use the one or more sensors to manage power consumption of the portable electronic device. In one example, the portable electronic device may include an activity sensor, such as an accelerometer, a gyroscope, a 3D accelerometer, and so forth. In one example, the power management system can adjust an amount of power consumed by the portable electronic device in view of an amount of activity detected or measured by the activity sensor. In one example, the portable electronic device can be a wearable device, such as a wristband or a fitness band. In this example, the wearable device can use the activity sensor to measure an amount of activity (such as movement or motion) of the wearable device and communicate the amount of activity of the wearable device to the power management system. The power management system can adjust an amount of power used by the wearable device in view of the amount of activity measured by the activity sensor. For example, when the power management system determines that an activity rate of the user is relatively low (such as the user sitting down or resting), the power management system can reduce a sample rate duty cycle to provide a low power “sleep mode” during the lower activity level periods. An advantage of adjusting the sample rate duty cycle based on an activity level of the user can be to reduce a power consumption of the device during lower activity periods while maintaining high fidelity measurements during periods of relatively high activity. In one example, the wearable device can include multiple sensors used to take different measurements, such as a heart rate sensor, a bio-impedance sensor, an optical sensor, a skin temperature sensor, an ambient temperature sensor, a humidity sensor, a global positioning system (GPS) sensor, a pulse oximeter sensor, an accelerometer, a 3D accelerometer, a gyroscope, and so forth. The wearable device can use the multiple sensors to make one or more measurements, such as a hydration level measurement, a heart rate measurement, a blood pressure measurement, an oxygen level measurement, and a temperature measurement (skin temperature and/or ambient temperature). In one example, the power management system can use the activity sensor to measure an activity level of the user and perform a power adjustment activity in view of the measured activity level. In one example, the power adjustment activities can include: adjusting a number of sensors used to take the different measurements, adjusting a number of different measurements to take, adjusting a frequency or granularity of one or more measurements (e.g., how often one or more measurements are taken), turning on or off one or more systems of the wearable device (such as a display or communications system), and so forth. The portable device (such as a wearable device) can include a communication module to transceive data (e.g., send and receive data).
In another example, the power adjustment activities can include: adjusting a type of communication channel used to transceive data, such as via Bluetooth®, Wi-Fi®, or cellular technologies, and so forth; a frequency in time or rate at which the portable device sends or receives data; an amount of power used to send or receive data, such as adjusting a broadcast power used, an amount of power used to receive a signal, or a data rate at which data can be sent or received; a number of different channels or communication types the portable device may use, such as dual-band or multi-band communications; and so forth. In one example, when the wearable device uses the activity sensor to determine that an activity level of the user of the wearable device is below a threshold level (such as when the user is resting or sitting down), the wearable device can reduce a frequency of the number of times the wearable device takes measurements using the one or more sensors of the wearable device. In another example, when the wearable device uses the activity sensor to determine that an activity level of the user of the wearable device exceeds a threshold level (such as when the user is exercising or moving), the wearable device can increase a frequency of the number of times the wearable device takes measurements using the one or more sensors of the wearable device. An advantage of the power management system performing a power adjustment activity in view of the measured activity level can be to adjust a measurement granularity in view of the activity of the user of the wearable device. For example, when a user is inactive (e.g., the activity level of the user is below a threshold level), a rate of change in the one or more measurements taken by the wearable device is lower relative to when the user is active (e.g., the activity level of the user exceeds a threshold level). In this example, the power management system can preserve power by reducing the frequency of the number of times the wearable device takes measurements when the user is inactive without reducing or substantially reducing a quality of the measurements because of the lower rate of change in the measurements. Alternatively, the power management system can increase the power consumption of the wearable device to maintain a quality of the measurements by increasing the frequency of the number of times the wearable device takes measurements when the user is active. The power management system can use a predictive algorithm to determine when a probability of an activity level of a user changing exceeds a threshold level (e.g., predict a transition from a current activity level to a new activity level). In one example, the predictive algorithm can be a regression algorithm such as a linear regression model, a discrete choice model, a logistic regression model, a multinomial logistic regression model, a probit regression model, a time series model, and so forth. In another example, the predictive algorithm can be a machine learning technique such as a neural network, a multilayer perceptron (MLP), a radial basis function, a support vector machine, a Naive Bayes classifier, a geospatial predictive model, and so forth. In one example, the power management system can use the predictive algorithm to determine when an activity level of a user may transition from a relatively high activity level to a relatively low activity level.
For example, the power management system can use the predictive algorithm to determine when a user may finish a labor-intensive task (such as a labor-intensive job) and return home to relax. In one example, the predictive algorithm can use data from the GPS sensor or triangulation sensor to predict a transition from the relatively high activity level to the relatively low activity level. For example, the GPS sensor or triangulation data can indicate that a user may be moving from a work location to a home location and the predictive algorithm can determine that the user may be transitioning from the relatively high activity level to the relatively low activity level based on the change in location. In another example, the predictive algorithm can use scheduling information (such as a schedule of the user received from an input device) or a time of day to determine a transition in the activity level. In another example, the power management system can store previous activity information and analyze the previous activity information to predict patterns or trends in the previous activity information. The predictive algorithm can forecast or predict a transition in activity level based on the predicted patterns or trends in the previous activity information. For example, the power management system can use the predictive algorithm to determine that, based on a trend in the previous activity information, a user has a 90 percent probability of going to a gym after work at 7 PM on Wednesdays before going home. In this example, the power management system can transition to a high activity level while the user is at the gym and then transition to a low activity level when the user leaves the gym to go home. In another example, the power management system can use the predictive algorithm to determine when a user may switch from sleeping to awake. When the power management system predicts a transition from a current activity level to a new activity level, the power management system can adjust a sample rate duty cycle or measurement granularity. For example, when the power management system predicts a transition from a high activity level to a low activity level, the power management system can reduce the sample rate duty cycle or the measurement granularity. In another example, when the power management system predicts a transition from a low activity level to a high activity level, the power management system can increase the sample rate duty cycle or the measurement granularity. An advantage of the power management system using the predictive algorithm to determine a transition or change in activity level can be to reduce or eliminate using background monitoring (e.g., a full sleep mode) of an activity level to determine when to adjust the sample rate duty cycle or the measurement granularity. The predictive algorithms can also be used to adjust a length of the sleep mode. For example, when the device is in sleep mode, the device may only check every 5 minutes to see if it should wake up. This can be problematic when that 5 minutes of data may be desirable to capture, or when the user would otherwise have to wait minutes before the device responds to user input. Using the predictive algorithm, the device can shorten or lengthen a frequency at which the device checks whether it should wake up. When the background monitoring is reduced or eliminated, a power consumption of the portable device can be reduced because the portable device may not periodically use sensors for background monitoring.
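A minimal sketch of predictive duty-cycle and wake-check adjustment follows. The schedule-based predictor below is a simple stand-in for the regression or machine learning techniques named above, and all names and numbers are hypothetical assumptions.

```python
# Minimal sketch: a predicted activity transition adjusts the sample
# rate duty cycle and the sleep-mode wake-check interval. The trend
# ("gym at 7 PM on Wednesdays") mirrors the example in the text.

def predicted_activity(hour: int, weekday: str) -> str:
    # Trend learned from previous activity information.
    if weekday == "Wednesday" and 19 <= hour < 21:
        return "high"
    if 23 <= hour or hour < 6:     # typically sleeping
        return "low"
    return "medium"

DUTY_CYCLE = {"low": 0.05, "medium": 0.25, "high": 1.0}

def wake_check_interval_s(activity: str) -> int:
    # Shorten the wake-up check near expected high activity so the
    # first minutes of data are not missed; lengthen it when activity
    # is expected to be low.
    return {"low": 300, "medium": 120, "high": 15}[activity]

for hour in (3, 12, 19):
    level = predicted_activity(hour, "Wednesday")
    print(hour, level, DUTY_CYCLE[level], wake_check_interval_s(level))
```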
In another example, the portable device can monitor one or more measurements to determine a change or a rate of change of measurement data or data. In one example, the power management system can perform a power adjustment activity in view of the change or the rate of change of the measurement data or data. In one example, the portable device can take a measurement using a sensor of the portable device and store previous measurement data in a memory of the portable device. In one example, the portable device can compare the current measurement data to previous measurement data to determine an amount of change between the current measurement data and the previous measurement data. In another example, the portable device can compare the current measurement data to previous measurement data to determine a rate of change between the current measurement data and the previous measurement data. In another example, when the amount of change or the rate of change of the measurement data is below a threshold value, the power management system can perform a power adjustment activity to reduce a power consumption level of the portable device. In another example, when the amount of change or the rate of change of the measurement data exceeds a threshold value or score, the power management system can perform a power adjustment activity to increase the sample rate (power consumption level) of the portable device to maintain a measurement granularity. In another example, the power management system can perform a power adjustment activity in view of time, such as the time of day, day of the week, week of the month, or month of the year. In one example, the power management system can select a first measurement or data granularity threshold for the portable device during a first selected period of time and a second measurement or data granularity threshold for the portable device during a second selected period of time. For example, the portable device can be a portable device with multiple sensors (e.g., a sensor array) to take selected measurements. In this example, a user of the portable device may desire to have a higher measurement or data granularity threshold during a period when the user is exercising or working out (such as in the evening between 6 pm to 9 pm) and a lower measurement or data granularity threshold (relative to the higher measurement or data granularity threshold) when the user is not working out (such as when the user is eating, working, or sleeping). In one example, the portable device can use predefined schedule information of the user (such as a predefined daily routine of the user). In another example, the portable device can communicate with another device (such as a smartphone or computer) to receive schedule information (such as an electronic calendar or appointment information) to determine when the user may desire to have measurement or data granularity thresholds for different periods of time. In another example, the portable device can use a smart algorithm to track daily, weekly, and/or monthly activities of the user. In this example, the portable device can iteratively update an activity log of the user based on current and/or recent activity information. The portable device can use the information stored in the activity log to determine when the user may desire to have measurement or data granularity thresholds for different periods of time, as sketched below.
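```python
# Minimal sketch of the time-based granularity selection described
# above: the device keeps a schedule (predefined, pulled from an
# electronic calendar, or learned from an activity log) mapping periods
# of the day to granularity levels. The schedule values and names are
# hypothetical assumptions.

GRANULARITY_SCHEDULE = [
    # (start_hour, end_hour, samples_per_hour)
    (0, 6, 4),      # sleeping: low granularity
    (6, 18, 12),    # eating/working: moderate granularity
    (18, 21, 60),   # evening workout (6 pm to 9 pm): high granularity
    (21, 24, 6),    # winding down: low granularity
]

def samples_per_hour(hour: int) -> int:
    for start, end, rate in GRANULARITY_SCHEDULE:
        if start <= hour < end:
            return rate
    raise ValueError(f"hour out of range: {hour}")

print(samples_per_hour(19))  # 60 samples/hour during the workout window
```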
In another example, the portable device can measure a hydration level of the user. In one example, during a first period of time, the user may typically or habitually be outside where a temperature is higher or a humidity level is higher, and during a second period of time the user may be in an air-conditioned indoor location where the temperature is lower or the humidity is lower than the outdoor location. In this example, a hydration level measurement or data granularity threshold may be higher during a scheduled or typical period when the user is outdoors and the hydration level measurement or data granularity threshold may be lower (relative to the outdoor threshold) during a scheduled or typical period when the user is indoors. Similarly, the user may be indoors during winter months of the year for longer periods of time (the portable device using a lower measurement or data granularity threshold) and outdoors during summer months of the year for longer periods of time (the portable device using a higher measurement or data granularity threshold). In another example, the portable device can measure stress to the physiological or physical body of the user. In one example, during a first period of time, the user may typically or habitually be at a lower stress level (such as when the individual is sleeping) and during a second period of time the user may be at a higher stress level relative to the lower stress level (such as when the user first awakes in the morning or when the individual is working out). In this example, a stress level measurement or data granularity threshold may be lower during a scheduled or typical low-stress period and the stress level measurement or data granularity threshold may be higher (relative to the low-stress period) during a scheduled or typical period when the user is more stressed. In another example, the power management system can perform a power adjustment activity in view of the location of the user or the portable device. In one example, the portable device can determine the location of the user or the portable device using a global positioning system (GPS) sensor or a triangulation system (such as a wireless fidelity or cellular triangulation system). In one example, the power management system can select or set a first power consumption threshold and/or measurement or data granularity threshold for a first location and a second power consumption threshold and/or measurement or data granularity threshold for a second location. For example, the first location can be a location of a fitness facility and the second location can be a home location of the user. In this example, when the portable device determines that the user is at the home location, the power management system can decrease the power consumption threshold and/or measurement or data granularity. Additionally, when the portable device determines that the user is at the fitness facility location, the power management system can increase the power consumption threshold and/or measurement or data granularity. In one example, the portable device can determine the location of the user or the portable device using an altimeter. In one example, the power management system can select or set a first power consumption threshold and/or measurement or data granularity threshold for a first altitude and a second power consumption threshold and/or measurement or data granularity threshold for a second altitude. For example, the first altitude can be at an altitude above 1000 feet above sea level and the second altitude can be at an altitude between 0 and 999 feet above sea level. In this example, when the portable device determines that the user is at the first altitude, the power management system can increase the power consumption level and/or measurement or data granularity as the physical system of the user may be at a higher stress level or at a higher dehydration rate. Additionally, when the portable device determines that the user is at the second altitude, the power management system can decrease the power consumption level and/or measurement or data granularity as the physical system of the user may be at a lower stress level or at a lower dehydration rate.
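A minimal sketch combining the location-based and altitude-based threshold selection described above follows; the coordinates and coarse proximity tolerance are hypothetical assumptions, while the 1000-foot altitude cutoff follows the example above.

```python
# Minimal sketch: a coarse GPS/triangulation fix is resolved to a known
# location, and the granularity level is raised at a fitness facility
# or above the altitude cutoff. All coordinates are illustrative.

KNOWN_LOCATIONS = {
    "gym": (40.7505, -111.8860),
    "home": (40.7608, -111.8910),
}

def resolve_location(lat: float, lon: float) -> str:
    # Nearest known location within a coarse tolerance.
    for name, (klat, klon) in KNOWN_LOCATIONS.items():
        if abs(lat - klat) < 0.005 and abs(lon - klon) < 0.005:
            return name
    return "unknown"

def granularity_for(location_name: str, altitude_ft: float) -> str:
    if location_name == "gym" or altitude_ft > 1000.0:
        return "high"
    return "low"

print(granularity_for(resolve_location(40.7609, -111.8908), 350.0))   # low (home)
print(granularity_for(resolve_location(40.7506, -111.8862), 350.0))   # high (gym)
print(granularity_for("home", 4500.0))                                # high (altitude)
```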
In one example, the portable device can determine when the portable device may be in proximity or within a threshold distance of one or more other portable devices. The portable device can use a beacon signal or a heartbeat signal via a communication network (such as networks using the Bluetooth®, RFID, or Zigbee® technologies) to determine when another portable device may be in proximity or within a threshold distance. In this example, when the portable device may be in proximity or within a threshold distance of one or more other portable devices, the power management system may determine that the user may be located at an area where the user may physically exert himself or herself (such as a gym, playing field, bicycle path, and so forth). The power management system may increase the power consumption threshold and/or measurement or data granularity based on the one or more other portable devices being in proximity or within the threshold distance. In another example, the power management system may increase the power consumption threshold and/or measurement or data granularity when a threshold level of other portable devices may be in proximity or within the threshold distance. For example, the threshold level can be set to the proximity of 3 other devices. In this example, when one or two other devices may be in proximity or within the threshold distance, the power management system may maintain a current power consumption threshold and/or measurement or data granularity. When 3 or more other devices may be in proximity or within the threshold distance, the power management system may increase a current power consumption threshold and/or measurement or data granularity. An advantage of using a proximity or threshold distance of other devices when determining when to adjust a current power consumption threshold and/or measurement or data granularity can be that the proximity determination itself has a low power consumption. For example, determining a location of the portable device using GPS can be higher in power consumption than using a personal area network (e.g., Bluetooth® technology). Determining the location from the PAN may consume a relatively lower amount of power. Another advantage of using a proximity or threshold distance of other devices when determining when to adjust a current power consumption threshold and/or measurement or data granularity can be when the portable device may be located in a building or other location where GPS or triangulation may not be available. In another example, the power management system can perform a power adjustment activity in view of manual user settings. The manual user settings can include: a selected battery usage life (e.g., how long the user desires the battery of the portable device to last); a measurement or data granularity; a display brightness level; a power source recharge rate (e.g., how often the portable device is recharged); a data communication frequency level (e.g.,
In another example, the power management system can perform a power adjustment activity in view of manual user settings. The manual user settings can include: a selected battery usage life (e.g., how long the user desires the battery of the portable device to last); a measurement or data granularity; a display brightness level; a power source recharge rate (e.g., how often the portable device is recharged); a data communication frequency level (e.g., how frequently the portable device communicates data to another device); a communication network type (e.g., whether the portable device uses a cellular network, a wireless local area network (WLAN) (e.g., a network using the Wi-Fi® technology), or a PAN (e.g., the Bluetooth® or Zigbee® technologies)); one or more types of measurements the portable device can take using one or more sensors of the portable device; one or more types of sensors the portable device can use to make one or more measurements; a type or frequency of a sensory alert from the portable device (e.g., a vibration alert, a visual alert, an auditory alert); and so forth.

In one example, a user of the power management system can select or adjust one or more of the manual user settings to adjust a power consumption rate and/or a measurement or data granularity of the portable device. In one example, the portable device can receive the manual user setting from another device (such as over a USB connection or a PAN connection with a computing device). In another example, the portable device can receive the manual user setting from a graphical user interface (such as via a touch screen integrated into the portable device).

In one example, the graphical user interface can display a power usage level, a power consumption rate, an approximated usage time period remaining for the portable device, and/or a measurement or data granularity level in view of different manual user settings, as illustrated in the sketch below. For example, the portable device can have an approximate usage period of two days when the portable device is initially set to a data measurement frequency of once every minute, a display brightness of 2 lumens, and 2 sensors selected to take measurements. In this example, the user of the portable device can adjust the manual settings of the portable device to a data measurement frequency of once every 10 minutes, a display brightness of 1 lumen, and 1 sensor selected to take measurements to increase the usage period of the portable device to 4 days.

In another example, the portable device can have presets or predefined settings that a user can select to adjust the power consumption threshold and/or measurement or data granularity. In another example, the portable device can have a series of selectable configurations that a user can select (via an input device) to adjust the power consumption threshold and/or measurement or data granularity. In one example, the power source of the portable device can be a battery power source, a solar power source, a kinetic or motion power source, an induction power source, a wired or physical contact power source, and so forth.
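The usage-period display described above can be approximated with a simple energy-budget calculation. The following sketch is hypothetical: the battery capacity, idle draw, and per-feature costs are invented numbers chosen only to echo the two-day example, not values from the specification.

```python
# A minimal sketch of estimating remaining usage time from manual settings.
# All constants are assumptions for illustration.

def estimated_usage_hours(battery_mah, measurements_per_hour,
                          brightness_lumens, active_sensors):
    """Estimate battery life from a few manual user settings."""
    base_ma = 0.5                                  # idle draw (assumed)
    sensor_ma = 0.02 * measurements_per_hour * active_sensors
    display_ma = 0.8 * brightness_lumens
    total_ma = base_ma + sensor_ma + display_ma
    return battery_mah / total_ma

# Aggressive settings: once a minute, 2 lumens, 2 sensors.
print(round(estimated_usage_hours(200, 60, 2, 2)))   # -> 44 hours (~2 days)
# Conservative settings: once every 10 minutes, 1 lumen, 1 sensor.
print(round(estimated_usage_hours(200, 6, 1, 1)))    # -> 141 hours (~6 days)
```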
In another example, the power management system can perform a power adjustment activity in view of a user profile or demographic information of a user. In one example, the user profile or demographic information of the user can comprise: an age of the user; a gender of the user; a physical weight of the user; a body mass of the user; a health level of the user (such as whether the user has any diseases or chronic conditions, or is currently sick); a body fat percentage of the individual; a health risk level of the individual (such as whether the user smokes cigarettes, drinks alcohol, or is pregnant); a race of the individual; a fitness level or activity level of the user (e.g., how often and for how long the individual exercises); and so forth.

In one example, when the physical weight of a user is lower than a threshold physical weight (for example, 140 pounds), the number of measurements taken by the portable device is set at a first number of measurements for a given time period, and when the physical weight of the user is higher than the threshold physical weight, the number of measurements taken by the portable device is set at a second number of measurements for the time period. In one example, the first number of measurements for the time period is greater than the second number of measurements for the time period. For example, when the hydration level of a female that weighs 100 pounds (lbs.) decreases by 5%, the effects of the hydration level decrease may be larger than a 5% hydration level decrease for a female that weighs 300 pounds. In this example, where the effects of the hydration level are greater for the female that weighs 100 lbs., the portable device can increase the number of measurements taken for the period (increasing the power consumption level) to provide the female with a greater measurement granularity level (such as a sample rate) to offset the increased sensitivity to a hydration level change.

The power management system can adjust a power consumption level and/or adjust the measurement granularity level differently for different sensors of the sensor array, e.g., different sensors have different power consumption or measurement granularity priorities or weights. In one example, the power management system assigns a first weight to a first sensor of a sensor array (such as a bio-impedance sensor) and a second weight to a second sensor of the sensor array (such as a heart rate sensor). When the first sensor takes measurements, the power management system can increase the power consumption level and/or increase the measurement granularity level. When the second sensor takes measurements, the power management system can decrease the power consumption level and/or decrease the measurement granularity level.

In another example, the measurement granularity level (which correlates with the power consumption level) can be adjusted in view of the fitness level of the individual using the portable device. For example, when the fitness level of a user is high (e.g., a marathon runner or professional athlete), the measurement granularity level can be decreased, as the rate of change in measurements taken by the portable device can be lower than the rate of change in measurements taken when the user has a low fitness level (e.g., an individual that works out once a week). In this example, because the body of the user with a high fitness level has acclimated to a higher level of physical exertion before the rate of change in measurements shows a threshold rate of change, the measurement granularity level can be decreased compared to the user with a lower fitness level.

In one example, the power management system can adjust a power consumption level of the portable device in view of a health level of the individual. For example, the portable device can monitor an oxygen consumption level of a user to determine that the user is sick. When the user is sick, the portable device can increase a sensor measurement granularity to capture more measurement details while the user is sick.
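The weight-threshold rule can be expressed directly. The 140-pound threshold comes from the example above; the two sampling rates below are hypothetical.

```python
# A minimal sketch of the weight-threshold rule: lighter users get a higher
# sampling rate because a given hydration change affects them more.

WEIGHT_THRESHOLD_LBS = 140
HIGH_RATE_PER_HOUR = 12   # first number of measurements (lighter users)
LOW_RATE_PER_HOUR = 4     # second number of measurements (heavier users)

def measurements_per_hour(user_weight_lbs):
    if user_weight_lbs < WEIGHT_THRESHOLD_LBS:
        return HIGH_RATE_PER_HOUR
    return LOW_RATE_PER_HOUR

print(measurements_per_hour(100))  # -> 12
print(measurements_per_hour(300))  # -> 4
```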
In another example, the power management system can perform a power adjustment activity in view of multiple items of user profile or demographic information of a user. For example, the power management system can select a first measurement granularity level for a user that is 20, a female, has a high fitness level, weighs 100 lbs., and is healthy. In this example, the power management system can select a second measurement granularity level for a user that is 50, a male, has a low fitness level, weighs 250 lbs., and is unhealthy. In one example, the first measurement granularity level can be lower than the second measurement granularity level.

In another example, the power management system can perform a power adjustment activity in view of a type of activity the portable device is used for. In one example, the type of activity can be a sports or athletic activity, such as running, football, basketball, soccer, baseball, hockey, and so forth. In another example, the type of activity can be a type of work of the user, such as an office work environment, a police officer work environment, a construction worker work environment, a military or soldier work environment, and so forth. In one example, the power management system can perform a power adjustment activity, such as adjusting a measurement granularity level, in view of the type of activity. For example, when the portable device is used for football, the measurement granularity level can be adjusted to a first measurement granularity level, and when the portable device is used for baseball, the measurement granularity level can be adjusted to a second measurement granularity level. In another example, when the portable device is used by an individual in an office work environment, the measurement granularity level can be adjusted to a third measurement granularity level, and when the portable device is used by an individual in a military or soldier work environment, the measurement granularity level can be adjusted to a fourth measurement granularity level.

In another example, the power management system can perform a power adjustment activity in view of a calibration level of the portable device or one or more sensors of the portable device. In one example, when the calibration level of the portable device is below a threshold value, an amount of power provided to a sensor of the portable device can be increased to provide a higher amplitude to the sensor to enable a higher power measurement. In another example, when the calibration level of the portable device is below a threshold value, a measurement granularity level can be increased to enable more measurements to be taken by the sensor, providing more data points to compensate for the lower calibration level. In another example, when the calibration level of the portable device exceeds a threshold value, an amount of power provided to a sensor of the portable device can be decreased to provide a lower amplitude to the sensor to enable a power consumption saving.
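The calibration-dependent adjustment above might look as follows. The calibration scale, threshold, and scaling factors are assumptions for illustration.

```python
# A minimal sketch of the calibration rule: a poorly calibrated sensor gets
# more drive amplitude and more samples; a well calibrated one gets less.

CALIBRATION_THRESHOLD = 0.8   # calibration quality on a 0..1 scale (assumed)

def adjust_for_calibration(calibration_level, amplitude_ma, sample_rate_hz):
    """Return (new_amplitude, new_sample_rate) for a sensor."""
    if calibration_level < CALIBRATION_THRESHOLD:
        # Low calibration: boost amplitude and take extra data points.
        return amplitude_ma * 1.5, sample_rate_hz * 2
    # Calibration exceeds the threshold: save power with a lower amplitude.
    return amplitude_ma * 0.75, sample_rate_hz

print(adjust_for_calibration(0.6, 2.0, 1.0))   # -> (3.0, 2.0)
print(adjust_for_calibration(0.95, 2.0, 1.0))  # -> (1.5, 1.0)
```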
In another example, the power management system can perform a power adjustment activity in view of an amount of power remaining for the power source of the portable device. For example, when the power source of the portable device has a remaining power level that exceeds a threshold amount, an amplitude and/or measurement granularity level of one or more sensors of the portable device can be at a first threshold power consumption level. In this example, when the power source of the portable device decreases below a threshold remaining power level, the amplitude and/or measurement granularity level of one or more sensors of the portable device can be switched to a second threshold power consumption level.

In another example, the power management system may use multiple threshold power consumption levels to increase the adjustability of the power consumption levels. In another example, the power management system may use a power consumption level ratio or an adjustment value to continuously adjust the power consumption level based on the amount of power remaining. In another example, the power management system can perform a power adjustment activity automatically (such as reducing a sample rate of one or more sensors) when the amount of power remaining or the state of charge decreases below a threshold level.

In another example, the power management system can perform a power adjustment activity in view of a health level of the power source of the portable device, such as a battery health level. As a battery ages, the amount of power that the battery can store can decrease, e.g., the amount of time between when the battery needs to be recharged decreases. In one example, when the health level of the battery progressively decreases to different battery health levels, the power management system can adjust a power consumption level of the portable device to maintain approximately the same usable period between recharging of the portable device.

In another example, the power management system can adjust a power consumption level of the portable device in view of a state of charge (e.g., a remaining amount of power) and a battery health level. The power management system can determine a capacity of the battery and an expected duration the battery can provide power based on the state of charge and the battery health level. The power management system can adjust a power consumption level (such as a duty cycle) based on the expected duration the battery can provide power until a next recharging.

In one example, when the portable device is new (e.g., hasn't been used), a battery of the portable device can have a 100% battery health level and the portable device can have a first power consumption level that enables the portable device to have a usage period of time, such as 24 hours. In this example, when the portable device has been used for a period of time, such as 1 year, the battery health can decrease to 50% battery health. When the battery health decreases to 50%, the power management system can adjust a power consumption level of the portable device (such as a measurement granularity level or power amplitude of a sensor) to a second power consumption level to enable the portable device to maintain the usage period of time.

In another example, the health of the battery can be determined based on a remaining capacity of the battery cells. As the number of times a battery has been charged and discharged increases, the remaining capacity of the cells can decrease (such as for a lithium-ion battery). In this example, the portable device battery health can be associated with a number of charge and discharge cycles of the battery.
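One plausible reading of the state-of-charge plus battery-health adjustment is a duty cycle scaled so that a target usage period still holds as the battery degrades. All constants in this sketch are hypothetical.

```python
# A minimal sketch: scale the duty cycle so a target usage period (e.g.,
# 24 hours) still holds as the battery degrades.

DESIGN_CAPACITY_MAH = 200.0
TARGET_HOURS = 24.0
FULL_POWER_MA = 8.0           # draw at 100% duty cycle (assumed)

def duty_cycle(state_of_charge, battery_health):
    """state_of_charge and battery_health are fractions in 0..1."""
    usable_mah = DESIGN_CAPACITY_MAH * battery_health * state_of_charge
    # Average current we can afford if the device must last TARGET_HOURS.
    budget_ma = usable_mah / TARGET_HOURS
    return min(1.0, budget_ma / FULL_POWER_MA)

print(round(duty_cycle(1.0, 1.0), 2))   # new battery, full charge -> 1.0
print(round(duty_cycle(1.0, 0.5), 2))   # 50% health -> ~0.52 duty cycle
```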
In one example, the power management system can perform a power adjustment activity in view of a type of communication network the portable device is using to communicate data. In one example, the communications network can be a cellular network. The cellular network can be configured to operate based on a cellular standard, such as the third generation partnership project (3GPP) long-term evolution (LTE) Rel. 8, 9, 10, 11, or 12 standard, or the Institute of Electrical and Electronics Engineers (IEEE) 802.16p, 802.16n, 802.16m-2011, 802.16h-2010, 802.16j-2009, or 802.16-2009 standard. In another example, the communications network can be a wireless local area network (such as one using the Wi-Fi® technology) that can be configured to operate using a standard such as the IEEE 802.11-2012, IEEE 802.11ac, or IEEE 802.11ad standard. In another example, the communications network can be configured to operate using the Bluetooth® standard, such as Bluetooth® v1.0, Bluetooth® v2.0, Bluetooth® v3.0, or Bluetooth® v4.0. In another example, the communications network can be configured to operate using the ZigBee® standard, such as the IEEE 802.15.4-2003 (ZigBee® 2003), IEEE 802.15.4-2006 (ZigBee® 2006), or IEEE 802.15.4-2007 (ZigBee® Pro) standard.

In one example, when the portable device uses a cellular network to communicate data, the power management system can adjust a power consumption level to a first power consumption level, and when the portable device uses a wireless local area network (WLAN) to communicate data, the power management system can adjust a power consumption level to a second power consumption level. In this example, for the portable device to communicate data over a cellular network, the portable device may consume more power than when the portable device communicates data using the WLAN. Accordingly, when the portable device communicates data using the cellular network, the power management system can provide an increased amount of power to enable the portable device to communicate the data.

In another example, when the portable device determines that the data is communicated using the cellular network, the portable device can adjust a measurement granularity level. For example, when communicating the data may use an increased amount of power, the portable device can decrease the number of measurements taken for a period of time (i.e., decreasing the amount of data to communicate) to adjust for the increased power consumption when using the cellular network. In one example, the power management system can use a non-communication setting to turn off the communications or data transfer by the portable device (e.g., an airplane mode). In this example, the communications and data transfer of the portable device can be turned off to conserve power.
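The network-dependent adjustment described above can be sketched as a per-network cost table that scales the measurement rate. The relative costs are assumptions, not measured values.

```python
# A minimal sketch: radio cost per measurement differs by network type, so
# the sampling rate is scaled down on costlier links.

# Assumed relative energy cost per transmitted measurement, by network.
NETWORK_COST = {"cellular": 4.0, "wlan": 1.5, "pan": 1.0}

def sample_rate_for_network(network, base_rate_hz, airplane_mode=False):
    if airplane_mode:
        # Communications off entirely; sampling stays at the base rate and
        # data is stored locally until a link is available.
        return base_rate_hz
    # Scale the number of measurements down as the per-byte cost rises.
    return base_rate_hz / NETWORK_COST[network]

print(sample_rate_for_network("pan", 2.0))       # -> 2.0
print(sample_rate_for_network("cellular", 2.0))  # -> 0.5
```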
In one example, a user or a third party (i.e., an individual that is not a user of the portable device) can adjust one or more settings of the power management system. In one example, the user can be an athlete on a professional sports team and the third party can be a trainer of the athlete. In this example, the trainer of the athlete can adjust a setting of the power management system, such as a measurement granularity level, when the athlete is training with the trainer. When the athlete has completed training with the trainer, the setting of the power management system can return to an initial setting, remain at the setting selected by the trainer, or be replaced by a new setting selected by the athlete. In another example, the user can be a medical patient and the third party can be a medical professional, such as a doctor monitoring the patient. In one example, the doctor can adjust different sensors of the portable device in view of a medical diagnosis or symptoms of the patient. For example, the doctor can increase an amount of power and measurement granularity for a heart rate sensor or a blood pressure sensor and decrease an amount of power to other sensors of the portable device (such as putting the sensors in a sleep mode or turning the sensors off) when the patient is being monitored for a heart condition (such as a heart attack).

In another example, a user can use multiple portable devices at the same time or substantially the same time. For example, the portable device can be a monitoring device that can be coupled to different locations of the body of the user. In one example, the multiple portable devices can communicate with each other. In one example, the power management system can adjust the power consumption level of one or more of the portable devices in view of the number of devices being used at the same time. For example, when a user is using five different portable devices at different locations on the body of the user, the power management system can decrease the power consumption level of each portable device compared to the power consumption level when only one portable device is used. In this example, the power management system can decrease the power consumption level of each portable device because the granularity of the overall measurement data can increase as the number of portable devices being used by the user increases.

In another example, the power management system can adjust the power consumption level of each of the multiple devices to different power consumption levels based on device criteria. In one example, the device criteria can include: a battery capacity level, a type of measurement the device is taking, a battery power remaining level, an approximate usage period of the device, and so forth. In this example, each of the multiple devices can have different device criteria. For example, a first portable device can have a battery power remaining level of 30 minutes and a second portable device can have a battery power remaining level of 24 hours. In this example, the power management system can decrease a power consumption level of the first portable device and increase a power consumption level of the second portable device to maintain an overall measurement granularity level while extending the battery power remaining period of the first portable device, as illustrated in the sketch below.

In one example, the portable device can include a swappable battery pack. In one example, the portable device can be a wearable wristband to measure hydration. In this example, when a power remaining level of the portable device decreases below a threshold level, the portable device can indicate to the user to switch a first battery that is depleted with a second battery that is fully charged. In another example, the portable device can have an internal battery to provide power to the wearable wristband while the first battery is switched with the second battery.
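The two-device rebalancing example can be illustrated by splitting a total sampling rate in proportion to each device's remaining battery. The device names and rates are hypothetical.

```python
# A minimal sketch: shift sampling load away from the device with less
# battery remaining so that overall measurement granularity is preserved.

def rebalance(devices, total_rate_hz):
    """devices: dict of device name -> hours of battery remaining.
    Split a total sampling rate across devices in proportion to the
    battery each one has left."""
    total_hours = sum(devices.values())
    return {name: total_rate_hz * hours / total_hours
            for name, hours in devices.items()}

# First device has 0.5 hours left, second has 24 hours left.
print(rebalance({"wrist": 0.5, "chest": 24.0}, 4.0))
# -> wrist carries ~0.08 Hz of the load, chest picks up ~3.92 Hz
```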
In one example, the portable device can alert the user to recharge a power source of the portable device or switch the power source of the portable device when a power remaining level of the current power source of the device decreases below a threshold level. In another example, when the remaining power level decreases below a threshold level, the portable device can provide the user with different power consumption options. In one example, the power consumption options can include: a power off mode, where the device is turned off; a heartbeat mode, where the device runs on minimal power and wakes up to take measurements at selected periods of time or at selected events; a minimum power mode, where the portable device can continue to monitor the user but turn off one or more options on the portable device (such as a display screen or a speaker) and/or reduce the measurement granularity of the sensor measurements; and a full power mode, where the portable device will continue to operate at full power until the power source is fully depleted.

In another example, the portable device can operate in different modes in view of the remaining power level. For example, when the remaining power level is between 40% and 100%, the portable device can function in full power mode; when the remaining power level is between 20% and 39%, the portable device can operate in minimum power mode; and when the remaining power level is between 1% and 19%, the portable device can operate in heartbeat mode. This example is not intended to be limiting and the remaining power levels and modes can vary. In one example, the modes can be predetermined for the remaining power levels. In one example, the portable device can have an override mode, where the user can select to continue to operate in a first mode when the portable device reaches a remaining power level at which the portable device would normally switch modes. In another example, a user can select different modes and/or remaining power levels using an input device such as a GUI. A sketch of this mode selection appears below.

In another example, the portable device can determine power consumption rates based on the amount of power different systems and sensors of the portable device may consume for different settings. In one example, the portable device can display to the user different operating times and/or different system and sensor settings, and the user can select an operating time and/or different system and sensor settings in view of the power consumption rate of the portable device.

In another example, the power management system or the portable device can increase or decrease the number of sensors of the portable device taking the measurement and/or the frequency at which one or more of the sensors is taking measurements. For example, if a first sensor (such as an impedance spectroscopy sensor) consumes power at a first rate and a second sensor (such as a heart rate monitor) consumes power at a second rate that is lower than the first rate, when the remaining power level of the power source decreases below a threshold level, the power management system can adjust the first sensor and/or the second sensor. In one example, the power management system can turn off the first sensor. In another example, the power management system can decrease a measurement granularity of the first sensor. In another example, the power management system can turn off the second sensor. In another example, the power management system can decrease a measurement granularity of the second sensor.
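The mode bands quoted above (full power at 40-100%, minimum power at 20-39%, heartbeat at 1-19%) translate directly into a selection function; the specification notes the bands themselves can vary. The override behavior is sketched as an optional argument.

```python
# A minimal sketch of the mode thresholds quoted above.

def select_mode(remaining_percent, override=None):
    if override is not None:
        # Override mode: the user pinned a mode despite the power level.
        return override
    if remaining_percent >= 40:
        return "full"
    if remaining_percent >= 20:
        return "minimum"
    if remaining_percent >= 1:
        return "heartbeat"
    return "off"

print(select_mode(75))                   # -> "full"
print(select_mode(25))                   # -> "minimum"
print(select_mode(10, override="full"))  # -> "full" (user override)
```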
In another example, the power management system or the portable device can adjust one or more sensors in view of a rate of change in the measurements of the one or more sensors. For example, if a rate of change in measurements of a first sensor (such as an impedance spectroscopy sensor) is at a first rate and a rate of change in measurements of a second sensor (such as a heart rate monitor) is at a second rate, the power management system can adjust one or more settings of the first sensor and/or the second sensor. For example, when the first rate of the first sensor is below a threshold value (such as nearly stagnant), the power management system may decrease a measurement granularity of the first sensor. In another example, when the second rate of change in measurements of the second sensor is above a threshold value (such as changing rapidly), the power management system may increase a measurement granularity of the second sensor.

In another example, the power management system can adjust a power consumption mode of the portable device in view of the first rate and/or the second rate. In one example, when the first rate and/or the second rate is below a threshold value, the power management system can switch the portable device to a heartbeat mode to conserve power. In another example, when the first rate and/or the second rate is above a threshold value, the power management system can switch the portable device to a full power mode to capture a finer level of detail for the measurements of the first sensor and/or the second sensor.

In one example, the portable device can communicate with other devices, such as other measurement devices. The other measurement devices can include: a heart rate monitor, a pulse oximeter sensor, a body weight measurement scale, a sleep tracker, a glucose meter, and so forth. In one example, the portable device can determine when the other device can take the same or similar measurements to the sensors attached to the portable device. When the other devices can take the same or similar measurements to the sensors attached to the portable device, the power management system can decrease the measurement granularity of that sensor of the portable device or turn the sensor of the portable device off. For example, the portable device can have a heart rate sensor and a chest strap can also have a heart rate sensor. In this example, the portable device can communicate with the chest strap to receive measurements from the chest strap and turn off the heart rate sensor of the portable device.

In another example, the portable device can include one or more environmental sensors, such as an ambient temperature sensor, a humidity sensor, a weather sensor, an altitude sensor, a barometer, and so forth. In one example, the portable device can use one or more of the environmental sensors to adjust one or more settings of the physiological sensors of the portable device. For example, the portable device can use the humidity sensor to detect when the humidity level of the location where the user is located increases above a threshold humidity level. When the humidity level increases above the threshold, the portable device can increase a measurement granularity of one or more of the sensors to provide a more detailed measurement scope.
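The rate-of-change logic discussed earlier in this passage might be sketched as follows; the thresholds and units are assumptions.

```python
# A minimal sketch: stagnant signals earn fewer samples, fast-moving signals
# earn more, and extreme cases flip the device between heartbeat and full
# power modes.

LOW_RATE, HIGH_RATE = 0.01, 0.5   # assumed units: change per minute

def adjust_sensor(rate_of_change, sample_rate_hz):
    if rate_of_change < LOW_RATE:         # nearly stagnant
        return sample_rate_hz / 2
    if rate_of_change > HIGH_RATE:        # changing rapidly
        return sample_rate_hz * 2
    return sample_rate_hz

def device_mode(rates_of_change):
    if all(r < LOW_RATE for r in rates_of_change):
        return "heartbeat"                # everything quiet: conserve power
    if any(r > HIGH_RATE for r in rates_of_change):
        return "full"                     # capture finer detail
    return "minimum"

print(adjust_sensor(0.001, 1.0))          # -> 0.5
print(device_mode([0.001, 0.002]))        # -> "heartbeat"
```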
In another example, when an ambient temperature sensor determines that the temperature decreases below a threshold temperature, the power management system can decrease a measurement granularity of one or more sensors of the portable device to conserve power, as a rate of change in the measurements of the sensors may decrease as the temperature decreases (such as when the temperature decreases, a rate of change of hydration measurements may also decrease). In one example, the environmental sensors can be attached to the portable device or integrated into the portable device. In another example, the environmental sensors can be at different locations, such as an environmental monitoring station that communicates with the portable device. In another example, the portable device can receive environmental information from another source, such as a weather monitoring website or an application on a smartphone in communication with the portable device.

The portable device can include a processor to execute computer programs or applications. In one example, the power management system can use the algorithms, techniques, and/or methods discussed in the preceding paragraphs to adjust power consumption activities based on applications that may be running on the portable device. For example, when the portable device is running an application that may consume power at a consumption level, the power management system may adjust a display of the portable device, a processing speed of the portable device, and so forth. In another example, the portable device can perform power consumption adjustment activities, such as adjusting a power consumption level or measurement granularity level, for one or more sensors based on an application running or executing on the portable device. For example, when the portable device is running an application that may consume a relatively large amount of power (such as an application with heavy computational or processing requirements), the power management system can perform power consumption adjustment activities to reduce power consumption.

FIG. 12 is a flow diagram of a method 1200 of power management of a wearable UMD according to one embodiment. The method 1200 is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. In one embodiment, the PMM 1120 of FIG. 11 performs the method 1200. In another embodiment, the PMS 140 of FIG. 1A performs the method 1200. Alternatively, other components of the UMDs, as described herein, can perform some or all of the operations of method 1200.

Referring to FIG. 12, processing logic begins with measuring a first set of physiological measurements of a user using a physiological sensor (one or more sensors) according to a first pattern (block 1202). The processing logic measures an activity level of the user using an activity sensor of the wearable hardware device (block 1204). The processing logic determines when there is a change in activity level (as defined by the power management system) (block 1206). If there is no change, no power adjustment activity is performed, and the processing logic returns to block 1202 to measure additional physiological measurements according to the first pattern.
If there is a change, a power adjustment activity may be performed to change the first pattern to a second pattern, and the processing logic measures a second set of physiological measurements of the user using the physiological sensor according to the second pattern (block 1208). The processing logic can measure an activity level using the activity sensor (block 1210) and return to block 1206. In this embodiment, there are two patterns; however, in other embodiments, more than two patterns are possible. For example, the activity levels can be separated by multiple thresholds and, depending on the measured activity level, corresponding patterns may be selected.

In one embodiment, the different patterns are different sampling rates used to take physiological measurements. Alternatively, the different patterns can have any combination of the following: different numbers of sensors being used to take physiological measurements; different numbers of different physiological measurements to take; different frequency or different granularity of taking physiological measurements; different combinations of other components being turned on or off during a period; or different combinations of permitted communication types, frequencies, power levels, data rates, or communication channels to transmit or receive data using one or more RF circuit components of the system.

In another embodiment, the processing logic measures a first set of physiological measurements of a user using a physiological sensor according to a first pattern. The processing logic measures an activity level of the user using an activity sensor of the wearable hardware device. The processing logic adjusts the first pattern to a second pattern to take a second plurality of physiological measurements of the user in view of the activity level. The processing logic measures a second set of physiological measurements using the physiological sensor according to the second pattern.

In a further embodiment, the processing logic determines that the activity level is below a threshold level. The processing logic adjusts to the second pattern accordingly, where the second pattern includes a lower sampling rate than a sampling rate at which the first set of physiological measurements are taken with the first pattern. In another embodiment, when the processing logic determines that the activity level is above a threshold level, the processing logic adjusts to the second pattern accordingly, where the second pattern includes a higher sampling rate than a sampling rate at which the first set of physiological measurements are taken with the first pattern. In other embodiments, the processing logic can use one or more predictive algorithms to determine a probability of change in the activity level over time.
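A compact sketch of the method 1200 loop follows. The sensor-reading callables, the activity threshold, and the two sampling rates are hypothetical stand-ins for the blocks of FIG. 12.

```python
# A minimal sketch of the method 1200 loop: sample according to a pattern,
# watch the activity level, and switch patterns when the activity changes.

PATTERNS_HZ = {"low": 1 / 60, "high": 1.0}   # assumed sampling rates

def classify(activity_level, threshold=0.3):
    """Block 1206: decide whether the activity level calls for a new pattern."""
    return "high" if activity_level > threshold else "low"

def run(read_physiological, read_activity, cycles=3):
    pattern = "low"                            # first pattern (block 1202)
    samples = []
    for _ in range(cycles):
        samples.append(read_physiological())   # measure per current pattern
        activity = read_activity()             # blocks 1204 / 1210
        pattern = classify(activity)           # blocks 1206 / 1208
    return pattern, samples

print(run(lambda: 42.0, lambda: 0.8))  # sustained activity -> ('high', [...])
```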
FIG. 13 is a flow diagram of a method 1300 of power management of a wearable UMD according to another embodiment. The method 1300 is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. In one embodiment, the PMM 1120 of FIG. 11 performs the method 1300. In another embodiment, the PMS 140 of FIG. 1A performs the method 1300. Alternatively, other components of the UMDs, as described herein, can perform some or all of the operations of method 1300.

Referring to FIG. 13, processing logic begins with measuring a first set of physiological measurements of a user using a physiological sensor (one or more sensors) according to a first pattern (block 1302). The processing logic measures an activity level of the user using an activity sensor of the wearable hardware device (block 1304). The processing logic determines whether the activity level is less than a first threshold (block 1306). If so, the processing logic measures a second set of physiological measurements of the user using the physiological sensor according to a second pattern (block 1308). For example, the second pattern may be a lower sampling rate than a first sampling rate of the first pattern.

If at block 1306 the activity level is not less than the first threshold, the processing logic determines if the activity level is less than a second threshold level (block 1310). If so, the processing logic measures a second set of physiological measurements of the user using the physiological sensor according to a third pattern (block 1312). For example, the third pattern may be a higher sampling rate than the first sampling rate of the first pattern. If at block 1310 the activity level is not less than the second threshold, the processing logic measures a second set of physiological measurements of the user using the physiological sensor according to a fourth pattern (block 1314). For example, the fourth pattern may be a higher sampling rate than the first sampling rate of the first pattern and the second sampling rate of the second pattern. The processing logic at blocks 1308, 1312, and 1314 may return to block 1304 to measure an activity level or determine whether the activity level has changed and proceed accordingly.

In other embodiments, more than two threshold levels may be used. Also, in other instances, the threshold levels can follow different conventions and the comparison can be whether the activity level meets and/or exceeds the threshold level. In this embodiment, there are four patterns; however, in other embodiments, different combinations of patterns can be used. In one embodiment, the different patterns are different sampling rates used to take physiological measurements. Alternatively, the different patterns can have any combination of the following: different numbers of sensors being used to take physiological measurements; different numbers of different physiological measurements to take; different frequency or different granularity of taking physiological measurements; different combinations of other components being turned on or off during a period; or different combinations of permitted communication types, frequencies, power levels, data rates, or communication channels to transmit or receive data using one or more RF circuit components of the system.

In other embodiments, a hardware state machine may be used to determine which state the wearable UMD is in and perform the appropriate power adjustment activities corresponding to the state. The hardware state machine can also build in timers to prevent quick switching in some circumstances.
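The block 1306/1310 cascade of FIG. 13, together with an anti-thrash timer of the kind the hardware state machine could build in, might look as follows; the threshold values and dwell time are assumptions.

```python
# A minimal sketch of the FIG. 13 threshold cascade plus a debounce timer.

import time

FIRST_THRESHOLD, SECOND_THRESHOLD = 0.2, 0.6
MIN_DWELL_S = 5.0                       # assumed anti-thrash dwell time

def pick_pattern(activity_level):
    if activity_level < FIRST_THRESHOLD:
        return "second"                 # block 1308: lower sampling rate
    if activity_level < SECOND_THRESHOLD:
        return "third"                  # block 1312: higher sampling rate
    return "fourth"                     # block 1314: highest sampling rate

class DebouncedSelector:
    """Only switch patterns after the current one has dwelt long enough."""
    def __init__(self):
        self.pattern, self.changed_at = "first", time.monotonic()

    def update(self, activity_level):
        candidate = pick_pattern(activity_level)
        now = time.monotonic()
        if candidate != self.pattern and now - self.changed_at >= MIN_DWELL_S:
            self.pattern, self.changed_at = candidate, now
        return self.pattern

print(pick_pattern(0.1), pick_pattern(0.4), pick_pattern(0.9))
# -> second third fourth
```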
FIG. 14 provides an example illustration of the device, such as a user equipment (UE), a base station, a UMD, a mobile wireless device, a mobile communication device, a tablet, a handset, or another type of wireless device according to one embodiment. The device can include one or more antennas configured to communicate with different devices. The device can be configured to communicate using at least one wireless communication standard, including 3GPP LTE, WiMAX, High-Speed Packet Access (HSPA), Bluetooth®, and Wi-Fi® technologies. The device can communicate using separate antennas for each wireless communication standard or shared antennas for multiple wireless communication standards. The device can communicate in a wireless local area network (WLAN), a wireless personal area network (WPAN), and/or a WWAN.

FIG. 14 also provides an illustration of a microphone and one or more speakers that can be used for audio input and output from the device. The display screen may be a liquid crystal display (LCD) screen, or another type of display screen such as an organic light emitting diode (OLED) display. The display screen can be configured as a touchscreen. The touchscreen may use capacitive, resistive, or another type of touchscreen technology. An application processor and a graphics processor can be coupled to the internal memory to provide processing and display capabilities. A non-volatile memory port can also be used to provide data input/output options to a user. The non-volatile memory port may also be used to expand the memory capabilities of the wireless device. A keyboard may be integrated with the wireless device or wirelessly connected to the wireless device to provide additional user input. A virtual keyboard may also be provided using the touchscreen.

Various techniques, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, non-transitory computer-readable storage media, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the various techniques. In the case of program code execution on programmable computers, the computing device may include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The volatile and non-volatile memory and/or storage elements may be a RAM, EPROM, flash drive, optical drive, magnetic hard drive, or another medium for storing electronic data. The base station and mobile station may also include a transceiver module, a counter module, a processing module, and/or a clock module or timer module. One or more programs that may implement or utilize the various techniques described herein may use an application programming interface (API), reusable controls, and the like. Such programs may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) may be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and combined with hardware implementations.

It should be understood that many of the functional units described in this specification have been labeled as modules in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.

Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. The modules may be passive or active, including agents operable to perform desired functions.

Reference throughout this specification to “an example” means that a particular feature, structure, or characteristic described in connection with the example is included in at least one embodiment of the present invention. Thus, appearances of the phrase “in an example” in various places throughout this specification are not necessarily all referring to the same embodiment.

As used herein, multiple items, structural elements, compositional elements, and/or materials may be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such a list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary. In addition, various embodiments and examples of the present invention may be referred to herein along with alternatives for the various components thereof. It is understood that such embodiments, examples, and alternatives are not to be construed as de facto equivalents of one another, but are to be considered as separate and autonomous representations of the present invention.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the foregoing description, numerous specific details are provided, such as examples of layouts, distances, network examples, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, layouts, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
While the foregoing examples are illustrative of the principles of the present invention in one or more particular applications, it will be apparent to those of ordinary skill in the art that numerous modifications in form, usage, and details of implementation can be made without the exercise of inventive faculty, and without departing from the principles and concepts of the invention. Accordingly, it is not intended that the invention be limited, except as by the claims set forth below.

FIG. 15 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 1500 within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The exemplary computer system 1500 includes a processing device (processor) 1502, a main memory 1504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1518, which communicate with each other via a bus 1530.

Processing device 1502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 1502 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 1502 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 1502 is configured to execute instructions 1526 for performing the operations and steps discussed herein.

The computer system 1500 may further include a network interface device 1522. The computer system 1500 also may include a video display unit 1510 (e.g., a liquid crystal display (LCD), a cathode ray tube (CRT), or a touch screen), an alphanumeric input device 1512 (e.g., a keyboard), a cursor control device 1514 (e.g., a mouse), and a signal generation device 1516 (e.g., a speaker). The computer system 1500 may further include a video processing unit 1528 and an audio processing unit 1532.
The data storage device 1518 may include a machine-readable storage medium 1524 on which is stored one or more sets of instructions 1526 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 1526 may also reside, completely or at least partially, within the main memory 1504 and/or within the processing device 1502 during execution thereof by the computer system 1500, the main memory 1504 and the processing device 1502 also constituting computer-readable storage media. The instructions 1526 may further be transmitted or received over a network 1520 via the network interface device 1522.

While the machine-readable storage medium 1524 is shown in an exemplary implementation to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.

Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “segmenting”, “analyzing”, “determining”, “enabling”, “identifying”, “modifying”, or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

The disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.

The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment.

It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. | 188,474
11857338 | DETAILED DESCRIPTION
The invention illustrated in FIG. 1 relates to a system 10 for analyzing physico-chemical properties of a skin surface. The system 10 comprises a processing unit 11 associated with a contact sensor 12 and an environment sensor 13. More precisely, the contact sensor 12 has an assembly of sensors 17 necessary for analyzing physico-chemical characteristics of the skin surface on which the contact sensor 12 is placed. Preferably, the sensors 17 implemented enable a hydration level and/or a sebum quantity and/or a desquamation level to be measured.

For example, a hydration sensor, based on MEMS technology, measures a capacitance variation between the air and the skin surface in contact with the latter. Indeed, the dielectric constant of the skin is proportional to the quantity of water that it contains. This measurement method enables the penetration of electrostatic field lines to be controlled by the geometry of the sensor (interdigital combs) and by the excitation frequency. Furthermore, this process is non-invasive, and the rapidity of acquisition, less than 5 seconds, enables reactions of the skin to the contact with the sensor to be avoided.

To measure the density of sebum, two methods can be used. A first method consists in the use of the fluorescent properties of sebum. Under deep blue/near UVA radiation, sebum reacts by emitting at around 560 nm (orange-red). To do this, a monochromatic (or narrow spectral band) light source is used, centered around 395 nm. The acquisition of sebum fluorescence is assured by the use of a CMOS sensor interfaced with a high-pass filter with a cut-off wavelength of around 510 nm. The image obtained is segmented by image processing algorithms to quantify the surface covered by the sebum. A second method consists in the use of a patch which reacts with sebum (by becoming translucent on contact with a lipid). This patch is applied on the skin and captured by a CMOS sensor under white lighting with cross-polarization (between the emission and the CMOS sensor) obtained by using linear polarizing films. The image obtained is segmented by image processing algorithms to quantify the surface covered by the sebum.

In order to measure desquamation, it is possible to use an adhesive patch enabling dead cells to be collected by applying it to the skin. This patch is then captured by a CMOS sensor under white lighting with cross-polarization and then with parallel polarization. The two images thus obtained enable the dead cells to be segmented depending on their thickness and a desquamation index to be determined.

In the example of FIG. 1, the information 20 from these sensors is formatted by a component 18, for example a microcontroller, before being sent to the processing unit 11 by a wireless communication module 19. The contact sensor 12 may take a plurality of forms without changing the invention. For example, the contact sensor 12 may be pear-shaped, with a first part intended to hold the contact sensor 12 and a second part intended to place the sensors 17 in contact with the user's skin. An activation button enables a measurement of the skin parameters of the user to be triggered. As a variant, the contact sensor 12 may be connected to the processing unit 11 by a wired connection without changing the invention.
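The segmentation step that quantifies the surface covered by sebum can be illustrated with a simple global threshold over an intensity image. A real implementation would operate on CMOS frames and likely use more robust algorithms; the array and threshold here are hypothetical.

```python
# A minimal sketch of the image segmentation step: threshold the
# fluorescence (or patch) image and report the sebum-covered fraction.

import numpy as np

def sebum_coverage(image, threshold=0.5):
    """Fraction of pixels whose intensity exceeds the threshold,
    taken as the surface covered by sebum."""
    mask = image > threshold          # simple global-threshold segmentation
    return mask.mean()

frame = np.array([[0.1, 0.8, 0.9],
                  [0.2, 0.7, 0.1],
                  [0.9, 0.1, 0.2]])
print(f"sebum covers {sebum_coverage(frame):.0%} of the field")  # -> 44%
```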
More precisely, the environment sensor13has an assembly of sensors14necessary for analyzing the environment in which the user moves throughout the day. For example, a sensor14may be a simple temperature sensor. Other more complex sensors14may also be used, such as sensors capable of detecting air quality. For example, the company AIRPARIF® offers sensors14enabling fine particles to be detected. Particulate matter, or PM, is a complex mixture of extremely small particles and liquid droplets. Particulate pollution is made up of a number of components, including acids (for example, nitrates and sulfates), organic chemical products, metals, and soil particles or dust. Particle pollution may be responsible for 42,000 premature deaths per year in France and numerous illnesses (asthma, allergies, respiratory and cardio-vascular diseases, lung cancer). The largest particles (greater than 2.5 micrometers) fall quite quickly and remain in the air on the order of 1 day, while the finest may remain in suspension for 1 week and travel thousands of kilometers. Once deposited, the particles may then be resuspended under the action of wind or, in an urban area, under the action of road traffic. The size of the particles is directly linked to their potential harmfulness with regard to health. Environmental organizations are concerned by particles having a diameter less than or equal to 10 micrometers because these are the particles which generally pass through the throat and nose and penetrate into the lungs. Once inhaled, these particles may affect the heart and lungs and cause serious health effects. The particles are classified into four categories:
PM 10: large inhalable particles, such as those found near roads and in industrial dusts; they are less than 10 micrometers in diameter and include fine, very fine and ultrafine particles.
PM 2.5: fine particles, such as those contained in smoke and haze, less than or equal to 2.5 micrometers in diameter. These particles may be emitted directly from sources such as forest fires, or they may form when gases emitted by thermal power stations, industry and motor vehicles react in the air. Diesel engines are their main source. Fine particles also include very fine and ultrafine particles.
PM 1: very fine particles (the most dangerous to health), less than or equal to 1 micrometer in diameter. They are practically only eliminated by precipitation and thus have time to accumulate in the air. They include ultrafine particles.
PM 0.1: ultrafine particles whose diameter is less than 0.1 micrometer, also known as “nanoparticles”. Their lifespan is very short, on the order of a few minutes to a few hours.
PM 2.5 and PM 1 may reach the deepest part (the alveoli) of the lungs, where gaseous exchanges take place between the air and blood. These are the most dangerous particles because the alveoli of the lungs have no efficient means of eliminating them; if the particles are soluble in water, they may pass into the blood stream within a few minutes, and if they are insoluble in water, they remain in the alveoli of the lungs for a long period. The soluble elements may be polycyclic aromatic hydrocarbons (PAH) or benzene residues classified as carcinogenic. The sensors14may consist of an optical particle detector. The operating principle of these sensors14is the following: when a laser beam passes through pure air, the beam is invisible; when the beam is visible, it is because the beam is being diffracted by particles along its path.
One such particle sensor uses a near infrared source, such as an avalanche laser diode or a light-emitting diode with a narrow emission (or beam) angle, associated with an amplifier in order to detect the visibility of the beam. Each particle which passes in front of the laser beam diffracts a part of this beam towards the sensor and, since the flow of air is constant, the width of the measured pulse enables the particles to be classified by size. A sliding average of the quantity of particles per category is carried out over a period of 30 seconds. Other types of sensors may also be used, such as condensation nucleus counters, APS (Aerodynamic Particle Sizer) devices, differential mobility analyzers, DMPS granulometers, and ELPI samplers, as well as other sensors based on mass measurement detection principles. The principle of condensation nucleus counters (CNC) is to artificially enlarge the particles by water or butanol condensation so as to be able to detect them with a conventional optical system. CNCs enable particles between 3 nm and 1.1 μm in diameter to be detected. An Aerodynamic Particle Sizer provides the number concentration of particles in a size range of 0.5 μm to 20 μm. The principle is that of time-of-flight spectrometry. The sample of aerosols is accelerated through an orifice. The rate of particle acceleration is determined by their aerodynamic diameter, the largest having the lowest acceleration due to their greater inertia. After acceleration, the particles cross a system composed of two laser beams, a mirror and a photodetector, enabling them to be counted and their speed, and therefore their aerodynamic diameter, to be measured. The Differential Mobility Analyzer (DMA) electrically charges the particles and then passes them through an electrostatic field, the assembly enabling the particles to exit at different times depending on their size, since electric mobility is inversely related to particle dimension. The particles are then counted using a Condensation Nucleus Counter. The DMPS (Differential Mobility Particle Sizer) or SMPS (Scanning Mobility Particle Sizer) thus combines a DMA and a CNC. This type of apparatus enables the number of particles between 10 nm and 1 μm to be determined. The Electrical Low Pressure Impactor (ELPI) operates using the same principle as cascade impactors, but the particles are charged on entry into the impactor and an electrometer records the induced charges at each of the stages during the impact of the particles. Signal analysis enables the granulometry to be characterized within a range of 0.07 to 10 μm. An acquisition program makes it possible to visualize the particle distributions by volume and by mass. Thus, the number of sensors14capable of being used to implement the invention is particularly large and makes it possible to obtain very diverse information16about the stresses to which the skin is subjected throughout the day. The sensor14may also carry out a position measurement. This position measurement is transmitted to the processing unit11, which is connected to a remote server, making it possible to associate meteorological information or, more generally, information relating to the environment, with the position of the user. The measurements16from the sensors14are preferably carried out regularly throughout the day while the environment sensor13is worn by the user. For example, the environment sensor13may have an internal clock which triggers measurements16every 10 minutes.
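To make the pulse-width classification and the 30-second sliding average concrete, here is a minimal sketch in Python. The linear pulse-width-to-diameter calibration and the class structure are assumptions for illustration; only the size categories and the 30 s averaging window come from the description.

```python
from collections import defaultdict, deque

WINDOW_S = 30.0  # sliding-average period stated in the description

def classify(diameter_um: float) -> str:
    """Map a particle diameter to the PM categories described above."""
    if diameter_um <= 0.1:
        return "PM 0.1"
    if diameter_um <= 1.0:
        return "PM 1"
    if diameter_um <= 2.5:
        return "PM 2.5"
    return "PM 10"  # anything larger is lumped here for simplicity

class ParticleCounter:
    """Counts diffraction pulses and keeps a 30 s sliding average per category."""

    def __init__(self):
        self.events = deque()  # (timestamp_s, category) pairs

    @staticmethod
    def pulse_width_to_diameter(width_us: float) -> float:
        # Hypothetical linear calibration: with a constant air flow, a wider
        # pulse means a larger particle. The real mapping is not given.
        return 0.05 * width_us

    def on_pulse(self, t_s: float, width_us: float) -> None:
        self.events.append((t_s, classify(self.pulse_width_to_diameter(width_us))))

    def averages(self, now_s: float) -> dict:
        while self.events and now_s - self.events[0][0] > WINDOW_S:
            self.events.popleft()  # drop pulses older than the window
        counts = defaultdict(int)
        for _, category in self.events:
            counts[category] += 1
        return {cat: n / WINDOW_S for cat, n in counts.items()}  # particles/s
```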
The environment sensor13may take a plurality of forms without changing the invention. Preferably, the environment sensor13is a small object, that is to say one whose external dimensions fit within a cube with 5 cm sides. For example, the environment sensor13may be integrated into a bracelet, keyring, handbag trinket, charm or brooch. Furthermore, the contact and environment sensors12-13may be integrated into one and the same casing in order to limit the number of objects of the system10. The measurements16,20from the contact and environment sensors12-13are thus transmitted to the processing unit11, which integrates several modules. Firstly, these measurements16,20are received by a wireless receiver22which transmits these measurements to analysis means23enabling physico-chemical properties24of the skin to be determined. An example embodiment of these analysis means23is illustrated inFIG.2. Two measurements20from the contact sensor12and two measurements16from the environment sensor13are analyzed by the analysis means23. Each measurement16,20is compared with a Gaussian function centered on the average value expected for that measurement, and the distance of the measurement16,20from the Gaussian function is normalized between 0 and 1. The distances are then correlated to obtain a vector containing the physico-chemical properties24of the user's skin. As a variant, the Gaussian functions may be replaced by correspondence tables associating the properties of the skin with environmental parameters (as absolute values and as variations). For example, the two measurements20from the contact sensor12may represent the hydration level Th and the desquamation rate Td of the user's skin, whereas the two measurements16from the environment sensor may represent the temperature T and the ultraviolet rays UV to which the skin is subjected throughout the day. The distance between the hydration level Th measured and the normal level is 0.5 at the output of the first Gaussian function, and the distance between the desquamation level Td of the skin and the normal level is 0.3 at the output of the second Gaussian function. The distance between the temperature T measured over time and the temperature resistance affecting the hydration of the skin is 0.9 at the output of the third Gaussian, and the distance between the ultraviolet rays UV to which the skin is subjected and the radiation resistance affecting the desquamation level is 0.5 at the output of the fourth Gaussian. It follows that the hydration level Th of the skin at the output of the analysis means23will be estimated as 0.7, being the average of the distance between the hydration level Th measured and the normal level, 0.5, and the distance between the temperature T measured over time and the temperature resistance affecting the hydration of the skin, 0.9. Likewise, the desquamation level Td of the skin at the output of the analysis means23will be estimated as 0.4. The vector containing the physico-chemical properties24of the user's skin at the output of the analysis means23will comprise the values [0.7; 0.4] according to this example. Furthermore, the maximum of the distances from the Gaussian functions makes it possible to detect the physico-chemical property30of the skin having been subjected to the greatest stresses. In the previous example, the maximum of the distances is reached for the temperature T to which the skin is subjected, which has a distance of 0.9.
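The scoring just described can be reproduced in a few lines. The sketch below takes the four normalized Gaussian distances from the worked example as inputs and recomputes the output vector [0.7; 0.4] and the dominant stress factor. The normalization formula inside gaussian_distance is one plausible choice and is an assumption, since the description does not give it explicitly.

```python
import math

def gaussian_distance(value: float, mean: float, sigma: float) -> float:
    """Normalized distance (0 to 1) of a measurement from its expected value.

    Assumed normalization: 0 when the measurement equals the expected mean,
    tending to 1 as it moves far away.
    """
    return 1.0 - math.exp(-((value - mean) ** 2) / (2.0 * sigma ** 2))

# Distances taken directly from the worked example above.
d_hydration, d_desquamation = 0.5, 0.3  # contact-sensor measurements 20
d_temperature, d_uv = 0.9, 0.5          # environment-sensor measurements 16

skin_vector = [
    (d_hydration + d_temperature) / 2,  # 0.7 -> estimated hydration level Th
    (d_desquamation + d_uv) / 2,        # 0.4 -> estimated desquamation level Td
]
distances = {"Th": d_hydration, "Td": d_desquamation,
             "T": d_temperature, "UV": d_uv}
dominant = max(distances, key=distances.get)  # "T": temperature, distance 0.9
print(skin_vector, dominant)                  # [0.7, 0.4] T
```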
Thus, the most important stress factor for the skin will be estimated as being the temperature of the environment. The vector containing the physico-chemical properties24of the user's skin is then transmitted to a classification module25, which classifies the user's skin by comparing reference vectors, stored in a database33, with the vector containing the physico-chemical properties24of the user's skin. The maximum correlation between the reference vectors and the vector containing the physico-chemical properties24of the user's skin makes it possible to associate the user's skin with a category29. A treatment27associated with this skin category29is also stored in a database32of the processing unit11. The processing unit11has a user interface28displaying the physico-chemical property30of the skin having been subjected to the greatest stresses, as well as the treatment27proposed depending on the skin category29detected. For example, if the hydration level has greatly decreased relative to a previous measurement, for example two days earlier, the processing unit11will then examine the evolution of the measurements16from the environment sensor. If the temperature T has greatly decreased relative to the two previous days, it is possible to conclude that the user has changed environment, for example due to a ski trip. Thus, the drop in hydration is normal and not linked to a physiological problem. The processing unit11will therefore propose products having an immediate effect rather than "long-term" treatment products. According to another example of the influence of environmental conditions on the physico-chemical parameters of the skin, significant variations may be noticed in hydration levels depending on the external temperature and the relative humidity level. If the measurement of the hydration level is carried out in a one-off manner without taking into account the environmental conditions, the value obtained may appear abnormal whereas in reality it only reflects the environment. A temperate environment, that is to say with a temperature close to 25° C. and a humidity level between 60 and 65%, enables the skin to function at its physiological optimum. Measurements of the physico-chemical parameters of the skin under these conditions are thus representative of its state of health. When the relative humidity level is less than 30%, the hydration level decreases as water loss increases to compensate for the dryness of the air. This phenomenon is amplified by extreme temperatures, that is to say temperatures below 5° C. and above 29° C. Knowing these environmental conditions, the hydration level measured may be interpreted directly to propose a suitable temporary treatment; but this hydration level may also be corrected for the ambient temperature and the relative humidity level by using multilinear regression. The corrected hydration level is thus comparable to that measured in a temperate environment, and a long-term treatment may be suggested. The determination of the multilinear regression parameters is carried out empirically by measurements on a panel of similar individuals. Furthermore, the user interface28may enable the user to enter information31, such as the type of creams applied to the user's skin during the day or medical contraindications to products. This type of information may have an impact on the classification of the user's skin and may be used by the analysis means23.
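The temperature/humidity correction lends itself to a short sketch. The model form (hydration approximated by b0 + b1·T + b2·RH), the reference point of 25 °C and 62.5% relative humidity, and the panel data below are illustrative assumptions; the description only states that the parameters are fitted empirically on a panel of similar individuals.

```python
import numpy as np

def fit_correction(panel_hydration, panel_temp_c, panel_humidity_pct):
    """Fit multilinear regression coefficients from panel measurements.

    Assumed model: hydration ~ b0 + b1 * temperature + b2 * humidity.
    """
    X = np.column_stack([np.ones(len(panel_temp_c)),
                         panel_temp_c, panel_humidity_pct])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(panel_hydration), rcond=None)
    return coeffs

def corrected_hydration(measured, temp_c, humidity_pct, coeffs,
                        ref_temp_c=25.0, ref_humidity_pct=62.5):
    """Project a field measurement back to the temperate reference
    environment (~25 deg C, 60-65% relative humidity) described above."""
    b0, b1, b2 = coeffs
    predicted_here = b0 + b1 * temp_c + b2 * humidity_pct
    predicted_ref = b0 + b1 * ref_temp_c + b2 * ref_humidity_pct
    return measured + (predicted_ref - predicted_here)

# Hypothetical panel data (temperature in deg C, relative humidity in %).
b = fit_correction([30.0, 45.0, 60.0, 70.0],
                   [5.0, 15.0, 25.0, 30.0],
                   [20.0, 40.0, 62.5, 80.0])
# A reading of 35 taken at 5 deg C / 20% RH, expressed at the reference point.
print(corrected_hydration(35.0, 5.0, 20.0, b))
```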
Preferably, the processing unit11will be loaded onto a smartphone in order to use the processor and memory of the smartphone to carry out the processing of the measurements16,20. Furthermore, the processing may be partially or fully transferred to a remote server connected to the smartphone by a wireless connection. In order to know which products27to use, the user initially launches the smartphone app and will be able to find information about the measurements16,20from the previous days. The app may also issue information about the expected stress factors for the day, for example weather forecasts. To carry out a one-off measurement, the user will be asked to use the contact sensor12and to place it on the surface to be analyzed, for example the cheekbones. The start and end of the measurement are signaled by the smartphone, by a vibration if it is in silent mode or otherwise by a sound. The app will then suggest products27according to this single measurement. If the user plans to go out, the user will be asked to take the environment sensor13with them. At the end of the day, the user will carry out another one-off measurement using the contact sensor12, and the results are then correlated with the measurements16captured throughout the day. The app may contain other information linked to the measurements16,20carried out, enabling the user to be guided in their lifestyle in order to improve the health of their skin. For example, if the user's skin is not in good health and the weather forecast predicts temperatures which are too cold for the skin, the user could be advised to limit their exposure for a few days, the time required to rebuild the epidermis. Furthermore, the app advantageously has a configuration phase prior to use. This configuration phase enables user information to be acquired in order to configure the app according to the user's requirements and preferences. For example, in this configuration phase the user is invited to answer questions intended to determine the brand of products preferred by the user, the average frequency of use of the products, how often the user would like to be offered advice by the app, etc. Also, the user may set the frequency of messages sent/displayed by the app, and the user may also define thresholds above which they would always want to be alerted, for example when the fine particle pollution exceeds a predefined threshold. The invention thus suggests particularly effective products for the user based on two distinct measurements: a one-off measurement and a measurement performed over time in order to evaluate the stress to which the user's skin is subjected throughout the day. | 20,131 |
11857339 | BEST MODE Mode for Invention Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement the present invention. The present invention may be embodied in many different forms and is not limited to the embodiments described herein. In order to clearly explain the present invention, parts irrelevant to the description are omitted in the drawings, and the same reference numerals are assigned to the same or similar components throughout the specification. In the specification, terms such as “comprises” or “have” are intended to designate the existence of features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and it should be understood that the existence or addition of other features, numbers, steps, operations, components, parts or combinations thereof is not precluded in advance. Also, when a part of a layer, film, region, plate, etc. is said to be “on” another part, this includes not only the case where it is “directly on” the other part, but also the case where there is another part therebetween. Conversely, when a part of a layer, film, region, plate, etc. is said to be “under” another part, this includes not only the case where it is “directly under” the other part, but also the case where there is another part therebetween. Hereinafter, a device for hazardous air quality warning and air quality improvement according to an embodiment of the present invention will be described in more detail with reference to the drawings. Referring toFIGS.1to8, a headgear-type device for hazardous air quality warning and air quality improvement according to a first embodiment of the present invention may include a headgear-type wired/wireless communication and health-bio biometric sensor3, a terminal20, a server (S), an air quality sensor12, and a portable air purifier10. Referring toFIGS.1to7, the headgear-type wired/wireless communication and health-bio biometric sensor3may detect biometric information so as to measure the state of the worker's body while in close contact with the worker's head. In this case, the headgear-type wired/wireless communication and health-bio biometric sensor3may be installed in close contact with the worker's head, within 10 cm of the outside of the ear, to measure the oxygen saturation of the corresponding part. In particular, since blood vessels pass in front of the ear, the bio-signal may be obtained by making close contact with this target. In this case, the headgear-type wired/wireless communication and health-bio biometric sensor3may include a pulse oximeter for detecting the oxygen saturation of blood, which is one type of biometric information. In this case, the pulse oximeter may be used for a non-invasive measurement of oxygen saturation in blood by applying light of two different wavelengths from a semiconductor device to a blood vessel, and may be very useful because it is minimally invasive and responds quickly for the patient. In this case, the headgear-type biometric sensor may be a pulse signal sensor for detecting the pulse of blood, which is one type of biometric information, while making close contact with the skin of the head. In this case, the headgear-type biometric sensor may be a temperature sensor that detects body temperature, which is one type of biometric information, while making close contact with the skin of the head.
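For context, a two-wavelength pulse oximeter typically estimates oxygen saturation from the so-called ratio of ratios of the pulsatile (AC) to baseline (DC) signal at the red and infrared wavelengths. The sketch below uses the commonly cited linear approximation SpO2 = 110 - 25·R, which is a generic textbook calibration and not something stated in this description; real devices use device-specific calibration curves.

```python
def spo2_ratio_of_ratios(red_ac: float, red_dc: float,
                         ir_ac: float, ir_dc: float) -> float:
    """Generic two-wavelength SpO2 estimate (not this patent's own method).

    red_ac/red_dc: pulsatile and baseline photodetector components at the
    red wavelength; ir_ac/ir_dc: the same at the infrared wavelength.
    """
    r = (red_ac / red_dc) / (ir_ac / ir_dc)        # ratio of ratios
    return max(0.0, min(100.0, 110.0 - 25.0 * r))  # clamp to a valid percent

print(spo2_ratio_of_ratios(0.02, 1.0, 0.04, 1.0))  # R = 0.5 -> 97.5
```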
In this case, the headgear-type biometric sensor may be a proximity sensor that detects a close-fitting body and other signals, which are types of biometric information, while making close contact with the skin of the head. In this case, referring toFIGS.5to7, the headgear-type wired/wireless communication and health-bio biometric sensor3may include cases310and320, and a health-bio oxygen saturation sensor311. In this case, the cases310and320may have both end portions310formed at symmetrical positions in the vicinity of both ears of the worker's body to make close contact with the ears by elastic force, a mounting portion310aextending from both end portions310to be caught on the ears, and an extension portion320extending from each mounting portion310a, the extension portions meeting each other at the back of the head while providing elastic force to both end portions. In this case, the health-bio oxygen saturation sensor311may be installed at both end portions310of the cases310and320to detect the oxygen saturation of the worker. In this case, the basic electronic components are mounted inside the cases310and320of the headgear-type wired/wireless communication and health-bio biometric sensor3. That is, a battery and a PCB are basically mounted, and related electronic components may also be mounted to transmit and receive signals wirelessly with respect to the terminal. In this case, the headgear-type wired/wireless communication and health-bio biometric sensor3may transmit and receive data through wired/wireless communication using its basic electronic components, and speakers and microphones used for communication may be mounted inside the cases310and320. In this case, the headgear-type wired/wireless communication and health-bio biometric sensor3may have a power switch, an LED, etc., which are exposed on the outer surface of the cases310and320to indicate the operating state and any abnormality in the worker's biometric signal, as needed. In this case, in the headgear-type wired/wireless communication and health-bio biometric sensor3, both end portions310may come into close contact with the measurement position by the elastic force of the cases310and320, the mounting portion310aof the cases310and320may be caught on the ear, and the extension portion320may be caught on the back of the worker's head, so that the sensing operation may be performed in a state in which the biometric sensor is firmly fixed to the body while the worker is working. In this case, the headgear-type wired/wireless communication and health-bio biometric sensor may be equipped with other sensors including heart rate, pulse, body temperature, vibration, and position sensors, in addition to the oxygen saturation sensor, to detect biometric information. Referring toFIGS.1to3, the terminal20may be connected to the headgear-type wired/wireless communication and health-bio biometric sensor3in a wired or wireless manner to receive biometric information. In this case, a dedicated terminal may be used as the terminal20, or a corresponding application may be installed in smartphones, which are widely used these days. The server S may receive the biometric information from the terminal20, calculate whether it is within the normal state range according to the program based on the biometric information, and send a warning state to the terminal when the biometric information is out of the normal state range.
Referring toFIGS.1to3, the air quality sensor12may detect air quality to obtain air quality information around the worker, and transmit the air quality information to the terminal20. As shown inFIGS.1to3, the portable air purifier10may be operated to purify the air around the worker when a warning state is received from the server S at the terminal20. In this case, the portable air purifier10may be attached to a part of the worker's body so that the portable air purifier10may be carried. In this case, the portable air purifier10may include a hose15for connecting a mask16covering the mouth to a body14of the air purifier, so that purified air can be directly supplied to the mouth of the worker. In this case, the portable air purifier10may be provided with a handle so that the worker may hold the portable air purifier10using a hand, and a mask that covers the mouth may be fixed to an exhaust port for discharging the purified air. In this case, each of the headgear-type wired/wireless communication and health-bio biometric sensor3and the air quality sensor12may be driven by an independent constant power source or battery. Referring toFIG.4, the operation sequence of the device for hazardous air quality warning and air quality improvement according to the present invention is briefly illustrated. Referring toFIG.4, when the worker starts work at the work position, monitoring is started. That is, the worker operates the application of the terminal20and also operates the sensors3and12and the portable air purifier10at the same time. When the worker's condition monitoring starts, the biometric information and air quality information are received in the terminal20in real time from the headgear-type wired/wireless communication and health-bio biometric sensor3and the air quality sensor12, and at the same time, the information from the terminal20is transmitted to the server S. The server S calculates whether the worker's biometric information, surrounding air quality, and environmental data are within the safe range as pre-programmed. In this case, if the calculated value is within the safe range, the server S repeats the measurement of the biometric information and air quality information after a predetermined time elapses, and if the values remain within range, it continues to repeat the measurement to ensure the safety of the worker. In this case, if the value calculated by the server S is out of the safe range, the server S warns the worker through the terminal20in various ways. In addition to the warnings, it is also possible to mobilize a rescue team or to automatically make a call to 911, a preset terminal, or a device. In addition, the air quality around the worker may be purified by controlling the portable air purifier10to operate, or, referring toFIG.3, the worker may wear the mask16in a state in which the hose15is connected to the exhaust port of the portable air purifier10and the mask16is connected to the hose15, so that the worker can directly inhale the purified air. Referring toFIG.1, there is shown a state in which the worker starts work in the working field. By notifying the start of the work through the terminal20, the surrounding situation monitoring is started. That is, the biometric information of the worker is received at regular intervals in real time by the wired/wireless communication and health-bio biometric sensor3, and at the same time, the surrounding air quality information is also collected.
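The repeat-measure/warn/purify sequence of FIG.4 can be summarized as a small server-side loop. The safe ranges, the 10-second interval, and the callable interfaces standing in for the sensor3, the air quality sensor12, the terminal20, and the purifier10 are all assumptions for illustration.

```python
import time

# Assumed safe ranges; in the described system these are pre-programmed
# on the server S.
SAFE_RANGES = {"spo2_pct": (95.0, 100.0), "pm25_ug_m3": (0.0, 35.0)}

def within_safe_range(readings: dict) -> bool:
    return all(lo <= readings[key] <= hi
               for key, (lo, hi) in SAFE_RANGES.items())

def monitor(read_biometrics, read_air_quality, warn, start_purifier,
            interval_s: float = 10.0) -> None:
    """Measure, check the safe range, warn and purify if needed, repeat."""
    while True:
        readings = {**read_biometrics(), **read_air_quality()}
        if not within_safe_range(readings):
            warn(readings)       # alert the worker via the terminal
            start_purifier()     # and purify the surrounding air
        time.sleep(interval_s)   # wait a predetermined time, then re-measure
```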
The air quality sensor12may be mounted on the portable air purifier10or worn by a worker in an independent form. Referring toFIG.2, the terminal20communicates with the wired/wireless communication and health-bio biometric sensor3, the air quality sensor12, and the air purifier10through various communication methods such as Bluetooth, infrared, optical fiber, and wireless communication. The terminal20can also receive and transmit control signals while communicating with the devices. The terminal20transmits and receives the mutual communication result and information to and from the server S, and receives and transmits an accurate control signal to and from the server S in order to transfer the information to the portable air purifier10. Referring toFIG.3, when the information received from the headgear-type wired/wireless communication and health-bio biometric sensor3or the air quality sensor12is determined to be a warning and emergency state in the server S, it is transmitted to the worker through the terminal20so that the worker can evacuate from the contaminated workplace, or, if the situation is very serious, the worker wears the mask16connected to the exhaust port of the portable air purifier10through the hose and evacuates. The portable air purifier10may be provided with a button10afor automatic and manual operation and a display10bshowing various states. Referring toFIGS.5and6, there is shown the external appearance of the headgear-type wired/wireless communication and health-bio biometric sensor3. The headgear-type wired/wireless communication and health-bio biometric sensor3may be configured such that the cases310and320may include both end portions310, the mounting portion310a, and the extension portion320. Both end portions310may come into close contact with the outside of the blood vessels in front and rear of the ears, which are parts of the worker's face, by the elastic force of the cases310and320, the mounting portions310aare configured to be caught on the ears, which are parts of the face, respectively, and the extension portions320are connected to each other so as to be caught on the back of the worker's head. Therefore, when the worker holds both end portions310of the headgear-type wired/wireless communication and health-bio biometric sensor3and spreads them into close contact with the front of the ears, the headgear-type wired/wireless communication and health-bio biometric sensor3is in close contact with the face of the worker to detect the oxygen saturation level of the worker in real time, and the oxygen saturation level is transmitted to the terminal20. Referring toFIG.7, both end portions310of the cases310and320, that is, the end portions provided on their inner surfaces with the oxygen saturation sensor311, make close contact with a portion located in front of the ear. Since this portion serves as a path for a person's blood vessels, it is suitable for measuring oxygen saturation. In other words, since blood vessels pass within about 10 cm of the ear, the measurement may be possible by positioning both end portions at any location around the ear. In particular, since large blood vessels pass in front of the ears, it would be most desirable to place both end portions in this area. In a state in which both end portions310are caught on the mounting portions310a, they are kept in close contact with the worker, so that both hands are free and the worker's operation is not affected at all.
Meanwhile, referring toFIG.8, there is shown a headgear-type wired/wireless communication and health-bio biometric sensor3according to another embodiment of the present invention. Here, connecting lines312and323are further provided for connecting the cases310and320to the worker's working hat so as to distribute the load of the headgear-type wired/wireless communication and health-bio biometric sensor3. That is, when the elastic force of the cases310and320and the mounted state of the mounting portion310aare maintained for a long time, the worker may feel pain in the ear, so the connecting lines312and323are configured to be detachably attached to the working hat2using Velcro tape. In this case, the load applied to the entire mounting portion310acan be distributed, so that the worker can be prevented from feeling pain in the ear. In this case, the connecting lines312and323can be connected to and installed in the middle of both mounting portions310aand the extension portion320. Meanwhile, referring toFIGS.9and10, a headgear-type biometric sensor3according to another embodiment of the present invention is shown. Here, the extension portion320′ of the cases310and320′ may be provided with a linear shaft320a′ formed along the extension portion320′ so as to finely adjust the adhesion part of the wired/wireless communication and health-bio oxygen saturation sensor311, and a cylindrical adjustment portion320b′ installed eccentrically and rotatably on the linear shaft320a′. Referring toFIG.10, since the adjustment portion320b′ is rotated eccentrically, the distance of the biometric sensor in close contact with the back of the head may be finely adjusted, and ultimately both end portions310may be finely adjusted in the front-rear direction. That is, if the sensing is not performed well, the adjustment portion320b′ may be rotated to finely adjust the sensing position so that the sensing may be achieved more accurately. Although one embodiment of the present invention has been described above, the idea of the present invention is not limited to the embodiments presented in this specification, and those skilled in the art who understand the idea of the present invention will be able to easily propose other embodiments within the scope of the same idea by adding, changing, or deleting components, and these will also fall within the scope of the present invention. INDUSTRIAL APPLICABILITY The present invention can be used as a safety device when working in a closed space such as an underground space. | 16,437 |
11857340 | DETAILED DESCRIPTION The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, it will be clear and apparent to those skilled in the art that the subject technology is not limited to the specific details set forth herein and may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology. The following disclosure relates to an electronic wearable device, such as a watch or wrist-worn device. The wearable device can have electrodes capable of taking physiological measurements of a user when the device is worn on or otherwise coupled to a body of user. The placement or operation of the electrodes on the wearable device can, for example, allow for increased functionality in a relatively compact wearable device which may have limited real estate for functional components. According to some embodiments, a wrist-facing surface of a watch enclosure or watch band can include a pair of electrodes implemented as surface contacts that contact the user's body when worn on the user's wrist, allowing the electrodes to take electrical measurements from the user's skin or otherwise take electrical measurements from the user's body. A pairing of the electrodes can be configured to obtain an electrical resistance measurement of the user's skin in order to determine a galvanic skin resistance (GSR), also sometimes referred to herein as a galvanic skin response. The pairing of electrodes can be implemented as wrist-facing electrodes that are formed using a conductive coating such as a physical vapor deposition (PVD) coating. The conductive coating can be formed on a non-conductive surface of the watch, such as a rear surface of back cover that is made from a non-conductive material. The electrodes formed from the conductive coating can provide for a relatively thin conductive component that can be patterned in a desired area to permit the coated electrodes to obtain suitable measurements from the user's body without unduly interfering with other operational components. For example, operational components such as optical and/or electromagnetic devices can be included in the wearable device and configured to interact with other external objects through or around the coated electrodes. In some embodiments, the conductive coating used to form the electrodes can also provide a cosmetic feature, which can allow the electrodes to provide a desired coloring or other external appearance while also providing for physiological measurement functionality. According to some embodiments, one or more electrodes on the wearable device can be dual-purposed or multi-purposed for obtaining multiple types of physiological measurements. For example, to obtain a first type of measurement, an electrode on the wrist-facing surface can be operated in concert with another electrode on an outward-facing surface that faces away from the user's wrist. 
The electrode on the wrist-facing surface can provide a contact to the arm wearing the device, while the other electrode on the outward-facing surface can provide a contact to the other free arm of the user by permitting the user to contact the outward-facing electrode with their other free arm. The electrodes coupled to the two arms can then cooperate to obtain an electrocardiogram (ECG or EKG) measurement based on an electrical potential difference between the two electrodes. When not in use for the ECG measurement, the wrist-facing electrode may then be operated for taking other physiological measurements, such as for a passive or continuous body monitoring scheme. By operating the same electrode for multiple measurements, e.g., both an ECG and a GSR measurement, a need for one or more additional dedicated electrodes may be avoided and space may be saved on the wearable device. These and other embodiments are discussed below with reference toFIGS.1-9. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these Figures is for explanatory purposes only and should not be construed as limiting. FIG.1is a perspective view of an example of an electronic device100, such as a watch. While embodiments discussed herein are described with reference to a watch, it will be appreciated that the teachings relating to a watch can be applied to other electronic devices, including other wearable and/or portable computing devices. Examples include cell phones, smart phones, tablet computers, laptop computers, timekeeping devices, computerized glasses, headphones, head mounted displays, wearable navigation devices, sports devices, accessory devices, health-monitoring devices, medical devices, electronic bracelets and other jewelry. Hereinafter, device100will be referred to as watch100. The watch100shown inFIG.1is implemented as a wrist-worn device having an enclosure102and a band104. The band104is configured to wrap around a wrist of a user to secure the device in place on the user's body. The band104is coupled to the enclosure102to permit the enclosure102to be worn on the user's body together with the band104. With reference toFIG.1, the enclosure102provides a structure that serves to enclose and support one or more internal components of the device, such as, for example, one or more integrated circuit chips, circuit boards, display devices, batteries, memory devices, or other functional components. It is contemplated that the enclosure can in general be implemented as any suitable structure that serves to enclose functional and/or operative components of the device, such as a watch, and that can be directly or indirectly coupled to the band104to permit the enclosure to be worn on the user's body. Although shown inFIG.1with a generally rectangular structure providing a rectangular front face, it is contemplated that the enclosure102can have any appropriate size or shape, such as round, hexagonal, square, or other shapes. In some embodiments, for example as shown inFIG.1, the enclosure102can provide a main casing or casing assembly that provides an external structural framework of the watch100with which the user can directly interact. As shown inFIG.1, the enclosure102can include a perimeter sidewall108adjoining a front cover116and a back cover118.
The front cover116can be disposed on a front side of the sidewall108and a front side of the enclosure102, while the back cover118can be disposed on a back side of the sidewall108and the enclosure102that is opposite to the front side. Internal components can be disposed in an interior space between the front cover116and the back cover118, while the sidewall108can extend peripherally or circumferentially around the interior space and internal components contained therein. It is contemplated that the front cover116, back cover118, and sidewall108can each be made from discrete components or pieces that are attached or otherwise assembled together. Alternatively, it is contemplated that any two or more of the enclosure components can be integrally formed from a substantially monolithic structure to provide for the desired enclosure framework. It is also contemplated that any one of the front cover116, the back cover118, and the sidewall108can be made from multiple discrete pieces, layers, or other components that are attached or otherwise assembled together. In some embodiments, the enclosure102or any one or more parts of the enclosure102can be made from rigid materials. Examples of rigid materials that can be utilized for the enclosure102include glass, ceramics, crystalline materials such as sapphire, aluminum, steel, and/or plastics. In some embodiments, for example as shown inFIG.1, one or more external functional components such as input/output (I/O) devices can be included as part of the enclosure102or otherwise supported by or coupled to the enclosure102to allow for manipulation by or other interaction with a user. As used herein, “I/O device” refers to any user interface device configured to receive input from a user and/or provide output to a user. “Input device” as used herein refers to any user interface device configured to receive input from a user and which may or may not be configured to provide output. “Output device” as used herein refers to any user interface device configured to provide output to a user and which may or may not be configured to receive input. For example, the watch100can include one or more buttons114disposed externally on or as part of the enclosure102. The buttons114can, for example, be implemented as mechanical push buttons or touch-sensitive buttons. Additionally or alternatively, the watch100can include a rotatable dial112disposed externally on or included as part of the enclosure102. The rotatable dial112can be disposed rotatably with respect to the sidewall108, and configured to provide for scrolling, sliding, or user interface (UI) navigation inputs. The button114and dial112are examples of I/O devices configured to interact with a user, and more particularly, are examples of input devices configured to receive input from a user for providing one or more functional inputs to the watch100. It is contemplated that the I/O devices disposed on or supported by the enclosure can be positioned on the sidewall108, as shown inFIG.1, or positioned in any other suitable location on the enclosure. With respect to the example shown inFIG.1and the frame of reference of a watch or wrist-worn device, the back cover118and back side correspond to a side of the enclosure102and the watch100that faces a wrist of the user when the watch100is worn on the wrist. More generally, the back cover118and the back side can face a body part of the user when the wearable device is worn on the body part.
The front cover116and front side correspond to a side of the enclosure102and the watch that faces away from the wrist of the user. More generally, the front cover116and the front side can face away from a body part of a user when the wearable device is worn on the body part. A display can be provided to present images or output various graphical information on or through the front cover116of the enclosure. In some embodiments, the front cover116can provide an input surface for a touch-sensitive device included in or overlapping with the display, such as a touch screen interface, force sensing device, and/or a fingerprint sensor. The input surface can, for example, permit a user to interact with graphical user interface (GUI) elements presented on the display. It is also contemplated that other wrist-worn devices can omit a display. Additionally or alternatively, it is contemplated that other I/O devices can be included, such as speakers, microphones, gesture interfaces, motion sensors, and the like. The band104shown inFIG.1is implemented as a wristband that includes a first band strap120and a second band strap122. The first band strap120and the second band strap122can connect to each other through a connector124that may, for example, be implemented as a clasp, a buckle, a magnetic attachment, or any other suitable mechanism for adjoining the first band strap120to the second band strap122. Each of the first band strap120and the second band strap122can be made from any suitable flexible and/or rigid components that can generally conform to the outer surface of a user's wrist. Examples include, without limitation, leather, fabrics, rubber, nylon, plastics, and metallic bracelets. It is contemplated that the band104can be implemented in numerous different configurations and can generally include any suitable flexible or rigid components that can be removably wrapped around a wrist of a user. For example, in some embodiments the band104can omit the connector124, such as an implementation having a single continuous watch band loop that is expandable to permit the expanded band104to be slid around a user's hand. Additionally or alternatively, the band104can include a sleeve or envelope that overlaps with the enclosure102in whole or in part to couple the band104to the enclosure and hold the enclosure102in place. Various other configurations are possible. Likewise, while the band104is implemented as a wristband, it will be appreciated that the teachings of the watch band can be applied to other bands that are configured to wrap around other body parts of a user. The attachment interface106shown inFIG.1includes multiple attachment points, and in particular, includes an attachment point on one edge of the enclosure102and another attachment point on an opposing edge of the enclosure102to connect the first band strap120to the enclosure102and the second band strap122to the enclosure, respectively. The attachment interface106can include, for example, a slot, a lug, a threaded fastener, or any other suitable component to connect the band104to the enclosure102. Although multiple attachment points are shown inFIG.1, it is contemplated that other implementations can utilize more attachment points or a single attachment point for connecting the band104to the enclosure102. Although the attachment interface106is shown disposed on the sidewall108inFIG.1, it is contemplated that the attachment interface106can be disposed on or coupled to any other feasible location on the enclosure102.
With continued reference toFIG.1, the watch100can include one or more electrodes disposed on one or more exterior surfaces of the watch100to provide for physiological sensing functionality. The sensing electrodes can be disposed on one or more exterior surfaces of the watch100, such as exterior surfaces of the watch enclosure102and/or band104, to provide a surface contact that can take electrical measurements from the user's skin or body. The electrodes can be operated to perform an electrical measurement, for example, to measure electrocardiographic (ECG) characteristics, galvanic skin resistance, and other electrical properties of the user's body and/or the environment. It will be appreciated that any suitable number of electrodes can be provided. Each electrode can be insulated from other electrodes and/or other components of the watch. One or more electrodes can operate as a first terminal, and one or more electrodes can operate as an additional terminal. The electrodes can be of any suitable size, shape, and arrangement. According to various embodiments, the sensing electrodes can include one or more wrist-facing electrodes130, or more generally one or more body-facing electrodes disposed on a wrist-facing surface of the watch100or body-facing surface of the wearable device. As used herein, a “wrist-facing” surface or “wrist-facing” electrode refers to an exterior surface or electrode of a wrist-worn device that is configured to face towards or make contact with a wrist of a user when the wrist-worn device is worn on that wrist. Likewise and more generally, as used herein a “body-facing” surface or “body-facing” electrode refers to an exterior surface or electrode of a wearable device that is configured to face towards or make contact with a body part of a user when the wearable device is worn on that body part. The precise location and orientation of a wrist-facing or body-facing surface with respect to the components of the wearable device can vary depending on the implementation, design, and construction of a particular device. With respect to the watch example shown inFIG.1, the exterior surface of the back cover118that is disposed on the back side of the back cover118and faces away from an interior of the enclosure102is part of a wrist-facing surface of the watch100. Likewise, the exterior surface of the band104that corresponds to the inner diameter of the watch band loop is part of the wrist-facing surface of the watch100.FIG.1shows examples of wrist-facing electrodes130that can be disposed on the wrist-facing surface of the watch100. For example, in some embodiments the watch100can include one or more electrodes disposed on the wrist-facing surface of the back cover118and/or one or more wrist-facing electrodes130disposed on the wrist-facing surface of the band104. According to some embodiments, and as further described herein, one or more pairings of the wrist-facing electrodes130can be used to obtain one or more GSR signals based on a measurement of resistance between the pairing of electrodes. According to some embodiments, the sensing electrodes can additionally include one or more outward-facing electrodes132disposed on an outward-facing surface of the watch100or wearable device.
As used herein, an “outward-facing” surface or “outward-facing” electrode refers to an exterior surface or electrode of a wrist-worn device or other wearable device that is configured to face away from and not make contact with a wrist of a user or other body part of a user when the wrist-worn device or other wearable device is worn on that wrist or body part. The precise location and orientation of an outward-facing surface with respect to the components of the wearable device can vary depending on the implementation, design, and construction of a particular device. With respect to the watch example shown inFIG.1, the exterior surface of the front cover116that is disposed on the front side of the front cover116and faces away from an interior of the enclosure102is part of an outward-facing surface of the watch100. The exterior surface of the sidewall108, the rotatable dial112, and the buttons114are also part of the outward-facing surface of the watch100, as is the exterior surface of the band104that corresponds to the outer diameter of the watch band loop.FIG.1shows examples of outward-facing electrodes132that can be disposed on the outward-facing surface of the watch100. For example, in some embodiments the watch100can include one or more outward-facing electrodes132disposed on the outward-facing surface of the enclosure102, such as the outward-facing electrodes132disposed on the sidewall108, the front cover116, and/or input devices such as the button(s)114and/or the rotatable dial112. Additionally or alternatively, the watch can include one or more outward-facing electrodes132disposed on the outward-facing and outer diameter surface of the band104. FIG.2shows a block diagram of watch100showing various functional components that may, for example, be housed within the enclosure102. The watch100can further include one or more other user interfaces238for receiving input from and/or providing output to a user. For example, one or more buttons, dials, crowns, switches, or other devices can be provided for receiving input from a user. The user interface238can include a speaker, a microphone, and/or a haptic device. A haptic device can be implemented as any suitable device configured to provide force feedback, vibratory feedback, tactile sensations, and the like. For example, in one embodiment, the haptic device may be implemented as a linear actuator configured to provide a punctuated haptic feedback, such as a tap or a knock. As further shown inFIG.2, the watch100includes one or more processing circuit(s)240(referred to generally herein as processing circuitry) that is/are configured to perform one or more functions for the watch100. By way of example, the processing circuitry can include one or more microprocessors, microcontrollers, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs) such as I/O controller ICs, central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), discrete circuit elements, or other suitably configured electronic circuitry or computing elements. The processing circuitry can include or be configured to access a memory having instructions stored thereon. The instructions or computer programs may be configured to perform one or more of the operations or functions described with respect to the watch100. The processing circuitry240can be implemented as an electronic device capable of processing, receiving, or transmitting data, signals, or instructions.
As described herein, the term “processing circuitry” is meant to encompass a single processor or processing unit, a single integrated circuit, multiple processors, multiple integrated circuits, multiple processing units, or other suitably configured computing element or elements. The memory can store electronic data that can be used by the watch100. For example, a memory can store electrical data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing and control signals or data for the various modules, data structures or databases, and so on. The memory can be configured as any type of memory. By way of example only, the memory can be implemented as random access memory, read-only memory, Flash memory, removable memory, or other types of storage elements, or combinations of such devices. As further shown inFIG.2, the watch100may include a communication component242that facilitates transmission of data and/or power to or from other electronic devices across standardized or proprietary protocols. For example, a communication component242can transmit electronic signals via a wireless and/or wired network connection. Examples of wireless and wired network connections include, but are not limited to, cellular, Wi-Fi, Bluetooth, infrared, RFID and Ethernet. As further shown inFIG.2, the watch100may also include one or more sensors244, such as biosensors or physiological sensors, positioned substantially anywhere on the watch100. The one or more sensors244may be configured to sense substantially any type of characteristic such as, but not limited to, images, pressure, light, touch, force, temperature, position, motion, and so on. For example, the sensor(s)244may be a photodetector, a temperature sensor, a light or optical sensor, an atmospheric pressure sensor, a humidity sensor, a magnet, a gyroscope, an accelerometer, and so on. In some examples, the watch100may include one or more health sensors. In some examples, the health sensors can be disposed on or configured to sense through a bottom surface of the watch100, such as on or near the back cover118. The one or more sensors244can include optical and/or electronic biometric sensors that may be used to compute one or more physiological characteristics. A sensor244can include a light source and a photodetector to form a photoplethysmography (PPG) sensor. Light can be transmitted from the sensor244, to the user, and back to the sensor244. For example, the back cover118or other part of the enclosure102can provide one or more windows (e.g., opening, transmission medium, and/or lens) to transmit light to and/or from the sensor244. The optical (e.g., PPG) sensor or sensors may be used to compute various physiological characteristics including, without limitation, a heart rate, a respiration rate, blood oxygenation level, a blood volume estimate, blood pressure, or a combination thereof. One or more of the sensors244may also be configured to perform an electrical measurement using one or more electrodes, such as electrode(s)130and electrodes132. The electrical sensor(s) may be used to measure electrocardiographic (ECG) characteristics, galvanic skin resistance, and/or other electrical properties of the user's body. Additionally or alternatively, a sensor244can be configured to measure body temperature, exposure to UV radiation, and other health-related information. 
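Since the paragraph above mentions computing a heart rate from the PPG sensor, here is a minimal sketch of how a rate could be derived by counting waveform peaks over a sampling window. The peak-detection rule and sampling rate are assumptions for illustration; practical PPG processing involves filtering and motion-artifact rejection that are beyond this sketch.

```python
import math

def heart_rate_bpm(ppg: list, fs_hz: float) -> float:
    """Estimate heart rate by counting local maxima above the signal mean.

    ppg: raw photodetector samples; fs_hz: assumed sampling rate in Hz.
    """
    mean = sum(ppg) / len(ppg)
    beats = sum(
        1
        for i in range(1, len(ppg) - 1)
        if ppg[i] > mean and ppg[i - 1] < ppg[i] >= ppg[i + 1]
    )
    duration_min = len(ppg) / fs_hz / 60.0
    return beats / duration_min

# Example: a synthetic 1 Hz pulse sampled at 25 Hz for 10 s -> ~60 bpm.
fs = 25.0
signal = [math.sin(2 * math.pi * 1.0 * t / fs) for t in range(int(fs * 10))]
print(round(heart_rate_bpm(signal, fs)))  # ~60
```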
As further shown inFIG.2, the watch100may include a battery246that is used to store and provide power to the other components of the watch100. The battery246may be a rechargeable power supply that is configured to provide power to the watch100. The watch100may also be configured to recharge the battery246using a wireless charging system that includes, for example, an electromagnetic device such as an inductive charging coil.

As further shown inFIG.2, the watch100can include a display248. The display can include, for example, a liquid crystal display (LCD) panel, an organic light-emitting diode (OLED) panel, a microLED panel, a projector device, or any other suitable electronic display technology or display panel. In some embodiments, the display248can be configured to present information relating to other components of the watch100as images, video, text, or other graphical information. For example, the display248can be configured to present an ECG graph, GSR information, a heart rate, or other information gathered with the sensor(s)244. The various components shown inFIG.2can be coupled together or to the processing circuitry240via one or more busses, wireless communication links, or other interconnection technologies.

FIGS.3-6illustrate an example of usage and operation for a watch100, in accordance with some embodiments.FIGS.3-6show an example in which the watch100is configured to obtain two different types of physiological signals, such as a GSR signal and an ECG signal, using one or more shared sensing electrodes. More particularly, in the example shown inFIGS.3-6, one or more of the wrist-facing electrodes130is dual-purposed for both GSR and ECG measurements.

FIG.3shows a user350interacting with watch100during a first type of physiological measurement, such as a GSR measurement or a passive measurement that can be obtained from only the arm352wearing the device (left arm in the illustration). Referring toFIG.3, the GSR signal can be obtained while the watch is worn on the user's wrist based on electrical coupling or ohmic contact between the wrist-facing electrodes130and the skin on the user's wrist354. The measurement can be obtained without a need for the user to contact outward-facing electrodes or other electrodes with another body part, such as their other free arm. Accordingly, in some embodiments the watch100can be configured to obtain the GSR measurement using a passive measurement scheme. As used herein, a “passive measurement” or obtaining a measurement or signal “passively” refers to obtaining the measurement or signal without providing an indication to the user that the measurement is being obtained. Alternatively, a passive measurement can be obtained with an indication, but performed automatically based on a continuous or intermittent sensing scheme, rather than in response to an active selection by the user350.

FIG.4is a schematic diagram showing an example of a drive and sense scheme that can be implemented by the processing circuitry240to perform the measurement and obtain the desired GSR signal during the user interaction shown inFIG.3. As shown inFIG.4, the GSR signal may be obtained by measuring a resistance between a pairing of the wrist-facing electrodes130when they are coupled to or in contact with the skin on the user's wrist354. The pairing of electrodes may be spaced apart from one another and disposed sufficiently close to each other (for example within a millimeter or a few millimeters) to permit a resistance signal to be obtained within or across the skin.
The measured resistance can be used to determine a resistivity of the user's skin, which can vary based on factors such as recent exercise, dry skin, arousal or other information indicative of an emotional state, or the like. To obtain the resistance measurement, the processing circuitry240can be configured to drive a drive signal onto a first wrist-facing electrode in the pairing of electrodes130, and receive a sense signal at a second wrist-facing electrode in the pairing. The drive signal can, for example, be a DC signal applied to the drive electrode, and the receive signal can be a response such as a current or voltage measured from the receive electrode. Additionally or alternatively, other sensing schemes such as other drive and sense schemes can be implemented, including, for example, drive signals involving time-varying signals or periodic waveforms.

FIG.5shows a user350interacting with watch100during a second type of physiological measurement, such as an ECG measurement or an active measurement that can be obtained from both the arm352wearing the watch (left arm in the illustration) and the free arm362not wearing the watch (right arm in the illustration). Referring toFIG.5, the ECG signal can be obtained while the watch is worn on the user's wrist354based on an electrical coupling or ohmic contact between the wrist-facing electrodes130and the skin on the user's wrist354, and based on an electrical coupling or ohmic contact between the outward-facing electrodes132and the user's free arm362, such as the skin on the user's finger364. Based on the coupling or contact with the two arms, a voltage difference or potential difference resulting from depolarizations and repolarizations of the heart can be obtained. In some embodiments, the ECG measurement can be an active measurement taken in response to a user selection to enter an ECG measurement mode, rather than a passive measurement like that described above for the GSR measurement. For the ECG measurement, the user may be prompted or instructed (e.g., using the interface238or display248) to hold their finger or other portion of their arm on the outward-facing electrode132, or the component of the device on which the outward-facing electrode132is disposed, for an extended period of time. This can permit the watch100(e.g., processing circuitry240of the watch) to obtain a series of ECG signals over time and generate a corresponding ECG graph that shows the electrical potential variation over time. The ECG graph can include various intervals, zones, or segments corresponding to a sinus rhythm of the heart, such as, for example, a PR interval, QT interval, PR segment, ST segment, or QRS complex. Those skilled in the art will readily appreciate the utility of ECG signals and the various portions in an ECG graph, and thus these intervals are not described here in detail. According to some embodiments, the processing circuitry can be configured to display the ECG graph on the display248or transmit the ECG graph to a doctor or other medical professional (e.g., using communication component242).

FIG.6is a schematic diagram showing an example of a drive and sense scheme that can be implemented by the processing circuitry240to perform the ECG measurement and obtain the desired ECG signal during the user interaction shown inFIG.5.
As shown inFIG.6, the ECG signal may be obtained by measuring an electric potential difference between one or more of the wrist-facing electrodes (e.g., the first or second wrist-facing electrode130shown in the figure), and one or more outward-facing electrodes132(e.g., a third electrode132as shown inFIG.6). One or more of the wrist-facing electrodes130used for the ECG measurement may be the same as that used in the GSR measurement described above with respect toFIGS.3and4. With continued reference toFIG.6, the processing circuitry240can be configured to drive an outward-facing electrode132and receive a sense signal on both of the wrist-facing electrodes used during the GSR measurement. Alternatively, other implementations are contemplated in which only one sense signal is obtained from only one of the wrist-facing electrodes130, or in which one or more of the wrist-facing electrodes130are driven with a drive signal during the ECG measurement and a sense signal is obtained from one or more outward-facing electrodes132. It is contemplated that the processing circuitry240can include or cooperate with a switch or switching device to obtain the GSR and ECG signals during distinct time periods. For example, during one time period, and in response to a user selection to obtain an ECG measurement, the common electrodes can be connected to or otherwise coupled with an ECG circuit or ECG receive circuit element to obtain the ECG signal while the outward-facing electrode132is being driven. When the ECG measurement is complete, the ECG circuit element can be decoupled from the shared electrode and repurposed for applying a drive signal or receiving a sense signal for a GSR measurement. Alternatively, other implementations are contemplated in which the different GSR signal and ECG signal are obtained simultaneously by obtaining a combined signal using a frequency coding or other coded multiplexing scheme. For example, in some embodiments, a first drive signal such as an ECG drive signal can be applied to the outward facing electrode132with a first frequency or other signal parameter. While the first drive signal is being applied, a second drive signal such as a GSR drive signal can be applied to one of the wrist-facing electrodes130(such as the left electrode shown in the figure), with a sufficiently different signal parameter such as a sufficiently different frequency to permit the constituent resulting signals to be discriminated or deconvolved from a combined sense signal. As used herein, “deconvolve” refers to any process for resolving, separating, or otherwise determining constituent components of a signal from a combined signal. The combined sense signal can be obtained from another one of the wrist-facing electrodes130(such as the right wrist-facing electrode shown in the figure) while the distinct drive signals are applied and the processing circuitry240can be configured to discriminate or otherwise deconvolve the combined signal into a constituent GSR signal and ECG signal. While only two wrist-facing electrodes130are shown inFIGS.3-6, it is contemplated that more than two wrist-facing electrodes can be provided in various embodiments. For example, three, four, five, six, seven, eight, or any other suitable number of electrodes can be included to provide multiple distinct pairings of electrodes that can be used for determining multiple localized GSR signals or resistances between different respective pairings. 
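As one non-authoritative illustration of the frequency-coded multiplexing scheme described above, the Python sketch below separates a combined sense signal into its constituent components by correlating against quadrature references at each drive frequency (a lock-in style demodulation). The frequencies, amplitudes, and names are assumptions made for the example, not values from the disclosure.

    # Minimal sketch (hypothetical names and frequencies): deconvolve a
    # combined sense signal containing two frequency-coded drive responses
    # by synchronous (lock-in style) demodulation.
    import math

    def lock_in_amplitude(samples, fs_hz, f_drive_hz):
        # Correlate against quadrature references at the drive frequency;
        # for a pure sinusoid of amplitude A this returns approximately A
        # when the record spans an integer number of cycles.
        i_sum = q_sum = 0.0
        for k, x in enumerate(samples):
            phase = 2.0 * math.pi * f_drive_hz * k / fs_hz
            i_sum += x * math.cos(phase)
            q_sum += x * math.sin(phase)
        return 2.0 * math.hypot(i_sum, q_sum) / len(samples)

    # Example: a combined signal with a 16 Hz component (standing in for
    # the GSR drive response) and a 64 Hz component (standing in for the
    # ECG drive response) is split by demodulating at each frequency.
    fs = 1000.0
    combined = [0.5 * math.sin(2 * math.pi * 16 * k / fs)
                + 0.2 * math.sin(2 * math.pi * 64 * k / fs)
                for k in range(1000)]
    gsr_amplitude = lock_in_amplitude(combined, fs, 16.0)  # roughly 0.5
    ecg_amplitude = lock_in_amplitude(combined, fs, 64.0)  # roughly 0.2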
Multiple electrode pairings can be used, for example, to compensate for drift, mitigate the effects of moisture on the skin or on the electrodes, or improve the measurement accuracy based on localized information that can be discriminated by using multiple resistances between multiple pairings of electrodes.

FIG.7shows an example of circuitry that can be utilized to obtain physiological measurements, in accordance with some embodiments.FIG.7is a circuit diagram showing circuit elements that can be implemented in the processing circuitry240and coupled to electrodes in the watch100.FIG.7also shows some electrical properties of a user's body as equivalent circuit elements, such as RGSRcorresponding to a resistance through a user's skin and VECGcorresponding to an electric potential between a user's arms and indicative of a polarization state of the user's heart. The example shown inFIG.7uses a switching configuration to permit a GSR signal and an ECG signal to be obtained from the same electrode during different time periods. Although only one shared electrode (labeled EGSR/ECG) is shown inFIG.7, the teachings of the circuit configuration shown inFIG.7can be applied to other configurations in which two or more electrodes are shared or multi-purposed for distinct types of measurements. With reference to the example shown inFIG.7, the processing circuitry240can include GSR sensing circuitry790having one or more circuits or circuit elements configured to obtain a GSR signal. It is contemplated that the GSR sensing circuitry790can include any of a variety of circuit elements suitable for obtaining a desired GSR signal indicative of a resistance RGSRbetween a pair of inward-facing electrodes130. For example, the GSR sensing circuitry790can include one or more amplifiers, operational amplifiers (op-amps), filters, drivers, receivers, and/or other circuit elements configured to drive, receive, and/or process appropriate electrical signals onto and/or from the pairing of electrodes to take a GSR measurement. With continued reference to the example shown inFIG.7, the processing circuitry240can include ECG sensing circuitry792having one or more circuits or circuit elements configured to obtain an ECG signal. It is contemplated that the ECG sensing circuitry792can include any of a variety of circuit elements suitable for obtaining a desired ECG signal indicative of a potential difference VECGbetween an inward-facing or wrist-facing electrode130and an outward-facing electrode132. For example, the ECG sensing circuitry792can include one or more amplifiers, operational amplifiers (op-amps), filters, drivers, receivers, and/or other circuit elements configured to drive, receive, and/or process appropriate electrical signals onto and/or from the pairing of electrodes to take an ECG measurement. The processing circuitry240can further include or otherwise cooperate with one or more switches794which are coupled between the sensing circuitry and one or more electrodes to selectively connect the electrode(s) to the corresponding sensing circuitry as appropriate. For example, as shown inFIG.7, the processing circuitry240can operate the switch(es)794to electrically connect inward-facing electrode130(EGSR/ECG) to the ECG sensing circuitry792in order to obtain an ECG signal during a time period when an ECG measurement is desired.
During another time period when a GSR measurement is desired and outside of the ECG measurement time period, the processing circuitry240can be configured to operate the switch(es)794to electrically disconnect the inward-facing electrode130(EGSR/ECG) from the ECG sensing circuitry792and to electrically connect the inward-facing electrode130(EGSR/ECG) to the GSR sensing circuitry790. It can be sufficient for a single switch794to be implemented between the sensing circuitry and a single shared electrode. Other implementations are contemplated in which multiple switches or other switching circuitry is implemented for selectively connecting multiple shared and/or unshared electrodes for electrically connecting and disconnecting them as desired during their respective sensing time periods. It is also contemplated that the GSR sensing circuitry790and the ECG sensing circuitry792can be entirely distinct or share one or more circuit elements in common, and it is sufficient for the switch794to be configured to selectively connect the shared electrode to any one or more circuit elements of the GSR sensing circuitry790and ECG sensing circuitry792, respectively. Further, implementations are contemplated that omit the switching arrangement, such as embodiments that obtain and deconvolve a combined signal as described above, in which case a single set of sensing circuitry can be used for obtaining the combined measurement from the shared electrode(s). FIGS.8and9show an example of watch enclosure102in which physiological sensing electrodes such as wrist-facing electrodes130are formed using a conductive coating such as a physical vapor deposition (PVD) coating.FIG.8is a perspective view of an example of a watch enclosure102containing coated wrist-facing electrodes130, whileFIG.9is a cross section view of an example of an enclosure102containing coated wrist-facing electrodes130. The coated electrodes can be formed by coating a conductive material onto a non-conductive substrate. For example, a glass, sapphire, or ceramic substrate can be provided as the back cover118, or a component of the back cover. Alternatively, other implementations are contemplated where the conductive coating material is formed on a surface of a dielectric layer that is formed on a conductive substrate, where the dielectric layer provides insulation to separate the electrodes or electrode channels from each other and from the conductive substrate. Referring toFIGS.8and9, the wrist-facing electrodes130can be formed by coating a PVD coating or other conductive material onto a back surface of the back cover118(bottom surface in the illustration ofFIG.9). The conductive coating can be advantageous for GSR electrodes that are configured to contact a wrist, for example, because the electrode layer or layers can be patterned in a desired area, such as a perimeter around a window760, and/or made relatively thin to avoid interfering with other functional components of the watch that can be configured to interact with an external object or objects762through the back cover118. For example, in some embodiments, by having a thinner electrode layer (compared to, for example, an implementation using solid vias or solid contacts extending through the substrate) interference with flux lines interacting with an electromagnetic device770may be reduced. The electromagnetic device770can be disposed within the enclosure102and configured to interact with an external object762through the back cover and through or around the wrist-facing electrodes130. 
For example, the electromagnetic device770can be implemented as an inductive charging coil used for charging the battery246and which interacts with an external object762(e.g., an external charging coil for the inductive charger) through the back cover118. Flux lines travelling between the internal and external coils can thus be minimally interrupted based on the thin patterned electrode layer coated on the back surface of the back cover118. Additionally or alternatively, by patterning the wrist-facing electrodes130in a peripheral or perimeter area of the back cover or a component of the back cover, the wrist-facing electrodes130can be disposed around an optical window760that can permit light traveling through the back cover118from or to an optical device772. For example, an optical device such as a PPG sensor or heart rate monitor can be disposed in the enclosure102. The PPG sensor can include a light emitter774(e.g., a light emitting diode or other light emitter) configured to emit light to an external object762through the back cover and through the window760between the GSR electrodes. The PPG sensor can also include a light detector776(e.g., a photodiode or other photodetector) configured to detect a response of the emitted light through the back cover118and through the window between the GSR electrodes after the emitted light interacts with the external object762(e.g., a user's wrist). Although only one emitter and detector are shown inFIG.9, it will be appreciated that the optical device can include any suitable number of emitters and/or detectors, including multiple light emitters and/or detectors in various embodiments. According to some embodiments, the wrist-facing electrodes130used for GSR measurements can provide a cosmetic external layer (e.g., having a desired color for the external appearance of the device), alone or in connection with other non-functional cosmetic layers such as cosmetic layer780. Additionally or alternatively, the conductive coating can be coated around an edge of a substrate such as an edge of the back cover118to provide electrode channels or routing that permits the coated wrist-facing electrodes130to electrically connect to processing circuitry240disposed in the enclosure102.

FIG.9shows an example in which the back cover118is implemented with multiple discrete components, including a first inner component786and a second outer component788. In this example, the wrist-facing electrodes130can be utilized for GSR measurements only, or one or more of the wrist-facing electrodes130can be dual-purposed for both GSR and ECG measurements. Referring toFIG.9, the wrist-facing electrodes130are coated on a back exterior surface of the inner cover component786(bottom surface inFIG.9facing away from an interior of the enclosure). The conductive coating material used for the wrist-facing electrodes130is also coated around an edge of the inner cover component and coated on a front interior surface of the inner cover component (top surface inFIG.9facing towards an interior of the enclosure). This permits the conductive coating material used for the wrist-facing electrodes130to electrically connect the electrodes on the exterior surface of the enclosure102to the processing circuitry240contained within the enclosure102. As shown inFIG.9, a complementary cosmetic coating (e.g., of the same color) can be disposed on the outer component788.
Numerous other arrangements are possible, including, for example, implementations in which functional GSR and/or ECG electrodes are disposed on the outer cover component788.

As described above, one aspect of the present technology is the gathering and use of data available from various sources to provide improved health-related or body monitoring functionality. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, Twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information. The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to diagnose heart conditions or determine an emotional state of a user. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.

The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of diagnostic or health consultation services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to provide health or mood-associated data for targeted content delivery services. In yet another example, users can select to limit the length of time health or mood-associated data is maintained or entirely prohibit the development of a baseline health or mood profile.

In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app. Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.

Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, electrodes can be operated or physiological measurements can be obtained based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the device, or publicly available information.

A reference to an element in the singular is not intended to mean one and only one unless specifically so stated, but rather one or more. For example, “a” module may refer to one or more modules. An element preceded by “a,” “an,” “the,” or “said” does not, without further constraints, preclude the existence of additional same elements. Headings and subheadings, if any, are used for convenience only and do not limit the invention. The word exemplary is used to mean serving as an example or illustration.
To the extent that the term include, have, or the like is used, such term is intended to be inclusive in a manner similar to the term comprise as comprise is interpreted when employed as a transitional word in a claim. Relational terms such as first and second and the like may be used to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and alike are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases. A phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list. The phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, each of the phrases “at least one of A, B, and C” or “at least one of A, B, or C” refers to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C. It is understood that the specific order or hierarchy of steps, operations, or processes disclosed is an illustration of exemplary approaches. Unless explicitly stated otherwise, it is understood that the specific order or hierarchy of steps, operations, or processes may be performed in different order. Some of the steps, operations, or processes may be performed simultaneously. The accompanying method claims, if any, present elements of the various steps, operations or processes in a sample order, and are not meant to be limited to the specific order or hierarchy presented. These may be performed in serial, linearly, in parallel or in different order. It should be understood that the described instructions, operations, and systems can generally be integrated together in a single software/hardware product or packaged into multiple software/hardware products. In one aspect, a term coupled or the like may refer to being directly coupled. In another aspect, a term coupled or the like may refer to being indirectly coupled. Terms such as top, bottom, front, rear, side, horizontal, vertical, and the like refer to an arbitrary frame of reference, rather than to the ordinary gravitational frame of reference. Thus, such a term may extend upwardly, downwardly, diagonally, or horizontally in a gravitational frame of reference. 
The disclosure is provided to enable any person skilled in the art to practice the various aspects described herein. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology. The disclosure provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the principles described herein may be applied to other aspects. All structural and functional equivalents to the elements of the various aspects described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.

The title, background, brief description of the drawings, abstract, and drawings are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. It is submitted with the understanding that they will not be used to limit the scope or meaning of the claims. In addition, in the detailed description, it can be seen that the description provides illustrative examples and the various features are grouped together in various implementations for the purpose of streamlining the disclosure. The method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately claimed subject matter. The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language of the claims and to encompass all legal equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirements of the applicable patent law, nor should they be interpreted in such a way.
DETAILED DESCRIPTION
Disclosed herein are devices and methods of using a mobile or wearable device for the acquisition and spatial filtering of signals from a plurality of measurement electrodes. The mobile or wearable device may comprise a plurality of measurement electrodes, one or more reference electrodes, and a controller in communication with the measurement and/or reference electrodes. In some examples, the measurement electrodes can be dry electrodes. In some examples, the mobile or wearable device may comprise an interface module in communication with the measurement electrodes and the controller, where the interface module can be configured to adjust the connectivity between the plurality of measurement electrodes and the controller. In some examples, the interface module may comprise one or more multiplexers configured for the selection of individual and/or sets of measurement electrodes based on command signals from the controller.

Methods of spatial filtering the signals from the measurement electrodes may comprise using the interface module to selectively transmit data from low-noise measurement electrode(s). Data from the measurement electrode(s) that have been determined to have high levels of noise (e.g., noise levels that exceed a predetermined and/or computed noise threshold) may be filtered out and may not be included in the generation of the overall ECG waveform. In some examples, filtering out data from high-noise measurement electrodes may comprise adjusting the connectivity of the multiplexer(s) of the interface module such that data from these high-noise measurement electrodes may not be transmitted to the controller. Alternatively or additionally, filtering out data from high-noise measurement electrodes may comprise adjusting the connectivity of the multiplexer such that the frequency or rate at which the multiplexer connects the controller to the high-noise measurement electrodes can be lower than the frequency or rate at which the multiplexer(s) connect the controller to the low-noise measurement electrodes. In examples where each of the measurement electrodes has a dedicated channel to the controller, spatial filtering the signals across the plurality of measurement electrodes may comprise the controller rejecting, ignoring, and/or eliminating the data from the high-noise measurement electrodes from data analysis and interpretation. For example, the controller may incorporate only the signals from low-noise measurement electrodes in the computation of the overall ECG waveform. In some variations, the controller may generate an overall ECG waveform by computing a weighted sum across all of the measurement electrode signals. Spatial filtering of the signals from the plurality of measurement electrodes may comprise assigning a weight to a particular measurement electrode signal that can be inversely related (e.g., inversely proportional, etc.) to its ranked noise level as compared to the other measurement electrodes and/or the average noise level across all of the electrodes. In this variation, the signal(s) from high-noise measurement electrode(s) may be incorporated in the overall ECG waveform, but at a relatively lower weight as compared to the signal(s) from low-noise measurement electrode(s). Reducing the contribution of high-noise measurement electrode(s) may also reduce their impact on the signal-to-noise ratio (SNR) of the overall ECG waveform.
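As a minimal sketch of the weighted-sum variation just described, and assuming per-electrode noise estimates have already been computed elsewhere (for example, during a quiet portion of the cardiac cycle), the Python below combines electrode signals with weights inversely proportional to each electrode's noise level. All names are hypothetical and are not taken from the disclosure.

    # Minimal sketch (hypothetical names): combine per-electrode signals
    # into an overall waveform using inverse-noise weights normalized to
    # sum to one, so high-noise electrodes contribute proportionally less.
    def combine_weighted(signals, noise_levels, eps=1e-9):
        # signals: equal-length sample lists, one per electrode;
        # noise_levels: one noise estimate per electrode, same order.
        weights = [1.0 / (n + eps) for n in noise_levels]
        total = sum(weights)
        weights = [w / total for w in weights]
        n_samples = len(signals[0])
        combined = [0.0] * n_samples
        for sig, w in zip(signals, weights):
            for i in range(n_samples):
                combined[i] += w * sig[i]
        return combined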
Although the examples and applications of spatial filtering devices and methods are described in the context of generating a complete ECG waveform, it should be understood that the same or similar devices and methods may be used to collect and process data from the plurality of measurement electrodes and may or may not generate an ECG waveform. For example, the spatial filtering of the signals from the plurality of measurement electrodes may facilitate the monitoring of certain cardiac characteristics (e.g., heart rate, arrhythmias, changes due to medications or surgery, function of pacemakers, heart size, etc.) and/or ECG waveform characteristics (e.g., timing of certain waves, intervals, complexes of the ECG waveform) by the controller and/or user without generating a complete ECG waveform. In some examples, the controller may generate a subset of the ECG waveform (e.g., one or more of the P wave, QRS complex, PR interval, T wave, U wave) based on spatially filtered measurement electrode signals. The ECG devices described herein may optionally comprise a display that can provide a visual representation of the collected and/or filtered measurement electrode data to the user. Alternatively or additionally, the filtered measurement electrode data may not be displayed by the ECG device, but instead can be relayed to a companion device (e.g., a tablet, laptop, smartphone, computer, server, etc.) that can have a display for outputting a visual representation of the data. Moreover, examples of the disclosure include spatial filtering devices and methods configured for other types of measurements including, but not limited to, EEG and EMG measurements or optical determination of parameters on blood constituents. The terminology used in the description of the variations described herein is for the purpose of describing particular variations only and is not intended to be limiting. As used in the description of the various described variations and the appended claims, the singular forms “a”, “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context. Variations of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. 
In some variations, the device can be a portable communications device, such as an internet-enabled telephone such as a smartphone, a mobile telephone, or a wearable communications device, such as a wristband, watch, clip, headband, earphone or ear piece, internet-enabled eyewear, or any computing device, portable or otherwise, such as a personal calendaring device, electronic reader, tablet, desktop, or laptop computer, etc. Any of these devices may also include other functions, such as personal digital assistant (PDA) and/or music player functions. Optionally, any of the above-listed electronic devices may comprise touch-sensitive surfaces (e.g., touch screen displays and/or touchpads). Alternatively or additionally, the electronic devices may include one or more other physical user-interface devices, such as a physical mouse, a keyboard, and/or a joystick.

FIG.2Aillustrates exemplary personal electronic device200, such as one that may be used for acquiring and spatially filtering signals from electrode arrays for generating ECG waveforms. Device200includes body202. Personal electronic device200may be a portable device such as a tablet, smart phone, watch, and in some variations, may be part of a wireless-capable eyepiece or eye-wear, head gear, and the like. In other variations, personal electronic device200may not be a portable device, and may be a desktop computer. In some variations, device200has touch-sensitive display screen204. Alternatively, or in addition to touch screen204, device200may have a display and a touch-sensitive surface. In some variations, touch screen204(or the touch-sensitive surface) may have one or more intensity (force) sensors for detecting intensity of contacts (e.g., touches) being applied. The one or more intensity sensors of touch screen204may provide output data that represents the intensity of touches. The user interface of device200can respond to touches based on their intensity. For example, touches of different intensities can invoke different user interface operations on device200.

FIG.2Bdepicts the various components of exemplary personal electronic device200. Similar components may also be included in any of the devices described herein (e.g., device300,310ofFIGS.3A-3C). Device200can include a bus212that operatively couples I/O section214with one or more computer processors216and memory218. I/O section214may be connected to display204, which may have a touch-sensitive component222and, optionally, a touch-intensity sensitive component224. In addition, I/O section214may be connected with communication unit230for receiving application and operating system data, using Bluetooth, Wi-Fi, near field communication (NFC), cellular, and/or other wireless communication techniques. Device200may include input mechanisms206and/or208. Input mechanism206may be a rotatable input device or a depressible and rotatable input device, for example. In some examples, input mechanism208may be a button. Input mechanism208may be a microphone, in some examples. Personal electronic device200can include various sensors, such as GPS sensor232, accelerometer234, directional sensor240(e.g., compass), gyroscope236, motion sensor238, and/or a combination thereof, all of which can be operatively connected to I/O section214. Examples with ECG measurement capabilities, described in greater detail below, may include one or more reference electrodes242and an array of measurement electrodes244.
The connection between the various sensors and the I/O section214may be an electrical wire or bus, and/or wireless (e.g., Bluetooth, Wi-Fi, near field communication (NFC), cellular, and/or other wireless communication techniques). Memory218of personal electronic device200can be a non-transitory computer-readable storage medium, for storing computer-executable instructions, which, when executed by one or more computer processors216, for example, can cause the computer processors to perform the techniques and methods described herein. The computer-executable instructions can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. A “non-transitory computer-readable storage medium” can be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on DVD, CD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like. Personal electronic device200may not be limited to the components and configuration ofFIG.2B, but can include other or additional components in multiple configurations. As described herein, a “controller” may refer to a system comprising a computer processor such as a microprocessor, central processing unit (CPU), a digital signal processor (DSP), programmable logic device (PLD), and/or the like. In some variations, device200may have one or more input mechanisms206and208. Input mechanisms206and208, if included, can be physical. Examples of physical input mechanisms may include rotatable mechanisms and push buttons. In some variations, device200may have one or more attachment mechanisms. Such attachment mechanisms, if included, can permit attachment of device200with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, pockets, collars, bracelets, watch straps, chains, trousers, belts, shoes, socks, purses, backpacks, undergarments, and so forth. These attachment mechanisms may permit device200to be worn by a user. Attention is now turned toward variations of additional device modules and associated processes that may be implemented on an electronic device, such as portable multifunction device300, for acquiring ECG signals from an electrode array and spatial filtering of those signals. FIGS.3A-3Cdepict one variation of a mobile or wearable device300that may be used to acquire and spatially filter signals from a plurality of measurement electrodes. The device300may be a wrist-worn device, such as a watch, bracelet, or wrist band. The device300may comprise one or more reference electrodes302located on a skin-contacting surface of the device. For example, the device may be a watch having a housing with a front side306that can face the user and a back side308that can contact the skin region around the wrist. As depicted inFIG.3B, a reference electrode302can be located on the back side308. 
In this example, only one reference electrode302is depicted; however, in other variations, such as depicted inFIG.3C, there may be more than one reference electrode (e.g., 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15 or more, etc.). In some examples, a plurality of measurement electrodes310may be located on the device300(e.g., on the back side308and/or front side306), as illustrated inFIG.3A. In some examples, the plurality of measurement electrodes can be located on an accessory that can be separate or detached or detachable from the device300. One or more of the plurality of measurement electrodes can be provided at various locations or regions of the device300. For example, the plurality of measurement electrodes310can be located at a first side wall portion R1of the device300and/or a second side wall portion R3of the device. Alternatively or additionally, one or more measurement electrodes can be located on the outward-facing surface of the wrist band of the device300. For example, one or more measurement electrodes can be on the outward-facing surface of the band R2located below the housing or can be on the outward-facing surface of the band located above the housing R4. Some examples may include a first one or more measurement electrodes located at R1and a second one or more measurement electrodes located at R2. The user's ECG data can be collected when the user puts his/her thumb on the first one or more measurement electrodes and his/her index finger on the second one or more measurement electrodes. In some examples, there may be only one location that includes measurement electrodes on the device300. The signals measured by the one or more measurement electrodes can be transmitted to the device300using wireless communications or using one or more electrical wires or cables.

The electrodes of the electrode arrays described herein may be "dry" electrodes. "Dry electrodes" can be electrodes configured to contact the user without use of a conducting or electrolytic gel located between the user's skin and any surface of the electrodes. Typically, ECG measurement systems use wet Ag/AgCl electrodes. Without the aid of such gels, obtaining electrical signals with an acceptable or favorable SNR can be challenging. Low-frequency noise (e.g., about 0.5 Hz to about 40 Hz) may be introduced at the electrode-skin interface. This frequency band also encompasses the ECG signals-of-interest, which may make it challenging (e.g., computationally intensive) to filter out the noise without diminishing the signal strength and/or integrity. Without wishing to be bound by theory, sources of such low-frequency noise may include sweat glands (e.g., due to electrolyte behavior), local motion artifacts, local dead skin and other skin irregularities, as well as non-homogenous skin contact. Furthermore, measuring ECG signals from different sites on the limbs (e.g., hand(s), finger(s), feet, toe(s)) may introduce noise of a highly stochastic nature. Such stochastic noise may have a peak-to-peak value greater than about 50 μVpp, which can exceed the noise threshold that can be acceptable for ECG measurements and waveforms. In some cases, these noise sources may be localized and spatially specific. That is, if an electrode array is placed on a small patch of skin (e.g., about 1 cm2, about 2 cm2, etc.), the measurements from one electrode in the electrode array can be affected by noise from sweat glands, while another electrode in the electrode array may not be affected by sweat glands.
In this example, the distribution of noise across the electrode array can depend on the distribution of sweat glands across that patch of skin. In some instances, the electrode array can make poor or inconsistent contact with the user's skin. This may be particularly the case when ECG data is being collected from anatomical structures with irregular curves and shapes, such as from a fingertip.FIG.3Dschematically depicts a finger, which may have a constantly-changing surface (denoted by the constantly-changing slopes of the dotted lines), as well as concave or convex regions (a concave region is enclosed in the dotted oval). Other geometric surface irregularities may also include curves that have a non-constant radius of curvature, skin folds or clefts, variable surface elasticity of different types of tissue (e.g., fingernails and bones are relatively inelastic as compared to skin), etc. These irregularities may increase the impedance of an electrode and render the ECG signals acquired by that electrode particularly susceptible to noise (especially from motion artifacts). The devices and methods disclosed herein address these and other sources of noise by utilizing a plurality of individually-controllable/measurable measurement electrodes and spatial filtering of the signals acquired by the plurality of measurement electrodes. Spatial filtering of the signals acquired by the plurality of measurement electrodes may comprise measuring the noise levels for each of the measurement electrodes, determining which measurement electrode(s) have noise levels that are at, above, or below a noise threshold, and excluding the data from high-noise measurement electrode(s) in the computation of the overall ECG waveform. Filtering out the signals from the high-noise measurement electrode(s) may improve the quality of the overall ECG waveform and/or simplify the computational processing of the ECG data acquired by the measurement electrodes.

FIG.4Adepicts a schematic functional block diagram of an exemplary system for measuring ECG signals from a plurality of individually-controllable/measurable measurement electrodes and spatially filtering those signals. The system400may comprise a plurality of measurement electrodes402, a controller404, and an interface module406. Interface module406can be configured to transmit signals from the plurality of measurement electrodes402to the controller404. One or more reference electrodes may be in communication with the interface module and/or controller. The communication channel403(between the plurality of measurement electrodes402and the interface module406) and the communication channel405(between the interface module406and the controller404) may be wired or wireless. The communication channels may transmit signals that can represent measured ECG data or signals, controller commands to the interface module, and the like. In some examples, the signal transmitted to the controller can be a differential signal (e.g., a signal representing the difference between signal values measured at two or more measurement electrodes). As described previously with regard toFIGS.3A-3C, the plurality of measurement electrodes402, interface module406, and the controller404may be located on the same device or may be located on separate devices or components. For example, the plurality of measurement electrodes402, interface module406, and the controller404may all be located on a wrist-worn device such as a watch.
Alternatively, the interface module406and the controller404may be located on the wrist-worn device while the plurality of measurement electrodes402may be located on a separate accessory device. In some examples, the communication channel403may be wireless, while the communication channel405may be wired. In some examples, the electrode array402and the interface module406may be located on an accessory device, and the controller may be located on a wrist-worn device. In some examples, the communication channel403may be wired, while the communication channel405may be wireless. Although the plurality of measurement electrodes402is depicted as having four electrodes inFIG.4A, it should be understood that the plurality of measurement electrodes may comprise any number of electrodes, as may be desirable. For example, the plurality of measurement electrodes402may comprise 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15, 20, 22, 24, 25, 27, 30, etc. electrodes. The arrangement of these measurement electrodes may vary, for example, depending on the size and shape of the anatomical region from which the plurality of measurement electrodes can be used to measure ECG signals. For example, the plurality of measurement electrodes may be arranged as a circle, rectangle, diamond, triangle, in a line, or in any anatomically-specific fashion, etc.

The interface module406can be configured to amplify and filter the signals from the electrodes. In some examples, the interface module406can selectively transmit the signals measured by the plurality of measurement electrodes to the controller. For example, the interface module406may comprise one or more buffers, filters (e.g., 60 Hz notch filters, bandpass filters, etc.), amplifiers (e.g., differential amplifiers, etc.), and/or an analog-to-digital converter (ADC). In some examples, the raw signals measured by the plurality of measurement electrodes may be filtered, amplified, and converted to a digital signal before the signals are transmitted from the interface module to the controller. Optionally, in some variations, the interface module406may comprise a switch circuit, such as a multiplexer, where ECG signals from each of the measurement electrodes can be transmitted to the multiplexer (either before or after amplifying, filtering and/or converting to a digital signal). Based on commands from the controller, the multiplexer can selectively output or transmit the data from certain measurement electrodes to the controller. The number of multiplexer output channels may be the same as or less than the number of measurement electrodes. Multiplexing the data collected by the plurality of measurement electrodes may help to reduce the number of signal processing components in the interface module, thereby reducing the size of the overall device. In some examples, the interface module may comprise a plurality of multiplexers, for example, arranged serially or in stages. Furthermore, the multiplexer may be used to selectively transmit the signals from the relatively low-noise measurement electrodes to the controller instead of the signals from the relatively high-noise measurement electrodes. By doing so, the multiplexer can spatially filter the signals from the measurement electrodes based on commands from the controller by rejecting the high-noise signals and transmitting the low-noise signals.
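One way the controller-commanded channel selection described above could be modeled is sketched below in Python: electrodes flagged as high-noise are dropped from, or revisited less often in, the set of channels that the multiplexer routes to the controller. The class and method names are hypothetical stand-ins for the hardware behavior, not an implementation from the disclosure.

    # Minimal sketch (hypothetical names): model of controller-driven
    # multiplexer channel selection with reduced-rate revisiting of
    # high-noise electrodes.
    class MuxModel:
        def __init__(self, num_electrodes):
            self.num_electrodes = num_electrodes
            self.high_noise = set()  # indices of electrodes flagged noisy

        def set_noise_flags(self, noise_levels, threshold):
            # Controller command: flag electrodes whose estimated noise
            # exceeds the threshold so their channels are filtered out.
            self.high_noise = {i for i, n in enumerate(noise_levels)
                               if n > threshold}

        def channels_for_tick(self, tick, decimation=8):
            # Low-noise channels are routed on every sampling tick, while
            # high-noise channels are revisited only every `decimation`
            # ticks, permitting a periodic re-check of their noise levels.
            selected = [i for i in range(self.num_electrodes)
                        if i not in self.high_noise]
            if tick % decimation == 0:
                selected += sorted(self.high_noise)
            return selected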
FIGS.4B-4Cdepict examples of interface modules416,426that comprise a plurality of amplifiers and multiplexers.FIG.4Bdepicts an interface module416that comprises amplifiers410and multiplexers412, such that the signals from each measurement electrode402can be amplified before they arrive at the input ports of the multiplexers. In some examples, interface module416can include circuitry configured to reject any signals from measurement electrodes402that may be associated with high noise (i.e., the noise is greater than a noise threshold). In this manner, only low-noise signals can be transmitted through communication channel405to controller404. In some examples, the high-noise signals may be rejected (e.g., not sent through communications channel405to controller404) for a certain time period, followed by a periodic check/determination whether the electrodes associated with the previously high-noise signals are now associated with low-noise signals. Although the multiplexers412are depicted as having two input ports, it should be understood that the multiplexers may have any number of input ports as may be desirable. In some examples, interface module416can include circuitry that may not entirely reject signals associated with high noise, but instead may sample (and transmit to controller404) the signals associated with high noise at a different frequency (e.g., lower frequency) than the signals associated with low noise. In some examples, interface module416can include circuitry that may weight the signals associated with high noise differently than the signals associated with low noise. For example, the high-noise signals can be given a lower weight (i.e., relative contribution to the overall ECG signal) than low-noise signals. FIG.4Cdepicts another variation of an interface module426that comprises amplifiers450and multiplexers452, such that the signals from each measurement electrode402can be selected by the multiplexer before they are amplified. In still other variations, differential amplifiers may be used (in either the circuit topology ofFIG.4BorFIG.4C), where the first input to a differential amplifier may be a measurement electrode, and the second input to the differential amplifier may be a reference electrode (and/or an electrode selected from a plurality of reference electrodes). In some examples, the input signals (e.g., from the plurality of measurement electrodes and a reference electrode or reference electrode array) to one or more differential amplifiers may be pre-selected by one or more multiplexers so that the low-noise ECG signals can be amplified and processed. In some examples, the interface module426can be configured to group together (e.g., via one or more switches) low-noise signals and can be configured to group together high-noise signals. The group of low-noise signals can be measured at one frequency, and the group of high-noise signals can be measured at another frequency. For example, the group of low-noise signals can be measured more frequently than the group of high-noise signals. FIG.4Ddepicts an exemplary plurality of measurement electrodes and associated spatial filtering of the measurement electrodes' outputs. Plurality of measurement electrodes420comprises nine individually-addressable/controllable measurement electrodes421-429that can be arranged in a square (diamond) shape. In other examples, any number of measurement electrodes can be arranged in any shape.
Each trace can be coupled to a unique electrode (e.g., electrode421,422,423,424,426,427,428,429, respectively (the trace coupled to electrode425is not depicted)), and separate signals431,432,433,434,436,437,438,439can be measured and transmitted to the controller. Signals from the traces can be acquired during a portion of the cardiac cycle with minimal cardiac activity, such as the inter-beat interval. The controller can determine that the peak value or magnitude of the signals437-439from electrodes427-429can be higher than the peak value or magnitude of the signals from the other electrodes, and such fluctuations can be the result of noise. In some examples, the controller can average all of the signals431-439to obtain an average signal that can represent a noise threshold against which the signals from the measurement electrodes can be compared. For example, the controller may compute the peak value or magnitude of that average signal, and compare the peak value or magnitude of each of the signals431-439with that of the average signal to identify measurement electrodes that have suprathreshold values or magnitudes. Such measurement electrodes can be considered as “high-noise” measurement electrodes. These high-noise electrodes may be determined to be located at or contacting skin regions that give rise to higher levels of noise. For example, without wishing to be bound by theory, the noise that affects the electrodes427-429(which may be located in a contiguous spatial region) may arise from sweat glands that can be co-located with the electrodes427-429. In some instances, the skin region that contacts electrodes421-426may have fewer, if any, sweat glands than the skin region contacted by electrodes427-429. Once the controller has identified electrodes427-429as high-noise electrodes, the signals from the high-noise electrodes may be excluded from generating the overall ECG waveform. For example, electrodes427-429can be grouped together, and electrodes421-426can be grouped together. Signal430can represent the sum of the signals431-436associated with low-noise measurement electrodes; signals437-439from high-noise measurement electrodes can be excluded. The signals from electrodes427-429may be excluded by adjusting the channel selection of the multiplexer(s) in the interface module such that signals from high-noise measurement electrodes may not be selected for transmission to the controller. In this manner, more bandwidth can be made available between the interface module and the control module for the transmission of signals from low-noise measurement electrodes421-426. In some examples, the signals from high-noise measurement electrodes427-429may be transmitted to the controller (along with the signals from the low-noise measurement electrodes421-426), but not included in the determination of the overall ECG waveform. The device can operate with any configuration for sampling ECG data. For example, all measurement electrodes (e.g., measurement electrodes421-429) can sample ECG data at the same time, and the signals can be transmitted to the controller at the same time. In some examples, the measurement electrodes can sample ECG data sequentially (e.g., electrode421can sample ECG data first, followed by electrode422sampling data second, etc.), and the signals can be transmitted to the controller sequentially.
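The average-signal comparison described forFIG.4Dcan be sketched in a few lines: the array-wide average supplies the threshold, and each electrode whose peak magnitude exceeds it is flagged as high-noise. This is an illustrative sketch; the function name and the peak-magnitude statistic are assumptions.

```python
# Illustrative sketch of the FIG. 4D comparison: the peak magnitude of the
# array-wide average signal serves as the noise threshold.
import numpy as np

def group_by_noise(signals: list[np.ndarray]) -> tuple[list[int], list[int]]:
    average = np.mean(np.stack(signals), axis=0)
    threshold = np.max(np.abs(average))      # threshold from the average signal
    high = [i for i, s in enumerate(signals) if np.max(np.abs(s)) > threshold]
    low = [i for i in range(len(signals)) if i not in high]
    # e.g., low might correspond to electrodes 421-426, high to 427-429
    return low, high
```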
In some examples, the device can perform an initial scan including sampling all of the measurement electrodes to determine whether one or more measurement electrodes include suprathreshold noise levels. Subsequent scans can exclude the measurement electrodes with suprathreshold noise levels, but can include the electrodes with subthreshold noise levels. In some examples, the device can simultaneously sample ECG data from multiple electrodes to further reject or disable electrodes. For example, electrode421and electrode429can simultaneously sample ECG data. If the noise levels from the measurements differ, then the device can determine whether to use the measurements from the measurement electrode with lower noise levels or disable the measurement electrode with higher noise levels. In some examples, each of the measurement electrodes can be coupled to a unique communication channel.FIG.4Edepicts an example of spatial filtering of ECG data from eight measurement electrodes (electrodes represented by channels 1-8). The signals440from the eight measurement electrodes can have varying degrees of noise, with the signal442from channel 8 having the greatest amount of noise. The signal442from channel 8 may be identified as having suprathreshold noise levels by the controller and can be filtered out (e.g., excluded from the computation of the overall ECG waveform). Signal444can be the overall ECG waveform generated by the data from channels 1-7, and can exclude data from channel 8. Spatially filtering the signals from the eight measurement electrodes to exclude data from channel 8 may help to preserve the integrity of the overall ECG waveform, and limit (or entirely eliminate) the effect of electrodes with suprathreshold noise levels on the ECG waveform. FIGS.5A-5Bare flowchart depictions of variations of methods of spatial filtering that may be performed by a controller of a mobile or wearable device for acquiring ECG signals. Some methods of spatial filtering may completely eliminate or reject the signals from measurement electrodes that have noise levels that exceed the noise threshold. In some examples, signals from the measurement electrodes associated with noise levels that exceed the noise threshold can be included, but the signals can be scaled down (e.g., be associated with a lower weighting factor). In some examples, the input from high-noise measurement electrodes can be completely eliminated or rejected, as depicted inFIG.5A. Method500may comprise contacting a reference electrode to a first skin region in step502, and contacting one or more of a plurality of measurement electrodes to a second skin region in step504. For instances where the reference electrode can be located on a wrist-worn device such as a watch, method500may comprise putting on the watch such that the reference electrode can contact the skin region at or near the wrist, and the measurement electrode(s) can contact a second region of skin (e.g., a fingertip). For example, if the measurement electrode(s) are located on the watch, the user may touch the tip of his/her finger to the surface of the watch that has the measurement electrode(s). In some examples, if the measurement electrode(s) are located on a separate accessory device, the user may contact the measurement electrode(s) by contacting the accessory device. After the reference electrode and the measurement electrode(s) have contacted the skin of the user, method500may comprise measuring the noise levels for each measurement electrode in step506.
For example, the impedance and/or electrical signals may be measured for each measurement electrode. Such measurements can be transmitted from the measurement electrode(s) to the interface module and then transmitted to the controller, using wired and/or wireless communications. In some examples, the controller can optionally average the noise levels from each of the measurement electrode(s) in step508. The average noise level can be used to determine a noise threshold against which the noise levels of each of the measurement electrodes can be compared. Alternatively, the noise threshold may be preselected or predetermined, and may be independent of the average measured noise level of the measurement electrodes. Alternatively, in some examples, a preselected or predetermined noise threshold may be adjusted based on the noise levels of the measurement electrodes (e.g., shifted upwards or downwards based on the computed average noise level). Once a noise threshold has been determined and/or calculated, the controller may identify the measurement electrodes with noise levels that are at or below the threshold noise levels (which may be referred to as “low-noise” measurement electrodes) in step510. The controller may send a command signal to the interface module with instructions to acquire and transmit signals only from low-noise measurement electrode(s). Signals from high-noise measurement electrode(s) (i.e., any measurement electrodes that are not low-noise measurement electrodes) may be rejected by the interface module. In some examples, the controller may send a command signal to the interface module to acquire and transmit signals from the measurement electrode(s) with the least amount of noise. For example, the controller may rank the measurement electrodes based on their relative noise levels and issue commands to the interface module to gather and transmit signals only from some (e.g., three, four, five, etc.), but not all, measurement electrodes with the least noise. After sufficient ECG data has been acquired by the controller (e.g., after a period of time, such as about 5-20 seconds), the controller may generate an ECG waveform based on the signals from the low-noise measurement electrodes in step514. Optionally, the generated ECG waveform may be displayed to the user or practitioner and/or transmitted to a remote server for storage and/or further analysis. In some examples, spatial filtering can include scaling down the signals associated with, or under-sampling, high-noise measurement electrode(s), as depicted inFIG.5B. Method520may comprise contacting a reference electrode to a first skin region in step522, and contacting a plurality of measurement electrodes to a second skin region in step524. For examples where the reference electrode can be located on a wrist-worn device such as a watch, method520may comprise putting on the watch such that the reference electrode can contact the skin region at or near the wrist, and the plurality of measurement electrodes can contact a second region of skin (e.g., a fingertip). For example, if the plurality of measurement electrodes is located on the watch, the user may touch the tip of his/her finger to the surface of the watch that has the plurality of measurement electrodes. In some examples, if the plurality of measurement electrodes is located on a separate accessory device, the user may contact the plurality of measurement electrodes by contacting the accessory device.
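Pausing the walkthrough of method520for a moment, the complete flow of method500described above can be summarized in the sketch below. The controller and interface objects, their method names, and the mean-based threshold are all hypothetical stand-ins, not components defined by this disclosure.

```python
# Hypothetical sketch of method 500 (FIG. 5A); object and method names are
# illustrative stand-ins, not components defined by this disclosure.
def method_500(interface, controller, electrodes, acquire_seconds=10):
    noise = {e: interface.measure_noise(e) for e in electrodes}   # step 506
    threshold = sum(noise.values()) / len(noise)                  # step 508 (optional)
    low_noise = [e for e, n in noise.items() if n <= threshold]   # step 510
    interface.acquire_only(low_noise)           # high-noise channels are rejected
    data = controller.collect(acquire_seconds)  # e.g., about 5-20 seconds of ECG
    return controller.generate_waveform(data)                     # step 514
```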
After the reference electrode and the plurality of measurement electrodes contact the skin of the user, method520may comprise measuring the noise levels for each measurement electrode in step526. For example, the impedance and/or electrical signals may be measured for each measurement electrode. Such measurements can be transmitted from the measurement electrode(s) to the interface module and then transmitted to the controller, using wired and/or wireless communications. In some examples, the controller can optionally average the noise levels from each of the measurement electrodes in step528. The average noise level may be used as a noise threshold against which the noise levels of each of the measurement electrodes may be compared. Alternatively, the noise threshold may be preselected or predetermined and may be independent of the average measured noise level of the measurement electrodes. Alternatively, a preselected or predetermined noise threshold may be adjusted (e.g., shifted upwards or downwards based on the computed average noise level) based on the noise levels of the measurement electrodes. Once a noise threshold has been determined and/or calculated, the controller may identify the measurement electrode(s) (e.g., “low-noise” measurement electrodes) with noise levels that are at or below the noise threshold in step530. The controller may also identify the electrodes (e.g., “high-noise” measurement electrodes) with noise levels that are above the noise threshold in step532. In some examples, the controller can send a command signal to the interface module with instructions to adjust the sampling frequency for low-noise and high-noise measurement electrodes in step534. For example, the interface module can adjust the switching in the multiplexer(s) such that signals from low-noise measurement electrodes can be transmitted to the controller more frequently than signals from high-noise measurement electrodes. The sampling frequency of a particular measurement electrode can be inversely related (e.g., inversely proportional, etc.) to its noise level. For example, the noise levels of the plurality of measurement electrodes can be ranked by the controller; the frequency at which the multiplexer can switch to a particular measurement electrode and transmit its signal to the controller can be inversely proportional to the ranking of that particular measurement electrode. In some variations, the interface module can be configured (e.g., using a plurality of staged multiplexers) to provide a dedicated channel from low-noise measurement electrodes to the controller and then multiplex between the high-noise measurement electrodes. In some examples, the controller can prioritize the transmission of ECG data from low-noise measurement electrodes over high-noise measurement electrodes by increasing the multiplexer selection frequency and/or sampling frequency of the low-noise measurement electrodes. In some examples, the controller can reduce the selection frequency and/or sampling frequency of the high-noise measurement electrodes. In some instances, the controller can generate a good-quality, low-noise ECG waveform without increasing the power consumption or bandwidth requirements of the device. Alternatively or additionally to adjusting the characteristics of data acquisition, the signal(s) from high-noise measurement electrode(s) can be processed differently by the controller as compared to the signals from the low-noise measurement electrode(s).
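Before turning to that differential processing, the inverse relation between noise ranking and sampling frequency described above might be realized as in the following sketch; the 500 Hz base rate and the strict 1/rank rule are illustrative assumptions.

```python
# Illustrative mapping from noise ranking to per-electrode sampling rate;
# the 500 Hz base rate and the 1/rank rule are assumptions.
def sampling_rates(noise_levels: list[float], base_hz: float = 500.0) -> list[float]:
    order = sorted(range(len(noise_levels)), key=lambda i: noise_levels[i])
    rates = [0.0] * len(noise_levels)
    for rank, idx in enumerate(order, start=1):
        rates[idx] = base_hz / rank   # least-noisy electrode is sampled fastest
    return rates

print(sampling_rates([0.2, 0.9, 0.1]))  # -> [250.0, 166.67, 500.0] (approximately)
```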
For example, to the extent that the overall ECG waveform can be a weighted sum of the signals from the plurality of measurement electrodes, the controller may scale down the magnitude or weight of the signal from high-noise measurement electrodes when computing the overall ECG waveform. After sufficient ECG data has been acquired by the controller (e.g., after a period of time, such as about 5-20 seconds), the controller can generate an ECG waveform based on the signals from the low-noise measurement electrodes in the final step of method520. Optionally, the generated ECG waveform may be displayed to the user or practitioner and/or transmitted to a remote server for storage and/or further analysis. The variations of spatial filtering methods described above and depicted inFIGS.5A-5Bcan classify the noise characteristics of the measurement electrodes before ECG data and/or signals are acquired and processed. In some examples, the noise characteristics of the measurement electrodes can be evaluated before, during, and/or after data acquisition. For example, in some instances where the user can move during data acquisition, a measurement electrode that was previously determined to have subthreshold noise levels may be affected by motion artifacts, acquiring signals with unfavorable noise characteristics. In such a scenario, a measurement electrode previously determined to have suprathreshold noise levels may also have improved noise conditions, for example, due to better skin contact or being moved to a location with fewer sweat glands, etc. A controller that evaluates the noise characteristics of the measurement electrodes throughout the data acquisition interval may detect this change and may dynamically adjust the sampling frequency and/or grouping of the measurement electrodes whose noise characteristics may have changed. In some examples, where the overall ECG waveform can be a weighted sum of the signals from the measurement electrodes, the weighting factor may vary as a function of time such that when the signal levels from a particular measurement electrode exceed the noise threshold, the weighting factor can be dynamically changed (e.g., decreased for that time period). In some examples, when the signal levels from that same measurement electrode are below the noise threshold, the weighting factor can be dynamically changed (e.g., increased for that time period). The evaluation of the noise characteristics of the measurement electrodes may be performed on a sample-by-sample basis or at set time intervals during the ECG data acquisition period (e.g., for an acquisition period of 10 seconds, the noise characteristics of the measurement electrodes may be re-evaluated every second, or every two seconds, or every 0.5 seconds, etc.). The controller can be configured to generate notifications to the user and/or medical practitioner regarding the signal quality and/or noise levels of the signals from the measurement electrodes. For example, if at any point the majority of the measurement electrodes have suprathreshold noise levels, and/or exceed a maximum acceptable noise threshold (i.e., such that an interpretable ECG waveform cannot be generated (e.g., the data is too sparse or the SNR is below a certain threshold)), the controller can prompt the user to re-position or otherwise adjust one or more measurement electrode(s).
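The time-varying weighted sum described above is sketched below: each electrode's weight is re-evaluated per interval and drops while that electrode's noise exceeds the threshold. The 1.0/0.2 weights and the per-interval re-evaluation cadence are illustrative assumptions, not values from this disclosure.

```python
# Sketch of a time-varying weighted sum; the 1.0/0.2 weights and per-interval
# re-evaluation are illustrative assumptions.
import numpy as np

def dynamic_waveform(intervals, noise_per_interval, threshold):
    # intervals: list of (n_electrodes, n_samples) arrays, one per time interval
    # noise_per_interval: matching list of per-electrode noise levels
    out = []
    for samples, noise in zip(intervals, noise_per_interval):
        w = np.where(np.asarray(noise) > threshold, 0.2, 1.0)  # re-evaluated weights
        out.append(w @ np.asarray(samples) / w.sum())          # weighted sum
    return np.concatenate(out)
```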
As one example of such a prompt, the controller may suggest that the user position one or more measurement electrode(s) at a flatter anatomical region, and/or press one or more measurement electrode(s) to more intimately contact the skin surface, etc. In some examples, the controller can indicate exactly which measurement electrode(s) have unusual levels of noise, and the user may inspect those measurement electrode(s) and check their contact with the skin region. In some examples, the controller may also generate an ECG waveform that may be presented to the user on a display of the mobile or wearable device, and/or transmitted to a remote server for storage and/or further analysis. Although descriptions given herein have been in relation to certain examples, various additional examples and alterations to the described examples are contemplated within the scope of the disclosure. Thus, no part of the foregoing description should be interpreted to limit the scope of the disclosure as set forth in the following claims. For all of the examples described above, the steps of the methods need not be performed sequentially. The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various examples with various modifications as are suited to the particular use contemplated. Although the disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims. As described above, one aspect of the present technology is the gathering and use of data available from various sources to improve the delivery to users of invitational content or any other content that may be of interest to them. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, home addresses, or any other identifying information. The present disclosure recognizes that such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to deliver targeted content that is of greater interest to the user. Accordingly, use of such personal information data enables calculated control of the delivered content. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices.
In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. Despite the foregoing, the present disclosure also contemplates examples in which users selectively block the use of, or access to, personal information data. The present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services. In another example, users can select not to provide location information for targeted content delivery services. In yet another example, users can select to not provide precise location information, but permit the transfer of location zone information. Therefore, although the present disclosure broadly describes use of personal information data to implement one or more various disclosed examples, the present disclosure also contemplates that the various examples can also be implemented without the need for accessing such personal information data. That is, the various examples of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information. A device is disclosed. The device can comprise: one or more measurement electrodes configured to contact one or more first areas of a skin surface, each measurement electrode being independently measurable and configured to generate a measurement signal indicative of one or more electrical signals of a user, the measurement signal included in a plurality of measurement signals; and a controller configured to: receive the plurality of measurement signals, compare each measurement signal to a noise threshold, reject or apply a first weighting factor to each measurement signal having a level greater than or equal to the noise threshold, perform one or more of accepting and applying a second weighting factor to each measurement signal having a level less than the noise threshold, and determine one or more physiological parameters from the accepted measurement signals.
Additionally or alternatively, in some examples, some of the one or more measurement electrodes are configured to contact an area of the skin surface different from other measurement electrodes. Additionally or alternatively, in some examples, the device further comprises: a reference electrode configured to contact a second area of the skin surface and located on a lower surface of a housing of the device, wherein the one or more measurement electrodes are located on an upper surface, opposite the lower surface, of the housing. Additionally or alternatively, in some examples, some of the one or more measurement electrodes are located at a first location of the device and others of the one or more measurement electrodes are located at a second location, separate and distinct from the first location, of the device. Additionally or alternatively, in some examples, the one or more first areas of the skin surface are located proximate to each other. Additionally or alternatively, in some examples, the device further comprises: one or more communications channels, each communication channel associated with one of the one or more measurement electrodes. Additionally or alternatively, in some examples, the one or more measurement electrodes include one or more first measurement electrodes and one or more second measurement electrodes, the one or more first measurement electrodes associated with a level of noise lower than the one or more second measurement electrodes, the device further comprising: one or more communications channels, each communication channel associated with one of the one or more first measurement electrodes; and one or more multiplexers configured to dynamically reconfigure connections of the one or more second measurement electrodes to the controller. A method is disclosed. The method can comprise: contacting one or more first areas of a skin surface of a user with one or more measurement electrodes; for each measurement electrode, measuring one or more electrical signals of the user and generating one or more measurement signals indicative of the measured one or more electrical signals; transmitting the one or more measurement signals using one of one or more communications channels to a controller; and determining one or more physiological parameters from the transmitted one or more measurement signals. Additionally or alternatively, in some examples, the method further comprises: for each measurement electrode, comparing the one or more measurement signals to a noise threshold level; and determining one or more first measurement electrodes from the one or more measurement electrodes and one or more second measurement electrodes from the one or more measurement electrodes based on the comparison, the one or more first electrodes having measurement signals less than the noise threshold level and the one or more second electrodes having measurement signals greater than or equal to the noise threshold level or a standard deviation from the noise threshold level, wherein determining the one or more physiological parameters includes measurement signals associated with the one or more first electrodes. Additionally or alternatively, in some examples, the determining the one or more physiological parameters excludes measurement signals associated with the one or more second electrodes.
Additionally or alternatively, in some examples, the method further comprises: applying one or more first weighting factors to the measurement signals associated with the one or more first measurement electrodes; and applying one or more second weighting factors, less than the first weighting factor, to the measurement signals associated with the one or more second measurement electrodes. Additionally or alternatively, in some examples, each first weighting factor is inversely proportional to a noise level of the associated first measurement electrode, and each second weighting factor is inversely proportional to a noise level of the associated second measurement electrode. Additionally or alternatively, in some examples, after measuring the one or more electrical signals using each measurement electrode, for each first measurement electrode, measuring one or more electrical signals of the user and generating one or more second measurement signals indicative of the measured one or more electrical signals. Additionally or alternatively, in some examples, measuring the one or more electrical signals for each first measurement electrode includes a first measurement frequency, and measuring the one or more electrical signals for each second measurement electrode includes a second measurement frequency, the first measurement frequency greater than the second measurement frequency. Additionally or alternatively, in some examples, measuring the one or more electrical signals for each measurement electrode includes a frequency inversely proportional to a noise level associated with the measurement electrode. Additionally or alternatively, in some examples, the method further comprises: for each measurement electrode, measuring one or more second electrical signals of the user; generating one or more second measurement signals indicative of the measured one or more second electrical signals; comparing the one or more second measurement signals to the noise threshold level; determining a change in noise level based on the comparison; and reassigning the one or more measurement electrodes associated with the change in noise level. Additionally or alternatively, in some examples, the one or more first areas of the skin surface include a thumb and an index finger of the user, wherein the measuring the one or more electrical signals is after the thumb and index finger contact the one or more measurement electrodes. Additionally or alternatively, in some examples, the measuring the one or more electrical signals is simultaneous for all measurement electrodes, and the transmitting the one or more measurement signals is simultaneous. Additionally or alternatively, in some examples, the method further comprises: ordering noise levels associated with the one or more measurement electrodes; and determining one or more first measurement electrodes having a lower order than other measurement electrodes, wherein the determining the one or more physiological parameters includes measurement signals associated with the one or more first measurement electrodes. Additionally or alternatively, in some examples, the method further comprises: comparing one or more measurement signals to a noise threshold level; and prompting the user to move at least one of the one or more measurement electrodes to a different area of the skin surface.
11857342 | DETAILED DESCRIPTION The figures and descriptions provided herein may be simplified to illustrate aspects of the described embodiments that are relevant for a clear understanding of the herein disclosed processes, machines, manufactures, and/or compositions of matter, while eliminating for the purpose of clarity other aspects that may be found in typical similar devices, systems, compositions and methods. Those of ordinary skill may thus recognize that other elements and/or steps may be desirable or necessary to implement the devices, systems, compositions and methods described herein. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the disclosed embodiments, a discussion of such elements and steps may not be provided herein. However, the present disclosure is deemed to inherently include all such elements, variations, and modifications to the described aspects that would be known to those of ordinary skill in the pertinent art in light of the discussion herein. Embodiments are provided throughout so that this disclosure is sufficiently thorough and fully conveys the scope of the disclosed embodiments to those who are skilled in the art. Numerous specific details are set forth, such as examples of specific aspects, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. Nevertheless, it will be apparent to those skilled in the art that certain specific disclosed details need not be employed, and that embodiments may be embodied in different forms. As such, the exemplary embodiments set forth should not be construed to limit the scope of the disclosure. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. For example, as used herein, the singular forms “a”, “an” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. The steps, processes, and operations described herein are thus not to be construed as necessarily requiring their respective performance in the particular order discussed or illustrated, unless specifically identified as a preferred or required order of performance. It is also to be understood that additional or alternative steps may be employed, in place of or in conjunction with the disclosed aspects. Yet further, although the terms first, second, third, etc. may be used herein to describe various elements, steps or aspects, these elements, steps or aspects should not be limited by these terms. These terms may be only used to distinguish one element or aspect from another. Thus, terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, step, component, region, layer or section discussed below could be termed a second element, step, component, region, layer or section without departing from the teachings of the disclosure. 
As used herein, the terminology “determine” and “identify,” or any variations thereof, includes selecting, ascertaining, computing, looking up, receiving, determining, establishing, obtaining, or otherwise identifying or determining in any manner whatsoever using one or more of the devices and methods shown and described herein. As used herein, the terminology “example,” “embodiment,” “implementation,” “aspect,” “feature,” or “element” indicates serving as an example, instance, or illustration. Unless expressly indicated, any example, embodiment, implementation, aspect, feature, or element is independent of each other example, embodiment, implementation, aspect, feature, or element and may be used in combination with any other example, embodiment, implementation, aspect, feature, or element. As used herein, the terminology “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise or clear from context, “X includes A or B” is intended to indicate any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form. As used herein, the terminology “computer” or “computing device” includes any unit, or combination of units, capable of performing any method, or any portion or portions thereof, disclosed herein. For example, the “computer” or “computing device” may include one or more processors. As used herein, the terminology “processor” indicates one or more processors, such as one or more special purpose processors, one or more digital signal processors, one or more microprocessors, one or more controllers, one or more microcontrollers, one or more application processors, one or more central processing units (CPU)s, one or more graphics processing units (GPU)s, one or more digital signal processors (DSP)s, one or more application specific integrated circuits (ASIC)s, one or more application specific standard products, one or more field programmable gate arrays, any other type or combination of integrated circuits, one or more state machines, or any combination thereof. As used herein, the terminology “memory” indicates any computer-usable or computer-readable medium or device that can tangibly contain, store, communicate, or transport any signal or information that may be used by or in connection with any processor. For example, a memory may be one or more read-only memories (ROM), one or more random access memories (RAM), one or more registers, low power double data rate (LPDDR) memories, one or more cache memories, one or more semiconductor memory devices, one or more magnetic media, one or more optical media, one or more magneto-optical media, or any combination thereof. As used herein, the terminology “instructions” may include directions or expressions for performing any method, or any portion or portions thereof, disclosed herein, and may be realized in hardware, software, or any combination thereof. For example, instructions may be implemented as information, such as a computer program, stored in memory that may be executed by a processor to perform any of the respective methods, algorithms, aspects, or combinations thereof, as described herein.
Instructions, or a portion thereof, may be implemented as a special purpose processor, or circuitry, that may include specialized hardware for carrying out any of the methods, algorithms, aspects, or combinations thereof, as described herein. In some implementations, portions of the instructions may be distributed across multiple processors on a single device or on multiple devices, which may communicate directly or across a network such as a local area network, a wide area network, the Internet, or a combination thereof. As used herein, the term “application” refers generally to a unit of executable software that implements or performs one or more functions, tasks or activities. For example, applications may perform one or more functions including, but not limited to, vital signs monitoring, health monitoring, telephony, web browsers, e-commerce transactions, media players, travel scheduling and management, smart home management, entertainment, and the like. The unit of executable software generally runs in a predetermined environment and/or a processor. The non-limiting embodiments described herein are with respect to rings or devices and methods for making the rings or devices, where the rings or devices are vital signs monitoring or health signs monitoring rings or devices with integrated display. The ring or device and method for making the ring or device with integrated display may be modified for a variety of applications and uses while remaining within the spirit and scope of the claims. The embodiments and variations described herein, and/or shown in the drawings, are presented by way of example only and are not limiting as to the scope and spirit. The descriptions herein may be applicable to all embodiments of the device and the methods for making the devices. Disclosed herein are implementations of health or vital (collectively “vital”) signs monitoring rings or devices with integrated display (collectively “rings”) and methods for making the rings. The vital signs monitoring ring with integrated display is easily attached to and removed from the user. The rings may use a combination of sensors, printed electronics, batteries, display electronics, flexible materials or enclosures. In an implementation, the vital signs monitoring ring with integrated display includes flexible display layers based on organic, electrochromic, or quantum dot display techniques and materials. The parameters that may be displayed on the ring include, but are not limited to, heart rate (HR), heart rate variability (HRV), oxygen saturation (SpO2), body surface temperature, pH levels and the like. The vital signs monitoring ring with integrated display integrates multiple sensing modalities such as, but not limited to, photoplethysmography (PPG), oxygen saturation mapping (oximetry), temperature monitoring, and pH monitoring in one wearable device for detecting pulsatile signals or the lack thereof. In an implementation, the vital signs monitoring ring with integrated display may include a single or multi-lead electrocardiogram (ECG) sensor. The device and captured data are used to track the data from the multiple sensing modalities over time. In an implementation, the vital signs monitoring ring with integrated display is reusable and rechargeable. In an implementation, the variety of sensors may include, but is not limited to, a PPG sensor, temperature sensors, and an accelerometer. In an implementation, the PPG sensor is a transmission mode oximetry measurement sensor.
In an implementation, the transmission mode oximetry measurement sensor may include light emitting diodes (LEDs) and photodiodes. The LEDs may be red LEDs, near infrared (NIR) LEDs, and/or green LEDs. In an implementation, the vital signs monitoring ring with integrated display may include a single lead ECG sensor. In an implementation, the vital signs monitoring ring with integrated display may include a low power microcontroller with Bluetooth® for communication, an analog front-end (AFE) for measuring oximetry signals (PPG) and oxygen saturation (SpO2), an accelerometer, temperature sensor, and an oximetry layer including LEDs and photodiodes. In an implementation, the LEDs and photodiodes are on the same layer and same plane. In this instance, oximetry measurement is done via reflection or reflective oximetry. In an implementation, the LEDs and photodiodes are on the same layer and on different planes. In this instance, oximetry measurement is done via transmissive oximetry. In an implementation, a combination of reflective and transmissive oximetry can be achieved by appropriate placement of the LEDs and photodiodes. In an implementation, the AFE of the vital signs monitoring ring with integrated display may measure ECG signals and the vital signs monitoring ring with integrated display may include screen-printed Silver-Silver Chloride (Ag—AgCl) electrodes. In an implementation, the vital signs monitoring ring with integrated display may include a pH sensor. Power for the ring is internally supplied by a battery or power source as described herein. The battery provides power to the various components on the vital signs monitoring ring with integrated display. The battery may permit the vital signs monitoring ring with integrated display to be run in a continuous mode of operation for a defined time period. For example, the defined time period may be 7 days. In an implementation, the battery may be a stack of Lithium polymer or similar batteries providing 3.6 V and 50 mAh, for example. In implementations, the battery is a flexible battery. In implementations, the battery is a rigid battery. In implementations, the battery is a flexi-rigid battery. The data from the vital signs monitoring ring with integrated display may be communicated to a mobile device for display or analysis. In an implementation, the communication may be done wirelessly, via Bluetooth®, and the like. The data may include ECG live data, heart rate, heart rate variability, fall detection, SpO2, pH, body surface temperature, and the like. In an implementation, the flexible battery is rechargeable. FIG.1is a perspective view of an example vital signs monitoring ring with integrated display1000in accordance with certain implementations,FIG.2is a top view of the vital signs monitoring ring with integrated display1000in accordance with certain implementations,FIG.3is a cross-sectional view along line B-B of the vital signs monitoring ring with integrated display1000in accordance with certain implementations, andFIGS.4A-Bare perspective views of a printed circuit board assembly (PCBA) layer1100and a ring shell1200of the vital signs monitoring ring with integrated display1000in accordance with certain implementations. In an implementation, the PCBA layer1100is configured to be cylindrically positioned on and attached to the ring shell1200. In an implementation, the PCBA layer1100is configured to be cylindrically positioned on and bonded to the ring shell1200.
The vital signs monitoring ring with integrated display1000includes the PCBA layer1100and the ring shell1200. As shown and described herein, the PCBA layer1100may be a flexible double surface populated PCBA. In an implementation, the PCBA layer1100may include a display area1300. In an implementation, the display area1300may be a 2-digit, 7-segment display. In an implementation, the display area1300may have more or fewer digits. In an implementation, the display area1300may be a printed display. In an implementation, the display area1300may be a printed LED display implemented using organic, electrochromic, or quantum dot display techniques and materials as described herein. In an implementation, the display area1300may display oxygen saturation, pH levels, temperature, heart rate, and other physiological parameters. In an implementation, the display area1300may include step counts and other action-related parameters. In an implementation, the display area1300may include multiple display areas, where each display area may display a different physiological or action-related parameter. In an implementation, the PCBA layer1100may include a pair of printed silver-silver chloride (Ag—AgCl) electrodes for ECG sensor measurements or for other sensor measurements. In an implementation, the electrodes are screen-printed Ag—AgCl electrodes. In an implementation, there is one electrode on each surface of the PCBA layer1100, where a first electrode contacts a user surface when the vital signs monitoring ring with integrated display1000is positioned on a user digit and a second electrode is touchable or engageable by a user to complete a sensor circuit with the first electrode. In an implementation, the first electrode contacts the user surface via a hydrogel layer. In an implementation, the PCBA layer1100may include a variety of sensors (as shown and described herein) including, but not limited to, PPG sensor, ECG sensor, an accelerometer, pH sensor, and temperature sensor(s). In an implementation, the accelerometer may be used for activity tracking such as steps, fall detection, sleep efficiency, and sleep staging. In an implementation, one or more temperature sensors may be used to determine a temperature profile for a wound, for example. The one or more temperature sensors may sense or monitor surface temperatures of a localized body area. In an implementation, the pH sensor may monitor the pH levels of a localized body area. The pH level may vary from 0 to 14 and, as stated herein, may be displayed via layer1. For example, the pH of normal healing wounds ranges from 5.5 to 6.5 and the pH of nonhealing wounds is greater than 6.5. In an implementation, the pH sensors may be potentiometric pH sensors. In an implementation, the pH sensors may be implemented using carbon/polyaniline and Ag—AgCl electrodes. In an implementation, the ring shell1200may have a spool or spindle type form including a cylinder1210having a pair of rims or ridges1220and1222at each end of the cylinder1210. In an implementation, the cylinder1210may include tabs or projections1230for maintaining the PCBA layer1100on the cylinder1210. In an implementation, the cylinder1210may include windows1240for operation of sensors on the PCBA layer1100as described herein. For example, the windows1240may allow light transmissions from light emitting diodes (LEDs) to impact a user surface and be detected by photodiodes after traveling through a user digit or the like.
In an implementation, the cylinder1210may include charging ports1250for charging and recharging the vital signs monitoring ring with integrated display1000. In an implementation, the ring shell1200may be flexi-rigid. In an implementation, the ring shell1200may be rigid. In an implementation, the ring shell1200may be flexible. FIG.5Ais a top view or outer surface5100of a PCBA layer5000for a vital signs monitoring ring with integrated display in accordance with certain implementations. The outer surface5100of the PCBA layer5000may include a display section5200, a switch5300, an accelerometer5400, an analog front-end5500, and other components. In an implementation, the display section5200may be a segmented LED display which includes a plurality of LEDs.FIG.5Bis a bottom view or inner surface5150of the PCBA layer5000ofFIG.5Ain accordance with certain implementations. The inner surface5150of the PCBA layer5000may include charging terminals5600and5650, temperature sensor5700, photodiodes5800and5850, sensor LEDs5900and5950, and other components.FIG.5Cis a side view of the PCBA layer5000ofFIG.5Ain accordance with certain implementations. The side view of the PCBA layer5000may show the switch5300, the analog front-end5500, the photodiode5800, the sensor LEDs5900, and other components. The PCBA layer5000and the components shown herein function as described herein in the specification. FIG.6is a perspective view of an example vital signs monitoring ring with integrated display6000in accordance with certain implementations.FIG.7is an exploded perspective view of the vital signs monitoring ring with integrated display6000ofFIG.6in accordance with certain implementations. The vital signs monitoring ring with integrated display6000includes a ring shell6100, a PCBA layer6200, and an overmold layer6300. As shown and described herein, the PCBA layer6200may be a flexible double surface populated PCBA. A switch6400is connected to the PCBA layer6200and a battery6500is attached to the ring shell6100and connected to the PCBA layer6200. The ring shell6100and the PCBA layer6200and components function as described herein in the specification. For example, the PCBA layer6200may include a variety of sensors (as shown and described herein) including, but not limited to, a temperature sensor6600. The overmold layer6300covers the ring shell6100, the PCBA layer6200, and the battery6500, and provides access to the switch6400. In an implementation, the PCBA layer6200may include a display area6210. In an implementation, the display area6210may be a 2-digit, 7-segment display. In an implementation, the display area6210may have more or fewer digits. In an implementation, the display area6210may be a printed display. In an implementation, the display area6210may be a printed LED display implemented using organic, electrochromic, or quantum dot display techniques and materials as described herein. In an implementation, the display area6210may display oxygen saturation, pH levels, temperature, heart rate, and other physiological parameters. In an implementation, the display area6210may include step counts and other action-related parameters. In an implementation, the display area6210may include multiple display areas, where each display area may display a different physiological or action-related parameter.
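As a concrete illustration of driving a 2-digit, 7-segment display area such as display area6210, the sketch below encodes a two-digit reading (e.g., a heart rate of 72) into per-segment on/off states. The segment ordering and the rendering approach are assumptions for illustration, not details from this disclosure.

```python
# Illustrative 7-segment encoding for a 2-digit display area; segment order
# (a, b, c, d, e, f, g) and the rendering are assumptions.
SEGMENTS = {
    "0": "abcdef", "1": "bc", "2": "abdeg", "3": "abcdg", "4": "bcfg",
    "5": "acdfg", "6": "acdefg", "7": "abc", "8": "abcdefg", "9": "abcdfg",
}

def encode_two_digit(value: int) -> list[set[str]]:
    # Clamp to the two digits the display can show, then map digit -> segments.
    text = f"{value % 100:02d}"
    return [set(SEGMENTS[ch]) for ch in text]

print(encode_two_digit(72))  # segments to energize for "7" and "2"
```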
FIG.8Ais a perspective view of a vital signs monitoring ring with integrated display8000in accordance with certain implementations,FIG.8Bis a perspective view of an outer layer8100of the vital signs monitoring ring with integrated display8000in accordance with certain implementations,FIG.8Cis a perspective view of a PCBA layer8200of the vital signs monitoring ring with integrated display8000in accordance with certain implementations, andFIG.8Dis a perspective view of an inner layer8300of the vital signs monitoring ring with integrated display8000in accordance with certain implementations. In an implementation, the vital signs monitoring ring with integrated display8000may include a ring structure for holding or housing the outer layer8100, the PCBA layer8200, and the inner layer8300such as, for example, the ring shell1200ofFIG.1. In an implementation, the outer layer8100, the PCBA layer8200, and the inner layer8300may be flexi-rigid. In an implementation, the outer layer8100, the PCBA layer8200, and the inner layer8300may be flexible. In an implementation, the outer layer8100of the vital signs monitoring ring with integrated display8000may include a display area8110. In an implementation, the display area8110may be a 3-digit display, for example. In an implementation, the display area8110may have more or fewer digits. In an implementation, the display area8110may be a printed display. In an implementation, the display area8110may be a printed LED display implemented using organic, electrochromic, or quantum dot display techniques and materials as described herein. In an implementation, the display area8110may display oxygen saturation, pH levels, temperature, heart rate, and other physiological parameters. In an implementation, the display area8110may include step counts and other action-related parameters. In an implementation, the display area8110may include multiple display areas, where each display area may display a different physiological or action-related parameter. In an implementation, the outer layer8100of the vital signs monitoring ring with integrated display8000may include a first electrode of a pair of printed silver-silver chloride (Ag—AgCl) electrodes for ECG sensor measurements or for other sensor measurements. In an implementation, the first electrode is a screen-printed silver-silver chloride (Ag—AgCl) electrode which is touchable or engageable by a user to complete a sensor circuit with a second electrode. In an implementation, the outer layer8100of the vital signs monitoring ring with integrated display8000may include a power button to turn on the vital signs monitoring ring with integrated display8000. In an implementation, the PCBA layer8200of the vital signs monitoring ring with integrated display8000may include a power source, passive and active electronics, sensors, and the like as described herein. The PCBA layer8200of the vital signs monitoring ring with integrated display8000may be electrically and mechanically connected to the outer layer8100of the vital signs monitoring ring with integrated display8000. In an implementation, the power source may be a stack of Lithium polymer or similar batteries providing 3.6 V and 50 mAh, for example. In an implementation, the power source may be a flexible battery. The power source provides power to the various components on the vital signs monitoring ring with integrated display8000.
In an implementation, the inner layer8300of the vital signs monitoring ring with integrated display8000may include the transmission mode oximetry measurement sensor components8310which may include LEDs8320and photodiodes8330and function as described herein. The LEDs8320may be red LEDs, near infrared (NIR) LEDs, and/or green LEDs. In an implementation, the LEDs8320and photodiodes8330may be implemented using electrochromic, organic or quantum dot materials and techniques. In an implementation, the inner layer8300of the vital signs monitoring ring with integrated display8000may include the second electrode of the pair of printed Ag—AgCl electrodes for the ECG sensor measurements or for other sensor measurements. In an implementation, the second electrode contacts a user surface when the vital signs monitoring ring with integrated display8000is positioned on the user digit. In an implementation, the second electrode contacts the user surface via a hydrogel layer. The inner layer8300of the vital signs monitoring ring with integrated display8000may be electrically and mechanically connected to the PCBA layer8200of the vital signs monitoring ring with integrated display8000. Oximeters sense oxygen saturation in tissues by optically quantifying concentrations of oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb). Pulse oximetry is one modality for ratiometric optical measurements on pulsatile arterial blood by leveraging photoplethysmography (PPG) at a minimum of two distinct wavelengths. A PPG sensor comprises optoelectronic components such as LEDs and photodiodes. In an implementation, the transmission mode oximetry measurement components may use red and NIR LEDs or red and green LEDs. The molar absorption coefficients of HbO2and Hb are disparate at the red and NIR wavelengths. The red LEDs and the NIR LEDs act as emitters (converting electrical energy into light energy) where light is transmitted at 612 nm and 712 nm wavelengths, respectively. In an implementation, red and green (532 nm) LEDs may also be used as the LED combination. The photodiodes sense the non-absorbed light from the LEDs. The signals are inverted by means of an operational amplifier. These signals are interpreted as light that has been absorbed by the tissue being probed and are assigned to direct current (DC) and alternating current (AC) components. The DC component is treated as light absorbed by the tissue, venous blood, and non-pulsatile arterial blood. The AC component is treated as pulsatile arterial blood. In an implementation, the data may be streamed to a device application through a Bluetooth® connection.FIGS.11A-Bare example diagrams of an interface screen11000on a device for interacting with a vital signs monitoring ring with integrated display in accordance with certain implementations. The interface screen11000may have a link or tab11100for selection of a sub-menu for ring oximetry display11200. The ring oximetry display11200may show, for example, SpO211210, temperature11230, step count11220, and other physiological or action parameters. As referenced herein above, the vital signs monitoring ring with integrated display may also include an application which may run on a device such as mobile devices, end user devices, cellular telephones, Internet Protocol (IP) devices, mobile computers, laptops, handheld computers, PDAs, personal media devices, smartphones, notebooks, notepads, phablets, smart watches, and the like (collectively “user device”).
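The DC/AC decomposition just described is the basis of the classic "ratio of ratios" computation used in pulse oximetry. Below is a minimal sketch, assuming the red and NIR photodiode traces are already available as arrays; the linear calibration coefficients are illustrative placeholders, since real coefficients come from empirical calibration against reference measurements.

```python
import numpy as np

def ratio_of_ratios(red: np.ndarray, nir: np.ndarray) -> float:
    """DC is taken as each trace's mean and AC as its peak-to-peak
    excursion, mirroring the decomposition described above."""
    ac_red, dc_red = np.ptp(red), np.mean(red)
    ac_nir, dc_nir = np.ptp(nir), np.mean(nir)
    return (ac_red / dc_red) / (ac_nir / dc_nir)

def estimate_spo2(r: float) -> float:
    # Illustrative textbook-style linear calibration, SpO2 ~ a - b*R.
    return 110.0 - 25.0 * r
```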
The vital signs monitoring ring with integrated display may wirelessly communicate with the user device, and the application, together with the user device, may analyze, display, and provide alerts to a user regarding the vital signs data collected by the vital signs monitoring ring with integrated display. The vital signs monitoring ring with integrated display may interface with the application to measure, stream, and record real-time data for providing comprehensive sensing information to user(s).FIGS.11A-Bare example diagrams of interface screens for reviewing the sensor data as described herein. FIG.9is an example diagram of a hardware architecture of a vital signs monitoring ring with integrated display9000in accordance with certain implementations. The vital signs monitoring ring with integrated display9000includes an analog front-end (AFE)9100which may be connected to a variety of sensors present on the vital signs monitoring ring with integrated display9000. The AFE9100may be connected to a processor9200, which is further connected to a LED display9300, temperature sensor(s)9400, oximetry sensor9500, power9600, accelerometer9700, and an antenna9800. In an implementation, the processor9200is a low power MCU with integrated Bluetooth®. In an implementation, the antenna9800is a Bluetooth® antenna which may communicate with a device9900using a corresponding antenna9950. In an implementation, the hardware architecture may, in part, be implemented on the PCBA layer1100, the PCBA layer8200, and the like. FIG.10is an example diagram of a software architecture of a vital signs monitoring ring with integrated display10000in accordance with certain implementations. The processor software/firmware of the vital signs monitoring ring with integrated display10000includes, but is not limited to, a power module10100, a data transfer module10150, and drivers10200for the LEDs10210, AFE10220, Bluetooth® stack10230, accelerometer10240, display10250, temperature sensor10260, oximetry10270, serial peripheral interface10280, and the like. An application device10500may include, but is not limited to, applications to process and display SpO2data10510, temperature10520, step count/fall detection (via accelerometer)10530, and like data. The application device further includes a data storage mode10540, a Bluetooth® stack10550, and other libraries10560. In an implementation, the software/firmware architecture may, in part, be implemented on or with the processor9200. FIG.12is a flowchart for a method12000for transmission mode oximetry measurement for a vital signs monitoring ring with integrated display in accordance with some implementations. The method12000includes: directing12100light from LEDs at a user digit and sensing transmitted light at photodiodes; detecting12200specific wavelengths; generating12300SpO2signal; pre-processing12400the generated SpO2signal; detecting12500signal characteristics of the pre-processed SpO2signal; making12600absorption ratio measurements; applying12700absorption ratio measurements against a calibration model; and predicting12800SpO2level. The method12000includes directing12100light from LEDs at a user digit and sensing transmitted light at photodiodes. The vital signs monitoring ring with integrated display may be positioned on a digit of a user. Upon activation, the LEDs transmit or emit light through the user digit. The photodiodes sense or capture the transmitted light as it travels through the user digit. In an implementation, red and NIR LEDs are used.
In an implementation, red and green LEDs are used. The method12000includes detecting12200specific wavelengths. Signals associated with specific wavelengths are separated or filtered from the captured transmitted light. In an implementation, red and NIR wavelengths are filtered. In an implementation, red and green wavelengths are filtered. The method12000includes generating12300SpO2signal and pre-processing12400the generated SpO2signal. A SpO2signal is generated from the filtered wavelength signals and digital signal processing is applied to the SpO2signal. The method12000includes detecting12500signal characteristics of the pre-processed SpO2signal. Signal characteristics determinative for SpO2are determined, such as, but not limited to, valleys and peaks of the processed SpO2signal. The method12000includes making12600absorption ratio measurements. Absorption ratio measurements are computed from the SpO2signal characteristics. The method12000includes applying12700absorption ratio measurements against a calibration model. The absorption ratio measurements are normalized or calibrated against a model. The method12000includes predicting12800SpO2level. The calibrated absorption ratio measurements are used to predict a SpO2level. FIG.13is a diagram of an example Organic Light Emitting Diode (OLED) stack13000for a vital signs monitoring ring with integrated display in accordance with certain implementations. The OLED stack13000may include a seal layer13100, a cathode layer13200, an emissive layer13300, a conductive layer13400, an anode layer13500, and a substrate13600. In an implementation, the emissive layer13300may be a film of organic compound which emits light in response to current injection. The organic compound may be organic polymers, inks, light emitting polymers, and the like. In an implementation, the conductive layer13400may be organic polymers, inks, and the like. FIG.14is a diagram of an example Electrochromic Device (ECD) stack14000for a vital signs monitoring ring with integrated display in accordance with certain implementations. The ECD stack may include a substrate14100, an electrolyte layer14200, electrochromic layers14300and14310, electrodes14400and14410, and a substrate14500. In this instance, the electrochromic materials are organic or inorganic substances that change color when charged with electricity. The ECD controls optical properties, such as transmission, absorption, reflectance and/or emittance, in a continual but reversible manner by applying voltage. The ECDs may be printed on plastics, paper, and the like and provide flexible yet robust structures. The ECDs use ultra-low power and are activated by small currents. The ECDs can be integrated with sensors for motion, touch, proximity, temperature and the like. FIGS.15A and15Bare architectures for quantum dot light emitting diodes (QLEDs).FIG.15Ais a diagram of a quantum dot15000; quantum dots are semiconductor particles with optical and electrical properties in the nanometer size regime. The quantum dot15000, in general, includes a core15100, a shell15200and ligands15300. The core15100is the material that emits color, the shell15200is a coating that protects the core15100, and the ligands15300are long-chain molecules that allow the quantum dots to be printed in a liquid form. FIG.15Bis a diagram of a QLED stack15500for a vital signs monitoring ring with integrated display in accordance with certain implementations.
The QLED stack15500includes a negative voltage electrode15600, a charge injection material layer15650, a core-shell quantum dot layer15700, a charge injection layer15750, a positive voltage electrode15800, and a transparent substrate15850. The QLEDs produce pure monochromatic light (red, green, blue) and have low power consumption. Charge injected in the QLED stack15500results in electroluminescence. The chemical make-up and size of the quantum dots allows tuning of the color of the emitted light. In general, a vital signs monitoring ring with integrated display includes a ring housing, the ring housing comprising at least two windows, and a printed circuit board assembly (PCBA) layer configured to be attached to the ring housing. The PCBA layer includes a display section, a sensor section, a transmission mode oximetry measurement section configured to be in alignment with the at least two windows, a power supply, and a switch configured to power on the vital signs monitoring ring with integrated display via the power supply. The display section is configured to display physiological and action parameters associated with a user by sensing the physiological and action signals from a digit of a user wearing the vital signs monitoring ring with integrated display using at least the sensor section and the transmission mode oximetry measurement section. In an implementation, the sensor section includes at least one of an accelerometer, an electrocardiogram (ECG) sensor, a temperature sensor, and a pH sensor. In an implementation, the PCBA layer further includes a first printed silver-silver chloride electrode printed on a top surface of the PCBA layer, a second printed silver-silver chloride electrode printed on a bottom surface of the PCBA layer, and a hydrogel layer connected to the second printed silver-silver chloride electrode, where the second printed silver-silver chloride electrode contacts a user digit through a third window when the vital signs monitoring ring with integrated display is positioned on the digit, and where a sensor circuit is completed with the second printed silver-silver chloride electrode when the user touches the first printed silver-silver chloride electrode. In an implementation, the transmission mode oximetry measurement components further include first wavelength light emitting diodes configured to transmit light at the digit, second wavelength light emitting diodes configured to transmit light at the digit, and photodiodes configured to capture transmitted light traveling through the digit, where one of the first wavelength light emitting diodes and the second wavelength light emitting diodes, or the photodiodes, are aligned in one window of the at least two windows and a remaining one of the first wavelength light emitting diodes and the second wavelength light emitting diodes, or the photodiodes, are aligned in another window of the at least two windows. In an implementation, the PCBA layer further includes a wireless component which is configured to transmit at least sensor data to a monitoring device.
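To illustrate the kind of compact frame the wireless component above might transmit to a monitoring device, here is a sketch that packs one sample of SpO2, temperature, and step count into five bytes. The field layout is an assumed example for illustration, not the device's actual protocol.

```python
import struct

def pack_vitals(spo2_percent: int, temp_c: float, steps: int) -> bytes:
    """Pack one sample as little-endian: uint8 SpO2 (%), int16
    temperature in centidegrees C, uint16 step count."""
    return struct.pack("<BhH", spo2_percent, round(temp_c * 100), steps)

frame = pack_vitals(spo2_percent=98, temp_c=36.6, steps=4210)
assert len(frame) == 5  # small enough for a single wireless notification
```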
In an implementation, the PCBA layer further includes a processor, the processor configured to filter the captured light signal to determine a first wavelength signal and a second wavelength signal, generate an oximetry signal from the first wavelength signal and the second wavelength signal, determine signal characteristics of the oximetry signal, determine absorption ratio measurements from the determined signal characteristics of the oximetry signal, calibrate the absorption ratio measurements, and predict an oximetry level. In an implementation, the ring housing further includes a charging port, the power source configured to be charged via the charging port. In general, a vital signs monitoring ring with integrated display includes a first layer having a display section and an activation switch, a second layer having a sensor section and a power supply, and a third layer having an oximetry sensor. The first layer, the second layer, and the third layer are electrically and mechanically connected and collectively configured to be in a cylindrical configuration, where activation of the activation switch powers on the vital signs monitoring ring with integrated display via the power supply, and the display section is configured to display physiological and action parameters associated with a user by sensing the physiological and action signals from a digit of a user wearing the vital signs monitoring ring with integrated display using at least the sensor section and the oximetry sensor. In an implementation, the sensor section includes at least one of an accelerometer, an electrocardiogram (ECG) sensor, a temperature sensor, and a pH sensor. In an implementation, the first layer includes a first printed silver-silver chloride electrode printed on a top surface of the first layer, the third layer includes a second printed silver-silver chloride electrode printed on a bottom surface of the third layer, and a hydrogel layer connected to the second printed silver-silver chloride electrode, where the second printed silver-silver chloride electrode contacts a user digit when the vital signs monitoring ring with integrated display is positioned on the digit, and where a sensor circuit is completed with the second printed silver-silver chloride electrode when the user touches the first printed silver-silver chloride electrode. In an implementation, the oximetry sensor includes transmission mode oximetry measurement components, which further include first wavelength light emitting diodes configured to transmit light at the digit, second wavelength light emitting diodes configured to transmit light at the digit, and photodiodes configured to capture transmitted light traveling through the digit. In an implementation, the oximetry sensor includes reflective mode oximetry measurement components, which further include photodiodes configured to capture transmitted light reflected from the digit. In an implementation, the ring further includes a ring shell, the ring shell configured to hold the first layer, the second layer, and the third layer. In an implementation, the ring shell further includes a charging port, the power source configured to be charged via the charging port.
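The processor pipeline recited above (filter the wavelength channels, detect signal characteristics, form absorption ratios, calibrate, predict) extends the ratio-of-ratios sketch shown earlier with filtering and peak detection. The sketch below assumes the two wavelength channels have already been separated into arrays; the filter band and the calibration coefficients are illustrative assumptions rather than the patented implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def _ac_amplitude(x: np.ndarray, fs: float) -> float:
    """Signal characteristics: mean peak height minus mean valley depth."""
    peaks, _ = find_peaks(x, distance=int(fs * 0.4))
    valleys, _ = find_peaks(-x, distance=int(fs * 0.4))
    return float(x[peaks].mean() - x[valleys].mean())

def predict_spo2(red: np.ndarray, nir: np.ndarray, fs: float = 100.0) -> float:
    # Pre-process: isolate the pulsatile band (~0.5-5 Hz)
    b, a = butter(2, [0.5, 5.0], btype="band", fs=fs)
    red_ac = _ac_amplitude(filtfilt(b, a, red), fs)
    nir_ac = _ac_amplitude(filtfilt(b, a, nir), fs)
    # Absorption ratio measurement (ratio of ratios)
    r = (red_ac / np.mean(red)) / (nir_ac / np.mean(nir))
    # Apply an (illustrative) linear calibration model and predict
    return 110.0 - 25.0 * r
```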
In an implementation, the second layer further comprises a processor, the processor configured to filter the captured light signal to determine a first wavelength signal and a second wavelength signal, generate an oximetry signal from the first wavelength signal and the second wavelength signal, determine signal characteristics of the oximetry signal, determine absorption ratio measurements from the determined signal characteristics of the oximetry signal, calibrate the absorption ratio measurements, and predict an oximetry level. In general, a vital signs monitoring ring with integrated display includes a display layer including a switch and a sensor layer including at least an accelerometer, a temperature sensor, and a transmission mode oximetry sensor. The display layer and the sensor layer are electrically and mechanically connected and collectively configured to be in a cylindrical configuration, where activation of an activation switch powers on the vital signs monitoring ring with integrated display via a power supply, and the display section is configured to display sensor data associated with a user by sensing signals from a digit of a user wearing the vital signs monitoring ring with integrated display using at least the accelerometer, the temperature sensor, and the transmission mode oximetry sensor. In an implementation, the sensor layer further includes at least one of an electrocardiogram (ECG) sensor and a pH sensor. In an implementation, the display layer includes a first printed silver-silver chloride electrode printed on a top surface of the display layer, the sensor layer includes a second printed silver-silver chloride electrode printed on a bottom surface of the sensor layer, and a hydrogel layer connected to the second printed silver-silver chloride electrode, where the second printed silver-silver chloride electrode contacts a user digit when the vital signs monitoring ring with integrated display is positioned on the digit, and where a sensor circuit is completed with the second printed silver-silver chloride electrode when the user touches the first printed silver-silver chloride electrode. In an implementation, the transmission mode oximetry sensor further includes first wavelength light emitting diodes configured to transmit light at the digit, second wavelength light emitting diodes configured to transmit light at the digit, and photodiodes configured to capture transmitted light traveling through the digit. In an implementation, the sensor layer further comprises a processor, the processor configured to filter the captured light signal to determine a first wavelength signal and a second wavelength signal, generate an oximetry signal from the first wavelength signal and the second wavelength signal, determine signal characteristics of the oximetry signal, determine absorption ratio measurements from the determined signal characteristics of the oximetry signal, calibrate the absorption ratio measurements, and predict an oximetry level. The construction and arrangement of the methods as shown in the various exemplary embodiments are illustrative only. Although only a few embodiments have been described in detail in this disclosure, many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials and components, colors, orientations, etc.).
For example, the position of elements may be reversed or otherwise varied and the nature or number of discrete elements or positions may be altered or varied. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. The order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. Other substitutions, modifications, changes, and omissions may be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure. Although the figures may show a specific order of method steps, the order of the steps may differ from what is depicted. Also, two or more steps may be performed concurrently or with partial concurrence. Such variation will depend on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps. While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
11857343 | DETAILED DESCRIPTION Methods are provided for the fabrication of microneedles. Microneedles fabricated according to the herein described methods will generally be constructed of multiple lengths of wire wound together, brazed, and further manipulated to include a reversible engagement feature. The subject microneedles may find use in a variety of applications and, among other purposes, the reversible engagement feature of such a microneedle may be employed in implanting an implantable device into a biological tissue. Also provided are methods of inserting an implantable device into a biological tissue having an outer membrane. The subject methods may include ablating a section of the outer membrane and inserting the implantable device through the ablated section of outer membrane, including e.g., where the implantable device is inserted using a microneedle including e.g., those microneedles for which methods of fabrication are provided herein. Before the present methods are described, it is to be understood that this invention is not limited to the particular methods or compositions described, as such may, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting, since the scope of the present invention will be limited only by the appended claims. Where a range of values is provided, it is understood that each intervening value, to the tenth of the unit of the lower limit unless the context clearly dictates otherwise, between the upper and lower limits of that range is also specifically disclosed. Each smaller range between any stated value or intervening value in a stated range and any other stated or intervening value in that stated range is encompassed within the invention. The upper and lower limits of these smaller ranges may independently be included or excluded in the range, and each range where either, neither or both limits are included in the smaller ranges is also encompassed within the invention, subject to any specifically excluded limit in the stated range. Where the stated range includes one or both of the limits, ranges excluding either or both of those included limits are also included in the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although any methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, some potential and preferred methods and materials are now described. All publications mentioned herein are incorporated herein by reference to disclose and describe the methods and/or materials in connection with which the publications are cited. It is understood that the present disclosure supersedes any disclosure of an incorporated publication to the extent there is a contradiction. As will be apparent to those of skill in the art upon reading this disclosure, each of the individual embodiments described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present invention. Any recited method can be carried out in the order of events recited or in any other order which is logically possible.
It must be noted that as used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a wire” includes a plurality of such wires and reference to “the microneedle” includes reference to one or more microneedles and equivalents thereof, e.g., insertion needles, known to those skilled in the art, and so forth. The publications discussed herein are provided solely for their disclosure prior to the filing date of the present disclosure. Nothing herein is to be construed as an admission that the present invention is not entitled to antedate such publication by virtue of prior invention. Further, the dates of publication provided may be different from the actual publication dates which may need to be independently confirmed. Methods As summarized above, embodiments of the present disclosure include those directed to methods of fabricating a microneedle. The fabricated microneedles, described in more detail below, may find use in a variety of applications, including but not limited to e.g., insertion into a biological tissue of interest, delivery of a cargo into a biological tissue, implantation of an implantable device into a biological tissue, and the like. Depending on the particular application, microneedles may thus be fabricated to have certain characteristics compatible with or desirable for their insertion into biological tissues. Such characteristics will vary and may include but are not limited to e.g., biocompatibility, rigidity, strength, minimal diameter, minimal displacement volume, and the like. Non-limiting examples of methods, systems, devices, etc., in which a microneedle fabricated as described herein may be employed include but are not limited to e.g., those described in PCT International Publication No. WO 2016/126340; the disclosure of which is incorporated herein by reference in its entirety. The subject microneedles will generally be fabricated from multiple lengths of wire where two or more lengths of wire may be wound together. The actual number of lengths of wire wound together in fabricating a microneedle in the instant methods will vary and may range from 2 to 10 or more, including but not limited to e.g., 2 to 10, 2 to 8, 2 to 6, 2, 3, 4, 5, 6, 7, 8, 9, 10, etc. The lengths of wire used during fabrication may be different sections of the same continuous wire (e.g., looped) or may be sections of separate (i.e., individual) wires. It is noted that the individual lengths of wire from which the microneedles of the subject disclosure are fabricated will generally, but not necessarily, be unsuitable for use as a microneedle without one or more of the fabrication steps described herein. Put another way, the individual wires will generally, but not necessarily, have one or more characteristics that prevent their use alone as a microneedle as described herein, such as e.g., insufficient rigidity, insufficient strength, insufficient size, lack of certain desirable features (e.g., an engagement feature), etc. In general, the subject methods of fabricating a microneedle will include winding multiple lengths of wire to form a helix, brazing the helix to generate a microneedle that includes the multiple lengths of wire, and further manipulating the lengths of wire of the brazed microneedle to produce desired features.
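Read as a process, the wind-braze-manipulate sequence just summarized can be outlined as a simple pipeline. The sketch below is illustrative scaffolding only; the function names and signatures are hypothetical, and each stage is detailed in the sections that follow.

```python
def wind(wires, tension_n: float):
    """Wind two or more wire lengths under tension into a helix."""
    ...

def braze(helix, braze_alloy: str):
    """Join the wound wires by wetting the helix with molten braze."""
    ...

def manipulate(needle):
    """Fatigue-fracture an engagement feature and sharpen the tip."""
    ...

def fabricate_microneedle(wires, tension_n: float, braze_alloy: str):
    return manipulate(braze(wind(wires, tension_n), braze_alloy))
```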
Wire Winding As summarized above, the subject methods of fabricating a microneedle will generally include winding multiple lengths of wire together. For simplicity, in the following, wire winding will be most frequently described in relation to winding of two wires together; however, the instant methods are not so limited and an ordinary skilled artisan will readily understand that two lengths of wires or more than two lengths of wires may be employed. For example, as specifically set forth in the examples below, in some instances, four lengths of wire may be employed. Furthermore, as used herein the term “length of wire” may refer to a wire or a portion of a wire and, as such, a single wire may have multiple lengths or a length of wire may encompass an entire wire. As such, in the following, the term “length of wire” may be used interchangeably with “wire” for simplicity in some instances. An ordinary skilled artisan will readily understand that where two lengths of wire are described as being wound together the lengths may be portions of the same wire, i.e., looped back on itself, or may be present on separate wires. Winding of wires of the subject methods may be performed under tension. Such winding under tension may generate a helix. In some instances, a helix of the subject disclosure may be referred to herein as a primary helix. The term “primary helix”, as used herein, generally refers to a helix of wound wires wherein one of the wires of the helix functions as the point of the microneedle after fabrication. A wire of the primary helix may also be utilized to form an engagement feature of a microneedle. Tension may be applied to the wires using any suitable method including e.g., mounting the wires in a spring loaded apparatus configured to tension the wires during winding of a helix, e.g., a primary helix. Suitable apparatus for winding wires under tension include e.g., a winding jig having one or more springs configured to tension the wires. In some instances, a winding jig having two springs positioned at each end of the sections of wire may be employed, such as but not limited to e.g., an exemplary winding jig as described herein. Where more than two wires are employed the additional wire(s) may be integrated into the primary helix (e.g., forming a three-stranded helix, a four-stranded helix, etc.) or may form a separate helix (e.g., around the primary helix, including e.g., those helices referred to herein as secondary helices). Wire made of any suitable and appropriate material may be employed in the subject methods. Suitable and appropriate wire materials will include those materials that are sufficiently rigid and strong. In some embodiments, suitable wire is made of a rigid material, such as tungsten or an alloy thereof, and therefore the generated microneedle is correspondingly rigid (stiff). Convenient stiff, strong materials may include those having a relatively high Young's modulus. Additional suitable materials include, but are not limited to: tungsten carbide, iridium, tungsten-rhenium alloy, carbon fiber, boron, boride (e.g., BN), ceramic oxides and nitrides, and composite materials. In some embodiments, a suitable material may be alloyed with hafnium carbide (HfC) or zirconium carbide (ZrC) for additional stiffness. For example, in some instances, a wire material employed may be tungsten or an alloy thereof, e.g., tungsten-rhenium, including those alloyed with HfC or ZrC.
As noted above, suitable wire materials may also include carbon based materials and thus wires of the present fabrication methods may include one or more carbonaceous solids or carbonaceous materials. Suitable carbonaceous materials will vary and may include but are not limited to e.g., carbon fiber, carbon nanotube, etc. Wires utilized in the subject methods may have a wide range of dimensions and thus a microneedle produced from the wound wires may correspondingly have a large variety of dimensions and geometries. Wires utilized in the instant fabrication methods may range in diameter from 4 μm or less to 100 μm or more, including but not limited to e.g., from 4 μm to 100 μm, from 4 μm to 50 μm, from 4 μm to 40 μm, from 4 μm to 30 μm, from 4 μm to 25 μm, from 4 μm to 20 μm, from 4 μm to 15 μm, from 4 μm to 10 μm, from 5 μm to 100 μm, from 5 μm to 50 μm, from 5 μm to 40 μm, from 5 μm to 30 μm, from 5 μm to 25 μm, from 5 μm to 20 μm, from 5 μm to 15 μm, from 5 μm to 10 μm, 4 μm, 5 μm, 6 μm, 7 μm, 8 μm, 9 μm, 10 μm, 11 μm, 12 μm, 13 μm, 14 μm, 15 μm, 16 μm, 17 μm, 18 μm, 19 μm, 20 μm, etc. In some instances, the diameters of the wires employed in fabrication, whether the same or different, will be less than 100 μm, including e.g., less than 75 μm, less than 50 μm, etc. The diameter of wound wires may essentially be the sum of the diameters of the individually wound wires. In some instances, the produced microneedle may be described according to its “maximum diameter”. As used below, the term “maximum diameter” means the diameter of the microneedle at the point along its length (of the portion of the microneedle that is inserted or is to be inserted in a biological tissue) at which it is its widest. For example, in some cases, the microneedle has one diameter at the point of contact with the biological tissue, but another diameter farther up the microneedle (e.g., due to a change in geometry of the microneedle), and the ‘maximum diameter’ describes the diameter when the microneedle is its widest (along the portion of the microneedle that is inserted or is to be inserted). Likewise, the term “maximum cross sectional area” is used to mean the cross sectional area of the microneedle at the point along its length (of the portion of the microneedle that is inserted or is to be inserted in a biological tissue) at which it is largest (i.e., the ‘maximum cross sectional area’ describes the cross-sectional area when the microneedle is its widest, along the portion of the microneedle that is inserted or is to be inserted). In some cases, the fabricated microneedle has a maximum diameter (e.g., along the length of insertion) of 80 μm or less (e.g., 70 μm or less, 65 μm or less, 60 μm or less, 55 μm or less, 50 μm or less, 45 μm or less, 40 μm or less, 35 μm or less, 30 μm or less, or 25 μm or less). For example, in some cases, the microneedle has a maximum diameter (e.g., along the length of insertion) of 65 μm or less. In some cases, the microneedle has a maximum diameter (e.g., along the length of insertion) of 35 μm or less.
In some cases, the fabricated microneedle has a maximum diameter (e.g., along the length of insertion) in a range of from 10 to 80 μm (e.g., from 10 to 70 μm, from 10 to 65 μm, from 10 to 60 μm, from 10 to 55 μm, from 10 to 50 μm, from 10 to 45 μm, from 10 to 40 μm, from 10 to 35 μm, from 15 to 80 μm, from 15 to 70 μm, from 15 to 65 μm, from 15 to 60 μm, from 15 to 55 μm, from 15 to 50 μm, from 15 to 45 μm, from 15 to 40 μm, from 15 to 35 μm, from 20 to 80 μm, from 20 to 70 μm, from 20 to 65 μm, from 20 to 60 μm, from 20 to 55 μm, from 20 to 50 μm, from 20 to 45 μm, from 20 to 40 μm, from 20 to 35 μm, from 25 to 80 μm, from 25 to 70 μm, from 25 to 65 μm, from 25 to 60 μm, from 25 to 55 μm, from 25 to 50 μm, from 25 to 45 μm, from 25 to 40 μm, or from 25 to 35 μm). In some cases, the microneedle has a maximum diameter (e.g., along the length of insertion) in a range of from 20 to 65 μm. In some cases, the microneedle has a maximum diameter (e.g., along the length of insertion) in a range of from 25 to 65 μm. In some cases, the microneedle has a maximum diameter (e.g., along the length of insertion) in a range of from 20 to 35 μm. In some cases, the microneedle has a maximum diameter (e.g., along the length of insertion) in a range of from 25 to 35 μm. In some cases, the fabricated microneedle has a maximum cross sectional area (e.g., along the length of insertion) of 5000 μm2or less (e.g., 4500 μm2or less, 4000 μm2or less, 3500 μm2or less, 3000 μm2or less, 2500 μm2or less, 2000 μm2or less, 1500 μm2or less, 1000 μm2or less, 800 μm2or less, 750 μm2or less, or 700 μm2or less). In some cases, the microneedle has a maximum cross sectional area (e.g., along the length of insertion) of 4000 μm2or less (e.g., 3500 μm2or less, 3000 μm2or less, 2500 μm2or less, 2000 μm2or less, 1500 μm2or less, 1000 μm2or less, 800 μm2or less, 750 μm2or less, or 700 μm2or less). In some cases, the microneedle has a maximum cross sectional area (e.g., along the length of insertion) of 3500 μm2or less (e.g., 3000 μm2or less, 2500 μm2or less, 2000 μm2or less, 1500 μm2or less, 1000 μm2or less, 800 μm2or less, 750 μm2or less, or 700 μm2or less). In some cases, the microneedle has a maximum cross sectional area (e.g., along the length of insertion) of 2000 μm2or less (e.g., 1500 μm2or less, 1000 μm2or less, 800 μm2or less, 750 μm2or less, or 700 μm2or less). In some cases, the microneedle has a maximum cross sectional area (e.g., along the length of insertion) of 1000 μm2or less (e.g., 800 μm2or less, 750 μm2or less, or 700 μm2or less).
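As a quick consistency check on these dimensional ranges, recall from above that the wound bundle's diameter is essentially the sum of the individual wire diameters. The sketch below applies that rule; the two-wire 15 μm sizing is a hypothetical example, not a recited embodiment.

```python
import math

def bundle_metrics(wire_diams_um):
    """Estimate maximum diameter (sum of strand diameters) and the
    corresponding circular-envelope cross-sectional area."""
    d_max = sum(wire_diams_um)              # maximum diameter, um
    area = math.pi * (d_max / 2.0) ** 2     # envelope area, um^2
    return d_max, area

d, a = bundle_metrics([15.0, 15.0])  # e.g., two 15 um wires in a primary helix
print(f"max diameter ~{d:.0f} um, max cross section ~{a:.0f} um^2")
# -> ~30 um and ~707 um^2, within the 25-35 um and 500-1000 um^2 ranges above
```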
In some cases, the fabricated microneedle has a maximum cross sectional area (e.g., along the length of insertion) in a range of from 250 to 4000 μm2(e.g., from 250 to 3500 μm2, from 250 to 3000 μm2, from 250 to 2500 μm2, from 250 to 2000 μm2, from 250 to 1500 μm2, from 250 to 1000 μm2, from 250 to 800 μm2, from 400 to 4000 μm2, from 400 to 3500 μm2, from 400 to 3000 μm2, from 400 to 2500 μm2, from 400 to 2000 μm2, from 400 to 1500 μm2, from 400 to 1000 μm2, from 400 to 800 μm2, from 500 to 4000 μm2, from 500 to 3500 μm2, from 500 to 3000 μm2, from 500 to 2500 μm2, from 500 to 2000 μm2, from 500 to 1500 μm2, from 500 to 1000 μm2, from 500 to 800 μm2, from 1000 to 4000 μm2, from 1000 to 3500 μm2, from 1000 to 3000 μm2, from 1000 to 2500 μm2, from 1000 to 2000 μm2, from 1000 to 1500 μm2, from 2000 to 4000 μm2, from 2000 to 3500 μm2, from 2000 to 3000 μm2, from 2000 to 2500 μm2, from 2500 to 4000 μm2, from 2500 to 3500 μm2, or from 2500 to 3000 μm2). In some cases, the fabricated microneedle has a maximum cross sectional area (e.g., along the length of insertion) in a range of from 2000 to 4500 μm2. In some cases, the microneedle has a maximum cross sectional area (e.g., along the length of insertion) in a range of from 2500 to 4000 μm2. In some cases, the microneedle has a maximum cross sectional area (e.g., along the length of insertion) in a range of from 500 to 1000 μm2. In some instances, four or more wires may be wound into two or more helices, including e.g., an inner primary helix and an outer secondary helix. Where wires are wound into two or more helices the two or more helices may be wound one at a time or simultaneously. For example, in some instances, multiple lengths of wire may be loaded into a wire winding jig such that winding the wires simultaneously forms a primary helix, which includes wires that will eventually form the microneedle point and the engagement feature, and a secondary helix, which includes one or more additional wires. Secondary helices, as described herein, may serve to strengthen the final fabricated microneedle and/or add additional desired diameter to the microneedle. Brazing As summarized above, the subject methods of fabricating a microneedle will generally include brazing the wound wire. Brazing is a metal-joining process whereby filler metal (i.e., brazing material) is heated above its melting point and distributed between two or more close-fitting parts by capillary action. Generally, the brazing material is brought slightly above its melting (i.e., liquidus) temperature and is then applied to the base metal (i.e., the metal to be joined) where it flows over the base metal (also referred to as wetting). The wetted base metal is then cooled to join the pieces together. In some instances, brazing may be performed using a flux to prevent oxides from forming when the metal is heated. In some instances, brazing may be performed under a suitable atmosphere including e.g., an inert or reducing atmosphere, to prevent oxidation. In some instances, e.g., when brazing is performed under a suitable environment, brazing may be performed in the absence of flux. Accordingly, the instant methods may include brazing the wound wire to join the individual lengths of wire together.
The brazing material utilized may vary depending on a variety of factors, including e.g., the wires to be joined or the application method. For example, in some instances, the wound wire may be a carbonaceous wire and carbide forming elements may be employed in the braze alloy, such as but not limited to e.g., nickel, chromium, iron, etc. In some instances, e.g., where tungsten wire is used, a group 11 base alloy may be employed, such as e.g., copper, silver, gold, etc. Furthermore, alloys of group 11 metals may also find use in brazing employed in the subject methods including but not limited to e.g., iridium alloys of group 11 metals including but not limited to e.g., copper-iridium alloy, gold-iridium alloy, silver-iridium alloy, and the like. In some instances, the brazing material may include an alloy containing iridium, an alloy containing scandium, an alloy containing zirconium, an alloy containing nickel, an alloy containing silicon, an alloy containing beryllium, and the like. Brazing may be performed in a brazing machine, including e.g., those machines specifically designed for brazing wound wires. Useful brazing machines will vary and will generally include an apparatus for holding the wound wire and a crucible (also referred to in some instances as a heater basket) to contain the brazing material at a temperature sufficient for brazing. Accordingly, in methods of fabrication of the present disclosure a wound wire, e.g., held in a wire winding jig or other apparatus for holding the wound wire, may be contacted with the brazing material in the crucible, e.g., by raising the crucible and/or lowering the wound wire to make contact. In some instances, a brazing machine may include an apparatus for oscillating the wound wire through the brazing material. As such, in some instances, the wound wire, e.g., a primary helix, a secondary helix or both, is oscillated laterally (i.e., back-and-forth) through the braze material. Accordingly, the movement of the wire may be lateral movement relative to the braze material and/or crucible to facilitate wetting a length of wound wire that exceeds the diameter of the crucible. Such oscillating may be achieved through any convenient mechanism including but not limited to e.g., one or more pulleys present on the brazing machine and configured to oscillate the wound wire laterally. The rate at which wound wire is oscillated through the brazing material may also vary and may range from 0.01 cm/s or less to 100 cm/s or more including but not limited to e.g., 0.01 cm/s to 100 cm/s, 0.1 cm/s to 100 cm/s, 0.01 cm/s to 10 cm/s, 0.1 cm/s to 10 cm/s, 0.1 cm/s to 5 cm/s, 0.5 cm/s to 10 cm/s, 0.5 cm/s to 5 cm/s, 0.5 cm/s to 2 cm/s, 0.5 cm/s, 1 cm/s, 1.5 cm/s, 2 cm/s, etc. Crucibles useful in methods of fabrication as described herein will also vary depending on various factors including e.g., the brazing material employed. For example, methods of heating the crucible may vary and may include but are not limited to e.g., electrical/resistance heating, laser heating, and the like. In some instances, the crucible may be electrically heated and an associated brazing machine may include one or more heavy current busses connected to the crucible. Crucibles of the subject methods may be made of a variety of materials including but not limited to e.g., tungsten and tungsten alloy materials, carbonaceous materials, and the like. In some instances, an employed crucible is made of boron-nitride. A brazing machine may further include an enclosed chamber.
Such chambers find use in controlling the brazing atmosphere including but not limited to controlling the atmospheric pressure during brazing and controlling the atmospheric composition during brazing (e.g., gas composition, moisture, etc.). Accordingly, useful chambers include vacuum chambers and chambers that include a vacuum feed-through. In some instances, brazing is performed under a vacuum including but not limited to e.g., a vacuum of less than 760 mTorr, including but not limited to e.g., less than 750 mTorr, less than 500 mTorr, from 250 mTorr to 750 mTorr, from 400 mTorr to 750 mTorr, from 500 mTorr to 600 mTorr, and the like. In some instances, the brazing chamber may be kept very dry and brazing may be performed under low moisture conditions including e.g., where the pressure ratio of hydrogen to water is sufficiently high to reduce any surface oxides on the most oxidizable surfaces (e.g., tungsten surfaces) at the maximum process temperature. In some instances, brazing under low moisture conditions includes flowing a shielding gas through the brazing chamber. Useful shielding gases will vary and may include but are not limited to e.g., a mixture of hydrogen and argon including e.g., a 20% hydrogen mixture with argon. In some instances, prior to flowing a shielding gas through the brazing chamber the chamber is subjected to a vacuum of less than 100 mTorr, including but not limited to e.g., less than 50 mTorr, less than 10 mTorr, less than 5 mTorr, etc. Vacuum conditions and/or the presence of a shielding gas may not be limited to brazing steps and may, in some instances, be present before or after brazing. For example, in some instances, a vacuum may be applied prior to brazing including e.g., during one or more steps that precede brazing, including e.g., during wire winding. In some instances, a vacuum may be applied following brazing including e.g., during one or more steps that follow brazing, including e.g., during further manipulating the brazed microneedle (e.g., fatiguing, sharpening, etc.). In some instances, the wound wire may be heated during brazing. Useful methods of heating the wound wire during brazing will vary and may include but are not limited to e.g., electrical/resistance heating, laser heating, and the like. In some embodiments, the brazing chamber and/or the apparatus for holding the wound wire during brazing may include an electrical connection for passing current through the wound wire to heat the wire during brazing. In some instances, the wound wire may not be subjected to additional heating (i.e., heating in addition to that imparted by the molten brazing material) through, e.g., electrical or laser heating, during brazing. In some instances, brazing of the wound wire may include the application of a base solvent material that is not incorporated into the brazed wire. In some instances, a base solvent material may be used to dissolve a braze material and allow the braze material to alloy with the wire material (e.g., W, W—Re, etc.). Useful solvent materials will vary and may include but are not limited to e.g., copper, gold, etc. In such embodiments, the solvent material may be applied to the wire as a base and used to dissolve a braze material such as but not limited to e.g., nickel, chromium, iron, cobalt, etc. Following brazing, the wire may be heated, e.g., utilizing any convenient method (e.g., electrical/resistance, laser, etc.)
to evaporate away the solvent base material while heating the brazing material sufficiently above the liquidus temperature to alloy directly with the wire material. In some embodiments, use of a brazing process involving a solvent base material may be employed to generate a stronger and/or stiffer joint than that achieved through brazing without the solvent material. In some instances, the subject fabrication methods may include tempering the microneedle, e.g., tempering following brazing. Such tempering may include controlled heating and/or cooling of the microneedle. In some instances, the microneedle is tempered by heating the microneedle following brazing. Useful methods of heating the microneedle during or following brazing will vary and may include but are not limited to e.g., electrical/resistance heating, laser heating, and the like. Useful tempering temperatures will vary and may range from less than 100° C. to 1000° C. or more including but not limited to e.g., from 100° C. to 1000° C., from 200° C. to 1000° C., from 300° C. to 1000° C., from 400° C. to 1000° C., from 100° C. to 900° C., from 100° C. to 800° C., from 100° C. to 700° C., from 100° C. to 600° C., from 200° C. to 800° C., from 300° C. to 700° C., from 400° C. to 600° C., 500° C., etc. Useful tempering times will also vary and may range from minutes or less to hours or more, including but not limited to e.g., from 5 min. to 6 hours, from 10 min. to 3 hours, from 15 min. to 1 hour, about 30 min., about 1 hour, etc. After brazing is complete the wires of the helix will be substantially joined by the brazing process forming a microneedle unit. The microneedle will generally, but need not necessarily, be subjected to further manipulation to impart certain desired characteristics and/or features upon the produced microneedle. Such manipulations may be applied to the microneedle as a whole or, although brazed, to individual wires of which the microneedle is made up. Manipulations As summarized above, the subject methods of fabricating a microneedle may include further manipulating one or more of the lengths of wire of the microneedle to produce one or more desired features or impart one or more desired characteristics upon the microneedle or a portion thereof. In some instances, further manipulation of the microneedle following brazing may be employed to produce a reversible engagement feature on the microneedle. A microneedle fabricated according to the subject methods will generally include an engagement feature, e.g., corresponding to an engagement feature present on an implantable device. Such engagement features will generally be “reversible”, allowing the engagement of the feature, e.g., during implantation of a cargo, and disengagement of the feature, e.g., to leave the cargo implanted when removing the microneedle. For example, prior to and during implantation, an implantable device may be reversibly engaged with a microneedle (via the corresponding engagement features of the implantable device and the microneedle). Essentially, the microneedle, with device loaded, is inserted into a biological tissue (e.g., to a desired depth), and the microneedle is then retracted, thereby disengaging the implantable device from the microneedle and allowing the implantable device to remain implanted in the biological tissue at a desired position.
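A minimal sketch of that load-insert-retract sequence, using a hypothetical actuator object (none of these names come from the disclosure), might look like:

```python
class NeedleActuator:
    """Hypothetical stand-in for an insertion mechanism."""
    def __init__(self):
        self.device = None
        self.depth_um = 0.0

    def engage(self, device) -> None:
        self.device = device           # device held by the engagement feature

    def insert_to(self, depth_um: float) -> None:
        self.depth_um = depth_um       # needle and device advance together

    def retract(self):
        released, self.device = self.device, None  # feature disengages
        self.depth_um = 0.0            # needle withdraws; device stays at depth
        return released

needle = NeedleActuator()
needle.engage("implantable device")
needle.insert_to(300.0)                # advance to the desired depth
implanted = needle.retract()           # retraction leaves the device implanted
```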
Microneedles fabricated according to the herein described method may be employed individually (e.g., to insert one implantable device or multiple implantable devices) or in parallel with additional microneedles such that a plurality of implantable devices is implanted into the biological tissue using a plurality of microneedles. The engagement features present on microneedles fabricated according to present methods will vary. As described above, in some instances, the fabrication method facilitates desirable fracturing of the microneedle to generate the engagement feature; however, the engagement feature need not necessarily be employed, e.g., to engage the engagement feature of an implantable device, as fractured and may, in some instances, be further shaped or modified as desired. Accordingly, essentially any desirable shape of engagement feature may be fashioned from an engagement feature generated through fracturing as described above. In some embodiments, the engagement feature of the microneedle is positioned in a distal region of the microneedle. As used herein the “distal region” is the distal-most 25% of the microneedle (relative to the entire length of the microneedle). To be clear, the distal end of the microneedle is the tip of the needle that penetrates into the target tissue (e.g., the biological tissue). In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but not at the distal end (meaning the engagement feature is set back from the distal tip, i.e., set back from the distal end). For example, in some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is not present in the distal most 10% of the distal region. In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is not present in the distal most 5% of the distal region. In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is not present in the distal most 3% of the distal region. In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is not present in the distal most 2% of the distal region. In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is not present in the distal most 1% of the distal region. In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is not present in the distal most 0.5% of the distal region. In some cases, the engagement feature of the microneedle is positioned at least 5 μm away from the distal end of the microneedle (e.g., at least 10 μm, at least 15 μm, at least 20 μm, at least 25 μm, at least 30 μm, at least 35 μm, at least 40 μm, at least 45 μm, or at least 50 μm away from the distal end). In some cases, the engagement feature of the microneedle is positioned at least 10 μm away from the distal end of the microneedle (e.g., at least 15 μm, at least 20 μm, at least 25 μm, at least 30 μm, at least 35 μm, at least 40 μm, at least 45 μm, or at least 50 μm away from the distal end).
In some cases, the engagement feature of the microneedle is positioned at least 20 μm away from the distal end of the microneedle (e.g., at least 25 μm, at least 30 μm, at least 35 μm, at least 40 μm, at least 45 μm, or at least 50 μm away from the distal end). In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is positioned at least 5 μm away from the distal end (e.g., at least 10 μm, at least 15 μm, at least 20 μm, at least 25 μm, at least 30 μm, at least 35 μm, at least 40 μm, at least 45 μm, or at least 50 μm away from the distal end). In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is positioned at least 10 μm away from the distal end (e.g., at least 15 μm, at least 20 μm, at least 25 μm, at least 30 μm, at least 35 μm, at least 40 μm, at least 45 μm, or at least 50 μm away from the distal end). In some cases, the engagement feature of the microneedle is positioned in the distal region of the microneedle, but is positioned at least 20 μm away from the distal end (e.g., at least 25 μm, at least 30 μm, at least 35 μm, at least 40 μm, at least 45 μm, or at least 50 μm away from the distal end). In some cases, the engagement feature of the microneedle is positioned within 100 μm of the distal end of the microneedle (e.g., within 90 μm, 80 μm, 70 μm, 60 μm, 50 μm, 40 μm, 30 μm, 20 μm, or 10 μm of the distal end of the microneedle). In some cases, the engagement feature of the microneedle is positioned within 100 μm of the distal end of the microneedle (e.g., within 90 μm, 80 μm, 70 μm, 60 μm, 50 μm, 40 μm, 30 μm, 20 μm, or 10 μm of the distal end of the microneedle), but is not positioned at the distal end of the microneedle. For example, in some cases, the engagement feature of the microneedle is positioned within 100 μm of the distal end of the microneedle (e.g., within 90 μm, 80 μm, 70 μm, 60 μm, 50 μm, 40 μm, 30 μm, 20 μm, or 10 μm of the distal end of the microneedle), and is positioned at least 5 μm away from the distal end (e.g., at least 10 μm, at least 15 μm, at least 20 μm, at least 25 μm, at least 30 μm, at least 35 μm, at least 40 μm, at least 45 μm, or at least 50 μm away from the distal end). In some cases, the engagement feature of the microneedle is positioned within 100 μm of the distal end of the microneedle (e.g., within 90 μm, 80 μm, 70 μm, 60 μm, 50 μm, 40 μm, 30 μm, 20 μm, or 10 μm of the distal end of the microneedle), and is positioned at least 10 μm from the distal end of the microneedle. In some cases, the engagement feature of the microneedle is positioned within 100 μm of the distal end of the microneedle (e.g., within 90 μm, 80 μm, 70 μm, 60 μm, 50 μm, 40 μm, 30 μm, 20 μm, or 10 μm of the distal end of the microneedle), and is positioned at least 20 μm from the distal end of the microneedle. In some cases, the engagement feature of the microneedle is positioned within 80 μm of the distal end of the microneedle (e.g., within 70 μm, 60 μm, 50 μm, 40 μm, 30 μm, 20 μm, or 10 μm of the distal end of the microneedle), and is positioned at least 10 μm from the distal end of the microneedle. In some cases, the engagement feature of the microneedle is positioned within 80 μm of the distal end of the microneedle (e.g., within 70 μm, 60 μm, 50 μm, 40 μm, 30 μm, 20 μm, or 10 μm of the distal end of the microneedle), and is positioned at least 20 μm from the distal end of the microneedle.
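The positional constraints enumerated above lend themselves to a simple check. The sketch below encodes one representative combination (distal-most 25% of needle length, within 100 μm of the tip, at least 20 μm back from the tip); the chosen thresholds are examples drawn from the ranges above, not required values.

```python
def placement_ok(needle_len_um: float, setback_um: float) -> bool:
    """True if an engagement feature set back `setback_um` from the tip
    satisfies the representative constraints described above."""
    in_distal_region = setback_um <= 0.25 * needle_len_um
    near_tip = setback_um <= 100.0
    clear_of_tip = setback_um >= 20.0
    return in_distal_region and near_tip and clear_of_tip

assert placement_ok(needle_len_um=2000.0, setback_um=40.0)
assert not placement_ok(needle_len_um=2000.0, setback_um=10.0)  # too close to tip
```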
As noted above, the engagement feature may be produced through fracturing a wire of the multi-wire microneedle. Such fracturing may be achieved by fatiguing a distal end of the length of wire to cause the wire to fracture at or near a desired position, e.g., as described above. As described in detail above, desired positions for the engagement feature will generally be positioned at some distance from the distal tip of the microneedle. Accordingly, in the case of the multi-wire microneedles described herein, the engagement feature will generally be positioned along a second length of wire (e.g., a longer length of wire, an unfractured length of wire, or the like) that may serve as the distal tip of the microneedle. Methods of microneedle preparation, including those described herein, may be employed to induce the wire to fracture at or near the desired point, e.g., through the mechanical influences imparted by the one or more fabrication steps (e.g., winding, brazing, tempering, etc.) or combinations thereof. Fatiguing of wires to generate a fracture at a desired position, as noted above, is influenced by the fabrication steps used in generating the microneedle. Thus, following brazing, one or more wires may be fatigued at the point where the wire leaves the brazed helix of which it is a component. Different wires may be fatigued to fracture at different points, e.g., a wire fatigued to produce an engagement feature may fracture at a point different from an additional wire utilized to provide support to the wires of the primary helix. In some instances, a wire of the microneedle may be further manipulated by sharpening. For example, in some embodiments, the wire that forms the distal tip of the microneedle may be sharpened at its end. In some instances, e.g., where a microneedle has an introduced engagement feature, the end of the wire that is sharpened may be the end that is most proximal to the engagement feature. Any convenient method of sharpening the end of the wire or tip of the microneedle may be employed, including but not limited to e.g., electrochemical etching. Electrochemical etching techniques employed in the herein described methods will vary and may include but are not limited to e.g., those electrochemical etching techniques and/or etching baths suitable for generating a fine point, those electrochemical etching techniques and/or etching baths suitable for removing brazing material, and the like. In some instances, the method may include electrochemically etching the microneedle in FeCl3. In some instances, the method may include electrochemically etching the microneedle in NaOH. Electrical current may or may not be employed during etching. In some instances, etching is performed in NaOH with an applied voltage ranging from 0.5 V to 10 V (e.g., 1 V to 6 V). As noted above, in various steps of the present methods, e.g., before, during or after brazing, the method may include passing electrical current through one or more lengths of wire. Current passing through the wire may be employed for a variety of different purposes including but not limited to e.g., heating the wire to a desired temperature, re-naturing the crystal structure of the wire, cleaning the surface of the wire, and the like. Exemplary instances where current may be employed to heat the wire(s), e.g., before, during or after brazing, have been described above.
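The only numeric process window given for the etch just described is the applied voltage. A minimal parameter check is sketched below; the 0.5-10 V window and the two bath identities come from the description above, while the function name and recipe structure are purely illustrative.

    # Validate an electrochemical etch recipe against the described window.
    # Only the voltage window (0.5-10 V, typically 1-6 V) and the FeCl3/NaOH
    # baths come from the text; everything else here is a hypothetical sketch.
    KNOWN_BATHS = {"FeCl3", "NaOH"}

    def check_etch_recipe(bath, volts):
        if bath not in KNOWN_BATHS:
            raise ValueError("unrecognized etch bath: %s" % bath)
        # Current may or may not be employed; the voltage check applies to
        # the NaOH electrolytic etch described above.
        if bath == "NaOH" and not (0.5 <= volts <= 10.0):
            raise ValueError("NaOH etch voltage outside 0.5-10 V window")
        return bath, volts

    check_etch_recipe("NaOH", 3.0)   # within the typical 1-6 V range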
In some instances, at various steps throughout the fabrication method, current passed through the wire(s) may be sufficient to raise the temperature of the primary helix, the secondary helix or both to at least 500° C., including but not limited to e.g., at least 600° C., at least 700° C., at least 800° C., at least 900° C., at least 1000° C., at least 1100° C., at least 1200° C., at least 1300° C., at least 1400° C., at least 1500° C., at least 1600° C., at least 1700° C., at least 1800° C., at least 1900° C., at least 2000° C., etc. Useful electrical currents include those sufficient to reach a desired temperature in a particular type of wire, including e.g., one or more of the particular types of wire described herein. For example, in some instances, a useful current may include a current sufficient to raise the temperature of the primary helix, the secondary helix or both to about 1300° C. for pure tungsten wire and the like. In some instances, a useful current may include a current sufficient to raise the temperature of the primary helix, the secondary helix or both to about 1600° C. for tungsten-rhenium wire and the like. Current may be applied directly to the wire(s) or may be applied indirectly, e.g., through one or more electrical connections present on an apparatus utilized during fabrication, including but not limited to e.g., a winding jig, a brazing machine, and the like. As noted above, passing current through the wire may serve to re-nature the wire, including e.g., where the current is sufficient to recrystallize the lengths of wire. As drawn, many of the wires suitable for use in the subject fabrication methods have elongated crystalline domains, which leads to ductility and high tensile strength of the material. These characteristics of the wire may be manipulated, e.g., to render the crystalline domains more regular, through the application of stress (e.g., tension during winding) and/or heat/current through the wire to orient the domains more perpendicular to the direction of applied stress. Furthermore, due to reduced emission area and nonlinear dependence of resistance on temperature, the helical area of the wound wire can be brought to a temperature sufficient for recrystallization while the free ends of the helices are kept below the recrystallization temperature and do not recrystallize. Therefore, recrystallization may, in some instances, be limited to only the helically wound sections of wire while the free ends not incorporated into the helix are not recrystallized. Accordingly, in such circumstances, differences between the ductility and brittleness of the helical and free ends of the wire may be employed to induce fracturing at or near desired positions during fatiguing steps, biasing the fracture point closer to the helix. Furthermore, the alignment of the crystal structure of the wire due to recrystallizing may also be employed to increase the frequency of desirable fracturing of the wire which generates engagement features. During manipulations involving heating of the wire, microneedle and/or a helix thereof to a desired temperature, including e.g., a temperature sufficient for recrystallization, etc., the temperature of the wire, microneedle and/or a helix thereof may be monitored. Any suitable method of monitoring the temperature of the wire may be employed. For example, in some instances, the temperature of the wire may be estimated, e.g., by observing the color of the wire by a skilled observer.
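To give a feel for the current levels involved, the steady-state temperature of a resistively heated wire can be estimated by balancing I²R heating against radiative loss. The sketch below is a rough, order-of-magnitude model only: it assumes a bare tungsten wire radiating in vacuum with constant emissivity, an approximate T^1.2 scaling of tungsten resistivity, and no conduction into the jig. All dimensions and material constants are illustrative assumptions, not parameters of the method above.

    import math

    RHO_293 = 5.6e-8    # ohm*m, approximate tungsten resistivity at 293 K
    EMISSIVITY = 0.3    # assumed constant emissivity for hot tungsten
    SIGMA = 5.670e-8    # W m^-2 K^-4, Stefan-Boltzmann constant

    def wire_temperature_K(current_A, diameter_m, length_m, t_amb=293.0):
        """Solve I^2*R(T) = eps*sigma*A*(T^4 - T_amb^4) by bisection."""
        a_cross = math.pi * (diameter_m / 2.0) ** 2   # conduction area
        a_surf = math.pi * diameter_m * length_m      # radiating surface
        def net_power(T):
            r = RHO_293 * (T / 293.0) ** 1.2 * length_m / a_cross
            return current_A ** 2 * r - EMISSIVITY * SIGMA * a_surf * (
                T ** 4 - t_amb ** 4)
        lo, hi = t_amb, 4000.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if net_power(mid) > 0.0 else (lo, mid)
        return 0.5 * (lo + hi)

    # e.g., 0.2 A through a 25 um wire with a 10 mm heated span gives
    # roughly 2600 K under these (deliberately simplified) assumptions.
    print(round(wire_temperature_K(0.2, 25e-6, 10e-3)))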
In some instances, monitoring the temperature of the wire may include measuring the temperature of one or more lengths of the wire. Temperature monitoring may be employed using any convenient technique and may be performed during any step in which, or by any method by which, the wires are heated. In some instances, the temperature of the wire may be measured while the current is passing through the wire. Useful methods of measuring the temperature of the wire that may be employed include but are not limited to e.g., an optical measurement (e.g., as performed using an optical pyrometer), an electrical measurement (e.g., as performed by measuring changes in resistance), and the like. The manipulations described herein may be applied to any wire of a microneedle and, e.g., equally applied to wires of a primary helix or a secondary helix, when present, alike. For example, in some instances, while fatiguing may be employed to generate an engagement feature in one wire of a microneedle, fatiguing may be similarly employed to shorten the wire(s) of a secondary helix, thus shaping the microneedle as desired. Further methods may be employed to shape and/or adjust the characteristics of fabricated microneedles as desired, including but not limited to e.g., through the use of sandpaper or other abrasive.

Insertion Through a Membrane

As summarized above, the methods of the present disclosure include inserting an implantable device into a biological tissue having an outer membrane. Methods involving insertion of a device into a biological tissue having an outer membrane may include ablating a section of the outer membrane and inserting the device through the ablated section of outer membrane. Such methods may or may not involve insertion of the implantable device using a microneedle, including e.g., those fabricated according to the methods described herein. The subject methods will generally be performed on a biological tissue. Such biological tissues include but are not limited to: brain, muscle, liver, pancreas, spleen, kidney, bladder, intestine, heart, stomach, skin, colon, and the like. In some cases, the targeted biological tissue is a central nervous system (CNS) tissue, including but not limited to e.g., brain tissue, spinal cord tissue, eye tissue, and the like. The biological tissue can be from any multicellular organism including but not limited to invertebrates, vertebrates, fish, birds, mammals, rodents (e.g., mice, rats), ungulates, cows, sheep, pigs, horses, non-human primates, and humans. In some cases, the biological tissue is ex vivo (e.g., a tissue explant). In some cases, the biological tissue is in vivo (e.g., the method is a surgical procedure performed on a subject). The subject methods allow for the insertion of an implantable device into a biological tissue having an outer membrane. As such, in some instances, the biological tissue upon which the method is employed may be a tissue having an outer membrane. Outer membranes of biological tissues may, in some instances, inhibit or otherwise negatively impact the insertion of the device. In some instances, biological tissues may not inhibit or otherwise negatively impact the insertion of the device, but it may nonetheless be desirable to keep the membrane as intact as possible (e.g., to increase the likelihood of a successful procedure, e.g., to increase positive surgical outcomes and/or decrease the instances of negative surgical outcomes).
Non-limiting examples of membranes that may be present on biological tissues include but are not limited to e.g., dura, pia-arachnoid complex, and the like. Implantation through the outer membrane of the subject tissue may be achieved by ablating a section of the outer membrane. In some instances, such ablation may be referred to as “micro drilling” or “laser micro drilling,” as the subject method, in some instances, is employed to induce one or more holes in the subject membrane through which the implantable device may be introduced but otherwise leave the membrane intact. The subject methods will generally include ablating the section of membrane through the use of a laser. In some instances, the laser employed may be a Q-switched laser. Prior to applying the laser for ablation of membrane tissue, the membrane may be contacted with a photosensitizer. By “photosensitizer” as used herein is generally meant an agent that sensitizes the tissue to which it is applied to the laser as compared to tissue where the photosensitizer is not applied. Photosensitizers of the subject methods will generally have a particular wavelength that is strongly absorbed by the photosensitizer, referred to herein as a “wavelength of strong absorption”. When subjected to light (e.g., laser light) corresponding to its wavelength of strong absorption, the photosensitizer will disproportionately absorb energy from the light (e.g., laser light) as compared to other molecules. As such, cells and/or tissues contacted with the photosensitizer and subjected to light (e.g., laser light) corresponding to the wavelength of strong absorption of the photosensitizer will disproportionately absorb energy from the light (e.g., laser light), leading to ablation of the cell and/or tissue. Useful photosensitizers will vary and generally include those agents having a wavelength of strong absorption, such as e.g., many dyes. In some instances, depending on the application, photosensitizers employed may also be those that are biocompatible such that they do not adversely affect the tissue or cells to which they are applied in the absence of applied light (e.g., applied laser light) that corresponds to the wavelength of strong absorption. In some instances, a photosensitizer employed in the subject methods may be erythrosin B. Ablation of the section of membrane according to the present methods may include applying a photosensitizer to the membrane and subjecting the applied photosensitizer to laser light having an emission wavelength that corresponds to the wavelength of strong absorption of the photosensitizer. The photosensitizer may be applied broadly to the membrane or to specific locations where ablation is desired. Given the general ability to shape a laser light beam as desired, e.g., through the use of optical components such as mirrors, lenses, etc., the laser may be specifically applied to an area of the membrane where ablation is desired, regardless of whether the photosensitizer is applied broadly or to specific locations of the membrane. In some instances, the emission wavelength of the employed laser is a non-ionizing emission wavelength. In some instances, the laser employed is a green laser having a wavelength between about 495 nm and about 570 nm. In some instances, the emission wavelength of the laser is 527 nm. In some instances, application of the laser to tissues to which the photosensitizer has not been applied does not result in ablation, or otherwise damage or adversely affect the tissue.
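The dye-dependent selectivity described above can be illustrated with a Beer-Lambert estimate of the fraction of incident laser light absorbed across the membrane thickness. The molar absorptivities, dye concentration, and path length in this sketch are hypothetical illustration values, not measured properties of erythrosin B or of dura/pia-arachnoid tissue.

    # Beer-Lambert: absorbance A = epsilon * c * l; absorbed = 1 - 10^-A.
    def fraction_absorbed(epsilon_per_M_cm, conc_M, path_cm):
        return 1.0 - 10.0 ** (-(epsilon_per_M_cm * conc_M * path_cm))

    # Hypothetical numbers: a strong absorber (1e5 L mol^-1 cm^-1) at 1 mM
    # across a 50 um (5e-3 cm) membrane, vs. weak intrinsic tissue absorption.
    stained = fraction_absorbed(1e5, 1e-3, 5e-3)     # ~0.68
    unstained = fraction_absorbed(1e2, 1e-3, 5e-3)   # ~0.0012
    print(round(stained, 2), round(unstained, 4))

Under these assumed values, dyed tissue absorbs several hundred times more of the beam than undyed tissue, which is the mechanism by which off-target damage is limited when dye diffusion is controlled.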
Following ablation of the section of membrane, an implantable device may be inserted through the hole in the membrane and into the biological tissue at any desired depth. In some instances, implantation of the implantable device may be facilitated through the use of a microneedle. For example, in some instances, a microneedle may be employed that includes an engagement feature corresponding to an engagement feature present on the implantable device, and the engagement feature may allow implantation of the device and subsequent retraction of the microneedle. In some instances, a microneedle with an engaged implantable device may be referred to as a loaded or device-loaded microneedle. In some instances, useful microneedles include but are not limited to e.g., those microneedles fabricated according to the methods described herein. Methods, compositions and systems for device implantation that may find use in the subject methods of implanting a device into a tissue having an outer membrane may further include but are not limited to e.g., those described in PCT International Patent Application No. WO 2016/126340; the disclosure of which is incorporated herein by reference in its entirety. The subject methods may find use in any method where a device is desired to be implanted into a tissue having an outer membrane, including those where the membrane may interfere with such implantation or where it may be desirable to maintain the integrity of the membrane, including where e.g., the device includes but is not limited to a microneedle, an electrode, a waveguide, or the like. Non-limiting exemplary implantable devices are described below. The size of the section of membrane ablated according to the described methods will vary. In some instances, the section of membrane ablated may correspond to the size of the implantable device to be inserted. For example, in some instances, the size of the hole made by ablation of the membrane may correspond with and allow for the insertion of an implantable device including e.g., those described below.

Implantable Devices

Subject implantable devices that can be implanted using the method of the present disclosure into a tissue having an outer membrane include e.g., a device that includes: (i) a biocompatible substrate (e.g., a non-conductive substrate, e.g., a flexible substrate such as a polyimide-based polymer), (ii) a conduit (e.g., a conductor of electricity such as an electrode, a conductor of photons such as a waveguide) that is disposed on the biocompatible substrate, and (iii) an engagement feature (e.g., a loop) for reversible engagement with an insertion needle. In some cases, the biocompatible substrate includes the engagement feature of the implantable device. In some cases, the conduit includes the engagement feature of the implantable device. A subject microneedle includes an engagement feature that corresponds to the engagement feature of the implantable device. As used herein, the term “conduit” refers to a substance that can conduct information to an external device. A conduit can be a conductor of electricity (e.g., an electrode), a conductor of photons (e.g., a waveguide such as an optic fiber), a conductor of fluid (e.g., a microfluidic channel), etc. As such, a subject implantable device can be used for a large variety of purposes, and this will depend on the nature of the conduit(s) present as part of the implantable device.
For example, an implantable device can be used as (1) a sensor (detector), (2) an effector (e.g., to deliver a stimulation such as light, current, and/or a drug, e.g., which can change the tissue environment into which the device is implanted), or (3) both, depending on the nature of the conduit(s) present as part of the implantable device. Examples of when a subject implantable device can be used as a sensor include, but are not limited to, situations in which the device includes, as a conduit: (i) an electrode that is used as a recording electrode; (ii) a chemical sensing element such as an analyte sensor, e.g., a working electrode; (iii) a photodetector, e.g., for radiography and/or in-vivo imaging; etc. Examples of when a subject implantable device can be used as an effector include, but are not limited to, situations in which the device includes, as a conduit: (i) an electrode that is used for stimulation, e.g., for delivering a current; (ii) a light emitting diode (LED) and/or a microscale laser, e.g., for optogenetic applications; and/or (iii) a waveguide (e.g., optical fiber) for delivering light, e.g., for optogenetic applications; etc. In some cases, effectors will affect cells that have been physically, genetically, and/or virally modified to include (e.g., express) biological transducers (e.g., ion channels, RF-sensitive nanoparticles, and the like). For example, a subject implantable device that includes a waveguide (e.g., an optical fiber) may be used to irradiate and affect target naive or transfected tissue. Because electrodes can be used as sensors (e.g., to detect changes in electrical activity) or as effectors (e.g., to deliver a current to the surrounding tissue), an implantable device that includes a conductor (e.g., an electrode) as a conduit can function in some cases as a sensor, as an effector, or as both. For example, electrodes can be used for closed and/or open-loop micro or macro stimulation. As used herein, the phrase “disposed on” (e.g., when a conduit is disposed on a biocompatible substrate) is meant to encompass cases in which the conduit is present on, within (e.g., sandwiched), or embedded within the biocompatible substrate. In some cases, the biocompatible substrate can provide mechanical shape/structure to the implantable device while the conduit can provide for communication with an external device. For example, a conduit (e.g., an electrode) can be sandwiched between substrate layers (e.g., non-conductive layers) and/or embedded within a biocompatible substrate, and such an element would be considered herein to be “disposed on” the biocompatible substrate (e.g., in some cases the biocompatible substrate can have more than one layer). In some cases, at least a portion of the conduit is exposed to the surrounding environment (e.g., when the conduit is an electrode). The biocompatible substrate can be any convenient biocompatible substrate and in some cases will be an inert and non-conductive (e.g., insulating) biocompatible substrate (e.g., an insulator). In some cases, the biocompatible substrate is flexible (e.g., the biocompatible substrate is a flexible biocompatible substrate, e.g., a flexible non-conductive biocompatible substrate). In some cases, the biocompatible substrate is inert. In some cases, the biocompatible substrate is inert and/or non-conductive. A biocompatible substrate (e.g., a flexible biocompatible substrate) can be made from any convenient material.
In some cases, a biocompatible substrate (e.g., a flexible biocompatible substrate) comprises an inert polymeric material (e.g., polyimide, e.g., a polyimide-based polymer, parylene, etc.). In some cases, a biocompatible substrate (e.g., a flexible biocompatible substrate) comprises polyimide (e.g., comprises a polyimide-based polymer). In some cases, the biocompatible substrate (e.g., a flexible biocompatible substrate) of a subject implantable device includes an inert polymeric material (e.g., polyimide, e.g., a polyimide-based polymer, parylene, etc.). In some cases, the biocompatible substrate of a subject implantable device includes a conductive material such as metal. In some cases, the biocompatible substrate of a subject implantable device includes NiTi (nickel-titanium). For a non-conducting biocompatible substrate, any convenient non-conducting plastic or polymeric material and/or other non-conducting, flexible, deformable material can be used. Examples include but are not limited to thermoplastics such as polycarbonates, polyesters (e.g., Mylar™ and polyethylene terephthalate (PET)), polyvinyl chloride (PVC), polyurethanes, polyethers, polyamides, polyimides, or copolymers of these thermoplastics, such as PETG (glycol-modified polyethylene terephthalate). In some cases, a dissolving polymer (e.g., polycaprolactone) can be used as an insertion shuttle. In some cases, a thin layer of dielectric (e.g., ceramic, glass, and the like) can be used as an insulator and barrier layer. In some cases, the first layer can be partially-cured (e.g., partially cured PI), in which case the stack can be PI-dielectric-metal-dielectric (e.g., PI-ceramic-metal-ceramic). In some cases, a subject implantable device includes one or more insulating and/or moisture barrier layers (e.g., a dielectric, Al2O3, and the like). In some such cases, such layers might not be ductile (e.g., in some cases such a layer(s) is ductile and in some cases such a layer(s) is not ductile). In some cases, the biocompatible substrate is inert (e.g., can be an inert biocompatible substrate). In some embodiments, a subject implantable device includes two layers of biocompatible substrate (e.g., non-conductive biocompatible substrate) with metal sandwiched within. In some cases, such an arrangement can provide, e.g., insulation in the inner layer and/or desirable mechanical properties in the outer layer. In some embodiments, a flexible biocompatible substrate of an implantable device includes first and second thin-film (e.g., of polyimide, of parylene, etc.) layers sandwiched around the conduit (e.g., metal). In other words, the conduit (e.g., metal) can be sandwiched between the first thin-film (e.g., of polyimide, of parylene, etc.) layer and the second thin-film (e.g., polyimide or parylene) layer, forming a thin-film/metal/thin-film sandwich. A subject implantable device includes a conduit. Any convenient conduit can be used and a large variety of conduits are envisioned that would be useful in a large variety of settings, which can depend on context, e.g., what biological tissue is being targeted, what disease or condition is being treated, whether the implanted implantable device(s) will be used for research or therapeutic purposes, etc.
Examples of suitable conduits include, but are not limited to: an electrode, a light emitting diode (LED) (e.g., for optogenetic applications), a microscale laser (e.g., for optogenetic applications), a chemical sensing element such as an analyte sensor/detector, a photodetector (e.g., for radiography or in-vivo imaging), an optical element such as a waveguide (e.g., an optical fiber), a reflectometry based sensor, and the like. In some cases, the conduit of a subject implantable device is an electrode. As noted above, in some cases an implantable device that includes an electrode can be used as a sensor (detector), an effector (e.g., for stimulation of surrounding tissue), or both. A conduit (e.g., an electrode for recording and/or stimulation) can comprise (e.g., can be made of) any convenient conductive material. For example, a conduit that conducts electricity (e.g., an electrode) can comprise: copper (Cu), titanium (Ti), copper and titanium, nickel (Ni), nickel-titanium (NiTi, nitinol), chromium (Cr), platinum (Pt), platinum/iridium alloys, tantalum (Ta), niobium (Nb), zirconium (Zr), hafnium (Hf), Co—Cr—Ni alloys, stainless steel, gold (Au), a gold alloy, palladium (Pd), carbon (C), silver (Ag), a noble metal, an allotrope of any of the above, a biocompatible material, and any combination thereof. In some embodiments, the conduit (e.g., electrode) of a subject implantable device comprises (e.g., is made of) a metallization stack selected from: Cr/Au, SiC/Pt, Pt/SiC, and Ta/Cr/Au. In some cases, the conduit (e.g., electrode) of a subject implantable device comprises Cr/Au (e.g., a Cr/Au metallization stack). In some cases, the conduit (e.g., electrode) of a subject implantable device comprises SiC/Ti/Pt/SiC (e.g., a SiC—Ti—Pt—SiC metallization stack). For example, SiC can be used for adhesion (e.g., as an adhesion layer, e.g., a 5-30 nm thick adhesion layer) to the biocompatible substrate (e.g., in some cases PI) of the subject implantable device, while Ti can serve as an adhesion layer (e.g., a 5-30 nm thick adhesion layer) between Pt and SiC. The conduit can have any convenient cross sectional shape, such as, but not limited to, a circular cross section, a rectangular cross section, a square cross section, a triangular cross section, a planar cross section, or an elliptical cross-section. In some cases, a subject implantable device includes only one conduit (e.g., an electrode, a waveguide). In some cases, a subject implantable device includes one or more conduits (e.g., electrodes, waveguides) (e.g., two or more, three or more, four or more, five or more, six or more, seven or more, eight or more, etc.). In some cases, a subject implantable device includes a plurality of conduits (e.g., electrodes, waveguides) (e.g., 2, 3, 4, 5, 6, 7, 8, 9, 10, 2 or more, 3 or more, 4 or more, 5 or more, 6 or more, 7 or more, or 8 or more conduits). In some embodiments, when an implantable device includes more than one conduit (e.g., electrode), each conduit (e.g., electrode, waveguide) can be in communication (e.g., electrical communication, optic communication) with an external device, e.g., can be independently electrically connected to respective wires or fibers (e.g., such that electrical stimulation can be directed to selected electrodes and/or electrical activity can be detected by selected electrodes). In some cases, a conduit of a subject implantable device is an electrochemical implantable device.
An “electrochemical implantable device” is a device configured to detect the presence and/or measure the level of an analyte in a sample via electrochemical oxidation and reduction reactions on the implantable device. These reactions are transduced to an electrical signal that can be correlated to an amount, concentration, or level of an analyte in the sample. For more on using electrodes as an electrochemical implantable device, refer to U.S. Pat. No. 6,175,752, which is hereby incorporated by reference in its entirety. For example, in some cases, a subject implantable device includes two or more electrodes where one electrode is a working electrode and another electrode is a counter electrode. In some cases, a subject implantable device includes two or more electrodes where one electrode is a working electrode and another electrode is a reference electrode. In some cases, a subject implantable device includes three or more electrodes where one electrode is a working electrode, one electrode is a counter electrode, and one electrode is a reference electrode. A “counter electrode” refers to an electrode paired with the working electrode, through which passes a current equal in magnitude and opposite in sign to the current passing through the working electrode. The term “counter electrode” is meant to include counter electrodes which also function as reference electrodes (i.e., a counter/reference electrode). A “working electrode” is an electrode at which an analyte (or a second compound whose level depends on the level of the analyte) is electrooxidized or electroreduced with or without the agency of an electron transfer agent. An “electron transfer agent” is a compound that carries electrons between the analyte and the working electrode, either directly, or in cooperation with other electron transfer agents. One example of an electron transfer agent is a redox mediator. “Electrolysis” is the electrooxidation or electroreduction of a compound either directly at an electrode or via one or more electron transfer agents. A “working surface” is that portion of the working electrode which is coated with or is accessible to the electron transfer agent and configured for exposure to an analyte-containing fluid. A variety of dimensions and geometries are suitable for a subject implantable device and any convenient set of dimensions/geometries can be used, and will likely vary based on various considerations such as, but not limited to: the type of target tissue, the type of conduit present (e.g., electrode, LED, laser, waveguide, etc.), the cost of materials, the rate and/or ease of fabrication, the level of desired tissue displacement, etc. As used below, the term “maximum diameter” means the diameter of the implantable device at the point along its length at which it is its widest, and the term “maximum cross sectional area” means the cross sectional area of the implantable device at the point along its length at which the cross sectional area is greatest. In some cases, the implantable device has a maximum diameter of 80 μm or less (e.g., 70 μm or less, 65 μm or less, 60 μm or less, 55 μm or less, 50 μm or less, 45 μm or less, 40 μm or less, 35 μm or less, 30 μm or less, 25 μm or less, 20 μm or less, 15 μm or less, 10 μm or less, 9 μm or less, 8 μm or less, 7 μm or less, 6 μm or less, 5 μm or less, 4 μm or less, 3 μm or less, 2 μm or less, 1 μm or less, 0.5 μm or less, etc.).
For example, in some cases, the implantable device has a maximum diameter of 65 μm or less. In some cases, the implantable device has a maximum diameter of 35 μm or less. In some cases, the implantable device has a maximum diameter of 25 μm or less. In some cases, the implantable device has a maximum diameter of 15 μm or less. In some cases, the implantable device has a maximum diameter of 5 μm or less. In some cases, the implantable device has a maximum diameter in a range of from 0.5 to 80 μm (e.g., from 0.5 to 70 μm, from 0.5 to 65 μm, from 0.5 to 60 μm, from 0.5 to 55 μm, from 0.5 to 50 μm, from 0.5 to 45 μm, from 0.5 to 40 μm, from 0.5 to 35 μm, from 0.5 to 25 μm, from 0.5 to 15 μm, from 0.5 to 5 μm, from 15 to 80 μm, from 15 to 70 μm, from 15 to 65 μm, from 15 to 60 μm, from 15 to 55 μm, from 15 to 50 μm, from 15 to 45 μm, from 15 to 40 μm, from 15 to 35 μm, from 20 to 80 μm, from 20 to 70 μm, from 20 to 65 μm, from 20 to 60 μm, from 20 to 55 μm, from 20 to 50 μm, from 20 to 45 μm, from 20 to 40 μm, from 20 to 35 μm, from 25 to 80 μm, from 25 to 70 μm, from 25 to 65 μm, from 25 to 60 μm, from 25 to 55 μm, from 25 to 50 μm, from 25 to 45 μm, from 25 to 40 μm, or from 25 to 35 μm). In some cases, the implantable device has a maximum diameter in a range of from 0.5 to 65 μm. In some cases, the implantable device has a maximum diameter in a range of from 10 to 65 μm. In some cases, the implantable device has a maximum diameter in a range of from 0.5 to 35 μm. In some cases, the implantable device has a maximum diameter in a range of from 10 to 35 μm. In some cases, the implantable device has a maximum cross sectional area of 5000 μm² or less (e.g., 4500 μm² or less, 4000 μm² or less, 3500 μm² or less, 3000 μm² or less, 2500 μm² or less, 2000 μm² or less, 1500 μm² or less, 1000 μm² or less, 800 μm² or less, 750 μm² or less, 700 μm² or less, 600 μm² or less, 500 μm² or less, 400 μm² or less, 300 μm² or less, 250 μm² or less, 200 μm² or less, 150 μm² or less, 100 μm² or less, 90 μm² or less, 80 μm² or less, 70 μm² or less, 60 μm² or less, 50 μm² or less, 40 μm² or less, 30 μm² or less, 20 μm² or less, or 10 μm² or less, etc.). In some cases, the implantable device has a maximum cross sectional area of 1000 μm² or less (e.g., 900 μm² or less, 800 μm² or less, 700 μm² or less, 600 μm² or less, 500 μm² or less, 400 μm² or less, 300 μm² or less, 200 μm² or less, or 100 μm² or less, etc.). In some cases, the implantable device has a maximum cross sectional area of 100 μm² or less (e.g., 90 μm² or less, 80 μm² or less, 70 μm² or less, 60 μm² or less, 50 μm² or less, 40 μm² or less, 30 μm² or less, or 20 μm² or less).
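Before continuing with the area ranges, note that the diameter and cross-sectional-area bounds above are mutually consistent for a roughly circular cross-section, as the following arithmetic check (area = πd²/4; Python used for illustration only) shows:

    import math

    def circular_area_um2(diameter_um):
        return math.pi * diameter_um ** 2 / 4.0

    # A 35 um maximum diameter implies ~962 um^2, consistent with the
    # "1000 um^2 or less" cases; an 80 um diameter implies ~5027 um^2,
    # i.e., near the 5000 um^2 bound. Non-circular profiles will differ.
    print(round(circular_area_um2(35.0)))   # 962
    print(round(circular_area_um2(80.0)))   # 5027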
In some cases, the implantable device has a maximum cross sectional area in a range of from 2.5 to 4000 μm² (e.g., from 2.5 to 3500 μm², from 2.5 to 3000 μm², from 2.5 to 2500 μm², from 2.5 to 2000 μm², from 2.5 to 1500 μm², from 2.5 to 1000 μm², from 2.5 to 500 μm², from 2.5 to 250 μm², from 2.5 to 100 μm², from 2.5 to 50 μm², from 2.5 to 10 μm², from 10 to 4000 μm², from 10 to 3500 μm², from 10 to 3000 μm², from 10 to 2500 μm², from 10 to 2000 μm², from 10 to 1500 μm², from 10 to 1000 μm², from 10 to 500 μm², from 10 to 250 μm², from 10 to 100 μm², from 10 to 50 μm², from 10 to 25 μm², from 100 to 4000 μm², from 100 to 3500 μm², from 100 to 3000 μm², from 100 to 2500 μm², from 100 to 2000 μm², from 100 to 1500 μm², from 100 to 1000 μm², from 500 to 4000 μm², from 500 to 3500 μm², from 500 to 3000 μm², from 500 to 2500 μm², from 500 to 2000 μm², from 500 to 1500 μm², from 500 to 1000 μm², from 500 to 800 μm², from 1000 to 4000 μm², from 1000 to 3500 μm², from 1000 to 3000 μm², from 1000 to 2500 μm², from 1000 to 2000 μm², from 1000 to 1500 μm², from 2000 to 4000 μm², from 2000 to 3500 μm², from 2000 to 3000 μm², from 2000 to 2500 μm², from 2500 to 4000 μm², from 2500 to 3500 μm², or from 2500 to 3000 μm²). In some cases, the implantable device has a maximum cross sectional area in a range of from 2.5 to 1000 μm². In some cases, the implantable device has a maximum cross sectional area in a range of from 2000 to 4500 μm². In some cases, the implantable device has a maximum cross sectional area in a range of from 5 to 100 μm². In some cases, the implantable device has a maximum cross sectional area in a range of from 100 to 1000 μm². Implantable devices, such as implantable probes, may be of essentially any dimension including those falling within the maximum dimensions described above. In some cases, an implantable device may be dimensioned from 0.5 μm to 100 μm by 5 μm to 1000 μm, including e.g., 0.5 μm by 1000 μm, 0.5 μm by 750 μm, 0.5 μm by 500 μm, 0.5 μm by 250 μm, 0.5 μm by 100 μm, 0.5 μm by 75 μm, 0.5 μm by 50 μm, 0.5 μm by 25 μm, 0.5 μm by 10 μm, 0.5 μm by 5 μm, 1 μm by 1000 μm, 1 μm by 750 μm, 1 μm by 500 μm, 1 μm by 250 μm, 1 μm by 100 μm, 1 μm by 75 μm, 1 μm by 50 μm, 1 μm by 25 μm, 1 μm by 10 μm, etc.

Devices and Kits

Also provided are devices and kits thereof for practicing one or more of the above-described methods. For example, devices including e.g., devices for use in fabricating microneedles according to the methods described herein are included. Such devices will vary and may include but are not limited to e.g., devices that include components for holding lengths of wire in place and/or winding wires (such as e.g., a winding jig), devices that include components for brazing wound wire (such as, e.g., a brazing machine), and the like. Devices for insertion of an implantable device into a biological tissue having an outer membrane are also included. Such devices will vary and may include but are not limited to e.g., an ablation laser, a targeting and implantation rig that includes an ablation laser, and the like.
Also included are kits for practicing the subject methods including e.g., kits for fabricating microneedles that include e.g., components of the methods described above including but not limited to e.g., wires, brazing material, one or more of the above described devices, etc. Also included are kits for implanting an implantable device into a tissue having an outer membrane that include components of the above described methods including e.g., one or more photosensitizers, one or more implantable devices, a microneedle, components for microneedle fabrication, etc. In addition to the above components, the subject kits will further include instructions for practicing the subject methods. These instructions may be present in the subject kits in a variety of forms, one or more of which may be present in the kit. One form in which these instructions may be present is as printed information on a suitable medium or substrate, e.g., a piece or pieces of paper on which the information is printed, in the packaging of the kit, in a package insert, etc. Yet another means would be a computer readable medium, e.g., diskette, CD, etc., on which the information has been recorded. Yet another means that may be present is a website address which may be used via the internet to access the information at a removed site. Any convenient means may be present in the kits.

EXPERIMENTAL

The following examples are put forth so as to provide those of ordinary skill in the art with a complete disclosure and description of how to make and use the present invention, and are not intended to limit the scope of what the inventors regard as their invention, nor are they intended to represent that the experiments below are all or the only experiments performed. Efforts have been made to ensure accuracy with respect to numbers used (e.g., amounts, temperature, etc.) but some experimental errors and deviations should be accounted for. Unless indicated otherwise, parts are parts by weight, molecular weight is weight average molecular weight, temperature is in degrees Centigrade, and pressure is at or near atmospheric.

Example 1: Microneedle Fabrication

Methods involving the insertion of an implantable device into a biological tissue may involve using a microneedle with a reversible engagement feature to implant the implantable device and subsequently retract the microneedle, leaving the implantable device implanted in the biological tissue at a desired location. One implemented design of a microneedle (100) having a reversible engagement feature (101) with implantable device (102) engaged is depicted inFIG.1. Here, two wires are twisted together to form the microneedle; one wire is made longer, by 100-150 μm, and one shorter. The longer wire is sharpened via electrochemical etching, while the shorter is blunt. These needles are often too fine (composed of wires <15 μm in diameter) to support themselves without buckling outside of a small cannula (70 μm or less ID), hence may be fabricated to be thicker along a length, particularly along the portion that resides within a telescoping section of cannula, to provide support and prevent buckling. A photograph of the microneedle schematized inFIG.1, loaded into a cartridge, showing the step engagement feature and the sharpened point at the bottom, is provided inFIG.2.
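The buckling concern just noted can be made quantitative with the Euler column formula, P_cr = π²EI/L², applied to the unsupported length of needle outside the cannula. The sketch below assumes a single round tungsten wire, a modulus of roughly 400 GPa, and pinned-pinned end conditions; the wire diameter and free length are hypothetical values chosen only to show why fine needles need telescoping support.

    import math

    E_TUNGSTEN = 400e9   # Pa, approximate Young's modulus for tungsten

    def euler_buckling_force_N(diameter_m, free_length_m, E=E_TUNGSTEN):
        i_area = math.pi * diameter_m ** 4 / 64.0   # area moment, round wire
        return math.pi ** 2 * E * i_area / free_length_m ** 2

    # A 12 um wire with a 5 mm unsupported span buckles at ~0.16 mN, which
    # is comparable to or below tissue-penetration forces, so an unsupported
    # fine needle of this kind is prone to buckle during insertion.
    print(euler_buckling_force_N(12e-6, 5e-3))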
In the instant example of microneedle fabrication, a length of fine (5-50 μm) tungsten, tungsten-rhenium, carbon, or other stiff, strong, high-melting point material, or alloys thereof, is wound in a “W” shape on a winding jig, e.g., as schematically depicted inFIG.3A. A step-by-step process of loading the winding jig is depicted inFIGS.3B-3D. Specifically, the wire is wound on the winding jig by maneuvering one end (closed circle,300) through the middle eyelet (301), the first loop (302) and back through the middle eyelet again (FIG.3B). Next, the end (closed circle,300) is passed through the second loop (303) and the bottom eyelet (304) (FIG.3C). Next, the other end (open rhombus,305) is passed through the second loop (303) and back through the top eyelet (306) (FIG.3D). Both ends are tightened under clamps and tension is maintained in the four strands. As depicted inFIG.4, once the tungsten wire is loaded in the jig, all four strands are twisted 70 times by rotating the left wheel (400), which is mounted with the second loop (401). Then two inner strands are twisted 10 times by rotating the right wheel (402), which is associated with the first loop (403). While rotating, the tungsten wire pulls the first and second loop holders inward, and the springs (404) mounted on the jig maintain tension on the wire strands. The spring elements tension the wires equally, and clamp elements (405) hold the wire ends once tension has been established. One end of the “W” is a loop which the wire runs through twice; this loop is free to turn, thereby allowing all 4 wires to be twisted into a helix. The other end, through which the wire passes only once, also has a loop that is free to turn, and allows the 2 wires to form a continuation of the 4-helix. Thus, two wires (the ends) break off from the helix 1-30 mm before the end loop, while the 2 that continue have the same helix angle. Given the helices generated at the end nearest the first loop (406), following brazing with copper-iridium as described below, the two ends of the outer strands are removed at the indicated outer-strand break points (407) and one of the inner strands is removed at the indicated inner-strand break point (408). The unbroken inner strand serves as the microneedle tip (409), while the end of the broken inner strand (410) provides an engagement feature (411). This needle winding jig has the added feature that the two ends are electrically isolated. For brazing, the needle winding jig with needle loaded (which may be referred to as a brazing jig) is then installed in a chamber. A photo of the needle brazing jig, with wire loaded prior to brazing, is provided inFIG.5. On the left, visible inFIG.5, is a connector for supplying current to heat the needle for surface oxide reduction and recrystallization. A schematic cross-sectional rendering of the brazing chamber is provided inFIG.6. Specifically,FIG.6shows the jig (1) installed on a linear slide (2), which moves laterally under control of a pulley system (3) attached to the end of vacuum feed-throughs. The melt (4) is held in a tungsten heater basket, in turn mounted to heavy current busses (5) (in turn fed by vacuum feed-throughs) which are mounted on a base which can move vertically (6). This permits the user to dip the wire into the melt under visual control through the viewing maze (7), which prevents copper from condensing on the viewing window (8) in the lid. Photos of the brazing chamber and the melt within the chamber are provided inFIG.7andFIG.8, respectively.
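As a geometric aside, the extra strand length consumed by twisting, which the jig's springs must take up while holding tension, can be estimated by unrolling the helix: each turn of axial pitch p at helix radius r uses sqrt(p² + (2πr)²) of wire. The span length and helix radius below are hypothetical; only the 70-turn count comes from the procedure above.

    import math

    def twisted_strand_length_m(span_m, turns, helix_radius_m):
        pitch = span_m / turns                      # axial length per turn
        per_turn = math.hypot(pitch, 2.0 * math.pi * helix_radius_m)
        return per_turn * turns

    # 70 turns over a hypothetical 100 mm span at a 12.5 um helix radius
    # consumes only ~0.15% extra wire, so modest spring travel suffices
    # to keep the strands in tension during winding.
    L = twisted_strand_length_m(0.100, 70, 12.5e-6)
    print((L - 0.100) / 0.100)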
During brazing, the chamber is brought down to a vacuum of <10 mTorr before a shielding gas is flowed through, a mixture of very dry argon and hydrogen (in practice 20% H2 in Ar at a pressure of 500-600 mTorr). This gas mixture is continuously pumped out, so any water vapor produced is rapidly removed. It has been found that the pressure ratio of H2:H2O should be sufficiently high to reduce any surface oxides on the most oxidizable surface, usually tungsten, at the braze and recrystallization temperatures, as estimated via an Ellingham diagram. Then, bias current is passed through the needle wires until they are >1000° C., at which point the oxides on the surface of the wire are reduced (or sublimate), leaving a clean metal surface. This temperature can be, and usually is, further adjusted to recrystallize the wire, which facilitates the generation of the engagement feature. This temperature is currently measured by examining the color of the wire by eye through the viewing port, but could also be measured by an optical pyrometer, via resistance changes of the underlying wire, or by other means. The recrystallization step has an added feature that, due to reduced emission area and nonlinear dependence of resistance on temperature, only the helix gets above the critical temperature, leaving the free ends (effectively those after the break-away points) in the ductile state; this makes fatiguing and handling the free ends considerably easier, and biases the break point close to the helix. Following bias recrystallization, a mass of element/alloy immiscible with the wire material above the liquidus temperature (in this example, copper in a carbon or boron-nitride (BN) crucible) is brought significantly above its melting temperature and raised to meet the wire. Copper has a very high contact angle with carbon and BN, which means that the meniscus of molten copper can significantly exceed the lip of the crucible, facilitating this step. The carbon crucible also serves to scavenge any available O2 in the system, which is desirable. In a carbon crucible, dangerous tungsten carbide can form on the bare tungsten; using a boron-nitride crucible circumvents this issue and allows the use of carbide-forming elements in the braze alloy. The needle-winding jig is then smoothly moved laterally several times at approximately 1 cm/s by use of the pulley system identified inFIG.6, such that the molten braze metal joins the multiple wires into a single part. Pure Cu and Cu—Ir alloy have been repeatedly tested and found to be sufficient for needle brazing. However, as described in detail above, other elements/alloys may be employed. The nascent needle is then removed from the molten braze material, the heating elements to the crucible are disengaged, and the vacuum chamber is allowed to cool prior to being flushed with argon or another inert gas. At this point, it is possible and advisable to pass current to heat the needle to a tempering temperature of about 500° C. for a period of about 30 minutes to improve the strength and modulus of the braze. The chamber is then opened, and the needle-winding jig is removed. Then, three of the four wires that exit the helix are removed by fatiguing the wire at the points described above. Two of the constituent wires exit the helix 1-30 mm from the last, and provide strength to prevent buckling within the telescoping cannula.
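The H2:H2O criterion above can be framed thermodynamically: for the reduction MOx + xH2 ⇌ M + xH2O, the minimum H2:H2O ratio at a given temperature follows from the free-energy gap between the oxide-formation and water-formation reactions (the two Ellingham lines). The sketch below shows only the form of that calculation; the linear ΔG°(T) coefficients are placeholder values, not vetted thermodynamic data, and a real analysis would take them from an Ellingham diagram or standard tables.

    import math

    R_GAS = 8.314  # J/(mol*K)

    # Placeholder Ellingham-style fits dG(T) = a + b*T, per mol O2 consumed.
    # These coefficients are illustrative stand-ins, not tabulated data.
    def dG_oxide_J(T):  return -585e3 + 170.0 * T   # M + O2 -> MO2 (assumed)
    def dG_water_J(T):  return -495e3 + 112.0 * T   # 2H2 + O2 -> 2H2O (assumed)

    def min_h2_to_h2o_ratio(T):
        # MO2 + 2H2 <-> M + 2H2O: dG_rxn = dG_water - dG_oxide, with
        # K = (pH2O/pH2)^2, so the threshold pH2/pH2O is exp(-ln(K)/2).
        ln_K = -(dG_water_J(T) - dG_oxide_J(T)) / (R_GAS * T)
        return math.exp(-0.5 * ln_K)

    # With these placeholder numbers, reduction at 1300 K needs pH2/pH2O of
    # roughly 2; very dry gas and continuous pumping keep the ratio high.
    print(min_h2_to_h2o_ratio(1300.0))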
The annealing/reducing temperature significantly influences the wire characteristics, which in turn influence the fracture at the engagement feature. It was found that when a lower annealing/reducing temperature was employed, the wire retained its ductile nature, leading to a barb-shaped, rather than a step-shaped, engagement feature. It was also found that when a higher annealing/reducing temperature was employed, the entire needle was considerably more brittle. This increase in the overall brittle character of the needle leads to needles that are likely to fracture during assembly and during use. As drawn, the wire has very elongated crystalline domains, which leads to ductility and high tensile strength of the material; these domains become more regular, and orient perpendicular to the direction of applied stress (tension due to springs in the needle-winding jig), during annealing and recrystallization. The addition of rhenium to the tungsten wire provides two advantages at this point: it raises the recrystallization temperature, so that the window between surface-oxide reduction and recrystallization is larger, and it increases the strength and modulus of the resulting needle. The quality and angle of the engagement features are affected by the recrystallization temperature that the wire is subjected to before being dipped in the copper (Cu) melt. Successful fabrication of a microneedle with desired characteristics has been empirically verified using recrystallization temperatures of about 1300° C. for pure tungsten (W) wire, and 1600° C. for tungsten-rhenium (W—26% Re) wire. In instances where the shorter wire tends to break near (e.g., about 10 μm from) where the braze fillet ends, leading to a “fork” in which thin electrodes can be stuck during insertion, very fine sandpaper (e.g., 10 μm grit) may be employed to remedy this situation. In such instances, the very fine sandpaper is delicately run along the needle, with three wires removed but one wire still attached to the needle-winding jig, until the final step is eroded to the point that no fork can be observed and the needle is essentially as drawn in the above figures. Young's modulus has been measured for both Cu and Cu—Ir brazed needles to be 379 GPa and 383 GPa, respectively, with an accurate model of the area moment of inertia for the 4-wire cross-section. Data for force vs. deflection is shown inFIG.9. At this point, the needle is loaded into a quadruple-telescoping cartridge, the far end is attached to the upper part of this cartridge, and this device is installed in another machine for inspecting and etching the needle, shown inFIG.10. This machine consists of a microscope and two etchant wells. The first etchant well is FeCl3, for removing Cu from the surface of the needle; the microscope is used to dip the needle into the solution just to the point of the shoulder/final break. The second is 1 M NaOH, which is used with 1-6 V AC to electrolytically etch the longest part of the needle to an extremely fine point. This machine also affords assessment of needle motion, which can be impeded if any dust entered the cannula during assembly.

Alternative Brazing Processes

An alternative brazing process has been tested. This alternative process involves the use of elements in the braze that bring tungsten into solution and thereby alloy substantially with this base metal. In this scheme, a copper or gold base (solvent) is used to dissolve Ni, Cr, Fe, or Co to saturation.
These stay in solution until the wire, heated by laser or resistance, evaporates away the copper, and the Ni/Fe/Cr/Co is sufficiently above liquidus temperature to alloy directly with the W. The alloying will increase the liquidus temperature, stopping the wire from completely going into solution, and, if properly controlled, results in a strong, stiff bond. A second example of this process is to evaporate or sputter a controlled quantity of alloying element on the surface of a tungsten-rhenium (W/W—Re) wire. This surface coating is then melted via laser, alloying with the W, and forms the desired fillet with the wires. It has been found that in either of these approaches the quantity of element that directly alloys with W should be kept insufficient to penetrate the bulk of the wire. Where such quantity is not kept insufficient, the element may penetrate the bulk of the wire, causing it to break under the tension of the winding jig.

Example 2: Laser Micro-Drilling

The needle as fabricated as described herein is generally so fine and small that, in certain applications, when loaded with an electrode, it may not be able to reliably penetrate a desired biological tissue. For example, it was found that a needle as fabricated as described herein may fail to reliably penetrate rat dura, buckling in approximately 1 out of 10 insertions. In some instances as well, when an electrode/needle combination does penetrate a desired biological tissue (e.g., rat dura), it may pull a significant mass of surface material (e.g., collagen/elastin from the fibrous meningeal tissue) along with it. This situation may cause the electrode to adhere to the needle sufficiently tightly such that approximately one out of two inserted electrodes is removed with the needle, even when ballistic retraction is employed. It was found that rat dura is approximately the same thickness/toughness as the non-human primate (NHP) pia-arachnoid complex (PAC). PAC, whether rat or NHP, is not easily dissected away. Durotomy in the rat, although not necessarily required to insert electrodes, can be avoided by employing the following methods, thereby reducing trauma to the brain and improving surgical outcomes. To remedy both these issues and provide other benefits (e.g., to allow for further reducing the needle and/or electrode size), a system for laser-drilling micro-holes (e.g., in the dura (rat) or PAC (NHP)) has been developed and integrated into implantation procedures. Specifically, in the instant example a Q-switched green (527 nm) laser was employed for drilling micro-holes in dura and PAC dyed with Erythrosin B, a food-grade dye with a strong absorption peak at the emission wavelength. Unlike typical laser surgery approaches, here a non-ionizing wavelength was employed which is weakly absorbed by the tissue. Thus, when diffusion of the dye is controlled, off-target damage can also be controlled and minimized. This approach also makes possible the use of an ionizing (e.g., UV) Q-switched or pulsed laser to ablate the tissue without heating or cautery effect. A schematic image of the system employing this micro-hole laser drilling approach is provided inFIG.11, and a photograph of the working laser drill, removed from the integrated system, is provided inFIG.12.
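For pulsed micro-drilling of this kind, the working dose metric is the per-pulse fluence at the focus (pulse energy divided by spot area). The pulse energy and spot diameter below are hypothetical values chosen only to illustrate the arithmetic; they are not parameters of the system shown in the figures.

    import math

    def fluence_J_per_cm2(pulse_energy_J, spot_diameter_um):
        r_cm = spot_diameter_um * 1e-4 / 2.0   # um -> cm, then radius
        return pulse_energy_J / (math.pi * r_cm ** 2)

    # e.g., a hypothetical 10 uJ pulse focused to a 25 um spot:
    print(round(fluence_J_per_cm2(10e-6, 25.0), 2))   # ~2.04 J/cm^2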
The preceding merely illustrates the principles of the invention. It will be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Furthermore, all examples and conditional language recited herein are principally intended to aid the reader in understanding the principles of the invention and the concepts contributed by the inventors to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents and equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. The scope of the present invention, therefore, is not intended to be limited to the exemplary embodiments shown and described herein. Rather, the scope and spirit of the present invention is embodied by the appended claims.
DETAILED DESCRIPTION

Non-limiting examples of various aspects and variations of the invention are described herein and illustrated in the accompanying drawings. As generally described herein, an analyte monitoring system may include an analyte monitoring device that is worn by a user and includes one or more sensors for monitoring at least one analyte of a user. The sensors may, for example, include one or more electrodes configured to perform electrochemical detection of at least one analyte. The analyte monitoring device may communicate sensor data to an external computing device for storage, display, and/or analysis of sensor data. For example, as shown inFIG.1, an analyte monitoring system100may include an analyte monitoring device110that is worn by a user, and the analyte monitoring device110may be a continuous analyte monitoring device (e.g., continuous glucose monitoring device). The analyte monitoring device110may include, for example, a microneedle array comprising at least one electrochemical sensor for detecting and/or measuring one or more analytes in body fluid of a user. In some variations, the analyte monitoring device may be applied to the user using a suitable applicator160, or may be applied manually. The analyte monitoring device110may include one or more processors for performing analysis on sensor data, and/or a communication module (e.g., wireless communication module) configured to communicate sensor data to a mobile computing device102(e.g., smartphone) or other suitable computing device. In some variations, the mobile computing device102may include one or more processors executing a mobile application to handle sensor data (e.g., displaying data, analyzing data for trends, etc.) and/or provide suitable alerts or other notifications related to the sensor data and/or analysis thereof. It should be understood that while in some variations the mobile computing device102may perform sensor data analysis locally, other computing device(s) may alternatively or additionally remotely analyze sensor data and/or communicate information related to such analysis with the mobile computing device102(or other suitable user interface) for display to the user. Furthermore, in some variations the mobile computing device102may be configured to communicate sensor data and/or analysis of the sensor data over a network104to one or more storage devices106(e.g., server) for archiving data and/or other suitable information related to the user of the analyte monitoring device. The analyte monitoring devices described herein have characteristics that improve a number of properties that are advantageous for a continuous analyte monitoring device such as a continuous glucose monitoring (CGM) device. For example, the analyte monitoring devices described herein have improved sensitivity (amount of sensor signal produced per given concentration of target analyte), improved selectivity (rejection of endogenous and exogenous circulating compounds that can interfere with the detection of the target analyte), and improved stability to help minimize change in sensor response over time through storage and operation of the analyte monitoring device.
Additionally, compared to conventional continuous analyte monitoring devices, the analyte monitoring devices described herein have a shorter warm-up time that enables the sensor(s) to quickly provide a stable sensor signal following implantation, as well as a short response time that enables the sensor(s) to quickly provide a stable sensor signal following a change in analyte concentration in the user. Furthermore, as described in further detail below, the analyte monitoring devices described herein may be applied to and function in a variety of wear sites, and provide for pain-free sensor insertion for the user. Other properties such as biocompatibility, sterilizability, and mechanical integrity are also optimized in the analyte monitoring devices described herein. Although the analyte monitoring systems described herein may be described with reference to monitoring of glucose (e.g., in users with Type 2 diabetes, Type 1 diabetes), it should be understood that such systems may additionally or alternatively be configured to sense and monitor other suitable analytes. As described in further detail below, suitable target analytes for detection may, for example, include glucose, ketones, lactate, and cortisol. One target analyte may be monitored, or multiple target analytes may be simultaneously monitored (e.g., in the same analyte monitoring device). For example, monitoring of other target analytes may enable the monitoring of other indications such as stress (e.g., through detection of rising cortisol and glucose) and ketoacidosis (e.g., through detection of rising ketones). As shown in FIG. 2A, in some variations, an analyte monitoring device 110 may generally include a housing 112 and a microneedle array 140 extending outwardly from the housing. The housing 112 may, for example, be a wearable housing configured to be worn on the skin of a user such that the microneedle array 140 extends at least partially into the skin of the user. For example, the housing 112 may include an adhesive such that the analyte monitoring device 110 is a skin-adhered patch that is simple and straightforward for application to a user. The microneedle array 140 may be configured to puncture the skin of the user and include one or more electrochemical sensors (e.g., electrodes) configured for measuring one or more target analytes that are accessible after the microneedle array 140 punctures the skin of the user. In some variations, the analyte monitoring device 110 may be integrated or self-contained as a single unit, and the unit may be disposable (e.g., used for a period of time and replaced with another instance of the analyte monitoring device 110). An electronics system 120 may be at least partially arranged in the housing 112 and include various electronic components, such as sensor circuitry 124 configured to perform signal processing (e.g., biasing and readout of electrochemical sensors, converting the analog signals from the electrochemical sensors to digital signals, etc.). The electronics system 120 may also include at least one microcontroller 122 for controlling the analyte monitoring device 110, at least one communication module 126, at least one power source 130, and/or other various suitable passive circuitry 127.
The microcontroller 122 may, for example, be configured to interpret digital signals output from the sensor circuitry 124 (e.g., by executing a programmed routine in firmware), perform various suitable algorithms or mathematical transformations (e.g., calibration, etc.), and/or route processed data to and/or from the communication module 126. In some variations, the communication module 126 may include a suitable wireless transceiver (e.g., Bluetooth transceiver or the like) for communicating data with an external computing device 102 via one or more antennas 128. For example, the communication module 126 may be configured to provide uni-directional and/or bi-directional communication of data with an external computing device 102 that is paired with the analyte monitoring device 110. The power source 130 may provide power for the analyte monitoring device 110, such as for the electronics system. The power source 130 may include a battery or other suitable source, and may, in some variations, be rechargeable and/or replaceable. Passive circuitry 127 may include various non-powered electrical circuitry (e.g., resistors, capacitors, inductors, etc.) providing interconnections between other electronic components, etc. The passive circuitry 127 may be configured to provide noise reduction, biasing, and/or other functions, for example. In some variations, the electronic components in the electronics system 120 may be arranged on one or more printed circuit boards (PCB), which may be rigid, semi-rigid, or flexible, for example. Additional details of the electronics system 120 are described further below. In some variations, the analyte monitoring device 110 may further include one or more additional sensors 150 to provide additional information that may be relevant for user monitoring. For example, the analyte monitoring device 110 may further include at least one temperature sensor (e.g., thermistor) configured to measure skin temperature, thereby enabling temperature compensation for the sensor measurements obtained by the microneedle array electrochemical sensors. In some variations, the microneedle array 140 in the analyte monitoring device 110 may be configured to puncture skin of a user. As shown in FIG. 2B, when the device 110 is worn by the user, the microneedle array 140 may extend into the skin of the user such that electrodes on distal regions of the microneedles rest in the dermis. Specifically, in some variations, the microneedles may be designed to penetrate the skin and access the upper dermal region (e.g., papillary dermis and upper reticular dermis layers) of the skin, in order to enable the electrodes to access interstitial fluid that surrounds the cells in these layers. For example, in some variations, the microneedles may have a height generally ranging between about 350 μm and about 515 μm. In some variations, one or more microneedles may extend from the housing such that a distal end of the electrode on the microneedle is located less than about 5 mm from a skin-interfacing surface of the housing, less than about 4 mm from the housing, less than about 3 mm from the housing, less than about 2 mm from the housing, or less than about 1 mm from the housing.
In contrast to traditional continuous analyte monitoring devices (e.g., CGM devices), which include sensors typically implanted between about 8 mm and about 10 mm beneath the skin surface in the subcutis or adipose layer of the skin, the analyte monitoring device 110 has a shallower microneedle insertion depth of about 0.25 mm (such that electrodes are implanted in the upper dermal region of the skin) that provides numerous benefits. These benefits include access to dermal interstitial fluid including one or more target analytes for detection, which is advantageous because at least some types of analyte measurements of dermal interstitial fluid have been found to closely correlate to those of blood. For example, it has been discovered that glucose measurements performed using electrochemical sensors accessing dermal interstitial fluid are advantageously highly linearly correlated with blood glucose measurements. Accordingly, glucose measurements based on dermal interstitial fluid are highly representative of blood glucose measurements. Additionally, because of the shallower microneedle insertion depth of the analyte monitoring device 110, a reduced time delay in analyte detection is obtained compared to traditional continuous analyte monitoring devices. Such a shallower insertion depth positions the sensor surfaces in close proximity (e.g., within a few hundred micrometers or less) to the dense and well-perfused capillary bed of the reticular dermis, resulting in a negligible diffusional lag from the capillaries to the sensor surface. Diffusion time is related to diffusion distance according to t = x²/(2D), where t is the diffusion time, x is the diffusion distance, and D is the mass diffusivity of the analyte of interest. Therefore, positioning an analyte sensing element twice as far away from the source of an analyte in a capillary will result in a quadrupling of the diffusional delay time. Accordingly, conventional analyte sensors, which reside in the very poorly vascularized adipose tissue beneath the dermis, result in a significantly greater diffusion distance from the vasculature in the dermis and thus a substantial diffusional latency (e.g., typically 5-20 minutes). In contrast, the shallower microneedle insertion depth of the analyte monitoring device 110 benefits from low diffusional latency from capillaries to the sensor, thereby reducing time delay in analyte detection and providing more accurate results in real-time or near real-time. For example, in some embodiments, diffusional latency may be less than 10 minutes, less than 5 minutes, or less than 3 minutes. Furthermore, when the microneedle array rests in the upper dermal region, the lower dermis beneath the microneedle array includes very high levels of vascularization and perfusion to support the dermal metabolism, which enables thermoregulation (via vasoconstriction and/or vasodilation) and provides a barrier function to help stabilize the sensing environment around the microneedles. Yet another advantage of the shallower insertion depth is that the upper dermal layers lack pain receptors, thus resulting in a reduced pain sensation when the microneedle array punctures the skin of the user, and providing for a more comfortable, minimally-invasive user experience. Thus, the analyte monitoring devices and methods described herein enable improved continuous monitoring of one or more target analytes of a user.
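By way of illustration only, the relation t = x²/(2D) above may be evaluated for two sensor-to-capillary distances; in the following sketch, the diffusivity is an assumed, order-of-magnitude figure for glucose in interstitial fluid and is not a value specified by this disclosure.

```python
# Illustrative use of t = x^2 / (2*D); D is an assumed, order-of-magnitude
# diffusivity for glucose in interstitial fluid (not specified herein).
D_GLUCOSE_CM2_S = 6.7e-6  # cm^2/s, assumed value

def diffusion_time_s(distance_cm: float, diffusivity_cm2_s: float = D_GLUCOSE_CM2_S) -> float:
    """Diffusion time t = x^2 / (2D), in seconds."""
    return distance_cm ** 2 / (2.0 * diffusivity_cm2_s)

t_near = diffusion_time_s(0.02)  # 200 um from the capillary bed: ~30 s
t_far = diffusion_time_s(0.04)   # doubled distance: ~119 s
assert abs(t_far / t_near - 4.0) < 1e-9  # doubling distance quadruples the delay
```

This quadrupling with doubled distance is the scaling noted above, and it illustrates why millimeter-scale diffusion distances in the subcutis can translate into multi-minute latencies.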
For example, as described above, the analyte monitoring device may be simple and straightforward to apply, which improves ease-of-use and user compliance. Additionally, analyte measurements of dermal interstitial fluid may provide for highly accurate analyte detection. Furthermore, compared to traditional continuous analyte monitoring devices, insertion of the microneedle array and its sensors may be less invasive and involve less pain for the user. Additional advantages of other aspects of the analyte monitoring devices and methods are further described below. As shown in the schematic of FIG. 3A, in some variations, a microneedle array 300 for use in sensing one or more analytes may include one or more microneedles 310 projecting from a substrate surface 302. The substrate surface 302 may, for example, be generally planar and one or more microneedles 310 may project orthogonally from the planar surface. Generally, as shown in FIG. 3B, a microneedle 310 may include a body portion 312 (e.g., shaft) and a tapered distal portion 314 configured to puncture skin of a user. In some variations, the tapered distal portion 314 may terminate in an insulated distal apex 316. The microneedle 310 may further include an electrode 320 on a surface of the tapered distal portion. In some variations, electrode-based measurements may be performed at the interface of the electrode and interstitial fluid located within the body (e.g., on an outer surface of the overall microneedle). In some variations, the microneedle 310 may have a solid core (e.g., solid body portion), though in some variations the microneedle 310 may include one or more lumens, which may be used for drug delivery or sampling of the dermal interstitial fluid, for example. Other microneedle variations, such as those described below, may similarly either include a solid core or one or more lumens. The microneedle array 300 may be at least partially formed from a semiconductor (e.g., silicon) substrate and include various material layers applied and shaped using various suitable microelectromechanical systems (MEMS) manufacturing techniques (e.g., deposition and etching techniques), as further described below. The microneedle array may be reflow-soldered to a circuit board, similar to a typical integrated circuit. Furthermore, in some variations the microneedle array 300 may include a three-electrode setup including a working (sensing) electrode having an electrochemical sensing coating (including a biorecognition element such as an enzyme) that enables detection of a target analyte, a reference electrode, and a counter electrode. In other words, the microneedle array 300 may include at least one microneedle 310 that includes a working electrode, at least one microneedle 310 including a reference electrode, and at least one microneedle 310 including a counter electrode. Additional details of these types of electrodes are described in further detail below. In some variations, the microneedle array 300 may include a plurality of microneedles that are insulated such that the electrode on each microneedle in the plurality of microneedles is individually addressable and electrically isolated from every other electrode on the microneedle array. The resulting individual addressability of the microneedle array 300 may enable greater control over each electrode's function, since each electrode may be separately probed.
For example, the microneedle array 300 may be used to provide multiple independent measurements of a given target analyte, which improves the device's sensing reliability and accuracy. Furthermore, in some variations the electrodes of multiple microneedles may be electrically connected to produce augmented signal levels. As another example, the same microneedle array 300 may additionally or alternatively be interrogated to simultaneously measure multiple analytes to provide a more comprehensive assessment of physiological status. For example, as shown in the schematic of FIG. 4, a microneedle array may include a portion of microneedles to detect a first Analyte A, a second portion of microneedles to detect a second Analyte B, and a third portion of microneedles to detect a third Analyte C. It should be understood that the microneedle array may be configured to detect any suitable number of analytes (e.g., 1, 2, 3, 4, 5 or more, etc.). Suitable target analytes for detection may, for example, include glucose, ketones, lactate, and cortisol. Thus, individual electrical addressability of the microneedle array 300 provides greater control and flexibility over the sensing function of the analyte monitoring device. In some variations of microneedles (e.g., microneedles with a working electrode), the electrode 320 may be located proximal to the insulated distal apex 316 of the microneedle. In other words, in some variations the electrode 320 does not cover the apex of the microneedle. Rather, the electrode 320 may be offset from the apex or tip of the microneedle. The electrode 320 being proximal to or offset from the insulated distal apex 316 of the microneedle advantageously provides more accurate sensor measurements. For example, this arrangement prevents concentration of the electric field at the microneedle apex 316 during manufacturing, thereby avoiding non-uniform electro-deposition of sensing chemistry on the surface of the electrode 320 that would result in faulty sensing. As another example, placing the electrode 320 offset from the microneedle apex further improves sensing accuracy by reducing undesirable signal artefacts and/or erroneous sensor readings caused by stress upon microneedle insertion. The distal apex of the microneedle is the first region to penetrate into the skin, and thus experiences the most stress caused by the mechanical shear phenomena accompanying the tearing or cutting of the skin. If the electrode 320 were placed on the apex or tip of the microneedle, this mechanical stress may delaminate the electrochemical sensing coating on the electrode surface when the microneedle is inserted, and/or cause a small yet interfering amount of tissue to be transported onto the active sensing portion of the electrode. Thus, placing the electrode 320 sufficiently offset from the microneedle apex may improve sensing accuracy. For example, in some variations, a distal edge of the electrode 320 may be located at least about 10 μm (e.g., between about 20 μm and about 30 μm) from the distal apex or tip of the microneedle, as measured along a longitudinal axis of the microneedle. The body portion 312 of the microneedle 310 may further include an electrically conductive pathway extending between the electrode 320 and a backside electrode or other electrical contact (e.g., arranged on a backside of the substrate of the microneedle array). The backside electrode may be soldered to a circuit board, enabling electrical communication with the electrode 320 via the conductive pathway.
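Because each electrode is individually addressable, device firmware may, in principle, maintain a simple map from electrode index to assigned analyte and combine redundant readings, in the spirit of FIG. 4. The following sketch is illustrative only; the channel indices, analyte assignments, and averaging scheme are assumptions rather than details taken from this disclosure.

```python
from statistics import mean

# Hypothetical assignment of individually addressable working electrodes to
# analytes (Analyte A/B/C as in FIG. 4); the indices are illustrative.
assignment = {
    "glucose": [0, 1, 2],   # three redundant working electrodes for Analyte A
    "lactate": [3, 4],      # two working electrodes for Analyte B
    "ketones": [5, 6],      # two working electrodes for Analyte C
}

def redundant_estimate(raw_currents_nA: dict, electrodes: list) -> float:
    """Average independent readings from redundant working electrodes,
    reducing the influence of any single extreme sensor signal."""
    return mean(raw_currents_nA[i] for i in electrodes)

raw = {0: 12.1, 1: 11.8, 2: 12.4, 3: 5.0, 4: 5.2, 5: 0.9, 6: 1.1}
glucose_signal = redundant_estimate(raw, assignment["glucose"])  # ~12.1 nA
```

Averaging in this way reflects the redundancy benefit described above, in which extreme readings from any single electrode have less influence on the reported value.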
For example, during use, the in-vivo sensing current (inside the dermis) measured at a working electrode is interrogated by the backside electrical contact, and the electrical connection between the backside electrical contact and the working electrode is facilitated by the conductive pathway. In some variations, this conductive pathway may be facilitated by a metal via running through the interior of the microneedle body portion (e.g., shaft) between the microneedle's proximal and distal ends. Alternatively, in some variations the conductive pathway may be provided by the entire body portion being formed of a conductive material (e.g., doped silicon). In some of these variations, the complete substrate upon which the microneedle array 300 is built may be electrically conductive, and each microneedle 310 in the microneedle array 300 may be electrically isolated from adjacent microneedles 310 as described below. For example, in some variations, each microneedle 310 in the microneedle array 300 may be electrically isolated from adjacent microneedles 310 with an insulative barrier including electrically insulative material (e.g., dielectric material such as silicon dioxide) that surrounds the conductive pathway extending between the electrode 320 and backside electrical contact. For example, body portion 312 may include an insulative material that forms a sheath around the conductive pathway, thereby preventing electrical communication between the conductive pathway and the substrate. Other example variations of structures enabling electrical isolation among microneedles are described in further detail below. Such electrical isolation among microneedles in the microneedle array permits the sensors to be individually addressable. This individual addressability advantageously enables independent and parallelized measurement among the sensors, as well as dynamic reconfiguration of sensor assignment (e.g., to different analytes). In some variations, the electrodes in the microneedle array can be configured to provide redundant analyte measurements, which is an advantage over conventional analyte monitoring devices. For example, redundancy can improve performance by improving accuracy (e.g., averaging multiple analyte measurement values for the same analyte, which reduces the effect of extreme high or low sensor signals on the determination of analyte levels) and/or improving reliability of the device by reducing the likelihood of total failure. In some variations, as described in further detail below with respect to different variations of the microneedle, the microneedle array may be formed at least in part with suitable semiconductor and/or MEMS fabrication techniques and/or mechanical cutting or dicing. Such processes may, for example, be advantageous for enabling large-scale, cost-efficient manufacturing of microneedle arrays. In some variations, a microneedle may have a generally columnar body portion and a tapered distal portion with an electrode. For example, FIGS. 5A-5C illustrate an example variation of a microneedle 500 extending from a substrate 502. FIG. 5A is a side cross-sectional view of a schematic of microneedle 500, while FIG. 5B is a perspective view of the microneedle 500 and FIG. 5C is a detailed perspective view of a distal portion of the microneedle 500. As shown in FIGS. 5B and 5C, the microneedle 500 may include a columnar body portion 512, a tapered distal portion 514 terminating in an insulated distal apex 516, and an annular electrode 520 that includes a conductive material (e.g., Pt, Ir, Au, Ti, Cr, Ni, etc.)
and is arranged on the tapered distal portion 514. As shown in FIG. 5A, the annular electrode 520 may be proximal to (or offset or spaced apart from) the distal apex 516. For example, the electrode 520 may be electrically isolated from the distal apex 516 by a distal insulating surface 515a including an insulating material (e.g., SiO2). In some variations, the electrode 520 may also be electrically isolated from the columnar body portion 512 by a second distal insulating surface 515b. The electrode 520 may be in electrical communication with a conductive core 540 (e.g., conductive pathway) passing along the body portion 512 to a backside electrical contact 530 (e.g., made of Ni/Au alloy) or other electrical pad in or on the substrate 502. For example, the body portion 512 may include a conductive core material (e.g., highly doped silicon). As shown in FIG. 5A, in some variations, an insulating moat 513 including an insulating material (e.g., SiO2) may be arranged around (e.g., around the perimeter of) the body portion 512 and extend at least partially through the substrate 502. Accordingly, the insulating moat 513 may, for example, help prevent electrical contact between the conductive core 540 and the surrounding substrate 502. The insulating moat 513 may further extend over the surface of the body portion 512. Upper and/or lower surfaces of the substrate 502 may also include a layer of substrate insulation 504 (e.g., SiO2). Accordingly, the insulation provided by the insulating moat 513 and/or substrate insulation 504 may contribute at least in part to the electrical isolation of the microneedle 500 that enables individual addressability of the microneedle 500 within a microneedle array. Furthermore, in some variations the insulating moat 513 extending over the surface of the body portion 512 may function to increase the mechanical strength of the microneedle 500 structure. The microneedle 500 may be formed at least in part by suitable MEMS fabrication techniques such as plasma etching, also called dry etching. For example, in some variations, the insulating moat 513 around the body portion 512 of the microneedle may be made by first forming a trench in a silicon substrate by deep reactive ion etching (DRIE) from the backside of the substrate, then filling that trench with a sandwich structure of SiO2/polycrystalline silicon (poly-Si)/SiO2 by low pressure chemical vapor deposition (LPCVD) or other suitable process. In other words, the insulating moat 513 may passivate the surface of the body portion 512 of the microneedle, and continue as a buried feature in the substrate 502 near the proximal portion of the microneedle. By including largely compounds of silicon, the insulating moat 513 may provide good fill and adhesion to the adjoining silicon walls (e.g., of the conductive core 540, substrate 502, etc.). The sandwich structure of the insulating moat 513 may further help provide excellent matching of coefficient of thermal expansion (CTE) with the adjacent silicon, thereby advantageously reducing faults, cracks, and/or other thermally-induced weaknesses in the insulating moat 513. The tapered distal portion may be fashioned by an isotropic dry etch from the frontside of the substrate, and the body portion 512 of the microneedle 500 may be formed by DRIE. The frontside metal electrode 520 may be deposited and patterned on the distal portion by specialized lithography (e.g., electron-beam evaporation) that permits metal deposition in the desired annular region for the electrode 520 without coating the distal apex 516.
Furthermore, the backside electrical contact 530 of Ni/Au may be deposited by suitable MEMS manufacturing techniques (e.g., sputtering). The microneedle 500 may have any suitable dimensions. By way of illustration, the microneedle 500 may, in some variations, have a height of between about 300 μm and about 500 μm. In some variations, the tapered distal portion 514 may have a tip angle between about 60 degrees and about 80 degrees, and an apex diameter of between about 1 μm and about 15 μm. In some variations, the surface area of the annular electrode 520 may be between about 9,000 μm² and about 11,000 μm², or about 10,000 μm². As described above, each microneedle in the microneedle array may include an electrode. In some variations, multiple distinct types of electrodes may be included among the microneedles in the microneedle array. For example, in some variations the microneedle array may function as an electrochemical cell operable in an electrolytic manner with three types of electrodes. In other words, the microneedle array may include at least one working electrode, at least one counter electrode, and at least one reference electrode. Thus, the microneedle array may include three distinct electrode types, though one or more of each electrode type may form a complete system (e.g., the system might include multiple distinct working electrodes). Furthermore, multiple distinct microneedles may be electrically joined to form an effective electrode type (e.g., a single working electrode may be formed from two or more connected microneedles with working electrode sites). Each of these electrode types may include a metallization layer and may include one or more coatings or layers over the metallization layer that help facilitate the function of that electrode. Generally, the working electrode is the electrode at which the oxidation and/or reduction reaction of interest occurs for detection of an analyte of interest. The counter electrode functions to source (provide) or sink (accumulate) the electrons, via an electrical current, that are required to sustain the electrochemical reaction at the working electrode. The reference electrode functions to provide a reference potential for the system; that is, the electrical potential at which the working electrode is biased is referenced to the reference electrode. A fixed, time-varying, or at least controlled potential relationship is established between the working and reference electrodes, and within practical limits no current is sourced from or sunk to the reference electrode. Additionally, to implement such a three-electrode system, the analyte monitoring device may include a suitable potentiostat or electrochemical analog front end to maintain a fixed potential relationship between the working electrode and reference electrode contingents within the electrochemical system (via an electronic feedback mechanism), while permitting the counter electrode to dynamically swing to potentials required to sustain the redox reaction of interest. Working Electrode As described above, the working electrode is the electrode at which the oxidation and/or reduction reaction of interest occurs. In some variations, sensing may be performed at the interface of the working electrode and interstitial fluid located within the body (e.g., on an outer surface of the overall microneedle).
In some variations, a working electrode may include an electrode material and a biorecognition layer in which a biorecognition element (e.g., enzyme) is immobilized on the working electrode to facilitate selective analyte quantification. In some variations, the biorecognition layer may also function as an interference-blocking layer and may help prevent endogenous and/or exogenous species from directly oxidizing (or reducing) at the electrode. A redox current detected at the working electrode may be correlated to a detected concentration of an analyte of interest. This is because, assuming a steady-state, diffusion-limited system, the redox current detected at the working electrode follows the Cottrell relation below: i(t) = nFAC√(D/(πt)), where n is the stoichiometric number of electrons mediating the redox reaction, F is Faraday's constant, A is the electrode surface area, D is the diffusion coefficient of the analyte of interest, C is the concentration of the analyte of interest, and t is the duration of time that the system is biased with an electrical potential. Thus, the detected current at the working electrode scales linearly with the analyte concentration. Moreover, because the detected current is a direct function of electrode surface area A, the surface area of the electrode may be increased to enhance the sensitivity (e.g., amperes per molar of analyte) of the sensor. For example, multiple singular working electrodes may be grouped into arrays of two or more constituents to increase total effective sensing surface area. Additionally or alternatively, to obtain redundancy, multiple working electrodes may be operated as parallelized sensors to obtain a plurality of independent measures of the concentration of an analyte of interest. The working electrode can either be operated as the anode (such that an analyte is oxidized at its surface), or as the cathode (such that an analyte is reduced at its surface). FIG. 6A depicts a schematic of an exemplary set of layers for a working electrode 610. For example, as described above, in some variations the working electrode 610 may include an electrode material 612 and a biorecognition layer including a biorecognition element. The electrode material 612 functions to encourage the electrocatalytic detection of an analyte or the product of the reaction of the analyte and the biorecognition element. The electrode material 612 also provides ohmic contact and routes an electrical signal from the electrocatalytic reaction to processing circuitry. In some variations, the electrode material 612 may include platinum as shown in FIG. 6A. However, the electrode material 612 may alternatively include, for example, palladium, iridium, rhodium, gold, ruthenium, titanium, nickel, carbon, doped diamond, or other suitable catalytic and inert material. In some variations, the electrode material 612 may be coated with a highly porous electrocatalytic layer, such as a platinum black layer 613, which may augment the electrode surface area for enhanced sensitivity. Additionally or alternatively, the platinum black layer 613 may enable the electrocatalytic oxidation or reduction of the product of the biorecognition reaction facilitated by the biorecognition layer 614. However, in some variations the platinum black layer 613 may be omitted (as shown in FIGS. 6D and 6G, for example). The electrode may enable the electrocatalytic oxidation or reduction of the product of the biorecognition reaction if the platinum black layer 613 is not present.
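As a numerical illustration of the Cottrell relation above, the following sketch evaluates i(t) for assumed parameter values (illustrative numbers only, not values specified by this disclosure) and checks that the computed current scales linearly with concentration C.

```python
import math

def cottrell_current(n: int, A_cm2: float, D_cm2_s: float, C_mol_cm3: float, t_s: float) -> float:
    """Cottrell relation: i(t) = n*F*A*C*sqrt(D/(pi*t)), in amperes."""
    F = 96485.0  # Faraday's constant, C/mol
    return n * F * A_cm2 * C_mol_cm3 * math.sqrt(D_cm2_s / (math.pi * t_s))

# Assumed values: n = 2 (hydrogen peroxide oxidation), A = 10,000 um^2 = 1e-4 cm^2,
# D ~ 1e-5 cm^2/s, C = 5 mM = 5e-6 mol/cm^3, t = 1 s.
i_5mM = cottrell_current(2, 1e-4, 1e-5, 5e-6, 1.0)    # ~1.7e-7 A (~170 nA)
i_10mM = cottrell_current(2, 1e-4, 1e-5, 10e-6, 1.0)
assert abs(i_10mM / i_5mM - 2.0) < 1e-9  # doubling C doubles the current
```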
The biorecognition layer 614 may be arranged over the electrode material 612 (or platinum black layer 613 if it is present) and functions to immobilize and stabilize the biorecognition element, which facilitates selective analyte quantification for extended time periods. In some variations, the biorecognition element may include an enzyme, such as an oxidase. As an exemplary variation for use in a glucose monitoring system, the biorecognition element may include glucose oxidase, which converts glucose, in the presence of oxygen, to an electroactive product (i.e., hydrogen peroxide) that can be detected at the electrode surface. Specifically, the redox equations associated with this exemplary variation are Glucose + Oxygen → Hydrogen Peroxide + Gluconolactone (mediated by glucose oxidase); Hydrogen Peroxide → Water + Oxygen (mediated by applying an oxidizing potential at the working electrode). However, in other variations the biorecognition element may additionally or alternatively comprise another suitable oxidase or oxidoreductase enzyme such as lactate oxidase, alcohol oxidase, beta-hydroxybutyrate dehydrogenase, tyrosinase, catalase, ascorbate oxidase, cholesterol oxidase, choline oxidase, pyruvate oxidase, urate oxidase, urease, and/or xanthine oxidase. In some variations, the biorecognition element may be cross-linked with an amine-condensing carbonyl chemical species that may help stabilize the biorecognition element within the biorecognition layer 614. As further described below, in some variations, the cross-linking of the biorecognition element may result in the microneedle array being compatible with ethylene oxide (EO) sterilization, which permits exposure of the entire analyte monitoring device (including sensing elements and electronics) to the same sterilization cycle, thereby simplifying the sterilization process and lowering manufacturing costs. For example, the biorecognition element may be cross-linked with glutaraldehyde, formaldehyde, glyoxal, malonaldehyde, succinaldehyde, and/or other suitable species. In some variations, the biorecognition element may be cross-linked with such an amine-condensing carbonyl chemical species to form cross-linked biorecognition element aggregates. Cross-linked biorecognition element aggregates that have at least a threshold molecular weight may then be embedded in a conducting polymer. By embedding only those aggregates that have a threshold molecular weight, any uncross-linked enzymes may be screened out and not incorporated into the biorecognition layer. Accordingly, only aggregates having a desired molecular weight may be selected for use in the conducting polymer, to help ensure that only sufficiently stabilized, cross-linked enzyme entities are included in the biorecognition layer, thereby contributing to a biorecognition layer that is overall better suited for EO sterilization without loss in sensing performance. In some variations, only cross-linked aggregates that have a molecular weight that is at least twice that of glucose oxidase may be embedded in the conducting polymer. In some variations, the conducting polymer may be permselective to contribute to the biorecognition layer's robustness against circulating endogenous electroactive species (e.g., ascorbic acid, vitamin C, etc.), fluctuations of which may adversely affect the sensitivity of the sensor.
Such a permselective conducting polymer in the biorecognition layer may further be more robust against pharmacological interferences (e.g., acetaminophen) in the interstitial fluid that may affect sensor accuracy. Conducting polymers may be made permselective by, for example, removing excess charge carriers by an oxidative electropolymerization process or by neutralizing these charge carriers with a counter-ion dopant, thereby transforming the conducting polymer into a non-conducting form. These oxidatively-polymerized conducting polymers exhibit permselectivity and are hence able to reject ions of similar charge polarity to the dopant ion (net positive or negative) or via size exclusion due to the dense and compact form of the conducting polymers. Furthermore, in some variations the conducting polymer may exhibit self-sealing and/or self-healing properties. For example, the conducting polymer may undergo oxidative electropolymerization, during which the conducting polymer may lose its conductivity as the thickness of the deposited conducting polymer on the electrode increases, until the lack of sufficient conductivity causes the deposition of additional conducting polymer to diminish. In the event that the conducting polymer has succumbed to minor physical damage (e.g., during use), the polymeric backbone may re-assemble to neutralize free charge and thereby lower overall surface energy of the molecular structure, which may manifest as self-sealing and/or self-healing properties. In some variations, the working electrode may further include a diffusion-limiting layer 615 arranged over the biorecognition layer 614. The diffusion-limiting layer 615 may function to limit the flux of the analyte of interest in order to reduce the sensitivity of the sensor to endogenous oxygen fluctuations. For example, the diffusion-limiting layer 615 may attenuate the concentration of the analyte of interest so that it becomes the limiting reactant to an aerobic enzyme. However, in some variations (e.g., if the biorecognition element is not aerobic), the diffusion-limiting layer 615 may be omitted. The working electrode may further include, in some variations, a hydrophilic layer 616 that provides for a biocompatible interface to, for example, reduce the foreign body response. However, in some variations the hydrophilic layer 616 may be omitted (e.g., if the diffusion-limiting layer expresses hydrophilic moieties to serve this purpose), as shown in FIGS. 6D and 6G, for example. Counter Electrode As described above, the counter electrode is the electrode that is sourcing or sinking electrons (via an electrical current) required to sustain the electrochemical reaction at the working electrode. The number of counter electrode constituents can be augmented in the form of a counter electrode array to enhance surface area such that the current-carrying capacity of the counter electrode does not limit the redox reaction of the working electrode. It thus may be desirable to have an excess of counter electrode area versus the working electrode area to circumvent the current-carrying capacity limitation. If the working electrode is operated as an anode, the counter electrode will serve as the cathode and vice versa. Similarly, if an oxidation reaction occurs at the working electrode, a reduction reaction occurs at the counter electrode and vice versa.
Unlike the working or reference electrodes, the counter electrode is permitted to dynamically swing to electrical potentials required to sustain the redox reaction of interest on the working electrode. As shown in FIG. 6B, a counter electrode 620 may include an electrode material 622, similar to electrode material 612. For example, like the electrode material 612, the electrode material 622 in the counter electrode 620 may include a noble metal such as gold, platinum, palladium, iridium, carbon, doped diamond, and/or other suitable catalytic and inert material. In some variations, the counter electrode 620 may have few or no additional layers over the electrode material 622. However, in some variations the counter electrode 620 may benefit from increased surface area to increase the amount of current it can support. For example, the counter electrode material 622 may be textured or otherwise roughened in such a way to augment the surface area of the electrode material 622 for enhanced current sourcing or sinking ability. Additionally or alternatively, the counter electrode 620 may include a layer of platinum black 624, which may augment electrode surface as described above with respect to some variations of the working electrode. However, in some variations of the counter electrode, the layer of platinum black may be omitted (e.g., as shown in FIG. 6E). In some variations, the counter electrode may further include a hydrophilic layer that provides for a biocompatible interface to, for example, reduce the foreign body response. Additionally or alternatively, in some variations as shown in FIG. 6H, the counter electrode 620 may include a diffusion-limiting layer 625 (e.g., arranged over the electrode). The diffusion-limiting layer 625 may, for example, be similar to the diffusion-limiting layer 615 described above with respect to FIG. 6A. Reference Electrode As described above, the reference electrode functions to provide a reference potential for the system; that is, the electrical potential at which the working electrode is biased is referenced to the reference electrode. A fixed or at least controlled potential relationship may be established between the working and reference electrodes, and within practical limits no current is sourced from or sunk to the reference electrode. As shown in FIG. 6C, a reference electrode 630 may include an electrode material 632, similar to electrode material 612. In some variations, like the electrode material 612, the electrode material 632 in the reference electrode 630 may include a metal salt or metal oxide, which serves as a stable redox couple with a well-known electrode potential. For example, the metal salt may, for example, include silver-silver chloride (Ag/AgCl) and the metal oxide may include iridium oxide (IrOx/Ir2O3/IrO2). In other variations, noble and inert metal surfaces may function as quasi-reference electrodes and include gold, platinum, palladium, iridium, carbon, doped diamond, and/or other suitable catalytic and inert material. Furthermore, in some variations the reference electrode 630 may be textured or otherwise roughened in such a way to enhance adhesion with any subsequent layers. Such subsequent layers on the electrode material 632 may include a platinum black layer 634. However, in some variations, the platinum black layer may be omitted (e.g., as shown in FIGS. 6F and 6I). The reference electrode 630 may, in some variations, further include a redox-couple layer 636, which may contain a surface-immobilized, solid-state redox couple with a stable thermodynamic potential.
For example, the reference electrode may operate at a stable standard thermodynamic potential with respect to a standard hydrogen electrode (SHE). The high stability of the electrode potential may be attained by employing a redox system with constant (e.g., buffered or saturated) concentrations of each participant of the redox reaction. For example, the reference electrode may include saturated Ag/AgCl (E = +0.197 V vs. SHE) or IrOx (E = +0.177 V vs. SHE, pH = 7.00) in the redox-couple layer 636. Other examples of redox-couple layers 636 may include a suitable conducting polymer with a dopant molecule such as that described in U.S. Patent Pub. No. 2019/0309433, which is incorporated in its entirety herein by this reference. In some variations, the reference electrode may be used as a half-cell to construct a complete electrochemical cell. Additionally or alternatively, in some variations as shown in FIG. 6I, the reference electrode 630 may include a diffusion-limiting layer 635 (e.g., arranged over the electrode and/or the redox-couple layer). The diffusion-limiting layer 635 may, for example, be similar to the diffusion-limiting layer 615 described above with respect to FIG. 6A. Exemplary Electrode Layer Formation Various layers of the working electrode, counter electrode, and reference electrode may be applied to the microneedle array and/or functionalized, etc. using suitable processes such as those described below. In a pre-processing step for the microneedle array, the microneedle array may be plasma cleaned in an inert gas (e.g., RF-generated inert gas such as argon) plasma environment to render the surface of the material, including the electrode material (e.g., electrode material 612, 622, and 632 as described above), to be more hydrophilic and chemically reactive. This pre-processing functions to not only physically remove organic debris and contaminants, but also to clean and prepare the electrode surface to enhance adhesion of subsequently deposited films on its surface. Multiple microneedles (e.g., any of the microneedle variations described herein, each of which may have a working electrode, counter electrode, or reference electrode as described above) may be arranged in a microneedle array. Considerations of how to configure the microneedles include factors such as desired insertion force for penetrating skin with the microneedle array, optimization of electrode signal levels and other performance aspects, manufacturing costs and complexity, etc. For example, the microneedle array may include multiple microneedles that are spaced apart at a predefined pitch (the distance from the center of one microneedle to the center of its nearest neighboring microneedle). In some variations, the microneedles may be spaced apart with a sufficient pitch so as to distribute force (e.g., avoid a "bed of nails" effect) that is applied to the skin of the user to cause the microneedle array to penetrate the skin. As pitch increases, force required to insert the microneedle array tends to decrease and depth of penetration tends to increase. However, it has been found that pitch only begins to affect insertion force at low values (e.g., less than about 150 μm). Accordingly, in some variations the microneedles in a microneedle array may have a pitch of at least 200 μm, at least 300 μm, at least 400 μm, at least 500 μm, at least 600 μm, at least 700 μm, or at least 750 μm. For example, the pitch may be between about 200 μm and about 800 μm, between about 300 μm and about 700 μm, or between about 400 μm and about 600 μm.
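By way of illustration of the pitch values above, the following sketch generates center coordinates for microneedles packed at a uniform center-to-center pitch in a hexagonal arrangement such as those discussed below; the ring-count parameterization and helper names are assumptions made for illustration.

```python
import math

def hex_grid_centers(pitch_um: float, rings: int) -> list:
    """Axial-coordinate hexagon of microneedle centers: every nearest-neighbor
    distance equals pitch_um; yields 1 + 3*rings*(rings + 1) points."""
    centers = []
    for q in range(-rings, rings + 1):
        for r in range(-rings, rings + 1):
            if abs(q + r) <= rings:
                x = pitch_um * (q + r / 2.0)
                y = pitch_um * (math.sqrt(3.0) / 2.0) * r
                centers.append((x, y))
    return centers

grid = hex_grid_centers(pitch_um=750.0, rings=3)
assert len(grid) == 37  # matches the 37-microneedle hexagonal example array
```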
In some variations, the microneedles may be arranged in a periodic grid, and the pitch may be uniform in all directions and across all regions of the microneedle array. Alternatively, the pitch may be different as measured along different axes (e.g., X, Y directions) and/or some regions of the microneedle array may include a smaller pitch while others may include a larger pitch. Furthermore, for more consistent penetration, microneedles may be spaced equidistant from one another (e.g., same pitch in all directions). To that end, in some variations, the microneedles in a microneedle array may be arranged in a hexagonal configuration as shown in FIG. 7. Alternatively, the microneedles in a microneedle array may be arranged in a rectangular array (e.g., square array), or in another suitable symmetrical manner. Another consideration for determining configuration of a microneedle array is overall signal level provided by the microneedles. Generally, signal level at each microneedle is invariant to the total number of microneedle elements in an array. However, signal levels can be enhanced by electrically interconnecting multiple microneedles together in an array. For example, an array with a large number of electrically connected microneedles is expected to produce a greater signal intensity (and hence increased accuracy) than one with fewer microneedles. However, a higher number of microneedles on a die will increase die cost (given a constant pitch) and will also require greater force and/or velocity to insert into skin. In contrast, a lower number of microneedles on a die may reduce die cost and enable insertion into the skin with reduced application force and/or velocity. Furthermore, in some variations a lower number of microneedles on a die may reduce the overall footprint area of the die, which may lead to less unwanted localized edema and/or erythema. Accordingly, in some variations, a balance among these factors may be achieved with a microneedle array including 37 microneedles as shown in FIG. 7 or a microneedle array including 7 microneedles as shown in FIGS. 8A-8C. However, in other variations there may be fewer microneedles in an array (e.g., between about 5 and about 35, between about 5 and about 30, between about 5 and about 25, between about 5 and about 20, between about 5 and about 15, between about 5 and about 10, between about 10 and about 30, between about 15 and about 25, etc.) or more microneedles in an array (e.g., more than 37, more than 40, more than 45, etc.). Additionally, as described in further detail below, in some variations only a subset of the microneedles in a microneedle array may be active during operation of the analyte monitoring device. For example, a portion of the microneedles in a microneedle array may be inactive (e.g., no signals read from electrodes of inactive microneedles). In some variations, a portion of the microneedles in a microneedle array may be activated at a certain time during operation and remain active for the remainder of the operating lifetime of the device. Furthermore, in some variations, a portion of the microneedles in a microneedle array may additionally or alternatively be deactivated at a certain time during operation and remain inactive for the remainder of the operating lifetime of the device. In considering characteristics of a die for a microneedle array, die size is a function of the number of microneedles in the microneedle array and the pitch of the microneedles.
Manufacturing costs are also a consideration, as a smaller die size will contribute to lower cost since the number of dies that can be formed from a single wafer of a given area will increase. Furthermore, a smaller die size will also be less susceptible to brittle fracture due to the relative fragility of the substrate. Furthermore, in some variations, microneedles at the periphery of the microneedle array (e.g., near the edge or boundary of the die, near the edge or boundary of the housing, near the edge or boundary of an adhesive layer on the housing, along the outer border of the microneedle array, etc.) may be found to have better performance (e.g., sensitivity) due to better penetration compared to microneedles in the center of the microneedle array or die. Accordingly, in some variations, working electrodes may be arranged largely or entirely on microneedles located at the periphery of the microneedle array, to obtain more accurate and/or precise analyte measurements. FIG. 7 depicts an illustrative schematic of 37 microneedles arranged in an example variation of a microneedle array. The 37 microneedles may, for example, be arranged in a hexagonal array with an inter-needle center-to-center pitch of about 750 μm (or between about 700 μm and about 800 μm, or between about 725 μm and about 775 μm) between the center of each microneedle and the center of its immediate neighbor in any direction. FIGS. 8A and 8B depict perspective views of an illustrative schematic of seven microneedles 810 arranged in an example variation of a microneedle array 800. The seven microneedles 810 are arranged in a hexagonal array on a substrate 802. As shown in FIG. 8A, the electrodes 820 are arranged on distal portions of the microneedles 810 extending from a first surface of the substrate 802. As shown in FIG. 8B, proximal portions of the microneedles 810 are conductively connected to respective backside electrical contacts 830 on a second surface of the substrate 802 opposite the first surface of the substrate 802. FIGS. 8C and 8D depict plan and side views of an illustrative schematic of a microneedle array similar to microneedle array 800. As shown in FIGS. 8C and 8D, the seven microneedles are arranged in a hexagonal array with an inter-needle center-to-center pitch of about 750 μm between the center of each microneedle and the center of its immediate neighbor in any direction. In other variations the inter-needle center-to-center pitch may be, for example, between about 700 μm and about 800 μm, or between about 725 μm and about 775 μm. The microneedles may have an approximate outer shaft diameter of about 170 μm (or between about 150 μm and about 190 μm, or between about 125 μm and about 200 μm) and a height of about 500 μm (or between about 475 μm and about 525 μm, or between about 450 μm and about 550 μm). Furthermore, the microneedle arrays described herein may have a high degree of configurability concerning where the working electrode(s), counter electrode(s), and reference electrode(s) are located within the microneedle array. This configurability may be facilitated by the electronics system. In some variations, a microneedle array may include electrodes distributed in two or more groups in a symmetrical or non-symmetrical manner in the microneedle array, with each group featuring the same or differing number of electrode constituents depending on requirements for signal sensitivity and/or redundancy.
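Because the role of each microneedle is configurable through the electronics system, one way a given configuration could be expressed is as a per-electrode role map, as sketched below. The index positions and encoding are illustrative assumptions; the counts mirror the FIG. 9A-style layout described next (fourteen working, twenty counter, and three reference electrodes).

```python
from collections import Counter

# Hypothetical role map for a 37-microneedle die: index -> role, where roles
# are "WE" (working), "CE" (counter), "RE" (reference), or "OFF" (inactive).
# Actual index-to-position assignments are assumed for illustration.
role_map = {i: "CE" for i in range(20)}            # perimeter counter electrodes
role_map.update({i: "WE" for i in range(20, 34)})  # two groups of seven working electrodes
role_map.update({i: "RE" for i in range(34, 37)})  # central reference electrodes

counts = Counter(role_map.values())
assert counts == Counter({"CE": 20, "WE": 14, "RE": 3})

def electrodes_with_role(role: str) -> list:
    """Return the channel indices currently assigned the given role."""
    return [i for i, r in role_map.items() if r == role]
```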
For example, electrodes of the same type (e.g., working electrodes) may be distributed in a bilaterally or radially symmetrical manner in the microneedle array. For example, FIG. 9A depicts a variation of a microneedle array 900A including two symmetrical groups of seven working electrodes (WE), with the two working electrode groups labeled "1" and "2". In this variation, the two working electrode groups are distributed in a bilaterally symmetrical manner within the microneedle array. The working electrodes are generally arranged between a central region of three reference electrodes (RE) and an outer perimeter region of twenty counter electrodes (CE). In some variations, each of the two working electrode groups may include seven working electrodes that are electrically connected amongst themselves (e.g., to enhance sensor signal). Alternatively, only a portion of one or both of the working electrode groups may include multiple electrodes that are electrically connected amongst themselves. As yet another alternative, the working electrode groups may include working electrodes that are standalone and not electrically connected to other working electrodes. Furthermore, in some variations the working electrode groups may be distributed in the microneedle array in a non-symmetrical or random configuration. As another example, FIG. 9B depicts a variation of a microneedle array 900B including four symmetrical groups of three working electrodes (WE), with the four working electrode groups labeled "1", "2", "3", and "4." In this variation, the four working electrode groups are distributed in a radially symmetrical manner in the microneedle array. Each working electrode group is adjacent to one of two reference electrode (RE) constituents in the microneedle array and arranged in a symmetrical manner. The microneedle array also includes counter electrodes (CE) arranged around the perimeter of the microneedle array, except for two electrodes on vertices of the hexagon that are inactive or may be used for other features or modes of operation. In some variations, only a portion of the microneedle array may include active electrodes. For example, FIG. 9C depicts a variation of a microneedle array 900C with 37 microneedles and a reduced number of active electrodes, including four working electrodes (labeled "1", "2", "3", and "4") in a bilaterally symmetrical arrangement, twenty-two counter electrodes, and three reference electrodes. The remaining eight electrodes in the microneedle array are inactive. In the microneedle array shown in FIG. 9C, each of the working electrodes is surrounded by a group of counter electrodes. Two groups of such clusters of working electrodes and counter electrodes are separated by a row of the three reference electrodes. As another example, FIG. 9D depicts a variation of a microneedle array 900D with 37 microneedles and a reduced number of active electrodes, including four working electrodes (labeled "1", "2", "3", and "4") in a bilaterally symmetrical arrangement, twenty counter electrodes, and three reference electrodes, where the remaining ten electrodes in the microneedle array are inactive. As another example, FIG. 9E depicts a variation of a microneedle array 900E with 37 microneedles and a reduced number of active electrodes, including four working electrodes (labeled "1", "2", "3", and "4"), eighteen counter electrodes, and two reference electrodes. The remaining thirteen electrodes in the microneedle array are inactive.
The inactive electrodes are along a partial perimeter of the overall microneedle array, thereby reducing the effective size and shape of the active microneedle arrangement to a smaller hexagonal array. Within the active microneedle arrangement, the four working electrodes are generally in a radially symmetrical arrangement, and each of the working electrodes is surrounded by a group of counter electrodes. FIG. 9F depicts another example variation of a microneedle array 900F with 37 microneedles and a reduced number of active electrodes, including four working electrodes (labeled "1", "2", "3", and "4"), two counter electrodes, and one reference electrode. The remaining thirty electrodes in the microneedle array are inactive. The inactive electrodes are arranged in two layers around the perimeter of the overall microneedle array, thereby reducing the effective size and shape of the active microneedle arrangement to a smaller hexagonal array centered around the reference electrode. Within the active microneedle arrangement, the four working electrodes are in a bilaterally symmetrical arrangement and the counter electrodes are equidistant from the central reference electrode. FIG. 9G depicts another example variation of a microneedle array 900G with 37 microneedles and a reduced number of active electrodes. The active electrodes in microneedle array 900G are arranged in a similar manner to that in microneedle array 900F shown in FIG. 9F, except that the microneedle array 900G includes one counter electrode and two reference electrodes, and the smaller hexagonal array of active microneedles is centered around the counter electrode. Within the active microneedle arrangement, the four working electrodes are in a bilaterally symmetrical arrangement and the reference electrodes are equidistant from the central counter electrode. FIG. 9H depicts another example variation of a microneedle array 900H with seven microneedles. The microneedle arrangement contains two microneedles assigned as independent working electrodes (1 and 2), a counter electrode contingent comprised of four microneedles, and a single reference electrode. There is bilateral symmetry in the arrangement of working and counter electrodes, which are equidistant from the central reference electrode. Additionally, the working electrodes are arranged as far as possible from the center of the microneedle array (e.g., at the periphery of the die or array) to take advantage of a location where the working electrodes are expected to have greater sensitivity and overall performance. FIG. 9I depicts another example variation of a microneedle array 900I with seven microneedles. The microneedle arrangement contains four microneedles assigned as two independent groupings (1 and 2) of two working electrodes each, a counter electrode contingent comprised of two microneedles, and a single reference electrode. There is bilateral symmetry in the arrangement of working and counter electrodes, which are equidistant from the central reference electrode. Additionally, the working electrodes are arranged as far as possible from the center of the microneedle array (e.g., at the periphery of the die or array) to take advantage of a location where the working electrodes are expected to have greater sensitivity and overall performance. FIG. 9J depicts another example variation of a microneedle array 900J with seven microneedles.
The microneedle arrangement contains four microneedles assigned as independent working electrodes (1, 2, 3, and 4), a counter electrode contingent comprised of two microneedles, and a single reference electrode. There is bilateral symmetry in the arrangement of working and counter electrodes, which are equidistant from the central reference electrode. Additionally, the working electrodes are arranged as far as possible from the center of the microneedle array (e.g., at the periphery of the die or array) to take advantage of a location where the working electrodes are expected to have greater sensitivity and overall performance. WhileFIGS.9A-9Jillustrate example variations of microneedle array configurations, it should be understood that these figures are not limiting and other microneedle configurations (including different numbers and/or distributions of working electrodes, counter electrodes, and reference electrodes, and different numbers and/or distributions of active electrodes and inactive electrodes, etc.) may be suitable in other variations of microneedle arrays. Analog Front End In some variations, the electronics system of the analyte monitoring device may include an analog front end. The analog front end may include sensor circuitry (e.g., sensor circuitry124as shown inFIG.2A) that converts analog current measurements to digital values that can be processed by the microcontroller. The analog front end may, for example, include a programmable analog front end that is suitable for use with electrochemical sensors. For example, the analog front end may include a MAX30131, MAX30132, or MAX30134 component (which have 1, 2, and 4 channels, respectively), available from Maxim Integrated (San Jose, Calif.), which are ultra-low power programmable analog front ends for use with electrochemical sensors. The analog front end may also include an AD5940 or AD5941 component, available from Analog Devices (Norwood, Mass.), which are high precision, impedance and electrochemical front ends. Similarly, the analog front end may also include an LMP91000, available from Texas Instruments (Dallas, Tex.), which is a configurable analog front end potentiostat for low-power chemical sensing applications. The analog front end may provide biasing and a complete measurement path, including the analog to digital converters (ADCs). Ultra-low power operation may allow for the continuous biasing of the sensor to maintain accuracy and fast response when measurement is required for an extended duration (e.g., 7 days) using a body-worn, battery-operated device. In some variations, the analog front end device may be compatible with both two- and three-terminal electrochemical sensors, such as to enable DC current measurement, AC current measurement, and electrochemical impedance spectroscopy (EIS) measurement capabilities. Furthermore, the analog front end may include an internal temperature sensor and programmable voltage reference, support external temperature monitoring and an external reference source, and integrate voltage monitoring of bias and supply voltages for safety and compliance. In some variations, the analog front end may include a multi-channel potentiostat to multiplex sensor inputs and handle multiple signal channels. For example, the analog front end may include a multi-channel potentiostat such as that described in U.S. Pat. No. 9,933,387, which is incorporated herein in its entirety by this reference.
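The core job of the analog front end described above, converting a biased working-electrode current into digital values for the microcontroller, can be illustrated with a short sketch. This is a minimal illustration only, assuming a hypothetical transimpedance stage and ADC; the gain, resolution, and reference voltage below are placeholders, not values from any AFE named above.

```python
# Illustrative sketch: mapping raw ADC codes from an electrochemical AFE
# channel back to a working-electrode current. All parameters are assumed
# placeholders, not taken from any specific device datasheet.

ADC_BITS = 16                 # assumed ADC resolution
V_REF = 1.8                   # assumed ADC reference voltage (volts)
TIA_GAIN_OHMS = 100_000.0     # assumed transimpedance gain (volts per ampere)

def adc_code_to_current(code: int) -> float:
    """Map a raw ADC code to a sensing current in amperes.

    Assumes zero current sits at mid-scale and that the AFE applies a
    transimpedance gain of TIA_GAIN_OHMS before digitization.
    """
    mid_scale = 2 ** (ADC_BITS - 1)
    volts = (code - mid_scale) * (V_REF / 2 ** ADC_BITS)
    return volts / TIA_GAIN_OHMS

# Example: a code slightly above mid-scale corresponds to roughly 0.2 uA.
print(adc_code_to_current(33500))
```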
In some variations, the analog front end and peripheral electronics may be integrated into an application-specific integrated circuit (ASIC), which may help reduce cost, for example. This integrated solution may include the microcontroller described below, in some variations. Microcontroller In some variations, the electronics system of the analyte monitoring device may include at least one microcontroller (e.g., controller122as shown inFIG.2A). The microcontroller may include, for example, a processor with integrated flash memory. In some variations, the microcontroller in the analyte monitoring device may be configured to perform analysis to correlate sensor signals to an analyte measurement (e.g., glucose measurement). For example, the microcontroller may execute a programmed routine in firmware to interpret the digital signal (e.g., from the analog front end), perform any relevant algorithms and/or other analysis, and route processed data to and/or from the communication module. Keeping the analysis on-board the analyte monitoring device may, for example, enable the analyte monitoring device to broadcast analyte measurement(s) to multiple devices (e.g., mobile computing devices such as a smartphone or smartwatch, therapeutic delivery systems such as insulin pens or pumps, etc.) in parallel, while ensuring that each connected device has the same information. In some variations, the microcontroller may be configured to activate and/or inactivate the analyte monitoring device on one or more detected conditions. For example, the device may be configured to power on the analyte monitoring device upon insertion of the microneedle array into skin. This may, for example, enable a power-saving feature in which the battery is disconnected until the microneedle array is placed in skin, at which time the device may begin broadcasting sensor data. Such a feature may, for example, help improve the shelf life of the analyte monitoring device and/or simplify the analyte monitoring device-external device pairing process for the user. Aspects of the current subject matter are directed to fault detection, as well as diagnostics related to the fault detection, in a microneedle array-based analyte monitoring device, such as the analyte monitoring device110. The electrochemical sensors (e.g., electrodes of the analyte monitoring device110) configured for measuring one or more target analytes may experience various faults during use of the analyte monitoring device110. A fault may be a failure of one or more aspects of the analyte monitoring device110in which the failure affects operation of the analyte monitoring device110. Examples of faults include degradation of the electrode membrane (e.g., cracking, delamination, and/or other damage to the membrane structure and/or surface that affects sensing), degradation of the biorecognition element (e.g., inactivation and/or denaturation), a physiologic response to implantation of the microneedle array (e.g., a foreign body response, encapsulation, protein adhesion, or collagen formation occurring in response to the insertion of the microneedles on which the electrodes are formed), improper placement or insertion of the microneedle array (e.g., the microneedles, on which the electrodes are formed, not placed at a sufficient depth for the analyte sensing), pressure attenuation (e.g., pressure applied to the analyte monitoring device110), and external environmental influences (e.g., external impact to the electronics of the analyte monitoring device110). 
The fault may affect the electrical and/or electrochemical behavior of the analyte monitoring device110, resulting in errors and/or unreliability in measurements of the target analyte or analytes. In some instances, the fault may be temporary, such as in the case of pressure attenuations. In other instances, the fault may permanently affect operation of the analyte monitoring device110. Some faults may be detectable by monitoring the current draw. For example, a value of the sensing current at the working electrode of the analyte monitoring device110may indicate and/or correlate to some faults. In these instances, if the sensing current exhibits extreme, erratic, and/or unexpected behaviors or patterns, the fault may be determinable based on characteristics of the exhibited behaviors or patterns of the sensing current. The extreme, erratic, and/or unexpected behaviors or patterns of the sensing current may be characterized by rapid rates of change that are not physiologically possible. High noise may also contribute to the behaviors or patterns of the sensing current. Other faults, however, may not impact the sensing current while still impacting the electrical and/or electrochemical behavior of the analyte monitoring device110. An alternative or additional variable is thus needed for insight into and verification of changes to the electrical and/or electrochemical behavior of the analyte monitoring device110. Voltage at the counter electrode is an example of a variable that provides such insight and verification. Thus, by monitoring the voltage at the counter electrode, a fault may be detected. While various types of faults, such as those described above, may occur, faults may generally be characterized by whether the analyte monitoring device110can recover from the fault (e.g., the fault is temporary) or whether the analyte monitoring device110is damaged and operation should cease (e.g., the fault is permanent). By monitoring the counter electrode voltage, as well as, in some variations, how the counter electrode voltage corresponds with or is correlated to the sensing current, such a characterization may be made and a response to the fault may be determined. The response to the fault may be in the form of a mode of operation in which to operate the analyte monitoring device. For example, if the fault is temporary, the mode of operation may include blanking and/or disregarding any sensing data during the fault. In this situation, sensing data is inaccurate and thus not reported to the user or used for operational purposes. If the fault is permanent, the mode of operation may be to stop operation of the analyte monitoring device. In some variations, this may include ceasing application of a bias potential between the working electrode and the reference electrode. In some variations, the counter electrode voltage is monitored to identify one or more characteristics that may serve as an indication of a fault. The characteristics indicative of a fault may include a rate of change of the counter electrode voltage and/or a lower compliance limit of the counter electrode voltage. The characteristics may be explained by considering the relationship between the counter electrode potential and the current at the working electrode. That is, as further described herein, the counter electrode voltage dynamically swings or adjusts to electrical potentials required to sustain the redox reaction at the working electrode.
The counter electrode voltage may thus be considered as the voltage that is required to support the level of current at the working electrode (e.g., the sensing current). As the sensing current fluctuates or changes, the counter electrode voltage fluctuates or changes in a corresponding or reciprocal manner. If the sensing current experiences a rapid rate of change, the counter electrode voltage responds with a rapid rate of change. The correspondence, or correlation, between the sensing current and the counter electrode voltage may be defined as equal but opposite in rate of change (or near equal but opposite (e.g., up to about a 5% difference between the rates of change)). If the sensing current changes at a specified rate, the counter electrode voltage changes at the specified rate in the opposite direction. The rate of change of the counter electrode voltage then serves as an indicator of the rate of change of the sensing current. A sensing current that exhibits a rapid rate of change is not physiologically possible. Thus, by monitoring the counter electrode voltage, a determination may be made as to the physiological viability of the sensing current. As a rapid rate of change is not physiologically possible, such a change serves as an indication that something is wrong with the device. In some variations, a rapid rate of change of the counter electrode voltage may be defined as about 0.10 volts/minute. In some variations, a rapid rate of change of the counter electrode voltage may be defined as between about 0.05 volts/minute and about 0.15 volts/minute. For example, in some variations, a rapid rate of change of the counter electrode voltage may be defined as about 0.05 volts/minute, about 0.06 volts/minute, about 0.07 volts/minute, about 0.08 volts/minute, about 0.09 volts/minute, about 0.10 volts/minute, about 0.11 volts/minute, about 0.12 volts/minute, about 0.13 volts/minute, about 0.14 volts/minute, or about 0.15 volts/minute. A rapid rate of change of the sensing current may be associated with a rate of change of the analyte being measured. In the example of glucose, a rapid rate of change may be about 4 mg/dL/min. In some variations, a rapid rate of change of glucose may be between about 3.5 mg/dL/min and about 6 mg/dL/min. The lower compliance limit of the counter electrode voltage may be defined as the lowest level to which the counter electrode voltage may swing. The counter electrode voltage may also have an upper compliance limit, the highest level to which the counter electrode may swing. If the counter electrode voltage swings to the lower compliance limit, this may serve as an indication that the sensing current reached a high-magnitude current that is not physiologically possible, indicating occurrence of a fault. Thus, the counter electrode voltage experiencing a rate of change that meets or exceeds a threshold rate of change and/or meets a threshold compliance limit serves as an indication that there is a fault within the analyte monitoring device110. In some variations, upon identifying that the rate of change of the counter electrode voltage meets or exceeds a threshold rate of change and/or that the counter electrode voltage meets a threshold compliance limit, characteristics or parameters of the counter electrode voltage may be compared to characteristics or parameters of the sensing current to determine if the fault is temporary or permanent.
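The two counter electrode voltage checks described above (a threshold rate of change, and a lower compliance limit) can be expressed compactly. The sketch below is one illustrative encoding; the rate threshold comes from the range given above, while the compliance limit value and function name are hypothetical placeholders.

```python
# Minimal sketch of the counter electrode voltage fault indicators described
# above: a rapid rate of change (about 0.10 V/min in some variations) and the
# voltage reaching a lower compliance limit. The limit value is assumed.

RATE_THRESHOLD_V_PER_MIN = 0.10   # within the 0.05-0.15 V/min range above
LOWER_COMPLIANCE_LIMIT_V = -1.5   # hypothetical potentiostat compliance limit

def ce_voltage_fault_indicated(v_now: float, v_prev: float, dt_min: float) -> bool:
    """Return True if the counter electrode voltage suggests a fault."""
    rate = abs(v_now - v_prev) / dt_min            # volts per minute
    rapid_change = rate >= RATE_THRESHOLD_V_PER_MIN
    at_compliance = v_now <= LOWER_COMPLIANCE_LIMIT_V
    return rapid_change or at_compliance

# Example: a 0.12 V swing over one minute exceeds the threshold.
print(ce_voltage_fault_indicated(-0.42, -0.30, 1.0))  # True
```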
The comparison may include determination of the correspondence, or correlation, between the counter electrode voltage and the sensing current. In some variations, the counter electrode voltage corresponding with the sensing current, such that the counter electrode voltage changes at a rate equal to that of the sensing current, is representative of a pressure-induced signal attenuation. Such a pressure-induced signal attenuation may be caused by external pressure being applied to the analyte monitoring device110and may be characterized as a temporary fault. When the external pressure is removed, the analyte monitoring device110operates as intended. In some variations, changes in the counter electrode voltage corresponding with changes in the sensing current, such that the correspondence is maintained, coupled with the counter electrode voltage meeting a lower compliance limit, is representative of changes in the physiologic environment surrounding the sensor and/or changes in the sensor surface. In other variations, the counter electrode voltage meeting the lower compliance limit, regardless of the sensing current, is representative of a change in the physiologic environment and/or changes in the sensor surface. In this scenario, the counter electrode voltage does not need to be correlated with the sensing current. Changes in the physiologic environment surrounding the sensor and changes in the sensor surface may be examples of permanent faults. In some variations, changes in the counter electrode voltage deviating from the changes in the sensing current, such that the counter electrode voltage and the sensing current are changing in different ways, coupled with the rapid rate of change of the counter electrode voltage, may be representative of an external impact to the electronics of the analyte monitoring device. An external impact may be an example of a permanent fault. When the correlation between the counter electrode voltage and the sensing current is determined, the analyte monitoring device110(e.g., the controller) responds by applying a mode of operation consistent with the fault. For example, based on the identified characteristic of the counter electrode voltage and the correspondence of the counter electrode voltage and the sensing current, a mode of operation is applied to the microneedle array-based analyte monitoring device. In some variations, the mode of operation includes disregarding the sensing current if the changes in the counter electrode voltage correspond with the changes in the sensing current and if the rate of change of the counter electrode voltage exceeds a threshold rate of change. As described herein, this may be representative of pressure-induced signal attenuation. When the pressure-induced signal attenuation is removed from the counter electrode voltage and the sensing current (e.g., the rate of change of the counter electrode voltage does not exceed the threshold rate of change), the sensing current is no longer disregarded, as the fault has been remedied. In some variations, the mode of operation includes discontinuing application of a potential between the working electrode and the reference electrode if the changes in the counter electrode voltage correspond with the changes in the sensing current and if the lower compliance limit of the counter electrode voltage meets a threshold compliance limit. The threshold compliance limit being reached is an indication of a permanent fault, and the bias potential is removed to stop operation.
In some variations, the mode of operation includes discontinuing application of a potential between the working electrode and the reference electrode if the changes in the counter electrode voltage deviate from the changes in the sensing current and if the rate of change of the counter electrode voltage exceeds a threshold rate of change. This is an indication of a permanent fault, and the bias potential is removed to stop operation. As further described herein, the reference electrode functions to provide a reference potential for the three-electrode electrochemical system implemented by the analyte monitoring device110. The electrical potential at which the working electrode is biased is referenced to the reference electrode. A fixed, time-varying, or at least controlled potential relationship is established between the working and reference electrodes, and within practical limits no current is sourced from or sinked to the reference electrode. To implement such a three-electrode electrochemical system, the analyte monitoring device110includes a potentiostat or an electrochemical analog front end (e.g., an analog front end) to maintain a fixed potential relationship between the working electrode and the reference electrode within the three-electrode electrochemical system, while permitting the counter electrode to dynamically swing to potentials required to sustain the redox reaction of interest. Biasing the electrochemical system with the potentiostat or the analog front end to establish the electrical potential relationship between the working electrode and the reference electrode drives the redox reaction at the working electrode and causes the counter electrode to sink an electrical current in an oxidative process or source an electrical current in a reductive process to sustain the redox reaction at the working electrode. The magnitude of the electrical current is proportional to the magnitude of the redox reaction occurring at the working electrode and to the impedance or resistance between the working electrode and the counter electrode. Biasing the electrochemical system results in formation of a voltage at the counter electrode, the value of which is also proportional to the magnitude of the redox reaction at the working electrode and to the impedance or resistance between the working electrode and the counter electrode. The voltage at the counter electrode adjusts to the electrical potential to balance the redox reaction occurring at the working electrode when maintained at the electrical potential versus the reference electrode. Upon occurrence of a fault, in which one or more aspects of the analyte monitoring device110affects operation of the analyte monitoring device110, the voltage at the counter electrode is modulated and reflective of the accumulated impedance between the working electrode and the counter electrode. By monitoring the voltage at the counter electrode, an indication of the impedance between the working electrode and the counter electrode may be determined. The three-electrode electrochemical system of the analyte monitoring device110can be modeled as an electrical network or system, including electrical components to correlate the voltage at the counter electrode with the impedance or resistance between the working electrode and the counter electrode, which can be correlated with one or more conditions, including fault types. 
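Taken together, the fault-response rules described above amount to a small decision table: correspondence plus a rapid rate of change indicates a temporary, pressure-induced fault (blank the data); correspondence plus the lower compliance limit, or deviation plus a rapid rate of change, indicates a permanent fault (remove the bias potential). The sketch below is one illustrative encoding of those rules; the function and mode names are hypothetical, and some variations described above (e.g., treating the compliance limit as decisive regardless of the sensing current) would alter the logic.

```python
# Sketch of the three fault-response rules summarized above. "corresponds"
# means the counter electrode voltage changes at an equal-but-opposite rate
# to the sensing current (within roughly 5-10%), per the text.

def select_mode(corresponds: bool, rapid_rate: bool, at_lower_compliance: bool) -> str:
    if corresponds and at_lower_compliance:
        return "discontinue_bias_potential"  # permanent: environment/sensor surface change
    if corresponds and rapid_rate:
        return "disregard_sensing_data"      # temporary: pressure-induced attenuation
    if (not corresponds) and rapid_rate:
        return "discontinue_bias_potential"  # permanent: external impact to electronics
    return "normal_operation"

# Example: mirrored rapid swings with no compliance violation -> blank data.
print(select_mode(corresponds=True, rapid_rate=True, at_lower_compliance=False))
```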
By associating or characterizing the impedance with certain conditions including faults of the three-electrode electrochemical system, voltage values can be correlated with one or more faults. FIG.10depicts a representation of a potentiostat circuit1000of the analyte monitoring device110. The potentiostat circuit1000may be part of the sensor circuitry124, depicted in and described with reference toFIG.2A. The potentiostat circuit1000includes an electrochemical cell1010that connects the working electrode and the counter electrode of the three-electrode electrochemical system. FIG.11depicts a Randles equivalent circuit1100that is representative of the electrochemical cell1010shown inFIG.10. The Randles equivalent circuit1100includes a solution resistance Rs(also referred to as an uncompensated resistance Ruor RΩ), a charge transfer resistance Rct, and a double-layer capacitance Cdl between a counter electrode1120and a working electrode1110. The solution resistance Rsis in series with a parallel combination of the charge transfer resistance Rctand the double-layer capacitance Cdl. The Randles equivalent circuit1100connects the terminals between the counter electrode1120and the working electrode1110. The solution resistance Rsis indicative of the level of ohmic contact between the counter electrode1120and the working electrode1110and may indicate the electrolytic content/ionic strength of the medium in which the analyte monitoring device110is operating (e.g., the fluid in which the electrodes of the microneedle array are positioned, such as, for example, interstitial fluid). The charge transfer resistance Rctis indicative of the magnitude of the electrochemical reaction occurring at the working electrode1110. The double-layer capacitance Cdlis indicative of surface morphology and constituency at the working electrode1110(e.g., the composition and makeup of the surface of the working electrode1110). The Randles equivalent circuit1100of the electrochemical cell1010of the analyte monitoring device110is a simplification of the redox reaction occurring within the electrochemical cell1010. By modeling the electrochemical cell1010with the Randles equivalent circuit1100, contributions from the solution resistance Rs, the charge-transfer resistance Rct, and the double-layer capacitance Cdlmay be identified. A frequency response analysis, including amplitude and phase components, may be used to understand the impedance behavior of the electrochemical cell1010at DC (ω→0) and at AC (ω→∞) frequency perturbations. The voltage at the counter electrode1120, in the DC case, provides an assessment of the overall resistive components of the system (e.g., Rs+Rct) as Cdlis assumed to have infinite impedance as ω→0. In the other extreme, as ω→∞, Cdlapproaches negligible impedance and Rctis bypassed. This allows the quantification of Rsalone, which may be realized with an impulse or unit step function applied to the counter electrode1120. In the DC case (ω→0), the voltage at the counter electrode1120is expected to swing to more extreme values, to the compliance voltage of the potentiostat, when additional current must be sourced or sinked to maintain the fixed potential relationship between the working electrode and the reference electrode. This is manifested via the counter electrode voltage migrating away from the voltage established at the working electrode1110.
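The Randles model just described (Rs in series with Rct in parallel with Cdl) is easy to evaluate numerically, and doing so reproduces the two limits stated above: the impedance tends toward Rs+Rct at DC and toward Rs alone at high frequency. The component values in the sketch below are arbitrary illustrations, not values from the source.

```python
# Numeric sketch of the Randles equivalent circuit described above:
# Z(w) = Rs + (Rct parallel Cdl). Component values are illustrative only.

import numpy as np

def randles_impedance(omega: float, Rs: float, Rct: float, Cdl: float) -> complex:
    """Complex impedance of Rs in series with Rct parallel to Cdl."""
    return Rs + Rct / (1 + 1j * omega * Rct * Cdl)

Rs, Rct, Cdl = 1e3, 50e3, 1e-6     # ohms, ohms, farads (illustrative)
for omega in (1e-3, 1e3, 1e9):     # rad/s, spanning near-DC to high frequency
    Z = randles_impedance(omega, Rs, Rct, Cdl)
    print(f"w={omega:g} rad/s  |Z|={abs(Z):.1f} ohm  "
          f"phase={np.angle(Z, deg=True):.1f} deg")
# Near DC, |Z| approaches Rs + Rct (51 kOhm here); at high frequency,
# Cdl bypasses Rct and |Z| approaches Rs (1 kOhm here).
```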
In extreme cases, the voltage at the counter electrode1120approaches the compliance voltage, or the maximal voltage afforded by the circuit driving the counter electrode1120. The manifestation of this mode of operation in the Randles equivalent circuit is a charge transfer resistance Rctthat tends toward the value of the solution resistance Rs. In the DC case, this is an indication that one or more of the following faults is occurring: a short circuit generated between the working electrode and the counter electrode, a failure of the reference electrode's ability to maintain a stable thermodynamic potential, a compromise to a diffusion-limiting membrane, and a steady increase of the porosity of the sensing layer contained within the analyte-selective sensor. The counter electrode voltage approaches the voltage value at which the working electrode1110is maintained in scenarios in which the current requirements to sustain the fixed potential relationship between the working electrode and the reference electrode tend toward negligible values (e.g., inconsequential values of current flow through the system, i→0). The manifestation of this mode of operation in the Randles equivalent circuit is a charge transfer resistance Rctthat tends toward infinity. In the DC case, this is an indication that one or more of the following faults is occurring: improper sensor insertion, improper access to a viable anatomic compartment, partial or complete occlusion of the sensor (e.g., due to biofouling/protein adsorption/collagen formation/encapsulation) such that analyte diffusion is attenuated, and a failure of the reference electrode's ability to maintain a stable thermodynamic potential. Measurement of the voltage at the counter electrode may be achieved by a potentiostat, an electrochemical analog front end, or a converter, such as a voltage-sensitive or current-sensitive analog-to-digital converter (ADC). In some instances, and as shown in a measurement circuit1200inFIG.12, a buffer1210and a filter1220(e.g., a low-pass filter) may provide isolation between a converter1230and the counter electrode included in the electrochemical sensor1240. In some implementations, a differential amplifier, a transimpedance amplifier, or a finite gain amplifier may be incorporated. The filter1220may be positioned before the converter1230to keep high-frequency, low-frequency, both high-frequency and low-frequency, and/or band-limited signals from interfering with the measurement of the counter electrode voltage. In some instances, a voltage arising at one or more working electrodes is measured and used to supplement and/or complement the fault identification. The working electrode voltage may be compared against a counter electrode voltage to assess and/or determine the fault. An analog-to-digital converter may be in electrical communication with the working electrode. In some implementations, a galvanostat is incorporated to establish a desired electrical current relationship between the working electrode and the counter electrode. Scenarios where the voltage at a counter electrode approaches the voltage at the working electrode are indicative of an impedance or resistance value of an analyte sensor decaying to low levels, by merit of Ohm's Law (v=Zi, where Z is the accumulated impedance of the analyte sensor).
This is an indication that any one or more of the following faults is occurring: a short circuit generated between the working electrode and the counter electrode, a failure of the reference electrode's ability to maintain a stable thermodynamic potential, a compromise to a diffusion-limiting membrane, or a steady increase of the porosity of the sensing layer contained within the analyte-selective sensor. The counter electrode voltage approaches the working electrode voltage in situations in which the counter electrode voltage is swinging in a positive direction to support the level of current at the working electrode (e.g., the sensing current). If the difference between the voltage at the counter electrode and the voltage at the working electrode increases, this is indicative of an impedance or resistance value of an analyte sensor increasing to very large values. This is an indication that any one or more of the following faults is occurring: improper sensor insertion, partial or complete occlusion of the sensor (e.g., due to biofouling/protein adsorption/collagen formation/encapsulation) such that analyte diffusion is attenuated, or a failure of the reference electrode's ability to maintain a stable thermodynamic potential. The difference between the counter electrode voltage and the working electrode voltage increases when the counter electrode voltage swings in a negative direction to support the sensing current. Thus, in some instances, a voltage is measured at the working electrode and the counter electrode to identify the fault. The voltage value of the counter electrode adjusts dynamically to support the prescribed current requirements of the analyte sensor, as shown inFIG.13A.FIG.13Ais a representation of the electrochemical cell using both the Nyquist plot and the Bode plot formulation. The Bode plot illustrates the amplitude and phase response of the electrochemical cell. FIG.13Bis a Nyquist plot of the electrochemical cell, illustrating the real (Re{Z}) and imaginary (Im{Z}) components of the electrochemical impedance as radian frequency ω is varied. A zero imaginary component of the impedance is achieved in two cases according to the Randles equivalent circuit model: (1) when the radian frequency approaches ∞, allowing inference of the solution resistance (Rs/RΩ), and (2) when the radian frequency approaches 0, allowing inference of the charge-transfer resistance (Rct) combined with the solution resistance Rs. Perturbing the electrochemical cell at both frequency extremes enables a full characterization of the real (resistive) components of the electrochemical cell. Assuming the electrochemical cell is purely capacitive, a semi-circle interpolation between both Im{Z}→0 intersections enables the calculation of a double-layer capacitance Cdl. FIGS.14-17are example plots illustrating the relationship between current and corresponding counter electrode voltage in different fault situations, indicating the operational relationship between the sensing current and the counter electrode voltage. The example plots may be used to provide indications of sensor impedance changes between the counter electrode and the working electrode. FIG.14includes a sensing current plot1410and a corresponding counter electrode voltage plot1420, versus time.
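The Ohm's-law reasoning above (v=Zi) suggests a simple diagnostic: estimate the accumulated sensor impedance from the counter-to-working voltage difference and the cell current, then note whether it has collapsed toward low values or grown very large. The sketch below is an illustrative assumption of how that classification might be encoded; the thresholds and names are hypothetical placeholders, not source values.

```python
# Sketch of an impedance-based fault classification per v = Z * i above.
# Threshold values are hypothetical placeholders.

LOW_Z_OHMS = 5e3      # assumed "decayed to low levels" threshold
HIGH_Z_OHMS = 5e6     # assumed "increased to very large values" threshold

def classify_sensor_impedance(v_ce: float, v_we: float, i_cell: float) -> str:
    if i_cell == 0.0:
        # Negligible current implies the impedance tends toward infinity.
        return "high_impedance_fault"
    z = abs(v_ce - v_we) / abs(i_cell)   # accumulated impedance magnitude
    if z < LOW_Z_OHMS:
        return "low_impedance_fault"     # e.g., WE-CE short, membrane compromise
    if z > HIGH_Z_OHMS:
        return "high_impedance_fault"    # e.g., improper insertion, occlusion
    return "nominal"

# Example: 0.5 V across the cell at 10 nA implies ~50 MOhm, a high-Z fault.
print(classify_sensor_impedance(-0.5, 0.0, 10e-9))
```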
During normal operation (e.g., before points1411,1421and between points1413,1423and points1414,1424), as the sensor current changes, the counter electrode voltage changes in an equal or near equal but opposite rate of change, which is visually depicted in the plots1410and1420as a mirrored response. During normal operation in which no faults are exhibited, the counter electrode voltage rate of change and the sensing current rate of change may be near equal or substantially equal. For example, a difference of up to about 5% may exist between the rates of change. In some variations, a difference of up to 10% may exist between the rates of change. The difference between the counter electrode voltage rate of change and the sensing current rate of change may vary, within the near equal or substantially equal range of up to 5% or in some instances up to 10%, during normal operation. Faults are indicated at points1421,1422,1423,1424, and1425in the counter electrode voltage and correspond, respectively, to points1411,1412,1413,1414, and1415in the sensing current. The faults at points1421,1422,1423,1424, and1425are representative of pressure-induced signal attenuations and are identified by deviation in the correspondence between the counter electrode voltage and the sensing current. As shown in the plots1410and1420, at the faults, the counter electrode voltage corresponds to the sensing current with an equal or near equal rate of change. For example, the rates of change may differ between one another by up to 5% or in some instances up to 10%. FIG.15(similar toFIG.14) includes a current plot1510and a corresponding counter electrode voltage plot1520, versus time. During normal operation (e.g., before points1511,1521and between points1511,1521and points1512,1522), as the sensor current changes, the counter electrode voltage changes in an equal but opposite rate of change, which is visually depicted in the plots1510and1520as a mirrored response. During normal operation in which no faults are exhibited, the counter electrode voltage rate of change and the sensing current rate of change may be near equal or substantially equal. For example, a difference of up to about 5% may exist between the rates of change. In some variations, a difference of up to 10% may exist between the rates of change. The difference between the counter electrode voltage rate of change and the sensing current rate of change may vary, within the near equal or substantially equal range of up to 5% or in some instances up to 10%, during normal operation. Faults are indicated at points1521,1522,1523, and1524in the counter electrode voltage and correspond, respectively, to points1511,1512,1513, and1514in the sensing current. The faults at points1521,1522,1523, and1524are representative of pressure-induced signal attenuations and are identified by deviation in the correspondence between the counter electrode voltage and the sensing current. As shown in the plots1510and1520, at the faults, the counter electrode voltage corresponds to the sensing current with an equal or near equal rate of change. For example, the rates of change may differ between one another by up to 5% or in some instances up to 10%. FIG.16includes a current plot1610and a corresponding counter electrode voltage plot1620, versus time. 
During normal operation (e.g., before points1621,1611), as the sensor current changes, the counter electrode voltage changes in an equal or near equal but opposite rate of change, which is visually depicted in the plots1610and1620as a mirrored response. During normal operation in which no faults are exhibited, the counter electrode voltage rate of change and the sensing current rate of change may be near equal or substantially equal. For example, a difference of up to about 5% may exist between the rates of change. In some variations, a difference of up to 10% may exist between the rates of change. The difference between the counter electrode voltage rate of change and the sensing current rate of change may vary, within the near equal or substantially equal range of up to 5% or in some instances up to 10%, during normal operation. The counter electrode voltage reaching a lower compliance limit at point1621is an indication of a fault. The point1621may correspond to a preceding current spike at point1611in the sensor current, but in some instances, there may not be a clear correlation between the counter electrode voltage and the sensing current. The fault at point1621, based on the lower compliance limit being reached, is representative of changes in the physiologic environment surrounding the sensor or changes in the sensor surface. FIG.17includes a current plot1710and a corresponding counter electrode voltage plot1720, versus time. Points1721and1722, representative of faults due to the rapid rate of change exhibited, are indicated in the counter electrode voltage and, as shown, are unrelated to the current of the analyte monitoring device. As the current is not experiencing substantial fluctuations or unexpected variations, the points1721and1722are indications of faults unrelated to the current of the analyte monitoring device and are instead correlated to external environmental influences, such as external impact to the electronics of the analyte monitoring device. FIG.18depicts an illustrative schematic of a fault detection and diagnostics system1800for monitoring the counter electrode voltage and the working electrode voltage according to the described implementations. Aspects of the fault detection and diagnostics system1800may be incorporated in the analyte monitoring device110. An analog front end1840, as described herein and which maintains a fixed potential relationship between the working electrode1810and the reference electrode1830within the electrochemical system while permitting the counter electrode1820to dynamically swing to potentials required to sustain the redox reaction of interest at the working electrode, is included. A converter1815coupled to the working electrode1810is optionally provided to convert the working electrode voltage. A converter1825coupled to the counter electrode1820is provided to convert the counter electrode voltage. In some instances, one converter may be provided and coupled to each of the working electrode1810and the counter electrode1820for converting the voltages. The converter1815, the converter1825, and/or the single converter may be an analog to digital converter. The digitized voltage signals are transmitted to a controller1822coupled to each converter. In some instances, the controller122shown in and described with reference toFIG.2Amay incorporate operational aspects of the controller1822. The controller1822may be a separate component. In some instances, the controller122is incorporated in place of the controller1822.
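The "mirrored response" test that the plots above illustrate, equal-but-opposite rates of change within roughly 5-10%, can be checked directly on the two rate signals. The sketch below is one illustrative formulation; it assumes the current and voltage rates have been pre-normalized to comparable unitless scales, since the source does not specify how the two quantities are scaled against one another.

```python
# Sketch of the mirrored-response check described above: during normal
# operation the counter electrode voltage rate is equal but opposite to the
# sensing current rate, within ~5-10%. Normalization is an assumption here.

def rates_mirror(di_dt: float, dv_dt: float, tolerance: float = 0.10) -> bool:
    """True if dv_dt is equal-but-opposite to di_dt within `tolerance`."""
    if di_dt == 0.0 and dv_dt == 0.0:
        return True                       # both flat: trivially corresponding
    if di_dt == 0.0 or dv_dt == 0.0:
        return False                      # one changing, the other not
    if (di_dt > 0) == (dv_dt > 0):
        return False                      # same sign: not a mirrored response
    return abs(abs(dv_dt) - abs(di_dt)) / abs(di_dt) <= tolerance

# Example: opposite signs, magnitudes within 6% -> corresponds.
print(rates_mirror(0.50, -0.47))  # True
```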
The controller1822(and/or the controller122) processes the counter electrode voltage, the sensing current, and optionally the working electrode voltage to identify faults and associated modes of operation, according to aspects described herein. The controller1822may provide instructions or corrective signals to the three-electrode electrochemical system and may provide an output1824to alert the user of the faults and optionally the mode of operation. The output1824may be provided on a user interface of the analyte monitoring device and/or may be communicated (e.g., wirelessly through near-field communication, Bluetooth, or other wireless protocol) to a remote device and/or remote server. In some variations, more than one working electrode is incorporated and used for detecting an analyte. For example, in the microneedle array configurations900H,900I, and/or900J, shown inFIGS.9H,9I, and9J, more than one working electrode and more than one counter electrode are incorporated. In variations in which more than one counter electrode is incorporated, the counter electrodes are shorted together such that one cumulative counter electrode voltage is monitored, as the shorted-together counter electrodes act as one counter electrode. With more than one working electrode, each additional working electrode generates a respective sensing current. In some variations, a correlation between the counter electrode voltage and each working electrode sensing current may be determined. As each working electrode is positioned on a separate and discrete microneedle in the microneedle array, faults that arise may not be consistent between the working electrodes. For example, electrode membrane degradation and biorecognition element degradation may vary across the plurality of working electrodes. Additionally, with respect to improper placement or insertion, in some instances the working electrodes may experience different insertion depths such that while one or more working electrodes are sufficiently inserted, others may not be. Pressure attenuations may also, in some instances, affect the working electrodes differently. Therefore, based on the differences that can occur across the microneedle array, it may be useful to separately monitor and analyze the counter electrode voltage against each working electrode sensing current. The separate monitoring and analysis may serve to provide an indication of a fault at one or more working electrodes. In some variations, when one fault is identified, a corresponding mode of operation is applied. If more than one fault is identified and the faults are different, the mode of operation to discontinue application of a potential between the working electrode and the reference electrode takes priority over the mode of operation to blank and/or disregard sensing data. In some variations, if a fault is detected at one working electrode but one or more additional working electrodes are operating according to normal operation (e.g., no fault detected), the potential applied at the working electrode exhibiting a fault may be discontinued while allowing operation to continue with the remaining working electrodes. In some variations, a minimum number of operational working electrodes may be defined such that operation of the analyte monitoring device continues if the number of operational working electrodes meets or exceeds the minimum number. In some variations, a combined sensing current is based on the working electrode sensing currents being combined.
For example, the sensing current from each of the working electrodes may be averaged to form a combined sensing current. The combined sensing current may be used with the counter electrode voltage, as described herein, to determine faults and modes of operation of the analyte monitoring device. Additional details related to the Randles equivalent model are provided. The impedance Z of the Randles equivalent model is given by the relation:

$$Z = R_s + R_{ct} \parallel C_{dl} \quad [1]$$

Expanding this relation to represent the impedance as a function of radian frequency ω:

$$\tilde{Z} = R_s + \frac{R_{ct}}{1 + j\omega R_{ct} C_{dl}} \quad [2]$$

At the DC case (zero frequency), the impedance is given by:

$$\tilde{Z}(\omega \to 0) = R_s + R_{ct} \quad [3]$$

At the AC case (high frequency extreme), the impedance is given by:

$$\tilde{Z}(\omega \to \infty) = R_s \quad [4]$$

Recasting equation 2:

$$\tilde{Z} = R_s + \frac{R_{ct}}{1 + \omega^2 R_{ct}^2 C_{dl}^2} - \frac{j\omega R_{ct}^2 C_{dl}}{1 + \omega^2 R_{ct}^2 C_{dl}^2} \quad [5]$$

The real and imaginary components of the impedance given in equation 5 may be easily identified as:

$$\operatorname{Re}\{\tilde{Z}\} = R_s + \frac{R_{ct}}{1 + \omega^2 R_{ct}^2 C_{dl}^2} \quad [6]$$

$$\operatorname{Im}\{\tilde{Z}\} = -\frac{\omega R_{ct}^2 C_{dl}}{1 + \omega^2 R_{ct}^2 C_{dl}^2} \quad [7]$$

Given a substitution:

$$\xi = 1 + \omega^2 R_{ct}^2 C_{dl}^2 \quad [8]$$

The amplitude response of the system is given by:

$$|\tilde{Z}| = \sqrt{[\operatorname{Re}\{\tilde{Z}\}]^2 + [\operatorname{Im}\{\tilde{Z}\}]^2} = \sqrt{R_s^2 + \frac{R_{ct}}{\xi}\left(2 R_s + \frac{R_{ct}}{\xi} + \frac{\omega^2 R_{ct}^3 C_{dl}^2}{\xi}\right)} \quad [9]$$

The phase response is accordingly computed:

$$\varphi = \tan^{-1}\!\left(\frac{\operatorname{Im}\{\tilde{Z}\}}{\operatorname{Re}\{\tilde{Z}\}}\right) = \tan^{-1}\!\left(\frac{-\omega R_{ct}^2 C_{dl}}{R_s \xi + R_{ct}}\right) \quad [10]$$

The current supported by the electrochemical reaction, $i_{CELL}$, may be computed by applying Kirchhoff's Voltage Law to the Randles cell:

$$i_{CELL}(\omega) = \frac{V_{CE} - V_{WE}}{\tilde{Z}} = \frac{V_{CE} - V_{WE}}{R_s + \dfrac{R_{ct}}{1 + j\omega R_{ct} C_{dl}}} \quad [11]$$

The counter electrode voltage, $V_{CE}$, may be computed by reformulating the above relation:

$$V_{CE} = V_{WE} + i_{CELL}(\omega)\left[R_s + \frac{R_{ct}}{1 + j\omega R_{ct} C_{dl}}\right] \quad [12]$$

The current may be a positive or negative quantity depending on the configuration of the potentiostat and whether the electrochemical reaction is undergoing oxidation or reduction. In the provided model and current worked equations, it is assumed that the current flows from the counter electrode (held at highest potential) through the electrochemical cell and into the working electrode, which is held at a lower potential (e.g., ground-referenced); this model assumes a reduction reaction (e.g., current flows into the working electrode and thus acts as an electron source). It is also possible for the counter electrode to be held at a lower potential than the working electrode (in oxidation), causing the current to flow from the working electrode into the counter electrode. In this case, the working electrode acts as an electron sink. For the DC case:

$$V_{CE} = V_{WE} + i_{CELL}\left[R_s + R_{ct}\right] \quad [13]$$

For a given $R_s$ and $R_{ct}$, $V_{CE}$ will track $i_{CELL}$. For a finite charge transfer resistance $R_{ct}$:

$$\lim_{R_s \to \infty} V_{CE} = \infty \quad [14]$$

This is the compliance voltage limit of the potentiostat. In this scenario, there is no ohmic connection between the counter electrode and working electrode. Likewise:

$$\lim_{R_s \to 0} V_{CE} = V_{WE} + i_{CELL} R_{ct} \quad [15]$$

This represents the ideal operating condition for an electrochemical system. This is achieved by operating in a medium of sufficient electrolytic/ionic strength (e.g., buffer solution or a physiological fluid of a wearer). Likewise, for a finite solution resistance $R_s$:

$$\lim_{R_{ct} \to \infty} V_{CE} = V_{WE} \quad [16]$$

In other words, the counter electrode voltage will approach the working electrode voltage as the current through the electrochemical cell, $i_{CELL}$, approaches zero due to an infinite charge-transfer resistance. The practical manifestation of this is a complete passivation of the working electrode surface such that no current can flow; an ideal double-layer capacitor is thus formed.
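As a quick numeric sanity check of equations 12 through 16, the counter electrode voltage can be evaluated for a few illustrative cases. The helper and the component and current values below are assumptions for illustration only, not source-specified values.

```python
# Numeric check of equations 12-16 above with illustrative values and a
# ground-referenced working electrode (V_WE = 0).

def v_ce(v_we, i_cell, Rs, Rct, omega=0.0, Cdl=0.0):
    """Counter electrode voltage per equation 12 (complex in the AC case)."""
    z = Rs + Rct / (1 + 1j * omega * Rct * Cdl)
    return v_we + i_cell * z

print(v_ce(0.0, 50e-9, Rs=1e3, Rct=50e3).real)  # eq. 13 (DC): i*(Rs+Rct) = 2.55e-3 V
print(v_ce(0.0, 50e-9, Rs=0.0, Rct=50e3).real)  # eq. 15: V_WE + i*Rct = 2.5e-3 V
print(v_ce(0.0, 0.0,   Rs=1e3, Rct=50e3).real)  # eq. 16 trend: V_CE -> V_WE as i -> 0
```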
As for the case when the said charge transfer resistance approaches zero:

$$\lim_{R_{ct} \to 0} V_{CE} = V_{WE} + i_{CELL} R_s \quad [17]$$

The current through the electrochemical cell becomes invariant of the charge transfer process (e.g., as in an electrolysis reaction). Instead, the counter electrode will track the current flowing through the electrochemical cell (assuming the solution resistance/electrolytic content remains constant throughout the electrolysis). In the AC case, as the frequency tends towards extreme values:

$$\lim_{\omega \to \infty} V_{CE} = V_{WE} + i_{CELL} R_s \quad [18]$$

The current through the electrochemical cell becomes invariant of the charge transfer process (e.g., as in an electrolysis reaction). Similarly, in the DC case, as the frequency tends towards zero:

$$\lim_{\omega \to 0} V_{CE} = V_{WE} + i_{CELL}\left[R_s + R_{ct}\right] \quad [19]$$

This is the same as equation 13. The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that specific details are not required in order to practice the invention. Thus, the foregoing descriptions of specific embodiments of the invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the invention. | 108,167 |
11857345 | While the disclosure is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure. DETAILED DESCRIPTION For the following defined terms, these definitions shall be applied, unless a different definition is given in the claims or elsewhere in this specification. All numeric values are herein assumed to be modified by the term “about,” whether or not explicitly indicated. The term “about” generally refers to a range of numbers that one of skill in the art would consider equivalent to the recited value (i.e., having the same function or result). In many instances, the term “about” may include numbers that are rounded to the nearest significant figure. The recitation of numerical ranges by endpoints includes all numbers within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5). As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. As used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. It is noted that references in the specification to “an embodiment”, “some embodiments”, “other embodiments”, etc., indicate that the embodiment described may include one or more particular features, structures, and/or characteristics. However, such recitations do not necessarily mean that all embodiments include the particular features, structures, and/or characteristics. Additionally, when particular features, structures, and/or characteristics are described in connection with one embodiment, it should be understood that such features, structures, and/or characteristics may also be used in connection with other embodiments whether or not explicitly described unless clearly stated to the contrary. The following detailed description should be read with reference to the drawings in which similar elements in different drawings are numbered the same. The drawings, which are not necessarily to scale, depict illustrative embodiments and are not intended to limit the scope of the invention. During some medical interventions, it may be desirable to measure and/or monitor the blood pressure within a blood vessel. For example, some medical devices may include pressure sensors that allow a clinician to monitor blood pressure. Such devices may be useful in determining fractional flow reserve (FFR), which may be understood as the pressure after a stenosis relative to the pressure before the stenosis (and/or the aortic pressure). FIG.1illustrates a portion of an example medical device10. In this example, medical device10is a blood pressure sensing guidewire10. However, this is not intended to be limiting, as other medical devices are contemplated including, for example, catheters, shafts, leads, wires, or the like. Guidewire10may include a tubular member or elongated shaft12. Shaft12may include a proximal region14and a distal region16. The materials for proximal region14and distal region16may vary and may include those materials disclosed herein.
For example, distal region16may include a nickel-cobalt-chromium-molybdenum alloy (e.g., MP35-N). Proximal region14may include stainless steel. These are just examples. Other materials may also be utilized. In some embodiments, proximal region14and distal region16are formed from the same monolith of material. In other words, proximal region14and distal region16are portions of the same tube defining shaft12. In other embodiments, proximal region14and distal region16are separate tubular members that are joined together. For example, a section of the outer surface of regions14/16may be removed and a sleeve17may be disposed over the removed sections to join regions14/16. Alternatively, sleeve17may be simply disposed over regions14/16. Other bonds may also be used including welds, thermal bonds, adhesive bonds, or the like. If utilized, sleeve17used to join proximal region14with distal region16may include a material that desirably bonds with both proximal region14and distal region16. For example, sleeve17may include a nickel-chromium-molybdenum alloy (e.g., INCONEL). A plurality of slots18may be formed in shaft12. In at least some embodiments, slots18are formed in distal region16. In at least some embodiments, proximal region14lacks slots18. However, proximal region14may include slots18. Slots18may be desirable for a number of reasons. For example, slots18may provide a desirable level of flexibility to shaft12(e.g., along distal region16) while also allowing suitable transmission of torque. Slots18may be arranged/distributed along distal region16in a suitable manner including any of those arrangements disclosed herein. For example, slots18may be arranged as opposing pairs of slots18that are distributed along the length of distal region16. In some embodiments, adjacent pairs of slots18may have a substantially constant spacing relative to one another. Alternatively, the spacing between adjacent pairs may vary. For example, more distal portions of distal region16may have a decreased spacing (and/or increased slot density), which may provide increased flexibility. In other embodiments, more distal portions of distal region16may have an increased spacing (and/or decreased slot density). These are just examples. Other arrangements are contemplated. A pressure sensor20may be disposed within shaft12(e.g., within a lumen22of shaft12). While pressure sensor20is shown schematically inFIG.1, it can be appreciated that the structural form and/or type of pressure sensor20may vary. For example, pressure sensor20may include a semiconductor (e.g., silicon wafer) pressure sensor, piezoelectric pressure sensor, a fiber optic or optical pressure sensor, a Fabry-Perot type pressure sensor, an ultrasound transducer and/or ultrasound pressure sensor, a magnetic pressure sensor, a solid-state pressure sensor, or the like, or any other suitable pressure sensor. In some cases, the sensor20may be a different type of sensor, such as a temperature sensor. As indicated above, pressure sensor20may include an optical pressure sensor. In at least some of these embodiments, an optical fiber or fiber optic cable24(e.g., a multimode fiber optic) may be attached to pressure sensor20and may extend proximally therefrom. An attachment member26may attach optical fiber24to shaft12. Attachment member26may be circumferentially disposed about and attached to optical fiber24and may be secured to the inner surface of shaft12(e.g., distal region16). In at least some embodiments, attachment member26is proximally spaced from pressure sensor20. 
Other arrangements are contemplated. Additional features and structural elements of the pressure sensor20may be seen inFIGS.4through7, which illustrate features of an optical pressure sensing block that may be used as the pressure sensor20. In at least some embodiments, distal region16may include a portion with a thinned wall and/or an increased inner diameter that defines a housing region52. In general, housing region52is the portion of distal region16that ultimately “houses” the pressure sensor (e.g., pressure sensor20). By virtue of having a portion of the inner wall of shaft12being removed at housing region52, additional space may be created or otherwise defined that can accommodate sensor20. In at least some embodiments, it may be desirable for pressure sensor20to have reduced exposure along its side surfaces to fluid pressure (e.g., from the blood). Accordingly, it may be desirable to position pressure sensor20along a landing region50defined along housing region52. Landing region50may be substantially free of slots18so that the side surfaces of pressure sensor20have a reduced likelihood of being deformed due to fluid pressures at these locations. Distal of landing region50, housing region52may include slots18that provide fluid access to pressure sensor20. Moreover, one or more of slots18may define a fluid pathway that allows blood (and/or a body fluid) to flow from a position along the exterior or outer surface of guidewire10(and/or shaft12), through slots18, and into the lumen22of shaft12, where the blood can come into contact with pressure sensor20. Because of this, no additional side openings/holes (e.g., other than one or more slots18, a single slot18extending through the wall of shaft12, and/or a dedicated pressure port or opening) may be necessary in shaft12for pressure measurement. This may also allow the length of distal portion16to be shorter than typical sensor mounts or hypotubes that would need to have a length sufficient for a suitable opening/hole (e.g., a suitable “large” opening/hole) to be formed therein that provides fluid access to sensor20. A tip member30may be coupled to distal region16. Tip member30may include a shaping member32and a spring or coil member34. A distal tip36may be attached to shaping member32and/or spring34. In at least some embodiments, distal tip36may take the form of a solder ball tip. Tip member30may be joined to distal region16of shaft12with a bonding member46such as a weld. Shaft12may include a hydrophilic coating19. In some embodiments, hydrophilic coating19may extend along substantially the full length of shaft12. In other embodiments, one or more discrete sections of shaft12may include hydrophilic coating19. In use, a clinician may use guidewire10to measure and/or calculate FFR (e.g., the pressure after an intravascular occlusion relative to the pressure before the occlusion and/or the aortic pressure). Measuring and/or calculating FFR may include measuring the aortic pressure in a patient. This may include advancing guidewire10through a blood vessel or body lumen54to a position that is proximal or upstream of an occlusion56as shown inFIG.2. For example, guidewire10may be advanced through a guide catheter58to a position where at least a portion of sensor20is disposed distal of the distal end of guide catheter58and measuring the pressure within body lumen54. This pressure may be characterized as an initial pressure. 
In some embodiments, the aortic pressure may also be measured by another device (e.g., a pressure sensing guidewire, catheter, or the like). The initial pressure may be equalized with the aortic pressure. For example, the initial pressure measured by guidewire10may be set to be the same as the measured aortic pressure. Guidewire10may be further advanced to a position distal or downstream of occlusion56as shown inFIG.3, and the pressure within body lumen54may be measured. This pressure may be characterized as the downstream or distal pressure. The distal pressure and the aortic pressure may be used to calculate FFR. It can be appreciated that an FFR system that utilizes an optical pressure sensor in a pressure sensing guidewire may be navigated through the tortuous anatomy. This may include crossing relatively tight bends in the vasculature. Because of this, and for other reasons, it may be desirable for a pressure sensing guidewire to be relatively flexible, for example adjacent to the distal end. It can be appreciated that in relatively flexible guidewires, bending the guidewire could result in contact between an inner surface of the guidewire and, for example, the pressure sensor. Such contact could lead to alterations and/or deformations of the pressure sensor, potentially leading to pressure reading offsets. Accordingly, disclosed herein are pressure-sensing guidewires that may include structural features that may help to reduce contact between the pressure sensor and the inner surface of the guidewire and, therefore, help to reduce the possibility of pressure reading offsets. FIG.4illustrates an optical pressure sensing block120that may be used, for example, as the basis for the pressure sensor20as shown inFIGS.1-3. In some embodiments, for example as will be discussed with respect toFIGS.8-11, a plurality of individual optical pressure sensing blocks120may be machined from a block of glass. In some cases, the distal and proximal profiles of a plurality of optical pressure sensing blocks120may be milled, etched or otherwise formed in either side of a glass block. The individual optical pressure sensing blocks120may then be diced or otherwise cut apart. In some cases, additional elements such as a pressure sensing membrane may be secured to the plurality of optical pressure sensing blocks120before they are cut apart. In some instances, the pressure sensing membranes may be added after the optical pressure sensing blocks120are cut apart. It will be appreciated that this manufacturing discussion is illustrative only. In some embodiments, as illustrated, the optical pressure sensing block120may be considered as including a distal portion122and a proximal portion124. A center portion126is disposed between the distal portion122and the proximal portion124. In some cases, as illustrated, the center portion126may have a constant or relatively constant diameter. The distal portion122may taper from the center portion126towards a distal end123of the distal portion122. In some instances, the proximal portion124may taper from the center portion126towards a proximal end127of the proximal portion124. While not illustrated, in some cases it is contemplated that a cross-sectional diameter of the optical pressure sensing block120may vary smoothly from a maximum somewhere within the center portion126towards each of the distal end123and the proximal end127.
In some instances, the center portion126may be seen as having a larger cross-sectional diameter than either the distal portion122or the proximal portion124. Thus it will be appreciated that the center portion126may help to prevent the distal portion122of the optical pressure sensing block120from contacting other components of the pressure sensing guidewire10. In some cases, the optical pressure sensing block120is formed of a single or monolithic glass block. In some embodiments, the optical pressure sensing block120may be considered as forming a Fabry-Perot optical sensing device that includes a sensor block including a proximal portion, a distal portion and a center portion disposed between the proximal and distal portions, the center portion having an outer diameter that is greater than an outer diameter of the distal portion and greater than an outer diameter of the proximal portion. The distal portion defines a cavity therein, and there may be a pressure sensing layer disposed over the cavity. The distal portion122of the optical pressure sensing block120may include a recess128formed in the distal end123that helps to form the pressure sensor20. As is shown inFIG.5, a pressure sensing membrane may span the recess128. In some embodiments, as illustrated for example inFIG.4, the proximal portion124of the optical pressure sensing block120extends proximally to form an optical fiber connector130. The optical fiber connector130may be integrally formed as part of the optical pressure sensing block120, and may be configured for attachment to an optical fiber such as the optical fiber24shown inFIGS.1-3. In some cases, the optical fiber connector130may be configured to improve the accuracy and effectiveness of a connection between the optical pressure sensing block120and the aforementioned optical fiber24. In some cases, the optical fiber connector130may have an angled proximal end132that may facilitate fusion splicing between the optical fiber connector130and an optical fiber such as the optical fiber24. In some cases, the proximal end132may instead be flat, rather than angled, depending on how the optical fiber is to be attached. It will be appreciated that because the optical fiber24is glass, and the optical fiber connector130, by virtue of being an integral part of the optical pressure sensing block120, is also glass, an accurate connection can be achieved using fusion splicing. Fusion splicing is a process known for attaching one optical fiber to another optical fiber, for example. InFIG.5, it can be seen that a pressure sensing membrane140has been secured to the distal end123of the optical pressure sensing block120via a eutectic bond142. In some cases, the pressure sensing membrane140may be a thin layer of silicon that can flex relative to the recess128in response to changes in pressure adjacent the pressure sensing membrane140opposite the recess128.FIG.6illustrates inclusion of an optical fiber150that may, for example, represent the optical fiber24shown and discussed herein. In the illustrated embodiment, the optical fiber150has a distal end152that is angled in a complementary fashion to the proximal end132of the optical fiber connector130. It will be appreciated that if the proximal end132of the optical fiber connector130is not angled, or has a different profile, the distal end152of the optical fiber150will have a corresponding profile. In some cases, as shown inFIG.7, the optical fiber connector may instead be an aperture drilled or otherwise formed in the optical pressure sensing block220.
It will be appreciated that this may further improve alignment between the optical fiber and the pressure sensing membrane at the opposing end of the optical pressure sensing block220. The optical pressure sensing block220includes a distal portion222and a proximal portion224. A center portion226is disposed between the distal portion222and the proximal portion224. In some cases, as illustrated, the center portion226may have a constant or relatively constant diameter. The distal portion222may taper from the center portion226towards a distal end223of the distal portion222. In some instances, the proximal portion224may taper from the center portion226towards a proximal end227of the proximal portion224. While not illustrated, in some cases it is contemplated that a cross-sectional diameter of the optical pressure sensing block220may vary smoothly from a maximum somewhere within the center portion226towards each of the distal end223and the proximal end227. The distal portion222of the optical pressure sensing block220may include a recess228formed in the distal end223that helps to form the pressure sensor20. A pressure sensing membrane240has been secured to the distal end223of the optical pressure sensing block220via a eutectic bond242. In some cases, the pressure sensing membrane240may be a thin layer of silicon that can flex relative to the recess228in response to changes in pressure adjacent the pressure sensing membrane240opposite the recess228. In this illustration, an optical fiber connector230is an aperture that is formed within the proximal end227of the proximal portion224. The aperture forming the optical fiber connector230has a diameter that is about the same as a diameter of an optical fiber250such that the optical fiber250may be inserted into the optical fiber connector230but is located by the optical fiber connector230such that there is no play, or relative movement, between the optical fiber connector230and the optical fiber250. In some cases, the optical fiber connector230has a bottom surface232that is complementary to a profile of a distal end252of the optical fiber250. Once the optical fiber250is firmly seated within the optical fiber connector230, it may be secured in place by an adhesive254placed about the optical fiber250near the proximal end227of the proximal portion224. FIGS.8through11provide an illustrative but non-limiting manufacturing method for the optical pressure sensing block120. As shown inFIG.8, pockets310may be milled into a glass wafer300. A silicon membrane wafer320may be disposed over the glass wafer300and heat and pressure may be applied to create a eutectic bond322between the silicon membrane wafer320and the glass wafer300, as shown inFIG.9. Next, the proximal profile may be machined as shown inFIG.10, removing material to form optical fiber connectors330. In some cases, a femtosecond laser system may be used to mill the illustrated profile into the glass wafer300. Finally, as shown inFIG.11, the assembly may be diced to form individual optical pressure sensing blocks340. In some cases, it will be appreciated that the milling shown inFIG.10may occur before or after the silicon membrane wafer320is attached to the glass wafer300via the eutectic bond322. The materials that can be used for the various components of guidewire10(and/or other guidewires disclosed herein) and the various tubular members disclosed herein may include those commonly associated with medical devices.
For simplicity purposes, the following discussion makes reference to shaft12and other components of guidewire10. However, this is not intended to limit the devices and methods described herein, as the discussion may be applied to other tubular members and/or components of tubular members or devices disclosed herein. Shaft12may be made from a metal, metal alloy, polymer (some examples of which are disclosed below), a metal-polymer composite, ceramics, combinations thereof, and the like, or other suitable material. Some examples of suitable metals and metal alloys include stainless steel, such as 304V, 304L, and 316LV stainless steel; mild steel; nickel-titanium alloy such as linear-elastic and/or super-elastic nitinol; other nickel alloys such as nickel-chromium-molybdenum alloys (e.g., UNS: N06625 such as INCONEL® 625, UNS: N06022 such as HASTELLOY® C-22®, UNS: N10276 such as HASTELLOY® C276®, other HASTELLOY® alloys, and the like), nickel-copper alloys (e.g., UNS: N04400 such as MONEL® 400, NICKELVAC® 400, NICORROS® 400, and the like), nickel-cobalt-chromium-molybdenum alloys (e.g., UNS: R30035 such as MP35-N® and the like), nickel-molybdenum alloys (e.g., UNS: N10665 such as HASTELLOY® ALLOY B2®), other nickel-chromium alloys, other nickel-molybdenum alloys, other nickel-cobalt alloys, other nickel-iron alloys, other nickel-copper alloys, other nickel-tungsten or tungsten alloys, and the like; cobalt-chromium alloys; cobalt-chromium-molybdenum alloys (e.g., UNS: R30003 such as ELGILOY®, PHYNOX®, and the like); platinum enriched stainless steel; titanium; combinations thereof; and the like; or any other suitable material. As alluded to herein, within the family of commercially available nickel-titanium or nitinol alloys there is a category designated “linear elastic” or “non-super-elastic” which, although it may be similar in chemistry to conventional shape memory and super elastic varieties, may exhibit distinct and useful mechanical properties. Linear elastic and/or non-super-elastic nitinol may be distinguished from super elastic nitinol in that the linear elastic and/or non-super-elastic nitinol does not display a substantial “superelastic plateau” or “flag region” in its stress/strain curve like super elastic nitinol does. Instead, in the linear elastic and/or non-super-elastic nitinol, as recoverable strain increases, the stress continues to increase in a substantially linear, or a somewhat, but not necessarily entirely linear relationship until plastic deformation begins, or at least in a relationship that is more linear than the super elastic plateau and/or flag region that may be seen with super elastic nitinol. Thus, for the purposes of this disclosure linear elastic and/or non-super-elastic nitinol may also be termed “substantially” linear elastic and/or non-super-elastic nitinol. In some cases, linear elastic and/or non-super-elastic nitinol may also be distinguishable from super elastic nitinol in that linear elastic and/or non-super-elastic nitinol may accept up to about 2-5% strain while remaining substantially elastic (e.g., before plastically deforming) whereas super elastic nitinol may accept up to about 8% strain before plastically deforming. Both of these materials can be distinguished from other linear elastic materials such as stainless steel (which can also be distinguished based on its composition), which may accept only about 0.2 to 0.44 percent strain before plastically deforming.
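The approximate strain limits quoted in the preceding paragraph can be expressed as a simple bucketing, shown in the sketch below; the thresholds are the rough figures from the text, not precise material specifications.

```python
# Minimal sketch: bucket a measured recoverable strain against the
# approximate limits quoted above (stainless ~0.2-0.44%, linear-elastic
# nitinol up to ~2-5%, superelastic nitinol up to ~8%). Illustrative only.

def strain_class(recoverable_strain_pct: float) -> str:
    if recoverable_strain_pct <= 0.44:
        return "within the quoted stainless steel range (~0.2-0.44%)"
    if recoverable_strain_pct <= 5.0:
        return "within the quoted linear-elastic nitinol range (up to ~2-5%)"
    if recoverable_strain_pct <= 8.0:
        return "within the quoted superelastic nitinol range (up to ~8%)"
    return "beyond the quoted elastic ranges"

for s in (0.3, 3.5, 7.5, 10.0):
    print(s, "->", strain_class(s))
```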
In some embodiments, the linear elastic and/or non-super-elastic nickel-titanium alloy is an alloy that does not show any martensite/austenite phase changes that are detectable by differential scanning calorimetry (DSC) and dynamic mechanical thermal analysis (DMTA) over a large temperature range. For example, in some embodiments, there may be no martensite/austenite phase changes detectable by DSC and DMTA analysis in the range of about −60 degrees Celsius (° C.) to about 120° C. in the linear elastic and/or non-super-elastic nickel-titanium alloy. The mechanical bending properties of such material may therefore be generally inert to the effect of temperature over this very broad range of temperature. In some embodiments, the mechanical bending properties of the linear elastic and/or non-super-elastic nickel-titanium alloy at ambient or room temperature are substantially the same as the mechanical properties at body temperature, for example, in that they do not display a super-elastic plateau and/or flag region. In other words, across a broad temperature range, the linear elastic and/or non-super-elastic nickel-titanium alloy maintains its linear elastic and/or non-super-elastic characteristics and/or properties. In some embodiments, the linear elastic and/or non-super-elastic nickel-titanium alloy may be in the range of about 50 to about 60 weight percent nickel, with the remainder being essentially titanium. In some embodiments, the composition is in the range of about 54 to about 57 weight percent nickel. One example of a suitable nickel-titanium alloy is FHP-NT alloy commercially available from Furukawa Techno Material Co. of Kanagawa, Japan. Some examples of nickel titanium alloys are disclosed in U.S. Pat. Nos. 5,238,004 and 6,508,803, which are incorporated herein by reference. Other suitable materials may include ULTANIUM™ (available from Neo-Metrics) and GUM METAL™ (available from Toyota). In some other embodiments, a superelastic alloy, for example a superelastic nitinol, can be used to achieve desired properties. In at least some embodiments, portions or all of shaft12may also be doped with, made of, or otherwise include a radiopaque material. Radiopaque materials are understood to be materials capable of producing a relatively bright image on a fluoroscopy screen or another imaging technique during a medical procedure. This relatively bright image aids the user of guidewire10in determining its location. Some examples of radiopaque materials can include, but are not limited to, gold, platinum, palladium, tantalum, tungsten alloy, polymer material loaded with a radiopaque filler, and the like. Additionally, other radiopaque marker bands and/or coils may also be incorporated into the design of guidewire10to achieve the same result. In some embodiments, a degree of Magnetic Resonance Imaging (MRI) compatibility is imparted into guidewire10. For example, shaft12or portions thereof may be made of a material that does not substantially distort the image and create substantial artifacts (i.e., gaps in the image). Certain ferromagnetic materials, for example, may not be suitable because they may create artifacts in an MRI image. Shaft12, or portions thereof, may also be made from a material that the MRI machine can image.
Some materials that exhibit these characteristics include, for example, tungsten, cobalt-chromium-molybdenum alloys (e.g., UNS: R30003 such as ELGILOY®, PHYNOX®, and the like), nickel-cobalt-chromium-molybdenum alloys (e.g., UNS: R30035 such as MP35-N® and the like), nitinol, and the like, and others. A sheath or covering (not shown) may be disposed over portions or all of shaft12that may define a generally smooth outer surface for guidewire10. In other embodiments, however, such a sheath or covering may be absent from a portion or all of guidewire10, such that shaft12may form the outer surface. The sheath may be made from a polymer or other suitable material. Some examples of suitable polymers may include polytetrafluoroethylene (PTFE), ethylene tetrafluoroethylene (ETFE), fluorinated ethylene propylene (FEP), polyoxymethylene (POM, for example, DELRIN® available from DuPont), polyether block ester, polyurethane (for example, Polyurethane 85A), polypropylene (PP), polyvinylchloride (PVC), polyether-ester (for example, ARNITEL® available from DSM Engineering Plastics), ether or ester based copolymers (for example, butylene/poly(alkylene ether) phthalate and/or other polyester elastomers such as HYTREL® available from DuPont), polyamide (for example, DURETHAN® available from Bayer or CRISTAMID® available from Elf Atochem), elastomeric polyamides, block polyamide/ethers, polyether block amide (PEBA, for example available under the trade name PEBAX®), ethylene vinyl acetate copolymers (EVA), silicones, polyethylene (PE), Marlex high-density polyethylene, Marlex low-density polyethylene, linear low density polyethylene (for example REXELL®), polyester, polybutylene terephthalate (PBT), polyethylene terephthalate (PET), polytrimethylene terephthalate, polyethylene naphthalate (PEN), polyetheretherketone (PEEK), polyimide (PI), polyetherimide (PEI), polyphenylene sulfide (PPS), polyphenylene oxide (PPO), poly paraphenylene terephthalamide (for example, KEVLAR®), polysulfone, nylon, nylon-12 (such as GRILAMID® available from EMS American Grilon), perfluoro(propyl vinyl ether) (PFA), ethylene vinyl alcohol, polyolefin, polystyrene, epoxy, polyvinylidene chloride (PVdC), poly(styrene-b-isobutylene-b-styrene) (for example, SIBS and/or SIBS 50A), polycarbonates, ionomers, biocompatible polymers, other suitable materials, or mixtures, combinations, copolymers thereof, polymer/metal composites, and the like. In some embodiments the sheath can be blended with a liquid crystal polymer (LCP). For example, the mixture can contain up to about 6 percent LCP. In some embodiments, the exterior surface of the guidewire10(including, for example, the exterior surface of shaft12) may be sandblasted, beadblasted, sodium bicarbonate-blasted, electropolished, etc. In these as well as in some other embodiments, a coating, for example a lubricious, a hydrophilic, a protective, or other type of coating may be applied over portions or all of the sheath, or in embodiments without a sheath over portions of shaft12, or other portions of guidewire10. Alternatively, the sheath may comprise a lubricious, hydrophilic, protective, or other type of coating. Hydrophobic coatings such as fluoropolymers provide a dry lubricity which improves guidewire handling and device exchanges. Lubricious coatings improve steerability and improve lesion crossing capability.
Suitable lubricious polymers are well known in the art and may include silicone and the like, hydrophilic polymers such as high-density polyethylene (HDPE), polytetrafluoroethylene (PTFE), polyarylene oxides, polyvinylpyrrolidones, polyvinylalcohols, hydroxy alkyl cellulosics, algins, saccharides, caprolactones, and the like, and mixtures and combinations thereof. Hydrophilic polymers may be blended among themselves or with formulated amounts of water insoluble compounds (including some polymers) to yield coatings with suitable lubricity, bonding, and solubility. Some other examples of such coatings and materials and methods used to create such coatings can be found in U.S. Pat. Nos. 6,139,510 and 5,772,609, which are incorporated herein by reference. The coating and/or sheath may be formed, for example, by coating, extrusion, co-extrusion, interrupted layer co-extrusion (ILC), or fusing several segments end-to-end. The layer may have a uniform stiffness or a gradual reduction in stiffness from the proximal end to the distal end thereof. The gradual reduction in stiffness may be continuous as by ILC or may be stepped as by fusing together separate extruded tubular segments. The outer layer may be impregnated with a radiopaque filler material to facilitate radiographic visualization. Those skilled in the art will recognize that these materials can vary widely without deviating from the scope of the present invention. Various embodiments of arrangements and configurations of slots are also contemplated that may be used in addition to what is described above or may be used in alternate embodiments. For simplicity purposes, the following disclosure makes reference to guidewire10, slots18, and shaft12. However, it can be appreciated that these variations may also be utilized for other slots and/or tubular members. In some embodiments, at least some, if not all of slots18are disposed at the same or a similar angle with respect to the longitudinal axis of shaft12. As shown, slots18can be disposed at an angle that is perpendicular, or substantially perpendicular, and/or can be characterized as being disposed in a plane that is normal to the longitudinal axis of shaft12. However, in other embodiments, slots18can be disposed at an angle that is not perpendicular, and/or can be characterized as being disposed in a plane that is not normal to the longitudinal axis of shaft12. Additionally, a group of one or more slots18may be disposed at different angles relative to another group of one or more slots18. The distribution and/or configuration of slots18can also include, to the extent applicable, any of those disclosed in U.S. Pat. Publication No. US 2004/0181174, the entire disclosure of which is herein incorporated by reference. Slots18may be provided to enhance the flexibility of shaft12while still allowing for suitable torque transmission characteristics. Slots18may be formed such that one or more rings and/or tube segments are interconnected by one or more segments and/or beams that are formed in shaft12, and such tube segments and beams may include portions of shaft12that remain after slots18are formed in the body of shaft12. Such an interconnected structure may act to maintain a relatively high degree of torsional stiffness, while maintaining a desired level of lateral flexibility. In some embodiments, some adjacent slots18can be formed such that they include portions that overlap with each other about the circumference of shaft12.
In other embodiments, some adjacent slots18can be disposed such that they do not necessarily overlap with each other, but are disposed in a pattern that provides the desired degree of lateral flexibility. Additionally, slots18can be arranged along the length of, or about the circumference of, shaft12to achieve desired properties. For example, adjacent slots18, or groups of slots18, can be arranged in a symmetrical pattern, such as being disposed essentially equally on opposite sides about the circumference of shaft12, or can be rotated by an angle relative to each other about the axis of shaft12. Additionally, adjacent slots18, or groups of slots18, may be equally spaced along the length of shaft12, or can be arranged in an increasing or decreasing density pattern, or can be arranged in a non-symmetric or irregular pattern. Other characteristics, such as slot size, slot shape, and/or slot angle with respect to the longitudinal axis of shaft12, can also be varied along the length of shaft12in order to vary the flexibility or other properties. In other embodiments, moreover, it is contemplated that the portions of the tubular member, such as a proximal section, or a distal section, or the entire shaft12, may not include any such slots18. As suggested herein, slots18may be formed in groups of two, three, four, five, or more slots18, which may be located at substantially the same location along the axis of shaft12. Alternatively, a single slot18may be disposed at some or all of these locations. Within the groups of slots18, there may be included slots18that are equal in size (i.e., span the same circumferential distance around shaft12). In some of these as well as other embodiments, at least some slots18in a group are unequal in size (i.e., span a different circumferential distance around shaft12). Longitudinally adjacent groups of slots18may have the same or different configurations. For example, some embodiments of shaft12include slots18that are equal in size in a first group and then unequally sized in an adjacent group. It can be appreciated that in groups that have two slots18that are equal in size and are symmetrically disposed around the tube circumference, the centroid of the pair of beams (i.e., the portion of shaft12remaining after slots18are formed therein) is coincident with the central axis of shaft12. Conversely, in groups that have two slots18that are unequal in size and whose centroids are directly opposed on the tube circumference, the centroid of the pair of beams can be offset from the central axis of shaft12. Some embodiments of shaft12include only slot groups with centroids that are coincident with the central axis of the shaft12, only slot groups with centroids that are offset from the central axis of shaft12, or slot groups with centroids that are coincident with the central axis of shaft12in a first group and offset from the central axis of shaft12in another group. The amount of offset may vary depending on the depth (or length) of slots18and can include other suitable distances. Slots18can be formed by methods such as micro-machining, saw-cutting (e.g., using a diamond grit embedded semiconductor dicing blade), electron discharge machining, grinding, milling, casting, molding, chemically etching or treating, or other known methods, and the like. In some such embodiments, the structure of the shaft12is formed by cutting and/or removing portions of the tube to form slots18. 
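Returning briefly to the beam-centroid observation above, the sketch below models each beam remaining between a pair of slots as a thin arc of tube wall and computes the length-weighted centroid of the arc pair: equal, diametrically opposed beams yield a centroid on the tube axis (up to floating-point error), while unequal beams yield an offset centroid. The geometry is idealized and the angular spans are hypothetical.

```python
# Minimal sketch: centroid of the two circumferential beams left by a slot
# pair, each beam modeled as a thin arc of the tube wall at radius r.

import math

def beam_pair_centroid(r: float, spans_deg: list[tuple[float, float]]):
    """spans_deg: (center_angle, arc_span) in degrees for each beam.
    Returns (x, y) of the combined centroid; (0, 0) is the tube axis."""
    sx = sy = total_len = 0.0
    for center, span in spans_deg:
        half = math.radians(span) / 2.0
        theta = math.radians(center)
        d = r * math.sin(half) / half      # centroid radius of a single arc
        length = r * math.radians(span)    # arc length, used as the weight
        sx += length * d * math.cos(theta)
        sy += length * d * math.sin(theta)
        total_len += length
    return (sx / total_len, sy / total_len)

# Two equal 30-degree beams on opposite sides: centroid on the axis.
print(beam_pair_centroid(1.0, [(90.0, 30.0), (270.0, 30.0)]))
# Unequal beams (20 vs 60 degrees): centroid offset from the axis.
print(beam_pair_centroid(1.0, [(90.0, 20.0), (270.0, 60.0)]))
```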
Some example embodiments of appropriate micromachining methods and other cutting methods, and structures for tubular members including slots and medical devices including tubular members are disclosed in U.S. Pat. Publication Nos. 2003/0069522 and 2004/0181174; and U.S. Pat. Nos. 6,766,720 and 6,579,246, the entire disclosures of which are herein incorporated by reference. Some example embodiments of etching processes are described in U.S. Pat. No. 5,106,455, the entire disclosure of which is herein incorporated by reference. It should be noted that the methods for manufacturing guidewire10may include forming slots18in shaft12using these or other manufacturing steps. In at least some embodiments, slots18may be formed in the tubular member using a laser cutting process. The laser cutting process may include a suitable laser and/or laser cutting apparatus. For example, the laser cutting process may utilize a fiber laser. Utilizing processes like laser cutting may be desirable for a number of reasons. For example, laser cutting processes may allow shaft12to be cut into a number of different cutting patterns in a precisely controlled manner. This may include variations in the slot width, ring width, beam height and/or width, etc. Furthermore, changes to the cutting pattern can be made without the need to replace the cutting instrument (e.g., blade). This may also allow smaller tubes (e.g., having a smaller outer diameter) to be used to form shaft12without being limited by a minimum cutting blade size. Consequently, shaft12may be fabricated for use in neurological devices or other devices where a relatively small size may be desired. It should be understood that this disclosure is, in many respects, only illustrative. Changes may be made in details, particularly in matters of shape, size, and arrangement of steps without exceeding the scope of the invention. This may include, to the extent that it is appropriate, the use of any of the features of one example embodiment being used in other embodiments. The invention's scope is, of course, defined in the language in which the appended claims are expressed. | 40,120 |
11857346 | DETAILED DESCRIPTION The exemplary embodiments of the surgical system and related method of use disclosed are in terms of medical devices for the treatment of musculoskeletal disorders and more particularly, in terms of growth modulating implants including fixation screws for the treatment of a deformity by monitoring the growth modulating implants in real-time to determine longitudinal growth, growth rate, differential growth or lung capacity, for example, and methods of monitoring the growth modulating implants, implant operational status and patient deformity. The exemplary embodiments of the surgical system and related methods of use disclosed are discussed in terms of medical devices for the treatment of musculoskeletal disorders and more particularly, in terms of vertebral fixation screws, including, for example, pedicle screws, as well as hooks, cross connectors, offset connectors and related systems for use during various spinal procedures or other orthopedic procedures and that may be used in conjunction with other devices and instruments related to spinal treatment, such as rods, wires, plates, intervertebral implants, and other spinal or orthopedic implants, insertion instruments, specialized instruments such as, for example, delivery devices (including various types of cannula) for the delivery of these various spinal or other implants to the vertebra or other areas within a patient in various directions, and/or a method or methods for treating a spine, such as open procedures, mini-open procedures, or minimally invasive procedures. Exemplary prior art devices that may be modified to include the various embodiments of load sensing systems include, for example, U.S. Pat. Nos. 6,485,491 and 8,057,519, each incorporated herein by reference in its entirety. The present disclosure may be understood more readily by reference to the following detailed description of the embodiments taken in connection with the accompanying drawing figures, which form a part of this disclosure. It is to be understood that this application is not limited to the specific devices, methods, conditions or parameters described and/or shown herein, and that the terminology used herein is for the purpose of describing particular embodiments by way of example only and is not intended to be limiting. In some embodiments, as used in the specification and including the appended claims, the singular forms “a,” “an,” and “the” include the plural, and reference to a particular numerical value includes at least that particular value, unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” or “approximately” one particular value and/or to “about” or “approximately” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another embodiment. It is also understood that all spatial references, such as, for example, horizontal, vertical, top, upper, lower, bottom, left and right, are for illustrative purposes only and can be varied within the scope of the disclosure. For example, the references “upper” and “lower” are relative and used only in the context of the other, and are not necessarily “superior” and “inferior”.
Generally, similar spatial references of different aspects or components indicate similar spatial orientation and/or positioning, i.e., that each “first end” is situated on or directed towards the same end of the device. Further, the use of various spatial terminology herein should not be interpreted to limit the various insertion techniques or orientations of the implant relative to the positions in the spine. The following discussion includes a description of a vertebral pedicle screw system and related components and methods of employing the vertebral pedicle screw in accordance with the principles of the present disclosure. Reference is made in detail to the exemplary embodiments of the present disclosure, which are illustrated in the accompanying figures. The components of the vertebral pedicle screw system described herein can be fabricated from biologically acceptable materials suitable for medical applications, including metals, synthetic polymers, ceramics and bone material and/or their composites. For example, the components of the vertebral pedicle screw system, individually or collectively, can be fabricated from materials such as stainless steel alloys, commercially pure titanium, titanium alloys, Grade 5 titanium, super-elastic titanium alloys, cobalt-chrome alloys, stainless steel alloys, superelastic metallic alloys (e.g., Nitinol, super elasto-plastic metals, such as GUM METAL®), ceramics and composites thereof such as calcium phosphate (e.g., SKELITE™), thermoplastics such as polyaryletherketone (PAEK) including polyetheretherketone (PEEK), polyetherketoneketone (PEKK) and polyetherketone (PEK), carbon-PEEK composites, PEEK-BaSO4polymeric rubbers, polyethylene terephthalate (PET), fabric, silicone, polyurethane, silicone-polyurethane copolymers, polymeric rubbers, polyolefin rubbers, hydrogels, semi-rigid and rigid materials, elastomers, rubbers, thermoplastic elastomers, thermoset elastomers, elastomeric composites, rigid polymers including polyphenylene, polyamide, polyimide, polyetherimide, polyethylene, epoxy, bone material including autograft, allograft, xenograft or transgenic cortical and/or corticocancellous bone, and tissue growth or differentiation factors, partially resorbable materials, such as, for example, composites of metals and calcium-based ceramics, composites of PEEK and calcium based ceramics, composites of PEEK with resorbable polymers, totally resorbable materials, such as, for example, calcium based ceramics such as calcium phosphate, tri-calcium phosphate (TCP), hydroxyapatite (HA)-TCP, calcium sulfate, or other resorbable polymers such as polylactide, polyglycolide, polytyrosine carbonate, polycaprolactone, and their combinations. Various components of the vertebral pedicle screw system may be formed or constructed of material composites, including the above materials, to achieve various desired characteristics such as strength, rigidity, elasticity, compliance, biomechanical performance, durability and radiolucency or imaging preference. The components of the present vertebral pedicle screw system, individually or collectively, may also be fabricated from a heterogeneous material such as a combination of two or more of the above-described materials. The components of the vertebral pedicle screw system may be monolithically formed, integrally connected or include fastening elements and/or instruments, as described herein.
The components of the vertebral pedicle screw system may be formed using a variety of subtractive and additive manufacturing techniques, including, but not limited to machining, milling, extruding, molding, 3D-printing, sintering, coating, vapor deposition, and laser/beam melting. Furthermore, various components of the vertebral pedicle screw system may be coated or treated with a variety of additives or coatings to improve biocompatibility, bone growth promotion or other features. To the extent the plate is entirely or partially radiolucent, it may further include radiographic markers made, for example, of metallic pins, at one or both ends, on each corner of the ends, and/or along the length of the implant in various locations including near the center of the assembly. The vertebral pedicle screw system may be employed, for example, with a minimally invasive procedure, including percutaneous techniques, mini-open and open surgical techniques to deliver and introduce instrumentation and/or one or more spinal implants at a surgical site within a body of a patient, for example, a section of a spine. In some embodiments, the vertebral pedicle screw system may be employed with surgical procedures, as described herein, and/or, for example, corpectomy, discectomy, fusion and/or fixation treatments that employ spinal implants to restore the mechanical support function of vertebrae. In some embodiments, the pedicle screw system may be employed with surgical approaches, including but not limited to: anterior lumbar interbody fusion (ALIF), direct lateral interbody fusion (DLIF), oblique lateral lumbar interbody fusion (OLLIF), oblique lateral interbody fusion (OLIF), various types of anterior fusion procedures, and any fusion procedure in any portion of the spinal column (sacral, lumbar, thoracic, and cervical, for example). The vertebral pedicle screw system may be employed, for example, in distraction-based systems that may include growing rods, compression-based systems such as tethers and vertebral staples, and guided-growth systems that allow anchors to slide over the rod, as in the Luque trolley and Shilla systems. This application incorporates by reference in its entirety U.S. Published Application 2020/0022740, entitled “SET SCREW SENSOR PLACEMENT,” assigned to Warsaw Orthopedic, Inc. FIG.1illustrates an example anchoring assembly10and longitudinal member100according to an embodiment. As illustrated inFIG.1, an anchoring assembly10includes a screw20and an anchoring member30. The screw20has an elongated shape with a first end mounted within a vertebral member200and a second end extending outward above the vertebral member200. The anchoring member30is configured to operatively connect to the second end of the screw20and is movably connected to the screw20to accommodate the longitudinal member100positioned at various angular positions. The anchoring member30includes a channel31sized to receive the longitudinal member100. A set screw50attaches to the anchoring member30to capture the longitudinal member100within the channel31. FIG.2illustrates an example exploded view of a screw assembly and longitudinal member according to an embodiment. As shown byFIG.2, anchoring member30provides a connection between the screw20and longitudinal member100. Anchoring member30includes a first end32that faces towards the vertebral member200, and a second end33that faces away. A chamber is positioned between the first and second ends32,33and is sized to receive at least a portion of the screw20.
In various embodiments, a first end32may be considered a base portion of an anchoring member30, and a second end33may be considered a head portion of an anchoring member. The second end33of the anchoring member30includes a channel31sized to receive the longitudinal member100. Channel31terminates at a lower edge38that may include a curved shape to approximate the longitudinal member100. Threads37may be positioned towards the second end33to engage with the set screw50. In one embodiment as illustrated inFIG.2, the threads37are positioned on the interior of the anchoring member30facing towards the channel31. In another embodiment, the threads37may be on the exterior of the anchoring member30. An interior of the anchoring member30may be open between the first and second ends32,33. In various embodiments, an anchoring member30may include a washer60. A washer60may be generally cylindrical and may have a hole66there through. As illustrated byFIG.1a washer60may be positioned near a first end32of an anchoring member30. A screw20may engage with an anchoring member30via positioning through the hole66of a washer60. A washer60may include recessed portions which may be configured to accommodate placement of a longitudinal member100therein. The use of a washer60in connection with an anchoring member30may help minimize misalignment of the longitudinal member within the anchoring member. In an embodiment, set screw50attaches to the anchoring member30and captures the longitudinal member100within the channel31. As illustrated inFIG.2, the set screw50may be sized to fit within the interior of the channel31and include exterior threads51that engage threads37on the anchoring member30. A driving feature52may be positioned on a top side to receive a tool during engagement with the anchoring member30. In some embodiments, the set screw50may be mounted on an exterior of the anchoring member30. Set screw50includes a central opening and is sized to extend around the second end33. A set screw50may be a break-off set screw or a non-break-off set screw. In certain embodiments, a set screw50may include a slot53for receiving or routing of electronic connections as illustrated inFIGS.13A and13B. Threads51are positioned on an inner surface of the central opening to engage with the external threads37on the anchoring member30. The set screw50and anchoring member30may be constructed for the top side of the set screw50to be flush with or recessed within the second end33when mounted with the anchoring member30.FIG.13Aillustrates an example set screw50having an antenna300positioned on an external portion of the set screw.FIG.13Billustrates an example set screw50having an antenna300positioned internally in a central opening of the set screw. FIG.3illustrates an example load sensing assembly for a set screw according to an embodiment. As illustrated byFIG.3, a load sensing assembly may include an antenna300, such as a radio frequency identification (RFID) coil, a near field-communication (NFC) antenna or other short-range communication transmitter and/or receiver. A load sensing assembly may include one or more integrated circuits302such as, for example, an RFID chip302or an NFC chip. A load sensing assembly may include one or more electronics components304and/or a strain gauge306, such as for example a silicon strain gauge. A strain gauge306may be a device that measures strain on an object. 
For instance, a strain gauge306may measure a force between a set screw and a longitudinal member when the set screw is engaged with an anchoring member. A strain gauge306may include one or more sensors or sensor nodes that measure strain, force, resistance, load and/or the like. In an embodiment, one or more of the electronics components304may include a flexible electronics component, such as, for example, a flex circuit or one or more electrical circuits. The antenna300may be operably connected to the electronics component304via a connecting member308. For instance, as shown inFIG.3, the connecting member308may be connected to both the antenna300and the electronics component304. The connecting member308may be positioned perpendicularly to both the antenna300and the electronics component304. In various embodiments, a connecting member308and an antenna300and/or electronics component304may be constructed integrally or may be separately constructed and attached together in any suitable manner, such as for example by adhesive, chemical, mechanical or cement bonding. The integrated circuit302may be operably connected to the electronics component304. For instance, as illustrated inFIG.3, an electronics component304may have a top surface310and a bottom surface312. An integrated circuit302may be positioned on the top surface310of an electronics component304, and may be connected to the top surface in any suitable manner, including, for example, adhesive, chemical, mechanical or cement bonding. An integrated circuit302may include memory according to an embodiment. The memory may be used to store various information. For example, one or more measurements of a strain gauge306may be stored in memory. As another example, a unique identifier associated with a load sensing assembly, a component thereof, or a set screw may be stored in memory. Additional and/or alternate information or types of information may be stored according to this disclosure. A strain gauge306may be operably connected, for example by adhesive, cement, mechanical or chemical bonding, to the electronics component304. For instance, a strain gauge306may be operably connected to the electronics component304via the bottom surface312of the electronics component304. A strain gauge306may be connected to the bottom surface312of an electronics component304in any suitable manner including, without limitation, via an adhesive bonding agent. As shown inFIG.3, an antenna300may have a generally curved shape. The antenna300may include a first end and a second end. The antenna300may include an opening that extends from the first end toward the second end. As illustrated inFIG.4A, a load sensing assembly may be configured to be mounted to a set screw. The antenna300is sized to extend around the set screw such that the integrated circuit302, electronics component304, strain gauge306and connecting member308are positioned within the central opening of the set screw as illustrated inFIG.4A. As illustrated inFIG.4A, the antenna300may circumferentially surround at least a portion of the exterior of the set screw. In other embodiments, as illustrated byFIG.4B, the antenna300may be positioned at least partially inside of the central opening of a set screw. In certain embodiments, the strain gauge306may be connected to a portion of the central opening of the set screw in any suitable manner including, without limitation, via an adhesive.
The strain gauge306may be connected to a portion of the central opening such that it is positioned to measure a force between the set screw and a longitudinal rod when the set screw engages with an anchoring member.FIG.5illustrates a top view of a load sensing assembly mounted to a set screw according to an embodiment. FIG.6illustrates an example load sensing assembly according to an embodiment. The load sensing assembly illustrated inFIG.6may be mounted to an anchoring member according to various embodiments. Example anchoring members may include, without limitation, screws, hooks, offset connectors, cross connectors, or other types of anchors or implants. As illustrated inFIG.6, a load sensing assembly for an anchoring member may include an antenna600, such as an RFID coil, an NFC antenna or other short-range communication transmitter and/or receiver. A load sensing assembly may include an integrated circuit602, one or more electronics components604and/or a strain gauge606. In an embodiment, one or more of the electronics components604may include a flexible electronics component, such as, for example, a flexible circuit or one or more electrical circuits. The electronics component604may be connected to the antenna600via a connecting member608. As shown inFIG.6, a connecting member608may position an electronics component perpendicularly to the antenna600. A connecting member608may include a first portion610that attaches to an antenna600and extends substantially vertically and perpendicularly from the antenna. The connecting member608may include a second portion612connected to the first portion and the electronics component. The second portion612may extend substantially horizontally and perpendicularly to the first portion610. The electronics component604may be positioned substantially perpendicularly to the second portion612. A connecting member608may be constructed integrally with an antenna600and/or electronics component604, or may be separately constructed and attached together in any suitable manner. In various embodiments, the integrated circuit602(e.g., an RFID chip) may be connected to a first surface614of the electronics component604, as illustrated inFIG.6, in any suitable manner. An integrated circuit602may include memory according to an embodiment. The memory may be used to store various information. For example, one or more measurements of a strain gauge606may be stored in memory. As another example, a unique identifier associated with a load sensing assembly, a component thereof, or an anchoring member may be stored in memory. Additional and/or alternate information or types of information may be stored according to this disclosure. A strain gauge606may be connected to an electronics component604via a second connecting member616. As illustrated inFIG.6, a second connecting member616may include a first portion618, a second portion620and a third portion622. The first portion618may connect to the electronics component604and may extend substantially perpendicularly to the electronics component. The second portion620of the second connecting member616may be connected to the first portion618of the second connecting member and may extend substantially perpendicular thereto. The third portion622of the second connecting member616may be connected to the second portion620of the second connecting member, and may extend substantially perpendicular to the second portion.
The third portion622of the second connecting member616may have a top surface624and a bottom surface626. A strain gauge606may be connected to the bottom surface626in any suitable manner. The strain gauge606may be configured to measure a force between the set screw and a longitudinal member.FIG.7illustrates a different perspective of a load sensing assembly for an anchoring member according to an embodiment. As illustrated inFIG.8, a load sensing assembly may be connected to an anchoring member30. For example, a load sensing assembly may be connected to an anchoring member near a first end32of the anchoring member. The antenna600is sized to extend around the anchoring member30, for example, near the first end32. In various embodiments, an antenna600may be securely fitted around a portion of the anchoring member30. In other embodiments, an antenna600may be secured to the anchoring member in any other suitable manner. The antenna600may be positioned on the anchoring member30such that the integrated circuit602and electronics component604are positioned within an opening of the anchoring member30. For instance, as illustrated byFIG.8, an anchoring member30may have one or more openings800that extend from an outer portion of the anchoring member into the channel31of the anchoring member. As illustrated byFIG.8, the second portion of the first connecting member may extend into the opening800and may position the integrated circuit and/or the electronics component within the opening and/or the channel31. Such a positioning may result in the strain gauge606being positioned in the channel31at a location where it is possible to measure a force of a longitudinal member in the channel. In an alternate embodiment, a strain gauge606may be positioned on or attached to a washer or pressure ring611within an anchoring member as illustrated byFIG.14. In yet another embodiment, in situations where an anchoring member includes a hook member, a strain gauge606may be positioned on or attached to a hook portion of the hook member. Measurements obtained by the strain gauge606may be used to determine whether a longitudinal member is properly seated and/or torqued during and/or after implantation. In various embodiments, a set screw having a load sensing assembly may be used in connection with an anchoring member with or without a load sensing assembly.FIG.9illustrates a set screw having a load sensing assembly engaged with an anchoring member that also has a load sensing assembly according to an embodiment. So that components of each can be clearly depicted, a longitudinal member is not shown inFIG.9.FIG.10illustrates a side view of the screw assembly shown inFIG.9according to an embodiment.FIG.11illustrates a non-transparent view of the screw assembly shown inFIG.9according to an embodiment. AlthoughFIGS.9-11illustrate an antenna located externally to a set screw, it is understood that the antenna may alternatively be located within at least a portion of the central opening of the set screw. FIGS.1-11illustrate a multi-axial tulip-head pedicle screw according to various embodiments. However, it is understood that other types of anchoring members may be used within the scope of this disclosure. For example, fixed head screws or screws having differently shaped heads may be used. As another example, a hook member, a cross-link connector, an offset connector, or a hybrid hook-screw member may be used as well.FIG.12illustrates an example hook member having a load sensing assembly according to an embodiment.
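Before turning to interrogation and readout, the following sketch shows one plausible way a resistive strain gauge reading of the sort described above could be converted to strain and then force. The gauge factor, nominal resistance, and force calibration are hypothetical illustrative values, not parameters from this disclosure.

```python
# Minimal sketch, assuming a resistive strain gauge read out as a
# resistance change: strain = (dR / R0) / GF, where GF is the gauge factor.
# All constants below are hypothetical.

NOMINAL_R_OHM = 350.0     # hypothetical unstrained gauge resistance
GAUGE_FACTOR = 100.0      # silicon gauges typically have large gauge factors
N_PER_UNIT_STRAIN = 5e4   # hypothetical calibration: force per unit strain

def strain_from_resistance(r_ohm: float) -> float:
    """Convert a measured resistance to strain via the gauge factor."""
    return (r_ohm - NOMINAL_R_OHM) / NOMINAL_R_OHM / GAUGE_FACTOR

def force_from_strain(strain: float) -> float:
    """Map strain to seating force using a (hypothetical) linear calibration."""
    return strain * N_PER_UNIT_STRAIN

r = 350.7  # hypothetical in-vivo reading (ohms)
eps = strain_from_resistance(r)
print(f"strain = {eps:.2e}, force = {force_from_strain(eps):.1f} N")
```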
In various embodiments, one or more measurements obtained by a strain gauge may be stored by an integrated circuit of a corresponding load sensing assembly such as, for example, in its memory. The integrated circuit may be interrogated by a reader. For instance, an RFID chip may be read by an RFID reader. As another example, an NFC chip may be read by or may otherwise communicate with an NFC reader or other NFC-enabled device. A reader may interrogate an integrated circuit when within a certain proximity to the integrated circuit. In certain embodiments, a reader may interrogate an integrated circuit that has been implanted into a patient as part of a set screw or anchoring member assembly. In other embodiments, an integrated circuit may communicate with a reader or other electronic device without being interrogated. An integrated circuit may transmit one or more measurements to the reader. This transmission may occur in response to being interrogated by the reader, or the transmission may be initiated by the integrated circuit. The reader may receive the transmitted measurements, and may cause at least a portion of the measurements to be displayed to a user. For instance, a physician may use a reader to interrogate an RFID chip of a patient's implant. The reader may include a display, or may be in communication with a display device, which may display at least a portion of the measurements received from the RFID chip. An integrated circuit may be passive, meaning that the chip has no internal power source and is powered by the energy transmitted from a reader. With respect to an assembly having a passive integrated circuit, the integrated circuit may not transmit information until interrogated by a reader. In another embodiment, an integrated circuit may be active, meaning that the chip is battery-powered and capable of broadcasting its own signal. An active integrated circuit may transmit information in response to being interrogated by a reader, but may also do so on its own without being interrogated. For instance, an active integrated circuit may broadcast a signal that contains certain information such as, for example, one or more measurements gathered by an associated strain gauge. An active integrated circuit may continuously broadcast a signal, or it may periodically broadcast a signal. Power may come from any number of sources, including, for example, thin film batteries, with or without encapsulation, or piezoelectric elements. In various embodiments, one or more sensors may transmit information by directly modulating a reflected signal, such as an RF signal. The strain gauge sensors may form a Wireless Passive Sensor Network (WPSN), which may utilize modulated backscattering (MB) as a communication technique. External power sources, such as, for example, an RF reader or other reader, may supply a WPSN with energy. The sensor(s) of the WPSN may transmit data by modulating the incident signal from a power source by switching its antenna impedance. One or more measurements received from a load sensing assembly may be used to make determinations of the condition of a spinal implant and/or treatment of a spinal disorder. For instance, proper placement of a longitudinal member, set screw and/or anchoring member may result in an acceptable range of force measurements collected by a strain gauge of a load sensing assembly.
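Stepping back to the modulated backscattering technique mentioned above, the idea can be illustrated schematically: a passive node encodes bits by switching its antenna between two impedance states, which changes the amplitude of the carrier it reflects, and the reader thresholds the reflected amplitude to recover the bits. The two reflection amplitudes in this sketch are hypothetical.

```python
# Minimal sketch of modulated backscattering: a passive node switches its
# antenna impedance per bit, changing the reflected carrier amplitude;
# the reader thresholds the reflection to recover the bits.

REFLECT = {0: 0.2, 1: 0.8}  # hypothetical backscatter amplitude per bit state

def backscatter(bits, carrier_amplitude=1.0):
    """Tag side: amplitude of the reflected signal for each bit interval."""
    return [carrier_amplitude * REFLECT[b] for b in bits]

def demodulate(samples, threshold=0.5):
    """Reader side: threshold the reflected amplitude back into bits."""
    return [1 if s > threshold else 0 for s in samples]

payload = [1, 0, 1, 1, 0]  # e.g., bits of an encoded strain measurement
assert demodulate(backscatter(payload)) == payload
print(demodulate(backscatter(payload)))
```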
Measurements outside of this range may indicate a problem with the placement or positioning of a longitudinal member, set screw and/or anchoring member such as, for example, loosening of a set screw and/or anchoring member, longitudinal member failure, construct failure, yield or fracture/breakage, improper torque, breakage of the bone segment or portion, the occurrence of fusion or amount of fusion, and/or the like. One or more tools or instruments may include a reader which may be used to gather information from one or more integrated circuits during or in connection with a procedure. For instance, a torque tool may be used to loosen or tighten a set screw. A torque tool may include a reader, or may be in communication with a reader, such that a user of the torque tool is able to obtain via the tool, in substantially real time, one or more measurements relating to the set screw and longitudinal rod placement that are measured by a strain gauge of a load sensing assembly of the set screw. For instance, as a user is applying torque to a set screw, the user may see one or more force measurements between the set screw and the longitudinal member in order to determine that the positioning of the set screw and/or longitudinal member is correct and that the proper force is being maintained. In certain embodiments, a tool or instrument may include a display device on which one or more measurements may be displayed. In other embodiments, a tool or instrument may be in communication with a display device, and may transmit one or more measurements for display on the display device via a communications network. In some embodiments, an electronic device, such as a reader or an electronic device in communication with a reader, may compare one or more measurements obtained from an integrated circuit to one or more acceptable value ranges. If one or more of the measurements are outside of an applicable value range, the electronic device may cause a notification to be made. For instance, an electronic device may generate an alert for a user, and cause the alert to be displayed to the user via a display device. Alternatively, an electronic device may send an alert to a user such as via an email message, a text message or otherwise. An integrated circuit of a load sensing assembly may store a unique identifier associated with the component to which the load sensing assembly corresponds. For instance, an integrated circuit of a load sensing assembly for a set screw may store a unique identifier associated with the set screw. Similarly, an integrated circuit of a load sensing assembly for an anchoring member may store a unique identifier associated with the anchoring member. The integrated circuit may transmit the unique identifier to an electronic device. For instance, when a reader interrogates an integrated circuit, the integrated circuit may transmit a unique identifier for a component that is stored by the integrated circuit to the reader. Having access to a unique identifier for a component may help a user ascertain whether the measurements that are being obtained are associated with the component of interest. Also, having access to a unique identifier for a component may help a user take inventory of one or more components. For instance, after spinal surgery, a physician or other health care professional may use a reader to confirm that all of the set screws and anchoring members allocated for the procedure have been used and are positioned in a patient.
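By way of non-limiting example, the range comparison and inventory check described above might be sketched as follows; the function names, thresholds, and identifiers are hypothetical, and a real system would route alerts to a display, email message, or text message:

def send_alert(message):
    # Stand-in for a display alert, email message, or text message.
    print("ALERT:", message)

def out_of_range(measurements, low, high):
    """Return the readings that fall outside the acceptable value range."""
    return [m for m in measurements if not (low <= m <= high)]

bad = out_of_range([2.1, 2.4, 3.9], low=2.0, high=3.5)
if bad:
    send_alert(f"Out-of-range readings: {bad}")

# Inventory check using unique identifiers read back from implanted chips:
allocated = {"SCREW-0001", "SCREW-0002", "ANCHOR-0001"}
detected = {"SCREW-0001", "ANCHOR-0001"}
if allocated - detected:
    send_alert(f"Components not accounted for: {sorted(allocated - detected)}")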
FIG.15Aillustrates a side view of a load sensing assembly mounted to the set screw50according to an embodiment.FIG.15Billustrates a side view of a load sensing assembly mounted to the set screw50according to an embodiment.FIGS.15C-15Eillustrate bottom views taken at cross-section A-A inFIG.15Aof sensors positioned on the load sensing assembly according to an embodiment. In one or more embodiments, the set screw50, external threads51, driving feature52, antenna300, integrated circuit302, electronics component304, and connecting member308include one or more of the same features discussed above with respect to the load sensing assembly mounted to the set screw50as described inFIGS.3through5. Accordingly, a description of these same features is not repeated. In one or more cases, the driving feature52of the set screw50may be positioned on top of the proximal end of the external threads51. The driving feature52is configured to receive a tool, such as a screw driver, during engagement with the anchoring member30. The driving feature52may include a bore59that extends from an outer top surface of the break-off head58and into a portion of the threaded portion51aof the set screw50. In one or more cases, the bore59may have a cylindrically shaped opening when viewed from a top surface of the set screw50a. In one or more other cases, the bore59may have a star shaped opening, e.g., a shape to receive a hexalobe screw driver, with an inner cylindrically shaped opening when viewed from a top surface of the set screw50a. The bore59may provide a working area for placing one or more sensors, such as sensors314a,314b,314c, and314dwithin the set screw50. For the cases in which the bore59has a star shaped opening with an inner cylindrically shaped opening, the working area of the inner cylindrically shaped opening may be 2 to 5 mm in diameter, and more preferably at or about 3.65 mm in diameter. For the cases in which the bore59has a cylindrically shaped opening, the working area for the cylindrically shaped opening may be 3 to 7 mm in diameter, and more preferably at or about 5.35 mm in diameter. For the cases in which strain gauges are used as sensors314a,314b,314c, and314din the driving feature52having the cylindrically shaped bore59, the strain gauges may experience higher strain values than in a driving feature52having the star shaped opening with an inner cylindrically shaped bore59. The driving feature52may have an external shape configured to engage with a tool, such as a screw driver, to rotate the set screw50. The driving feature52may be configured in an external shape to enable a positive, non-slip engagement of the driving feature52by the tool. For example, in one or more cases, the outer perimeter of the driving feature52may be configured in a hexagonal shape. In one or more other cases, for example, the outer perimeter, that is, the outer surface, of the driving feature52may be configured in a square shape, pentagonal shape, star shape, or the like. The driving feature52may include a slot, similar to slot53, for receiving or routing electronic connections as illustrated inFIGS.13A and13B. In one or more cases, the driving feature52may be configured to break off from the threaded portion51a. In one or more other cases, the driving feature52may be configured not to break off from the threaded portion51a.
In one or more cases, the antenna300, connecting member308, integrated circuit302, and the electronics component304may be arranged in a similar manner and configured to operate in a similar manner as discussed with respect toFIGS.3and4B. In one or more cases, the antenna300is configured to transmit signals from at least one of the integrated circuit302, the electronics component304, and sensors314a,314b,314c, and314dto a reader. In one or more cases, the antenna300is configured to receive signals from the reader. For example, the antenna300may receive an ON signal from the reader, in which the ON signal powers on the load sensing assembly. In one or more cases, the antenna300may be sized to fit around a portion of the driving feature52, as shown inFIG.15A, and/or at least a portion of the exterior of the set screw50, as shown inFIG.4A. In one or more other cases, the antenna300may be sized to fit within a portion of the bore59of the threaded portion51a, as shown inFIG.15B, and/or within a portion of the central opening of the set screw50, as shown inFIG.4B. In one or more cases, the antenna300includes a ferrite core configured to amplify the transmission signals of the antenna300. In one or more cases, the sensors314a,314b,314c, and314dmay be connected to a portion of the bore59of the set screw50in any suitable manner including, without limitation, via an adhesive. The sensors314a,314b,314c, and314dmay be strain gauges, impedance sensors, pressure sensors, capacitive sensors, temperature sensors, or the like. The sensors314a,314b,314c, and314dmay be connected to a portion of the bore59, i.e., the central opening. For the cases in which the sensors314a,314b,314c, and314dare strain gauges, the sensors314a,314b,314c, and314dmay be positioned to measure a force between the set screw50and the longitudinal member100when the set screw50engages with the anchoring member30, e.g., a nominal clamping force. The sensors314a,314b,314c, and314dmay be operably connected to a bottom surface312of the electronics component304. The sensors314a,314b,314c, and314dmay directly interface with the set screw50and the electronics component304. For the cases in which the sensors314a,314b,314c, and314dare temperature sensors, the sensors314a,314b,314c, and314dmay be positioned above the integrated circuit302. For the cases in which the sensors314a,314b,314c, and314dare capacitive sensors, the sensors may be connected to a portion of the bore59. In one or more cases, the set screw50may include all of the same type of sensors; for example, the set screw50may include all strain gauges. In one or more other cases, the set screw50may include a mix of sensors; for example, the set screw50may include strain gauges and temperature sensors. FIGS.15C-15Eillustrate bottom views taken at cross-section A-A inFIG.15Aof sensors positioned on the load sensing assembly according to an embodiment. It should be noted thatFIGS.15C-15Eillustrate embodiments that utilize four sensors, such as sensors314a,314b,314c, and314d; however, it should be understood that embodiments are contemplated that use more than four sensors and that use fewer than four sensors. In one or more cases, the sensors314a,314b,314c, and314dmay be linearly arranged across the electronics component304, as shown inFIG.15C. That is, the sensors314a,314b,314c, and314dmay be linearly arranged across the bore59, i.e., the central opening, of the set screw50. The sensors314a,314b,314c, and314dmay directly interface with the set screw50and the electronics component304.
In one or more cases in which the sensors are linearly arranged, two sensors, such as sensors314band314c, may be arranged in close proximity to one another over a central portion304aof the electronics component304, and two other sensors, such as sensors314aand314d, may each be arranged on opposite sides from one another on an outer portion304bof the electronics component304. For example, the sensor314amay be arranged on the outer portion304b, e.g., a distal end, of the electronics component304, and the sensor314dmay be arranged on the outer portion304bof the electronics component304opposite the sensor314a. In one or more other cases in which the sensors are linearly arranged, the sensors may be linearly arranged on the electronics component304such that there is equal spacing between each of the sensors. In one or more cases, the two sensors314band314carranged in close proximity to one another may be active strain gauges configured to measure the greatest area of strain on the set screw50. The two sensors314aand314darranged on the outer portion304bof the electronics component304may be compensating strain gauges configured to measure areas of lesser strain than those measured by sensors314band314c. The sensors314a,314b,314c, and314dare illustrated inFIG.15Cas being strain gauges having a "U" shape or substantially U-like shape. In one or more cases, the opening of the U-shaped sensor314aand the opening of the U-shaped sensor314dmay face each other. In one or more cases, the opening of the U-shaped sensor314band the opening of the U-shaped sensor314cmay face away from one another. In one or more cases, the openings of the U-shaped sensors314aand314dmay be positioned to face towards the central portion304aof the electronics component304and the bore59. In one or more cases, the openings of the U-shaped sensors314band314cmay be positioned to face towards the outer portion304bof the electronics component304and the bore59. In one or more cases, the sensors314a,314b,314c, and314dmay be circumferentially arranged around the electronics component304. The sensors314a,314b,314c, and314dmay be circumferentially arranged around the outer portion304bof the electronics component304, as shown inFIG.15D. That is, the sensors314a,314b,314c, and314dmay be circumferentially arranged around the outer portion of the bore59, i.e., the central opening, of the set screw50. The sensors314a,314b,314c, and314dmay directly interface with the set screw50and the electronics component304. In one or more cases, the two sensors314aand314cmay be active strain gauges arranged around the outer portion304bof the electronics component304in the areas of greatest strain on the set screw50. The two sensors314band314dmay be compensating strain gauges arranged on the outer portion304bof the electronics component304in areas of lesser strain than those measured by sensors314aand314c. For example, for the cases in which the bore59has a star shaped opening with an inner cylindrically shaped opening, the sensors314aand314cmay be positioned to measure areas309aand309b, in which the strain of area309aranges from about 0.0054995 microstrain to about 0.0035 microstrain and the strain of area309branges from about 0.0035 microstrain to about 0.003 microstrain. The sensors314band314dmay be positioned to measure areas309cand309d, in which the strain of area309cranges from about 0.003 microstrain to about 0.0025 microstrain and the strain of area309dranges from about 0.0025 microstrain to about 0.002 microstrain.
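By way of non-limiting example, the pairing of active and compensating strain gauges described above is often reduced to a simple common-mode subtraction; the sketch below assumes that form of compensation, and the function name and sample values are hypothetical:

def compensated_strain(active_readings, compensating_readings):
    """Average the active gauges and subtract the average of the compensating
    gauges, rejecting common-mode effects such as temperature drift."""
    active = sum(active_readings) / len(active_readings)
    reference = sum(compensating_readings) / len(compensating_readings)
    return active - reference

# Sensors 314b/314c as active gauges and 314a/314d as compensating gauges:
print(compensated_strain([0.0050, 0.0048], [0.0022, 0.0024]))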
The sensors314a,314b,314c, and314dare illustrated inFIG.15Das being strain gauges having a "U" shape or substantially "U"-like shape. In one or more cases, the openings of the U-shaped sensors314cand314dmay be positioned to face away from the central portion304aof the electronics component304and the bore59and towards the outer portion304b. In one or more cases, the opening of the U-shaped sensor314bmay be positioned to face, following an arc-like path from the sensor314bto the sensor314a, a rear surface of the U-shaped sensor314a. In one or more cases, the opening of the U-shaped sensor314amay be positioned to face, following an arc-like path from the sensor314ato the sensor314d, a side surface of the U-shaped sensor314d. The openings of the U-shaped sensors314aand314dmay be positioned to face in the same direction. The openings of the U-shaped sensors314band314cmay be positioned to face in the same direction. The openings of the U-shaped sensors314a,314b,314c, and314dmay be positioned to face towards the outer portion304bof the electronics component304and the bore59. In one or more cases, a portion of the sensors may be arranged on the central portion304aof the electronics component304, and another portion of the sensors may be arranged on the outer portion304bof the electronics component304. For example, sensors314aand314bmay be arranged on the central portion304aof the electronics component304, and sensors314dand314cmay be arranged on the outer portion304bof the electronics component304, as shown inFIG.15E. In one or more cases, the two sensors314aand314barranged on the central portion304aof the electronics component304may be active strain gauges configured to measure the greatest area of strain on the set screw50. The two sensors314dand314carranged on the outer portion304bof the electronics component304may be compensating strain gauges configured to measure areas of lesser strain than those measured by sensors314aand314b. For example, for the cases in which the bore59has a cylindrically shaped opening, the sensor314bmay be positioned to measure areas311a,311b, and311c, in which the strain of area311aranges from about 0.009 microstrain to about 0.008 microstrain, the strain of area311branges from about 0.008 microstrain to about 0.007 microstrain, and the strain of area311cranges from about 0.007 microstrain to about 0.006 microstrain. The sensor314amay be positioned to measure areas311cand311d, in which the strain of area311dranges from about 0.006 microstrain to about 0.005 microstrain. The sensor314dmay be positioned to measure areas311eand311f, in which the strain of area311eranges from about 0.005 microstrain to about 0.004 microstrain and the strain of area311franges from about 0.004 microstrain to about 0.003 microstrain. The sensor314cmay be positioned to measure area311f. In one or more cases, using the center of the electronics component304as a reference point, the sensors314cand314dmay be positioned at or about 90° from one another. In one or more cases, using the center of the electronics component304as a reference point, the sensors314aand314bmay be positioned at or about 90° from one another. The sensors314a,314b,314c, and314dare illustrated inFIG.15Eas being strain gauges having a "U" shape or substantially "U"-like shape. In one or more cases, the openings of the U-shaped sensors314cand314dmay be positioned to face away from the central portion304aof the electronics component304and the bore59and towards the outer portion304b.
In one or more cases, the opening of the U-shaped sensor314bmay be positioned to face, following an arc-like path from the sensor314bto the sensor314a, a rear surface of the U-shaped sensor314a. In one or more cases, the opening of the U-shaped sensor314amay be positioned to face, following an arc-like path from the sensor314ato the sensor314d, a side surface of the U-shaped sensor314d. The openings of the U-shaped sensors314aand314dmay be positioned to face in the same direction. The openings of the U-shaped sensors314band314cmay be positioned to face in the same direction. The openings of the U-shaped sensors314a,314b,314c, and314dmay be positioned to face towards the outer portion304bof the electronics component304and the bore59. FIG.16Aillustrates an example diagram of a surgical implant system1600having a plurality of bone constructs and/or growth modulating implants having a growth monitoring (GM) system1602embedded in some or all of the implant bodies. The implant system1600may be configured to be surgically implanted in a patient40, such as a pediatric patient. The implant system1600may include a bone correction system1620A that is biocompatible for implantation into living tissue or bone. The bone correction system1620A may be configured to treat a deformity of the bone(s) or spine using various implants, such as bone constructs1625and/or at least one longitudinal member device1630A. In other embodiments, the bone correction system may be configured to correct deformity or growth of at least one bone of a limb, for example. In some embodiments, a vertebrae staple may be used. A vertebrae staple or other bone implant may modulate growth that can be measured by the electrical signals from a strain gauge embedded in the body of the implant. In various embodiments, growth modulating implants of a bone correction system1620A may be implanted in two or more bones. Each growth modulating implant includes an implant body having at least one sensor device embedded in the implant body. Although the embodiments illustrate vertebra bones, the load sensing assembly (FIG.3) may be embedded in other bone constructs such as those configured to be embedded and fastened in limbs or limb bone joints. In some embodiments, the growth modulating implants may use bone constructs to fasten plates to a limb's growth plate. The lower limb bones include the femur, tibia, fibula and patella, for example. The upper limb bones include the radius, ulna and humerus, for example. The GM system1602may include a strain gauge1635(e.g., strain gauge306) embedded in the body of a bone construct1625that may be positioned to measure a force or loading between the bone construct1625and a longitudinal member device1630A, such as a rod. The strain gauge1635may include a plurality of sensors314a,314b,314c, and314d(FIGS.15A-15E) to monitor data associated with the longitudinal growth of the bones in the spine. Alternately or in addition, each longitudinal member device1630A may include at least one strain gauge1636. The strain gauge1636may include one or more sensors similar to sensors314a,314b,314c, and314d. InFIG.16A, the strain gauges1635and1636are denoted as triangles to simplify the drawings. The bone construct1625may include a load sensing assembly, as shown inFIG.3. The GM system1602may include a load sensing assembly (FIG.3) having one or more integrated circuits302(FIG.3) such as, for example, an RFID chip302or an NFC chip, one or more electronics components304and/or a strain gauge306and antenna300.
The longitudinal member device1630A may include a load sensing assembly similar to the load sensing assembly ofFIG.3. The GM system1602may include an external monitoring device1640in communication with the sensors of the strain gauges1635and/or the sensors of the strain gauge1636. The external monitoring device1640may communicate with the sensors of the strain gauge1635and/or the sensors of the strain gauge1636using radio frequency communications. In various embodiments, the external monitoring device1640may communicate with the sensors of the strain gauge1635and/or the sensors of the strain gauge1636using wireless communications. For example, the external monitoring device1640may communicate with the sensors of the strain gauge1635and/or sensors of the strain gauge1636using electromagnetic or other energy fields to trigger the sensors or strain gauges and transmit the sensor measurement data to the external monitoring device1640or a remote server1650. In some embodiments, the external monitoring device1640may include a standalone electronic device located at the residence of the patient, a hospital or other dwelling. The standalone electronic device may trigger the strain gauge1635and/or strain gauge1636to transmit the sensor data to the standalone electronic device, which may in turn transmit the sensor data to the remote server1650using the Internet, an Intranet or other communication protocols. The remote server1650may have a web-application running thereon to selectively serve graphical user interfaces to medical professionals, patients and patients' representatives or guardians. The remote server1650may also be selectively accessible by the external monitoring device1640to receive data analytics of the monitored growth of a bone, differential growth of a bone, and/or the growth rate of the bone. The remote server1650may also be selectively accessible by the external monitoring device1640to receive data analytics associated with lung capacity. Lung capacity may include lung function or ribcage/thorax volume. The external monitoring device1640may also be served data representative of the operational status of the GM sensing system1602. The external monitoring device1640may include a communication device, such as a web-enabled smart phone or body-wearable computing device, such as embedded in a smart watch. For example, the cellphone, mobile communication device, or smart watch may transmit local sensor data to a remote server1650. The external monitoring device1640may include a computing device, such as a laptop, personal computer, tablet, or other electronic device, which may communicate sensor data to a remote server1650. In various embodiments, the remote server may be omitted and the external monitoring device1640performs the data analytics. In some embodiments, the surgical implant system1600may include graphical user interface(s) (GUIs) for displaying, on a display of an external monitoring device1640, data analytics associated with monitored sensor data. In various embodiments, the GUIs1645may provide a graphical representation of data analytics representative of a measured growth, differential growth or growth rate continuously or in periodic increments. The GUIs may provide a graphical representation of a notification or alert to the parent or guardian to seek medical attention or that intervention is needed. The GUIs1645may be compatible with various computing system platforms.
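By way of non-limiting example, a standalone monitoring device forwarding interrogated sensor data to a remote server could follow the sketch below; the endpoint URL and payload fields are hypothetical, and only Python standard-library calls are used:

import json
import urllib.request

def forward_readings(implant_uid, readings, server_url="https://example.com/api/readings"):
    """POST one implant's readings to the remote server as JSON."""
    payload = json.dumps({"implant_uid": implant_uid, "strain": readings}).encode("utf-8")
    request = urllib.request.Request(
        server_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# After interrogating an implant, the device might call:
# forward_readings("SCREW-0001", [0.0048, 0.0051])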
In various embodiments, the GM sensing system1602may include a plurality of strain gauges1635, which may be configured to be embedded in one or more bones of a patient where growth, differential growth, and/or growth rate may be measured for the treatment of a bone deformity or bone disease, such as in a pediatric patient. In some embodiments, each strain gauge1635may be included in an anchoring assembly10, such as shown inFIG.1, and may include one or more sensors314a,314b,314c, and314d(FIGS.15A-15D). The sensors may be embedded in set screw50, as described above. Each sensor may have its own unique identifier and location identifier. The system1600may be configured to access the data of a particular strain gauge1635to interrogate its sensors for sensor data. In order to determine the longitudinal length between two or more bones, the system may use sensor data of two sensors that oppose each other, for example; a sketch of this computation follows below. The two sensors are in different strain gauges and may be coupled to the same longitudinal member device, for example. In order to determine the lung capacity, sensors coupled to opposite longitudinal member devices may be used to determine an expansion of the rib cage or thoracic cage. The bone correction system1620A may include a longitudinal member device1630A (e.g., longitudinal member100ofFIG.1). In various embodiments, the longitudinal member device1630A may include one or more strain gauges1636, denoted as triangles, embedded in the body of the longitudinal member device1630A. The longitudinal member device1630A may include a dynamically expandable rod1630B as will be described inFIG.16B, a tether1630C as will be described inFIG.16C, a remotely controlled expandable rod1630D as will be described inFIG.16D, or other longitudinal member device. FIG.16Bis a perspective view of one particular embodiment of a bone correction system1620B having a dynamically expandable longitudinal rod1630B fastened to vertebrae V1, V2 associated with a rib cage. An example expandable rod is described in U.S. Pat. No. 10,456,171, entitled "SPINAL CORRECTION SYSTEM AND METHOD," assigned to Warsaw Orthopedic, Inc. and which is incorporated herein by reference in its entirety. The bone correction system1620B may include two side-by-side expandable rods, one being concave and the other being convex. The expandable longitudinal rod1630B may include a rod sleeve1633, a first rod element1631A and a second rod element1631B. In operation, the first rod element1631A and the second rod element1631B may telescope or extend out from ends of sleeve1633to increase or extend the length of rod1630B. The bone correction system1620B may include a first fastening element, such as, for example, bone construct1625configured to attach a first end1634of rod element1631A to vertebra V1. The bone correction system1620B may include a second fastening element, such as, for example, bone construct1625configured to attach a first end1638of rod element1631B to vertebra V2, which is spaced apart over vertebrae from vertebra V1. Pilot holes are made in vertebrae V1, V2 for receiving bone constructs1625. The other ends of the rod element1631A and rod element1631B are slidably coupled to opposite ends of sleeve1633. In various embodiments, the bone constructs1625may be torqued onto ends1634,1638to attach the bone correction system1620B in place with vertebrae V.
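By way of non-limiting example, the opposing-sensor computation described above (two sensors in different strain gauges coupled to the same longitudinal member device) might reduce to a differential reading scaled by a calibration constant; the constant and sample values below are hypothetical placeholders, not calibrated figures:

def longitudinal_growth_mm(first_microstrain, second_microstrain,
                           mm_per_microstrain=0.001):
    """Estimate the change in longitudinal distance between two bones from
    the differential strain of two opposing sensors."""
    return (first_microstrain - second_microstrain) * mm_per_microstrain

print(longitudinal_growth_mm(5400.0, 3000.0))  # illustrative output: 2.4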
In some embodiments, the bone constructs1625may include one or a plurality of hooks, anchors, tissue penetrating screws, mono-axial screws, multi-axial screws, expanding screws, wedges, buttons, clips, snaps, friction fittings, compressive fittings, expanding rivets, staples, nails, adhesives, fixation plates and/or posts. These fixation elements may be coated with an osteoinductive or osteoconductive material to enhance fixation, and/or include one or a plurality of therapeutic agents. These bone constructs1625may include a strain gauge1635. The rod1630B may include one or more strain gauges1636. For example, the rod element1631A, rod element1631B and/or sleeve1633may include a strain gauge1636. Upon implantation of bone correction system1620B and completion of the procedure, bone correction system1620B is configured for in situ, non-invasive lengthening to compensate for patient growth. For example, during patient growth, an expansion force, due to separation of vertebrae V1, V2 attached to rod1630B, causes dynamic incremental movement of the rod relative to sleeve1633along a longitudinal axial direction. In some embodiments, the rod may be configured to be extended using a magnetic field. In other embodiments, the rod may be configured to be extended using an electromagnetic field or electronically. In such embodiments, the rod would be equipped with an integrated circuit and communication components to receive control signals from a remote source, as described above. The rod may include passive or active batteries as described above in relation to the load sensing assembly. FIG.16Cis a perspective view of one particular embodiment of a (fusionless) bone correction system1620C in accordance with a fusionless system for vertebrae associated with a rib cage, such as disclosed in U.S. Pat. No. 9,220,536, entitled "SYSTEM AND METHOD FOR CORRECTION OF SPINAL DISORDER," assigned to Warsaw Orthopedic, Inc., incorporated herein by reference in its entirety. The bone correction system1620C may include a longitudinal member device, such as, for example, a tether1630C. In the example ofFIG.16C, CX denotes a convex side of the vertebrae column and CV denotes the concave side of the vertebrae column. Although not shown to prevent overcrowding in the figure, the bone correction system1620C may include a longitudinal member device on both the convex side CX and the concave side CV. The tether1630C may include an elongated member1633that extends between a first end1634and a second end1638. Tether1630C may have a flexible configuration, which allows movement in a lateral or side-to-side direction and prevents expansion and/or extension in an axial direction upon fixation with vertebrae. In some embodiments, all or only a portion of tether1630C may have a semi-rigid, rigid or elastic configuration, and/or have elastic properties such that tether1630C provides a selective amount of expansion and/or extension in an axial direction. In some embodiments, the tether1630C may be compressible in an axial direction. Tether1630C can include a plurality of separately attachable or connectable portions or sections, such as bands or loops, or may be monolithically formed as a single continuous element. Tether1630C may have an outer surface and a uniform thickness/diameter.
The outer surface may have various surface configurations, such as, for example, rough, threaded for connection with surgical instruments, arcuate, undulating, porous, semi-porous, dimpled, polished and/or textured according to the requirements of a particular application. The thickness defined by tether1630C may be uniformly increasing or decreasing, or have alternate diameter dimensions along its length. The tether1630C may have various cross section configurations, such as, for example, oval, oblong, triangular, rectangular, square, polygonal, irregular, uniform, non-uniform, variable and/or tapered. The tether1630C may have various lengths, according to the requirements of a particular application. It is further contemplated that tether1630C may be braided, such as a rope, or include a plurality of elongated elements to provide a predetermined force resistance. The tether1630C may be made from autograft and/or allograft, as described above, and be configured for resorbable or degradable applications. The bone correction system1620C may include fixation elements, such as, for example, bone constructs1625that may be configured to be connected or attached to tether1630C. The bone correction system1620C may include different types of fixation elements. The fixation elements or bone constructs1625are spaced along the length of tether1630C and are configured to affix to the tether. The fixation elements or bone constructs1625may be configured to connect to vertebrae along a plurality of vertebral levels. It is envisioned that the fixation elements may include one or a plurality of anchors, tissue penetrating screws, conventional screws, expanding screws, wedges, anchors, buttons, clips, snaps, friction fittings, compressive fittings, expanding rivets, staples, nails, adhesives, posts, fixation plates and/or posts. These fixation elements may be coated with an osteoinductive or osteoconductive material to enhance fixation, and/or include one or a plurality of therapeutic agents. The fixation elements may be fitted with a load sensing assembly as shown inFIG.3, previously described. In this example, the bone constructs1625of the growth modulating implants are implanted into bones V1-V4. The longitudinal growth may be measured, based on the sensors implanted in V1, V2, V3 and V4, between V1 and V2, V1 and V3, and V1 and V4. Likewise, a longitudinal growth may be measured between V2 and V3, V3 and V4, and V2 and V4. The signals from the implants on the concave and convex sides of the curve can determine whether the curve is worsening, improving or developing on the opposite side of the original curve. In this example, a sensor that was reporting a compression force may begin to report tension, showing that a curve has developed on the opposite side. A higher compression or tension compared to a prior instance, or to the predicted value for a given time, indicates worsening of the curve. If the sensors on the concave and convex sides show a similar change, or similar rates of change, in the collected loading, taking the initial loading differences into account, a correction of the curve is indicated. In assembly, operation and use, a fusionless correction system1620C, similar to the system described above, is employed with a surgical procedure, such as a correction treatment to treat adolescent idiopathic scoliosis and/or Scheuermann's kyphosis of a spine.
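By way of non-limiting example, the concave/convex comparison described above might be expressed as a simple classifier; the tolerance, the sign convention, and the return labels are hypothetical interpretations of the prose, not a disclosed algorithm:

def classify_curve(concave_change, convex_change, tolerance=0.1):
    """Compare load changes (relative to baseline) on the concave and convex
    sides of the curve, per the logic described above."""
    if abs(concave_change - convex_change) <= tolerance:
        return "curve correcting"            # similar change on both sides
    if concave_change * convex_change < 0:
        return "curve developing on opposite side"  # compression became tension
    return "curve worsening"                 # one side loading faster

print(classify_curve(concave_change=0.05, convex_change=0.02))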
It is contemplated that one or all of the components of the fusionless correction system can be delivered or implanted as a pre-assembled device or can be assembled in situ. The fusionless correction system may be completely or partially revised, removed or replaced. The fusionless correction system may be used in any existing surgical method or technique including open surgery, mini-open surgery, minimally invasive surgery and percutaneous surgical implantation, whereby vertebrae V is accessed through a mini-incision, or sleeve that provides a protected passageway to the area. Once access to the surgical site is obtained, the particular surgical procedure can be performed for treating the spine disorder. The configuration and dimension of tether1630C is determined according to the configuration and dimension of a selected set of vertebrae and the requirements of a particular application. The longitudinal member device may include one or a plurality of flexible wires, staples, cables, ribbons, artificial and/or synthetic strands, rods, plates, springs, and combinations thereof. In an embodiment, the longitudinal member device may be a cadaver tendon. In one embodiment, the longitudinal member device may be a solid core. In one embodiment, the longitudinal member device may be tubular. FIG.16Dis a perspective view of one particular embodiment of a bone correction system1620D having a remotely expandable longitudinal rod1630D fastened to vertebrae associated with a rib cage. The bone correction system1620D is similar to the bone correction system1620B described above. Thus, only the differences will be described. InFIG.16D, the remotely expandable longitudinal rod1630D may include an internal rod controller1690configured to control the telescopic motion of the first and second rod elements1631A and1631B. The internal rod controller1690may be responsive to and controlled by a remote rod controller1695. For example, the remote rod controller1695may provide a signal representative of the amount of extension to slide or telescope each of the first and second rod elements1631A and1631B. In some embodiments, the communication of the control signal may be magnetic. In other embodiments, the communication may be electromagnetic or radio frequency. The amount of correction of a longitudinal member device may be identified based on the intervention needed as derived from the sensor data. The bone correction system1620A,1620B,1620C or1620D described above may each include a GM sensing system1602, as previously described in relation toFIG.16A. The GM sensing system1602may include a remote server1650and an external monitoring device1640. The strain gauge1635and/or1636may communicate with the remote server1650and/or an external monitoring device1640, as previously described. The bone correction systems1620A,1620B,1620C and1620D may be spinal correction systems. The bone correction system is configured to be growth-friendly. FIG.17illustrates an example flowchart of a method1700for treating a bone abnormality and monitoring growth features, correction progression and/or lung capacity. The method steps described herein may be performed in the order shown or in a different order. The method may include additional steps, or some steps may be omitted. One or more steps of the method may be performed contemporaneously.
The method1700may include (at1702) implanting bone correction system1620A,1620B,1620C or1620D, for example, having a GM sensing system1602into a subject bone using a growth-friendly or fusionless methodology. Some of the implants (i.e., bone fastener1625or longitudinal member1630) may include sensors of the GM sensing system1602used to collect loading data related to changes in the growing spine after the surgery during which such spinal implants may be fixed to the anatomical landmarks, including the vertebral body, rib cage, and/or pelvis. These landmarks describe the insertion point of the implants. Implants may be placed in any of these sites based on the patient's need. If the longitudinal member device1630A, expandable rod1630B or1630D, or tether1630C is connected to the ribs, a real-time measurement can also report the lung expansion capacity. This measure, in addition to growth, can be used to generate an alarm for intervention. The method1700may include (at1704) transmitting baseline sensor data from the one or more sensors (e.g., sensors314a,314b,314c, and314d) of each strain gauge1635of bone construct1625and/or strain gauge1636of the longitudinal member device1630A,1630B or1630C. The method1700may include (at1705) a monitoring and tracking process. The monitoring and tracking process may include (at1706) monitoring the sensor data from the GM sensing system1602. The monitoring and tracking process (at1705) will be described in more detail in relation toFIG.18. The method1700may include (at1708) determining whether an intervention is needed. An intervention may require a revision surgery, or a surgery to adjust the tension or compression in the current implant system for the treatment of the bone deformity or disease, for example. The intervention may require replacement of an implant or longitudinal member device. Intervention may include scheduling a visit, either a clinical visit or a surgeon visit. In some embodiments, an intervention may include canceling an already scheduled visit, if intervention is not required. If the determination (at1708) is "NO," monitoring continues (at1706). If the determination (at1708) is "YES," the method1700may include (at1710) generating information representative of a notification that a fusion is necessary or is an acceptable remedy for the treatment of the bone deformity or disease. For example, a notification that an intervention or clinical visit is needed may indicate one of tether replacement, rod expansion, revision surgery or removing and replacing the broken/fractured implant. The notification alerts may be sent to a doctor, a clinic, a patient, or a patient representative or guardian, for example. If the lung capacity needs an intervention, a notification may be generated representative of the lung capacity and an out-of-range indicator, for example. An intervention may include changing the position of the patient or checking whether the rib cage is obstructed. The notification associated with the analyzed longitudinal growth or the growth rate may be based on patient specific data as determined from a patient's prior data, cohort specific data as determined from literature, or a combination of the patient specific data and the cohort specific data.
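By way of non-limiting example, the monitor-then-intervene loop of method1700 might be organized as below; the three callables are hypothetical stand-ins for steps1706,1708and1710:

def monitoring_loop(read_sensors, intervention_needed, notify):
    """Monitor GM sensor data (1706), test whether intervention is needed
    (1708), and generate a notification (1710) when it is."""
    while True:
        data = read_sensors()
        if intervention_needed(data):
            notify("Intervention needed: schedule a clinical or surgeon visit")
            break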
The notification associated with the analyzed lung capacity or any other patient health parameter may be based on patient specific data as determined from a patient's prior data, cohort specific data as determined from literature, or a combination of the patient specific data and the cohort specific data. FIG.18illustrates an example flowchart of a method1705for monitoring and tracking bone growth, differential growth, growth rate, deformity development or progress, lung capacity and/or a system operational status. The method1705will be described in relation toFIG.16A. The sensor data and tracked longitudinal bone growth, differential growth, growth rate, deformity development, and lung capacity may be selectively displayed (at1712) in one or more GUIs. Additionally, the operation status of any one implant may be displayed as well. As the growth rate varies between individuals, methods are needed to identify the optimal time of intervention to allow adequate growth or to determine an interruption in a growth modulation plan. For example, each child may experience a growth spurt according to their own body's progress. Hence, the need for intervention, such as the result of a growth spurt, may be detected in real time based on sensor data from the strain gauges or other sensors. The monitoring of the growth, differential growth, growth rate and/or the operational status of the system may be performed by one or more computing systems. For example, data analytics of the sensor data may be performed by a remote server1650and served to the external monitoring device1640. Alternately or in addition, the external monitoring device1640may perform the data analytics and provide graphical user interfaces for display of the information representative of the resultant data analytics. The sensing system1602may monitor data associated with lung capacity or lung expansion if at least one rib or vertebra of the rib cage has an implant with a strain gauge implanted therein. The method1705may include (at1804) receiving baseline sensor data from the one or more sensors (e.g., sensors314a,314b,314c, and314d) of each bone construct1625and/or sensors of at least one strain gauge1636of the longitudinal member device1630A, for example, of the GM sensing system1602at the time of implantation. During surgery, the baseline sensor data may be sent to a storage device such as one associated with a remote server or other computing device. The computing device performing data analytics may be configured to receive or discover all sensor data being read from the sensors of the GM sensing system1602. The method1705may include (at1806) receiving sensor data from the one or more sensors (e.g., sensors314a,314b,314c, and314d) of each bone construct1625and/or sensors of at least one strain gauge1636of the longitudinal member1630of the GM sensing system1602, periodically or continuously. As described above, the computing device performing data analytics may be configured to receive or discover all sensor data being read from the sensors of the GM sensing system1602. In one embodiment, the computing device translates the sensed data, for example from a strain gauge, into the rate of bone growth collected from various sensors, the differential growth between two sensors, or the remaining growth in the bone, to facilitate interpretation. In another application, the sensed data can update the shape or alignment of the spine in an analytical model that corresponds to the changes in the sensed data.
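By way of non-limiting example, translating a time series of strain readings into cumulative growth and a growth rate, relative to the baseline captured at implantation, might look like the sketch below; the calibration constant is a hypothetical placeholder that a real system would derive per device:

def growth_metrics(readings_by_day, mm_per_microstrain=0.001):
    """Translate a time series of strain readings (day -> microstrain) into
    cumulative growth (mm) and a per-month growth rate for interpretation."""
    days = sorted(readings_by_day)
    growth_mm = (readings_by_day[days[-1]] - readings_by_day[days[0]]) * mm_per_microstrain
    span_days = max(days[-1] - days[0], 1)
    return growth_mm, growth_mm / (span_days / 30.0)

# Baseline at day 0, follow-up at day 90:
print(growth_metrics({0: 3000.0, 90: 5400.0}))  # illustrative: (2.4, 0.8)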
As such, the growth (increase in distance) and rotation (torsion in the longitudinal member or screw system) of the spine system can be visualized. The rotation may be generated into a two-dimensional (2D) or three-dimensional (3D) model. The method1705may determine and evaluate the operational status of the GM sensing system1602(at1808) implanted in a patient. The method1705may determine at least one patient health parameter of a patient (at1812), as will be discussed in more detail in relation toFIGS.19A-19C. The method1705may determine and evaluate at least one of the growth (at1818), growth rate (at1820), differential growth (at1821), bone deformity development and/or bone deformity progression (at1822), and/or lung capacity (at1823) associated with the patient. The method1705may determine an improvement in the lung capacity when a sensor reports a higher tension value during the respiration cycle. On the other hand, a lack of such tension in the sensor reading, or a compressive force, indicates a need for expansion of the longitudinal implants to allow growth and expand the distance between the ribs. An increase in the spinal deformity severity may be determined as a measure of growth that is out-of-range for a period of time or in relation to the baseline sensor measurements. A decrease in the severity of the spinal deformity, or a stable spinal deformity shown for example with no change in Cobb angle, may use baseline sensor data and accumulative growth measurements to identify an amount of correction. For example, sensor data from at least one longitudinal member device may be used to identify the amount of deformity correction. The blocks at1818,1820,1822, and1823are dashed to denote that one or more of these blocks may be optional or omitted. For example, if the bone correction system1620A,1620B or1620C is affixed to a rib or vertebra of the thoracic spine, or one end is affixed to the ribs and the other end to the lumbar spine or pelvis, then the changes in lung capacity and/or expansion, a change in the spinal deformity, or spinal growth may be determined. By way of non-limiting example, when two or more ribs are connected via a sensor enabled implant, the change in the sensor data from these implants may determine a change in the lung capacity. However, if one end of an implant system is also connected to the spine, a change in the reported sensor data may be because of changes in the spinal deformity or growth combined with a change in the lung function. The method1705may, in response to the determination (at1808) of the operational status of the GM system1602, determine (at1708) whether an intervention is needed. In some instances, when the growth modulating system (e.g., the bone correction system1620A,1620B or1620C) experiences a component break, fracture, or loosening, an unexpected change that does not match the previous pattern of the data can be detected and reported. The operational status of the GM sensing system1602may reflect the operational status of a sensor, a strain gauge, each component of the load sensing assembly (FIG.3) or the implant. The strain gauge may produce an electrical signal. Differences in signal patterns may identify a fault, break, fracture, or other malfunction. If the determination (at1708) is "NO," the method1705continues to evaluate the operational status of the GM system1602(at1808).
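By way of non-limiting example, detecting an unexpected change that does not match the previous pattern of the data might be sketched as a deviation test over recent readings; the window and threshold below are hypothetical choices:

def deviates_from_pattern(history, latest, threshold=3.0):
    """Flag a reading that departs sharply from the recent trend; a sudden
    jump may indicate a break, fracture, or loosening."""
    mean = sum(history) / len(history)
    spread = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
    return spread > 0 and abs(latest - mean) > threshold * spread

print(deviates_from_pattern([5.0, 5.1, 4.9, 5.2], latest=9.7))  # True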
If the determination (at1708) is “YES,” the method1705may (at1710) generate information representative of a notification or alert that an intervention or intervention process may be needed. By way of non-limiting example, an intervention process may require an intervention process that recommends the replacement of an adjustable longitudinal member device, expandable rod or tether, as the adjustment range is approaching an upper limit of the manufacture-specific recommendation. By way of non-limiting example, the operational status may evaluate the measurement of the sensors to evaluate the operational status of the function of the implant to treat the spine or the operational status of a sensor or strain gauge to perform the sensing function. Each of the implant, strain gauge and/or the sensors in the implant may have a within-normal limits operational range based on known manufacture-specific recommendations. Additionally, each component of the load sensing assembly (FIG.3) may have an operational range status. By way of non-limiting example, an intervention process may require a reversal of an adjustment, if an over adjustment condition is sensed. For example, sensed data from by the sensors of a strain gauge may determine that the implant in which the strain gauge is embedded is experiencing an out-of-range strain, pressure or loading. By way of non-limiting example, an intervention process may require repair or replacement of one or more bone constructs1625or longitudinal member1630, such as a revision surgery. The method1705may determine (at1824) whether one or more of growth, the growth rate, growth differential, deformity development or progress, and lung capacity are in an expected range. For example, in some embodiments, overall growth or growth rate can be determined from other bones, as well, that are not being treated or prior data or the same patients or age-, or gender-matched cohort. The patient's log of sensor data may include clinical measurements or other measurements of non-treated bones. If the sensor values are in range, a corresponding alert may be generated (at1710) representative of an in range sensor readings. If the determination (at1824) is “YES,” the method1705continues to evaluate the growth (at1818) and/or growth rate (at1820). If the determination (at1822) is “NO,” the method1705may generate (at1710) information representative of a notification or alert that the growth, growth rate and/or growth differential is not in range. The intervention may include non-surgical plan for rod expansion, surgical planning for replacing the tether, revision surgery for implanting new rods, and/or replacing other broken or loose implants. The generation of the notification or alert generated may include generating a text message, an email or other notification to stored contact information and method of communicating. A revision surgery may require a longer longitudinal member device1630A, for example. In various embodiments, an intervention process may include enlarging of the longitudinal member device by a change in dimension electronically, magnetically or electromagnetically. In other embodiments, the changes may be dynamic using mechanical mechanisms. If the method1705determines deformity development or progress is not in range (at1824), intervention may not be needed immediately, but an alert may still be generated (at1710). However, if the deformity correction is not in range (at1824), after a given amount of time, a different type of spine treatment may be needed. 
If the deformity correction is in range, a corresponding alert may be generated (at1710) representative of in-range sensor readings. If the method1705determines (at1824) that the lung capacity, for example as measured by maximum voluntary lung capacity, is not in range, an intervention may be determined (at1708) to include additional monitoring of the breathing capability of the patient, or the patient may be scheduled for a clinical visit or for surgery to expand the growing rod so as to increase the volume of the hemi-thorax or ribcage. FIGS.19A-19Cillustrate an example flowchart of a method1812for analyzing at least one patient health parameter of a patient being treated by a bone correction system (e.g., bone correction system1620A,1620B, or1620C) using the surgical implant system1600. The method1812may include (at1902) classifying patient type based on patient specific data. To classify the patient, certain patient data may be needed. For example, the method1812may include (at1904) determining an ethnicity of the patient. The method1812may (at1906) determine the patient's age. During the treatment phase using the bone correction system, the age will change. The method1812may include (at1908) determining the gender of the patient. For example, a male child may grow at a different rate than a female child. The method1812may include (at1910) determining a condition or co-morbidities, such as a spinal condition of the patient. For example, some children may have a spinal deformity, such as scoliosis or other curvature abnormalities. A patient may have multiple conditions, such as dwarfism and scoliosis. These conditions may be the cause of out-of-range sensor readings. The patient may have both a spinal condition and a lung disease or condition. The classified patient data may be used to train a machine-learning algorithm for the classified patient type. The method1812may include (at1912) determining an implant type, such as a bone construct, anchors, tissue penetrating screws, conventional screws, expanding screws, wedges, anchors, buttons, clips, snaps, friction fittings, compressive fittings, expanding rivets, staples, nails, adhesives, posts, fixation plates and/or posts. The implant type may have embedded in its body a strain gauge and/or load sensing assembly (FIG.3). The method1812may include (at1914) determining a sensor type. Each strain gauge may include a particular sensor type with identified sensing sensitivities. In some embodiments, each implant may include a plurality of sensors, such as, without limitation, a strain gauge sensor (i.e., force, pressure and/or tension sensor), impedance sensors, pressure sensors, capacitive sensors, and temperature sensors. The method1812may include (at1916) determining a longitudinal member device type, such as expandable rod, non-expandable rod, and/or tether. For example, rods with a fixed length may need revision sooner based on growth. In other embodiments, an expandable rod may be configured to expand to accommodate growth up to a predetermined limit set by the manufacturer's specification. In other embodiments, a tether may be configured to stretch, if applicable, up to a predetermined limit set by the manufacturer's specification. The system may track and determine (at1916) the current length of the expandable rod or tether and provide the current length to external monitoring device1640, in the form of an alert or notification.
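By way of non-limiting example, the classification step (at1902) might collect the attributes described above into a single key used to select cohort statistics; the field names and age bands below are hypothetical illustrations:

from dataclasses import dataclass

@dataclass(frozen=True)
class PatientProfile:
    ethnicity: str
    age_years: float
    gender: str
    conditions: tuple  # e.g., ("scoliosis",) or ("dwarfism", "scoliosis")

def classification_key(p: PatientProfile):
    # Age bands matter because growth rates differ sharply between age groups.
    age_band = "0-5" if p.age_years < 5 else "5-10" if p.age_years < 10 else "10+"
    return (p.ethnicity, age_band, p.gender, p.conditions)

print(classification_key(PatientProfile("example", 8.0, "F", ("scoliosis",))))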
The sensors or strain gauges1636on a longitudinal member may be used to measure the expansion, such as the longitudinal growth, of the longitudinal member device. The method1812may include (at1917) determining whether the current length is at or approaching the manufacturer's specified limit. If the determination is "YES," the method1812may determine whether an intervention is needed. If so, an alert may be generated (at1710) to identify the need to replace the longitudinal member device, for example. The intervention needed may identify the need to start a surgical pre-planning phase for a revision surgery, for example. By way of non-limiting example, the pre-planning phase may include generating a new rod curvature based on current patient data. The method1812may include (at1918) determining implant bone locations or treatment locations. For example, one or more of the bone constructs may be implanted in two or more vertebrae. In some instances, the two or more vertebrae may be in the same spinal section, such as thoracic, lumbar, cervical, or a combination of spinal sections. In some embodiments, the growth may be determined by a single implant comprising a strain gauge having a plurality of sensors arranged in spaced relation, as shown inFIGS.15A-15E. Referring now toFIG.19B, the method1812may include determining whether any of the implant bone locations or treatment locations are associated with (at1920) a thoracic vertebra, (at1921) a lumbar vertebra, (at1922) the pelvis, (at1923) the sacrum, (at1924) the cervical vertebra, and (at1925) a limb. The pelvis may include three bones such as the hip bones, the sacrum and coccyx. Each bone location or treatment location may include a sensor or strain gauge. If the determination at1920,1921,1922and/or1923is "NO," the process will end (at1926) for that bone, vertebra level, or section. If the determination at any of1920,1921,1922,1923,1924and/or1925is "YES," the method1812may include (at1928) calculating an amount of longitudinal growth and/or a growth rate based on each implant bone location and/or longitudinal member device. The growth rate is an amount of growth over a specified amount of time. The longitudinal growth may be measured between two bones, for example, separated by a longitudinal distance. The longitudinal growth may be the accumulative change in sensor data between the two bones, such as vertebrae V1 and V2 ofFIG.16B, when the sensors report the spatial position of the bone, or a change in the stress in the longitudinal implant, which in turn can be translated to a change in the deflection of the longitudinal implant and subsequently to the distance between the implants attached to the bone. Based on the classified patient data, the in-range threshold used at1824for the growth or growth rate may vary. The steps of determining the bone implant locations, along with the classified patient data, may allow a machine-learning algorithm to be trained with statistics or training data for the patient classification type based on one or more of the patient's age, gender, ethnicity and co-morbidity. Accordingly, the method1812may include (at1930) identifying longitudinal growth chart statistics1940based on the classified patient data. The longitudinal growth chart statistics may include trained data sets for training the machine-learning algorithm. Pediatric patients grow at a faster rate in certain age groups. Thus, the thresholds may be updated accordingly.
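By way of non-limiting example, the calculation at1928and the classification-dependent range test at1824might combine as follows; the threshold values keyed by patient classification are hypothetical placeholders, not published growth statistics:

def differential_growth_mm(growth_a_mm, growth_b_mm):
    """Differential growth between two implant locations (e.g., V1 and V2)."""
    return growth_a_mm - growth_b_mm

# Hypothetical in-range growth-rate thresholds (mm/month) per classification key:
thresholds = {("example", "5-10", "F", ("scoliosis",)): (0.2, 1.5)}

def growth_rate_in_range(rate_mm_per_month, key):
    low, high = thresholds[key]
    return low <= rate_mm_per_month <= high

print(growth_rate_in_range(0.8, ("example", "5-10", "F", ("scoliosis",))))  # True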
The method may track the patient's age so that the training data may be automatically updated from one instantiation to another during the monitoring process. Example growth ranges by age group and gender are described in "The growing spine: how spinal deformities influence normal spine and thoracic cage growth," by Alain Dimeglio, Eur Spine J (2012) 21:64-70, published online Aug. 30, 2011, incorporated herein by reference in its entirety. Example measured growth in adolescents with scoliosis is described in "Vertebral height growth predominates over intervertebral disc height growth in adolescents with scoliosis," by Ian Stokes, PhD, et al., in National Institute of Health (NIH) Public Access, Spine (Phila Pa 1976), PMC 2006 Aug. 10, incorporated herein by reference in its entirety. A trained data set may be correlated with both numerical sensor data from the training data set and training images of a population of subjects to determine longitudinal growth. To generate a training image, according to various embodiments, a representation may be labeled in an image or an entire image may be labeled. A labeled image may be an image that is identified to include the bone constructs, the longitudinal member device and a pre-segmented portion (i.e., cervical vertebrae, thoracic vertebrae and/or lumbar vertebrae) of the training labeled image. The labeled image and sensor data, in conjunction or separately, may be used to allow a neural network, for example, to train and learn selected parameters, such as weights and biases in the bone correction system, based on type of longitudinal member device, type of bone construct and sensors. In some embodiments, the training data set of a current patient may be updated from time to time with values from segmented image data where a current longitudinal growth measurement is determined through imaging or sensor data. In another embodiment, a patient's own data may be used to train the machine-learning algorithm. Data from a patient who undergoes several growing rod expansions can be used to train the algorithm for the next consecutive expansion. After the first expansion, the sensor data can be recorded until the next clinic visit. At the consecutive clinical visit, when a second rod expansion is scheduled, medical images including radiographs, ultrasound, and dynamic MRI may be recorded and related to the sensor data. After the second expansion, the machine-learning algorithm can use the sensor data and determine a possible change, which is normally concluded from the medical images, specific to that patient. This improves the predictive algorithm for that specific patient. Similarly, other patients with similar characteristics can benefit from such a cohort-specific algorithm. The longitudinal growth chart statistics1940may be used to identify, for each implant location, an expected amount of growth or growth rate and an in-range or out-of-range threshold, based on a patient's ethnicity1942, age1944, and gender1946. The longitudinal growth chart statistics1940may include growth statistics for the cervical vertebrae C1-C71948, growth statistics for the thoracic vertebrae T1-T121950, growth statistics for the lumbar vertebrae L1-L51952, growth statistics for the pelvis1956, and growth statistics for the sacrum S11958. A patient classification may identify the growth or growth rate.
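A minimal sketch of how classified patient data might key into the longitudinal growth chart statistics1940follows. The table values, age groupings, and function names are illustrative placeholders only and are not taken from the cited growth literature.

```python
# Hypothetical lookup of expected-growth thresholds keyed by the classified
# patient data (age 1944, gender 1946) and the spinal section of the implant.
GROWTH_STATS = {
    # (gender, age_group, section): (expected mm/yr, in-range tolerance mm/yr)
    ("F", "5-10",  "thoracic"): (12.0, 4.0),   # placeholder values
    ("M", "5-10",  "thoracic"): (11.0, 4.0),
    ("F", "11-15", "thoracic"): (18.0, 6.0),
}

def age_group(age_years):
    # Simplified two-bucket grouping; a real system would track age over time.
    return "5-10" if age_years <= 10 else "11-15"

def in_range(measured_rate, gender, age_years, section):
    expected, tol = GROWTH_STATS[(gender, age_group(age_years), section)]
    return abs(measured_rate - expected) <= tol

print(in_range(14.0, "F", 12, "thoracic"))  # True: within 18 +/- 6 mm/yr
```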
For example, a female patient in one age group may be expected to grow at a different rate than at a different age group. Likewise, a male patient in one age group may grow at a different rate than a female of the same age group. The embodiments described herein have application to limbs such as lower or upper limbs. The longitudinal growth chart statistics1940may include statistics associated with limbs1960. The method1812may generate alerts for both in-range results and out-of-range results (at1710). In some embodiments, when the method1812determines whether the analyzed health parameter is in range, multiple parameters may be determined. For example, if the health parameter is growth or growth rate and if the bone or treatment locations include multiple locations, a growth differential between two implant locations in the same spine section may be determined, for example. Additionally, the growth or growth rate may be determined based on those bone or treatment locations in the same section, to report the growth or growth rate of a particular spine section. A growth differential may be determined based on two adjacent bone constructs that are connected to the same vertebra level, for example. Referring now toFIG.19C, the method1812may include (at1970) determining whether any of the implant bone locations are attached to a rib cage and/or thoracic cage vertebra. The method1812may include (at1972) determining a motion or motion cycle of the rib cage and/or one or more vertebrae of the thoracic cage. The motion or motion cycle may measure the expansion and contraction of the lungs as applied to one or more bones of the rib cage and/or vertebrae of the thoracic cage using the strain gauge in the bone constructs and/or longitudinal member device. The rib cage moves out or extends during inhalation of a breath cycle and contracts when expiration occurs during the breath cycle. The method1812may include (at1974) determining a patient's lung capacity. The motion of the rib cage or vertebrae of the thoracic cage may be measured by the strain gauge, for example, during each breath cycle. A trained data set may be correlated with both numerical sensor data from the training data set and training images of a population of subjects to determine expected growth, lung capacity or ribcage/thorax volume. To generate a training image, according to various embodiments, a representation may be labeled in an image or an entire image may be labeled. A labeled image may be an image identified to include the bone constructs, longitudinal member device and/or a pre-segmented portion (i.e., thoracic vertebrae and/or ribcage) of the training labeled image to which the implants are attached. The trained data set may include disc growth, as well, derived from images. The labeled image and sensor data may be used to allow a neural network, for example, to train and learn selected parameters, such as weights and biases in the bone correction system, based on type of longitudinal member device, type of bone construct, sensor arrangement of strain gauge and/or sensors. In some embodiments, the training data set of a current patient may be updated from time to time with segmented image data where current lung capacity measurements may be determined through imaging relative to the actual thoracic ratios and symmetry between left and right thoracic ratios. Example thoracic ratios by gender and age are described in "A segmental analysis of thoracic shape in chest radiographs of children.
Changes related to spinal level, age, sex, side and significance for lung growth and scoliosis," by Theodoros Grivas et al., in J. Anat. 1991, 178, pp. 21-38, incorporated herein by reference in its entirety. Example variances between the rib cage and thoracic spine morphology are described in "Association between rib shape and pulmonary function in patients with Osteogenesis Imperfecta," by Juan Sanchis-Gimeno et al., J. of Advanced Research 21 (2020) 177-185, incorporated herein by reference in its entirety. Example gender differences in the thoracic vertebrae are described in "Sex differences in thoracic dimensions and configuration," by Francois Bellemare et al., Am. J. of Respiratory and Critical Care Medicine, vol. 168, 2003, pp. 305-312, incorporated herein by reference in its entirety. Lung function changes from childhood to adolescence are also described in "Lung function changes from childhood to adolescence: a seven-year follow-up study," Pavilio Piccioni et al., BMC Pulmonary Medicine (2015) 15:31, incorporated herein by reference in its entirety. The machine-learning algorithm may be trained with rib cage expansion and contraction measurements based on the sensor data of the strain gauge from patients with similar conditions, age, size and gender, for example. The machine-learning algorithm may be trained with rib cage expansion and contraction measurements from a universal set of patients. The classified patient data may adjust the in-range threshold used at1824. Although the in-range threshold may be acceptable for a current patient, an alert or notification of the current sensor readings and selected threshold may still be generated (at1710). In some embodiments, a minimum threshold may be used for lung capacity, for example measured by maximum voluntary lung capacity. Additionally, the method1812may include (at1976) determining a respiratory frequency. The patterns in the sensor data used for expansion and contraction may also be used to determine respiratory frequency. Detection of changes in the frequency may be a sign of distress. Accordingly, at1824, an in-range threshold may be set to determine if the frequency is in range. The methods1700,1705, and1812may be implemented using hardware, firmware, software or a combination of any of these. For instance, methods1700,1705, and1812may be implemented as part of a microcontroller, processor, and/or graphics processing units (GPUs) and an interface with a register, data store and/or memory device2020(FIG.20) for storing data and programming instructions2022, which, when executed, perform the steps of methods1700,1705, and1812described herein. Implants (e.g., bone constructs1625and/or longitudinal member1630) transmit information to a receiver and processing unit that enables data visualization, analyses, and computation. The GM sensing system1602may use inferred or learned information from sensor measurements, for example, the variation in the measurements that can be indicative of a change in the growth, growth rate, or the implant set itself that requires intervention. Machine-learning algorithms may use the loading information, such as from a strain gauge, as a function of time and learn the patterns of loading in different locations of the spine as well as the pattern of changes in such loading over time for various pathologies, age groups, and implant types.
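A minimal sketch of the respiratory-frequency determination at1976is shown below, assuming the rib-cage strain-gauge trace is available as a uniformly sampled series; the band limits, sampling rate, and function name are illustrative assumptions rather than disclosed parameters.

```python
import numpy as np

def respiratory_frequency(strain, fs):
    """Estimate breaths per minute from a rib-cage strain-gauge trace.
    strain: 1-D array of sensor samples; fs: sampling rate in Hz.
    The dominant spectral peak in the 0.1-1.0 Hz band (6-60 breaths/min,
    an assumed physiological range) is taken as the respiration frequency."""
    x = strain - np.mean(strain)            # remove the static load offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return f_peak * 60.0                    # breaths per minute

# Example: synthetic 0.25 Hz breathing (15 breaths/min) sampled at 10 Hz.
t = np.arange(0, 60, 0.1)
trace = 5.0 + 0.8 * np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(len(t))
print(f"{respiratory_frequency(trace, fs=10.0):.1f} breaths/min")
```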
Such changes can be learned by a predictive model as the data are accumulated, to predict a need for intervention or signal a problem with an implant of the GM sensing system1602. Sensors or strain gauges directly or indirectly attached to the rib cage can identify a change in the respiration and generate a notification for intervention. The intervention may include straightening the spine or expanding the rib cage to increase the required volume for the lungs. For example, the continuous data monitoring may identify the maximum/minimum loading in each respiration cycle. An alarm may be generated when such loading in a particular respiration cycle starts to diminish; a change in the lung capacity can be identified by data logging of sensor measurements associated with the rib cage or thoracic cage. The sensor data or sensor measurements may include a time stamp at which the measurement data is recorded with reference to the surgery time, and may be collected continuously or at specific times after surgery. Patterns representative of changes in the recorded measurement data may be learned by a machine-learning algorithm. Such a machine-learning algorithm may then predict, as a function of the reported measurements in comparison to both the prior measurements and the baseline values, the time at which an intervention is required due to the growth or complications with the implants of the GM sensing system1602. The GM sensing system1602may measure, store, and report the data as a function of time. This means that at a given day, month, year or hour of the day, or any time in between, the sensed data can be recorded. As such, the linkage between the sensed data and the time, along with other patient-specific parameters, for example, gender, age, therapy, and other co-morbidities, can be used to predict the sensed data at a given time, for example in one week. The machine-learning algorithm may determine that a value measured by one or more sensors is higher than the expected measurement, identifying an unexpected increase in the system's strain due to growth; is lower than the expected value, identifying a lack of tension due to loosening or breakage/rupture in one or more implants of the GM sensing system1602; or does not match the trend or pattern of changes in such measurements, indicating a change in the system that requires medical intervention. The machine-learning algorithm may use data points over a period of time to identify a pattern of changes in the sensor measurement data and predict the value of such measurements at a later time. A discrepancy between the predicted value and the measured value above a predetermined threshold signals a need for clinical or surgical intervention and enables pre-planning for such intervention. The machine-learning algorithms may employ supervised machine learning, semi-supervised machine learning, unsupervised machine learning, and/or reinforcement machine learning. Each of these listed types of machine-learning algorithms is well known in the art. FIG.20depicts an example of internal hardware that may be included in any of the electronic components of an electronic device2000as described in this disclosure such as, for example, a computing device, a remote server, cloud computing system, external electronic device and/or any other integrated system and/or hardware that may be used to contain or implement program instructions.
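The predicted-versus-measured comparison described above can be sketched as follows. A simple linear trend stands in for the machine-learning predictor, and the threshold, units, and function names are placeholders for illustration only.

```python
import numpy as np

def predict_next(history, horizon_days=7):
    """history: list of (days_since_surgery, value) pairs. A linear fit
    stands in here for the trained predictive model described in the text;
    it extrapolates the sensor value one horizon ahead."""
    t, v = zip(*history)
    slope, intercept = np.polyfit(t, v, 1)
    return slope * (t[-1] + horizon_days) + intercept

def needs_intervention(predicted, measured, threshold):
    """Flag clinical/surgical intervention when the discrepancy between the
    predicted and measured sensor values exceeds a predetermined threshold.
    Units are whatever the strain gauge reports (e.g., microstrain)."""
    return abs(predicted - measured) > threshold

history = [(0, 100.0), (30, 112.0), (60, 125.0), (90, 137.0)]
pred = predict_next(history)
print(pred, needs_intervention(pred, measured=162.0, threshold=15.0))  # True
```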
The system1600may store, in a memory device2020ofFIG.20, the operational status data of the GM sensing system1602, the growth of each monitored bone and the growth rate of the at least one bone. Data analytics of the received measurement data may include trending data of the operational status data of the GM sensing system1602. Data analytics of the received measurement data may include growth and the growth rate of one or more bones, each of which may be displayed in a GUI (at1712) as numerical values and/or as graphical data. In a situation where multiple bones are monitored, such as in a vertebral column, each bone or vertebra, each segment (cervical, thoracic or lumbar) of bones, and/or the collection of bones being monitored in the vertebral column may be individually monitored such that graphical representations of the status of each of these may be selectively displayed in a GUI (at1712). For example, growth and/or growth rate of a particular vertebra may be displayed in a GUI (at1712). The growth and/or growth rate of a particular segment of the vertebral column may be selectively displayed in a GUI (at1712), especially for those vertebrae that are being treated. The growth and/or growth rate of all monitored bones in the vertebral column may be selectively displayed in a GUI (at1712). The GUIs may include GUIs1645(FIG.16A) that may be configured to display a growth chart or three-dimensional (3D) diagram of one or more of the cervical section, the thoracic section and the lumbar section, for example. A growth chart of a vertebra section may be selected by a user using the GUI1645. The GUIs1645(FIG.16A) may display lung capacity in the form of a 3D diagram of the rib cage expanding, for example, and/or a graph of the changes in lung capacity over time. The data in the GUIs is updated based on the user selection and real-time sensor data. The memory device2020may store the longitudinal growth chart statistics1940. The memory device2020may include machine-learning algorithms2023for analyzing the sensor data based on trained data for classified patients. The machine-learning algorithms2023may include a predictive algorithm that predicts when and how much correction in the bone correction system is required using the sensor data and patient-specific parameters. For example, a correction system may use an expandable rod that is selectively adjusted to expand the length of the rod, such as by using a magnetic field. The predictive algorithm may predict the amount of expansion needed in the rod. The amount of magnetic field to be applied to the rod may be a function of the predicted amount of correction to expand the length of the rod. The rod expansion may be based on manufacturer's recommendations and growth. The machine-learning algorithms2023may include a neural network such as an artificial neural network (ANN) or a convolutional neural network (CNN), by way of non-limiting example. The memory device2020may include lung capacity statistics2026associated with rib cage expansion and contraction during the inhalation and expiration phases of a breath based on one or more of patient condition, age, gender, size, and/or co-morbidities or a patient's classification. The memory device2020may include a patient's sensor data log2028. A bus2010serves as the main information highway interconnecting the other illustrated components of the hardware.
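A sketch of the predictive correction step is given below. The correction factor, the manufacturer's limit, and the fixed expansion rate used to translate a predicted correction into a magnetic-field application time are all assumptions for illustration, not device specifications.

```python
def required_expansion_mm(predicted_growth_mm, correction_factor=1.0,
                          manufacturer_limit_mm=48.0, expanded_so_far_mm=0.0):
    """Amount to lengthen a magnetically adjustable rod, capped at the
    manufacturer's specified limit. correction_factor is a placeholder for
    the predictive model's patient-specific adjustment."""
    target = predicted_growth_mm * correction_factor
    available = manufacturer_limit_mm - expanded_so_far_mm
    return max(0.0, min(target, available))

def field_exposure_seconds(expansion_mm, mm_per_second=0.1):
    """Illustrative mapping from expansion to magnetic-field application
    time, assuming the external actuator lengthens the rod at a fixed,
    device-specific rate (0.1 mm/s here is an assumption, not a spec)."""
    return expansion_mm / mm_per_second

dx = required_expansion_mm(6.0, expanded_so_far_mm=40.0)
print(dx, field_exposure_seconds(dx))  # 6.0 mm within the 8 mm remaining, 60 s
```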
Processor(s)2005may be the central processing unit (CPU) of the computing system, performing machine-learning algorithms, calculations and logic operations as may be required to execute a program. CPU2005, alone or in conjunction with one or more of the other elements disclosed inFIG.20, is an example of a processor as such term is used within this disclosure. Read only memory (ROM) and random access memory (RAM) constitute examples of tangible and non-transitory computer-readable storage media, memory devices2020or data stores as such terms are used within this disclosure. The memory device2020may store an operating system (OS) of the computing device, a server or for the platform of the electronic device. Program instructions, software or interactive modules for providing the interface and performing any querying or analysis associated with one or more data sets may be stored in the computer-readable storage media (e.g., memory device2020). Optionally, the program instructions may be stored on a tangible, non-transitory computer-readable medium such as a compact disk, a digital disk, flash memory, a memory card, a universal serial bus (USB) drive, an optical disc storage medium and/or other recording medium. An optional display interface2030may permit information from the bus2010to be displayed on the display device2035in audio, visual, graphic or alphanumeric format. Communication with external devices may occur using various communication ports2040. A communication port2040may be attached to a communications network, such as the Internet or an intranet. In various embodiments, communication with external devices may occur via one or more short range communication protocols. The communication port or devices2040may include communication devices for wired or wireless communications and may communicate with a remote server1650. By way of non-limiting example, when in proximity, the external monitoring device1640may receive the sensor data from the GM sensing system and then communicate the sensor data to a remote server1650via communication devices2040. The hardware may also include a user interface2045, such as a graphical user interface (GUI), that allows for receipt of data from input devices, such as a keyboard or another input device2050such as a mouse, a joystick, a touch screen, a remote control, a pointing device, a video input device and/or an audio input device. The GUIs, described herein, may be displayed using a browser application being executed by an electronic device and/or served by a server (not shown). For example, hypertext markup language (HTML) may be used for designing the GUI, with HTML tags linking to the images of the patient and other information stored in or served from memory of the server (not shown). In this document, "electronic communication" refers to the transmission of data via one or more signals between two or more electronic devices, whether through a wired or wireless network, and whether directly or indirectly via one or more intermediary devices. Devices are "communicatively connected" if the devices are able to send and/or receive data via electronic communication. In one or more examples, the described techniques and methods may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit.
Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer). Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor" as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements. It should be understood that various aspects disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the techniques). In addition, while certain aspects of this disclosure are described as being performed by a single module or unit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a medical device. As used herein, the term "about" in reference to a numerical value means plus or minus 10% of the numerical value of the number with which it is being used. The features and functions described above, as well as alternatives, may be combined into many other different systems or applications. Various alternatives, modifications, variations or improvements may be made by those skilled in the art, each of which is also intended to be encompassed by the disclosed embodiments.
11857347

DETAILED DESCRIPTION FIG.1shows an example of an apparatus1for monitoring nutrition, especially fermentation in a rumen of a ruminant, wherein a characteristic value of dissolved carbon dioxide inside the rumen is determined. The ATR sensor unit8includes several sensors. The IR-ATR dCO2 sensor4contains an IR-LED light source3for dCO2 and water. The light emitted by this source is sent through the light channel2into the ATR prism1, where the IR light is totally reflected; a small amount of light in the evanescent wave penetrates the rumen fluid in contact with the small window20in the rumen bolus cap21. The amount of light absorbed by the sample is directly dependent on the dCO2 and water concentrations. The remaining attenuated IR light travels through the channel7into the photodiode detectors6, where the attenuated signal for dCO2 and water is sensed. The sensor unit8also comprises a temperature sensor5and an energy harvester/accelerometer9. The sensor unit is connected with the mainboard16by an electrical connector, i.e. pin headers10. The information from the IR-ATR sensor is sent into the low power lock-in amplifier or LIA12, which amplifies the IR signal, improving dCO2 detection. The information is sent into the microcontroller unit, MCU13. The temperature5, accelerometer9, water and dCO2 signals are processed within the MCU13, compiled and sent to the SD card memory14for storage. The rumen bolus RF module15is in standby mode until the receiver22(seeFIG.2) provides the signal to transfer the information. The whole device is powered by a lithium-based battery17, kept in place by the holders11. The stainless steel casing18provides a way to hermetically isolate the bolus from the rumen environment. The plug-in cap21is kept in place by a corrugated and rubber-sealed male connector19. FIG.2shows how the receiver22gathers the information from the ruminants in the barn. A network of antennas26distributed through the barn or dairy shed provides the network for the two-way communication between the receiver and the boluses' RF modules. The boluses transmit the information and wait for positive confirmation from the receiver22; once the receiver22confirms that the information has been appropriately received, the boluses return to standby mode and the information stored in the SD card14can be deleted. The units23,24,25can be applied to individual animals, e.g. at risk of ruminal acidosis, to sentinel animals, e.g. animals per feeding group, or to the whole herd. The information stored in the boluses is transmitted at set intervals according to the protocols set in the receiver22by the user. The receiver also acts as an interface; here the information is further processed for optimization and to also include animal ID, date, feeding groups and other physiological and nutritional information per animal, herd and group. The receiver is in direct contact with the server27via internet and telephone services to provide firmware updates, equipment diagnostics and big data analysis. A user interface28might display the information analyzed in the receiver's PCU and information from cloud services. The analysis displayed in the user interface28might include: health alarms, nutritional information for optimizing feed conversion efficiency and health status of cattle; see examples. The user interface28might also be used by the farmers to enter specific nutritional and physiological information, i.e.
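Referring back to the attenuated-signal measurement above: a minimal sketch of how the photodiode readings might be converted to a dCO2 value follows. It assumes a Beer-Lambert-style linear calibration with a water-channel correction; the constants k_co2 and k_water and the function names are illustrative placeholders, not values from the disclosure.

```python
import math

def absorbance(intensity_sample, intensity_reference):
    """Absorbance A = -log10(I/I0) of the evanescent-wave channel."""
    return -math.log10(intensity_sample / intensity_reference)

def dco2_concentration(a_co2, a_water, k_co2=1.0, k_water=0.2):
    """Very simplified dCO2 estimate: the water-channel absorbance is used
    to compensate the CO2 channel before applying a linear calibration.
    k_co2 and k_water would come from bench calibration of the prism/LED
    pair; the values here are placeholders."""
    corrected = a_co2 - k_water * a_water
    return corrected / k_co2   # e.g., in mmol/L after calibration

a1 = absorbance(0.82, 1.00)   # CO2-band photodiode reading vs. reference
a2 = absorbance(0.95, 1.00)   # water-band reading
print(f"dCO2 ~ {dco2_concentration(a1, a2):.3f} (calibrated units)")
```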
veterinary treatments, diet composition, etc., to improve the feedback and big data analysis. FIG.3shows how the information gathered from the rumen boluses can be used to monitor nutrition and cattle health. The advantages of the disclosure are, per an embodiment, that optimal rumen dCO2 concentrations lead to better anaerobiosis, higher milk productivity and lower risk of nutritional disease (a high dCO2 MAP, or high dCO2). High risk of nutritional disease can be found in diets that produce a critical dCO2 MAP, whereas normal dCO2 MAP diets do not maximize feed conversion efficiency. For instance, the boluses can be placed in the rumen of sentinel cattle within a feeding group, of all the animals in the herd, or of at-risk cattle prone to nutritional diseases. A receiver as a part of the second communication unit controls two-way communication with the at least one sensor, stores data from all sensing units and provides the data to a feeding management module. A network of antennas that are conveniently deployed within a milking parlour or milking stall establishes two-way communication between the apparatuses and the receiver. A feeding management module processes the information and provides analysis and recommendations. To reduce power consumption, the apparatus is preferably in a "hearing mode" standby and is activated on request. In a case where continuous monitoring is preferred, the apparatus records the information at predetermined time intervals, preferably every 15 seconds, and the information is compiled, stored and sent by the bolus at predetermined time intervals, preferably every 10 minutes. The receiver establishes communication protocols with the bolus and gives positive feedback to the boluses that the information has been received. In the case that a communication is not established, the information is stored in a data storage14, which is a part of the apparatus, until the next uplink. Preferably, the apparatuses1are asked by the receiver to send information during a milking session. Therefore, when cows enter the milking place, the receiver establishes communication with the apparatus and the apparatus sends its information. The receiver gives positive feedback to the apparatus1that all information has been received. If a communication is not established, the information is stored until the next milking (a minimal sketch of this store-and-forward cycle follows the example below). The information acquired by the apparatus1is used for monitoring nutrition and to improve the animal health (seeFIG.3). EXAMPLE Rumen boluses are applied to the whole herd or to sentinel animals within feeding groups. The information of the boluses is processed and rumen maps suggest that the feeding management provides low dCO2 concentrations (FIG.4). Diets are adapted by increasing starch, modifying the starch source and reducing the size of fiber in the total mix ration (TMR). After further monitoring, the sensors indicate that the modified diet now provides high rumen dCO2 concentrations (FIG.4). The dCO2 data might also suggest that the diet should be provided in4feeding bouts throughout the day to increase dCO2 concentrations and avoid the rise of dCO2 to critical values (seeFIG.4). The data acquired by the boluses are also compared with the milk yield to find the optimal output for that particular diet. All the recommendations are recorded and provided in a daily report.
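As mentioned above, a minimal sketch of the store-and-forward uplink (record, transmit, await positive confirmation, then delete) follows; the class and method names are hypothetical and stand in for the bolus RF module15and receiver22behavior.

```python
import time

class RumenBolus:
    """Readings accumulate in on-board storage (standing in for the SD card
    14) and are deleted only after the receiver positively confirms delivery."""
    def __init__(self):
        self.storage = []

    def record(self, reading):
        self.storage.append((time.time(), reading))

    def uplink(self, receiver):
        if not self.storage:
            return
        if receiver.deliver(list(self.storage)):   # wait for confirmation
            self.storage.clear()                   # safe to delete
        # else: keep the data until the next milking-session uplink

class Receiver:
    def __init__(self):
        self.database = []

    def deliver(self, records):
        self.database.extend(records)
        return True    # positive feedback to the bolus

bolus, receiver = RumenBolus(), Receiver()
for dco2 in (1.1, 1.2, 1.3):       # one reading every 15 s per the text
    bolus.record(dco2)
bolus.uplink(receiver)
print(len(bolus.storage), len(receiver.database))   # 0 3
```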
The information can also be given to an automatic feeding system, which uses the information to allocate feeding times and mixing conditions, preferably the type of components, for example silage, hay and/or concentrates, and the particle size, to provide optimal dCO2 concentrations. All the features and advantages, including structural details, spatial arrangements and method steps, which follow from the claims, the description and the drawing can be fundamental to the invention both on their own and in different combinations. It is to be understood that the foregoing is a description of one or more preferred exemplary embodiments of the invention. The invention is not limited to the particular embodiment(s) disclosed herein, but rather is defined solely by the claims below. Furthermore, the statements contained in the foregoing description relate to particular embodiments and are not to be construed as limitations on the scope of the invention or on the definition of terms used in the claims, except where a term or phrase is expressly defined above. Various other embodiments and various changes and modifications to the disclosed embodiment(s) will become apparent to those skilled in the art. All such other embodiments, changes, and modifications are intended to come within the scope of the appended claims. As used in this specification and claims, the terms "for example," "for instance," "such as," and "like," and the verbs "comprising," "having," "including," and their other verb forms, when used in conjunction with a listing of one or more components or other items, are each to be construed as open-ended, meaning that the listing is not to be considered as excluding other, additional components or items. Other terms are to be construed using their broadest reasonable meaning unless they are used in a context that requires a different interpretation. LIST OF REFERENCE NUMERALS 1 ATR prism 2 light channel 3 IR-LED light source 4 ATR sensor 5 temperature sensor 6 photodiode detector 7 light channel (attenuated light) 8 sensor unit 9 energy harvester, accelerometer 10 electronic connector 11 battery holders 12 lock-in amplifier (LIA) 13 microcontroller (MCU) 14 micro-SD card 15 RF unit 16 mainboard 17 lithium-based battery 18 stainless steel casing 19 male connector for the plug-in cap 20 ATR prism window 21 plug-in cap 22 receiver 23 sensor in individual ruminants 24 sensors in sentinel group 25 sensors in groups 26 antennas in the milking parlor and barn 27 server 28 software interfaces
11857348

DETAILED DESCRIPTION Techniques for determining a timing uncertainty of a component of an optical measurement system are described herein. A photodetector capable of capturing individual photons with very high time-of-arrival resolution (a few tens of picoseconds) is an example of a non-invasive detector that can be used in an optical measurement system to detect neural activity within a body (e.g., the brain). However, each photodetector in an array of photodetectors has timing uncertainty when it triggers a detected event. Moreover, each time-to-digital converter (TDC) in an array of TDCs has its own timing uncertainty, differential nonlinearity (DNL), and integral nonlinearity (INL) characteristics. Such timing uncertainties may be systematic (e.g., caused by manufacturing process variations, etc.) and/or random. The systems, circuits, and methods described herein facilitate characterization of the timing uncertainty, such as nonlinearities and/or impulse response functions (e.g., jitter), of individual photodetectors, TDCs, and/or other components (e.g., other circuits of interest) within an optical measurement system. Based on this characterization, various actions may be performed. For example, systems, circuits, and/or methods described herein may compensate for the timing uncertainty associated with a particular component, rate (e.g., grade the effectiveness of) a particular component, and/or selectively disable a particular component so that measurement results (e.g., histograms) output by the optical measurement system are not skewed or otherwise affected by the timing uncertainties of a particular component. This may make the photon detection operation of the optical measurement system more accurate and effective. These and other advantages and benefits of the present systems, circuits, and methods are described more fully herein. FIG.1shows an exemplary optical measurement system100configured to perform an optical measurement operation with respect to a body102. Optical measurement system100may, in some examples, be portable and/or wearable by a user. Optical measurement systems that may be used in connection with the embodiments described herein are described more fully in U.S. patent application Ser. No. 17/176,315, filed Feb. 16, 2021; U.S. patent application Ser. No. 17/176,309, filed Feb. 16, 2021; U.S. patent application Ser. No. 17/176,460, filed Feb. 16, 2021; U.S. patent application Ser. No. 17/176,470, filed Feb. 16, 2021; U.S. patent application Ser. No. 17/176,487, filed Feb. 16, 2021; U.S. patent application Ser. No. 17/176,539, filed Feb. 16, 2021; U.S. patent application Ser. No. 17/176,560, filed Feb. 16, 2021; and U.S. patent application Ser. No. 17/176,466, filed Feb. 16, 2021, which applications are incorporated herein by reference in their entirety. In some examples, optical measurement operations performed by optical measurement system100are associated with a time domain-based optical measurement technique. Example time domain-based optical measurement techniques include, but are not limited to, TCSPC, time domain near infrared spectroscopy (TD-NIRS), time domain diffusive correlation spectroscopy (TD-DCS), and time domain Digital Optical Tomography (TD-DOT). As shown, optical measurement system100includes a detector104that includes a plurality of individual photodetectors (e.g., photodetector106), a processor108coupled to detector104, a light source110, a controller112, and optical conduits114and116(e.g., light pipes).
However, one or more of these components may not, in certain embodiments, be considered to be a part of optical measurement system100. For example, in implementations where optical measurement system100is wearable by a user, processor108and/or controller112may in some embodiments be separate from optical measurement system100and not configured to be worn by the user. Detector104may include any number of photodetectors106as may serve a particular implementation, such as 2^n photodetectors (e.g., 256, 512, . . . , 16384, etc.), where n is an integer greater than or equal to one (e.g., 4, 5, 8, 10, 11, 14, etc.). Photodetectors106may be arranged in any suitable manner. Photodetectors106may each be implemented by any suitable circuit configured to detect individual photons of light incident upon photodetectors106. For example, each photodetector106may be implemented by a single photon avalanche diode (SPAD) circuit and/or other circuitry as may serve a particular implementation. Processor108may be implemented by one or more physical processing (e.g., computing) devices. In some examples, processor108may execute instructions (e.g., software) configured to perform one or more of the operations described herein. Light source110may be implemented by any suitable component configured to generate and emit light. For example, light source110may be implemented by one or more laser diodes, distributed feedback (DFB) lasers, super luminescent diodes (SLDs), light emitting diodes (LEDs), diode-pumped solid-state (DPSS) lasers, super luminescent light emitting diodes (sLEDs), vertical-cavity surface-emitting lasers (VCSELs), titanium sapphire lasers, micro light emitting diodes (mLEDs), and/or any other suitable laser or light source. In some examples, the light emitted by light source110is high coherence light (e.g., light that has a coherence length of at least 5 centimeters) at a predetermined center wavelength. Light source110is controlled by controller112, which may be implemented by any suitable computing device (e.g., processor108), integrated circuit, and/or combination of hardware and/or software as may serve a particular implementation. In some examples, controller112is configured to control light source110by turning light source110on and off and/or setting an intensity of light generated by light source110. Controller112may be manually operated by a user, or may be programmed to control light source110automatically. Light emitted by light source110may travel via an optical conduit114(e.g., a light pipe, a light guide, a waveguide, a single-mode optical fiber, and/or a multi-mode optical fiber) to body102of a subject. In cases where optical conduit114is implemented by a light guide, the light guide may be spring loaded and/or have a cantilever mechanism to allow for conformably pressing the light guide firmly against body102. Body102may include any suitable turbid medium. For example, in some implementations, body102is a head or any other body part of a human or other animal. Alternatively, body102may be a non-living object. For illustrative purposes, it will be assumed in the examples provided herein that body102is a human head. As indicated by arrow120, the light emitted by light source110enters body102at a first location122on body102. Accordingly, a distal end of optical conduit114may be positioned at (e.g., right above, in physical contact with, or physically attached to) first location122(e.g., to a scalp of the subject).
In some examples, the light may emerge from optical conduit114and spread out to a certain spot size on body102to fall under a predetermined safety limit. At least a portion of the light indicated by arrow120may be scattered within body102. As used herein, “distal” means nearer, along the optical path of the light emitted by light source110or the light received by detector104, to the target (e.g., within body102) than to light source110or detector104. Thus, the distal end of optical conduit114is nearer to body102than to light source110, and the distal end of optical conduit116is nearer to body102than to detector104. Additionally, as used herein, “proximal” means nearer, along the optical path of the light emitted by light source110or the light received by detector104, to light source110or detector104than to body102. Thus, the proximal end of optical conduit114is nearer to light source110than to body102, and the proximal end of optical conduit116is nearer to detector104than to body102. As shown, the distal end of optical conduit116(e.g., a light pipe, a light guide, a waveguide, a single-mode optical fiber, and/or a multi-mode optical fiber) is positioned at (e.g., right above, in physical contact with, or physically attached to) output location126on body102. In this manner, optical conduit116may collect at least a portion of the scattered light (indicated as light124) as it exits body102at location126and carry light124to detector104. Light124may pass through one or more lenses and/or other optical elements (not shown) that direct light124onto each of the photodetectors106included in detector104. Photodetectors106may be connected in parallel in detector104. An output of each of photodetectors106may be accumulated to generate an accumulated output of detector104. Processor108may receive the accumulated output and determine, based on the accumulated output, a temporal distribution of photons detected by photodetectors106. Processor108may then generate, based on the temporal distribution, a histogram representing a light pulse response of a target (e.g., brain tissue, blood flow, etc.) in body102. Example embodiments of accumulated outputs are described herein. FIG.2illustrates an exemplary detector architecture200that may be used in accordance with the systems and methods described herein. As shown, architecture200includes a SPAD circuit202that implements photodetector106, a control circuit204, a time-to-digital converter (TDC)206, and a signal processing circuit208. Architecture200may include additional or alternative components as may serve a particular implementation. In some examples, SPAD circuit202includes a SPAD and a fast gating circuit configured to operate together to detect a photon incident upon the SPAD. As described herein, SPAD circuit202may generate an output when SPAD circuit202detects a photon. The fast gating circuit included in SPAD circuit202may be implemented in any suitable manner. For example, the fast gating circuit may include a capacitor that is pre-charged with a bias voltage before a command is provided to arm the SPAD. Gating the SPAD with a capacitor instead of with an active voltage source, such as is done in some conventional SPAD architectures, has a number of advantages and benefits. For example, a SPAD that is gated with a capacitor may be armed practically instantaneously compared to a SPAD that is gated with an active voltage source. This is because the capacitor is already charged with the bias voltage when a command is provided to arm the SPAD. 
This is described more fully in U.S. Pat. Nos. 10,158,038 and 10,424,683, which are incorporated herein by reference in their respective entireties. In some alternative configurations, SPAD circuit202does not include a fast gating circuit. In these configurations, the SPAD included in SPAD circuit202may be gated in any suitable manner or be configured to operate in a free running mode with passive quenching. Control circuit204may be implemented by an application specific integrated circuit (ASIC) or any other suitable circuit configured to control an operation of various components within SPAD circuit202. For example, control circuit204may output control logic that puts the SPAD included in SPAD circuit202in either an armed or a disarmed state. In some examples, control circuit204may control a gate delay, which specifies a predetermined amount of time control circuit204is to wait after an occurrence of a light pulse (e.g., a laser pulse) to put the SPAD in the armed state. To this end, control circuit204may receive light pulse timing information, which indicates a time at which a light pulse occurs (e.g., a time at which the light pulse is applied to body102). Control circuit204may also control a programmable gate width, which specifies how long the SPAD is kept in the armed state before being disarmed. Control circuit204is further configured to control signal processing circuit208. For example, control circuit204may provide histogram parameters (e.g., time bins, number of light pulses, type of histogram, etc.) to signal processing circuit208. Signal processing circuit208may generate histogram data in accordance with the histogram parameters. In some examples, control circuit204is at least partially implemented by controller112. TDC206is configured to measure a time difference between an occurrence of an output pulse generated by SPAD circuit202and an occurrence of a light pulse. To this end, TDC206may also receive the same light pulse timing information that control circuit204receives. TDC206may be implemented by any suitable circuitry as may serve a particular implementation. Signal processing circuit208is configured to perform one or more signal processing operations on data output by TDC206. For example, signal processing circuit208may generate histogram data based on the data output by TDC206and in accordance with histogram parameters provided by control circuit204. To illustrate, signal processing circuit208may generate, store, transmit, compress, analyze, decode, and/or otherwise process histograms based on the data output by TDC206. In some examples, signal processing circuit208may provide processed data to control circuit204, which may use the processed data in any suitable manner. In some examples, signal processing circuit208is at least partially implemented by processor108. In some examples, each photodetector106(e.g., SPAD circuit202) may have a dedicated TDC206associated therewith. For example, for an array of N photodetectors106, there may be a corresponding array of N TDCs206. Alternatively, a single TDC206may be associated with multiple photodetectors106. Likewise, a single control circuit204and a single signal processing circuit208may be provided for one or more photodetectors106and/or TDCs206. FIG.3illustrates an exemplary timing diagram300for performing an optical measurement operation using optical measurement system100.
Optical measurement system100may be configured to perform the optical measurement operation by directing light pulses (e.g., laser pulses) toward a target within a body (e.g., body102). The light pulses may be short (e.g., 10-2000 picoseconds (ps)) and repeated at a high frequency (e.g., between 100,000 hertz (Hz) and 100 megahertz (MHz)). The light pulses may be scattered by the target and then detected by optical measurement system100. Optical measurement system100may measure a time relative to the light pulse for each detected photon. By counting the number of photons detected at each time relative to each light pulse repeated over a plurality of light pulses, optical measurement system100may generate a histogram that represents a light pulse response of the target (e.g., a temporal point spread function (TPSF)). The terms histogram and TPSF are used interchangeably herein to refer to a light pulse response of a target. For example, timing diagram300shows a sequence of light pulses302(e.g., light pulses302-1and302-2) that may be applied to the target (e.g., tissue within a brain of a user, blood flow, a fluorescent material used as a probe in a body of a user, etc.). Timing diagram300also shows a pulse wave304representing predetermined gated time windows (also referred to as gated time periods) during which photodetectors106are gated ON to detect photons. Referring to light pulse302-1, light pulse302-1is applied at a time t0. At a time t1, a first instance of the predetermined gated time window begins. Photodetectors106may be armed at time t1, enabling photodetectors106to detect photons scattered by the target during the predetermined gated time window. In this example, time t1is set to be at a certain time after time t0, which may minimize photons detected directly from the laser pulse, before the laser pulse reaches the target. However, in some alternative examples, time t1is set to be equal to time t0. At a time t2, the predetermined gated time window ends. In some examples, photodetectors106may be disarmed at time t2. In other examples, photodetectors106may be reset (e.g., disarmed and re-armed) at time t2or at a time subsequent to time t2. During the predetermined gated time window, photodetectors106may detect photons scattered by the target. Photodetectors106may be configured to remain armed during the predetermined gated time window such that photodetectors106maintain an output upon detecting a photon during the predetermined gated time window. For example, a photodetector106may detect a photon at a time t3, which is during the predetermined gated time window between times t1and t2. The photodetector106may be configured to provide an output indicating that the photodetector106has detected a photon. The photodetector106may be configured to continue providing the output until time t2, when the photodetector may be disarmed and/or reset. Optical measurement system100may generate an accumulated output from the plurality of photodetectors. Optical measurement system100may sample the accumulated output to determine times at which photons are detected by photodetectors106to generate a TPSF. As mentioned, in some alternative examples, photodetector106may be configured to operate in a free-running mode such that photodetector106is not actively armed and disarmed (e.g., at the end of each predetermined gated time window represented by pulse wave304).
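A minimal sketch of the histogram (TPSF) accumulation described above follows, assuming photon arrival times have already been measured relative to the pulse train; the function name and bin width are illustrative choices, not parameters of the disclosed system.

```python
import numpy as np

def build_tpsf(arrival_times_ps, pulse_period_ps, bin_width_ps=50):
    """Accumulate photon arrival times (measured relative to each light
    pulse, as a TDC would report them) into a histogram approximating the
    temporal point spread function (TPSF)."""
    rel = np.asarray(arrival_times_ps) % pulse_period_ps  # fold onto one period
    n_bins = int(pulse_period_ps // bin_width_ps)
    counts, edges = np.histogram(rel, bins=n_bins, range=(0, pulse_period_ps))
    return counts, edges

# Example: photons scattered by a target arrive ~2 ns after each pulse, with
# spread from multiple scattering; pulses repeat every 20 ns (50 MHz).
rng = np.random.default_rng(0)
arrivals = rng.normal(loc=2000, scale=300, size=10_000)   # picoseconds
counts, edges = build_tpsf(arrivals, pulse_period_ps=20_000)
print(edges[np.argmax(counts)], "ps bin holds the TPSF peak")
```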
In contrast, while operating in the free-running mode, photodetector106may be configured to reset within a configurable time period after an occurrence of a photon detection event (i.e., after photodetector106detects a photon) and immediately begin detecting new photons. However, only photons detected within a desired time window (e.g., during each gated time window represented by pulse wave304) may be included in the TPSF. FIG.4illustrates a graph400of an exemplary TPSF402that may be generated by optical measurement system100in response to a light pulse404(which, in practice, represents a plurality of light pulses). Graph400shows a normalized count of photons on a y-axis and time bins on an x-axis. As shown, TPSF402is delayed with respect to a temporal occurrence of light pulse404. In some examples, the number of photons detected in each time bin subsequent to each occurrence of light pulse404may be aggregated (e.g., integrated) to generate TPSF402. TPSF402may be analyzed and/or processed in any suitable manner to determine or infer detected neural activity. Optical measurement system100may be implemented by or included in any suitable device. For example, optical measurement system100may be included, in whole or in part, in a non-invasive wearable device (e.g., a headpiece) that a user may wear to perform one or more diagnostic, imaging, analytical, and/or consumer-related operations. The non-invasive wearable device may be placed on a user's head or other part of the user to detect neural activity. In some examples, such neural activity may be used for behavioral and mental state analysis, awareness, and predictions for the user. Mental state described herein refers to the measured neural activity related to physiological brain states and/or mental brain states, e.g., joy, excitement, relaxation, surprise, fear, stress, anxiety, sadness, anger, disgust, contempt, contentment, calmness, focus, attention, approval, creativity, positive or negative reflections/attitude on experiences or the use of objects, etc. Further details on the methods and systems related to a predicted brain state, behavior, preferences, or attitude of the user, and the creation, training, and use of neuromes can be found in U.S. Provisional Patent Application No. 63/047,991, filed Jul. 3, 2020. Exemplary measurement systems and methods using biofeedback for awareness and modulation of mental state are described in more detail in U.S. patent application Ser. No. 16/364,338, filed Mar. 26, 2019, published as US2020/0196932A1. Exemplary measurement systems and methods used for detecting and modulating the mental state of a user using entertainment selections, e.g., music, film/video, are described in more detail in U.S. patent application Ser. No. 16/835,972, filed Mar. 31, 2020, published as US2020/0315510A1. Exemplary measurement systems and methods used for detecting and modulating the mental state of a user using product formulation from, e.g., beverages, food, selective food/drink ingredients, fragrances, and assessment based on product-elicited brain state measurements are described in more detail in U.S. patent application Ser. No. 16/853,614, filed Apr. 20, 2020, published as US2020/0337624A1. Exemplary measurement systems and methods used for detecting and modulating the mental state of a user through awareness of priming effects are described in more detail in U.S. patent application Ser. No. 16/885,596, filed May 28, 2020, published as US2020/0390358A1. These applications and corresponding U.S.
publications are incorporated herein by reference in their entirety. FIG.5shows an exemplary non-invasive wearable brain interface system500(“brain interface system500”) that implements optical measurement system100(shown inFIG.1). As shown, brain interface system500includes a head-mountable component502configured to be attached to a user's head. Head-mountable component502may be implemented by a cap shape that is worn on a head of a user. Alternative implementations of head-mountable component502include helmets, beanies, headbands, other hat shapes, or other forms conformable to be worn on a user's head, etc. Head-mountable component502may be made out of any suitable cloth, soft polymer, plastic, hard shell, and/or any other suitable material as may serve a particular implementation. Examples of headgears used with wearable brain interface systems are described more fully in U.S. Pat. No. 10,340,408, incorporated herein by reference in its entirety. Head-mountable component502includes a plurality of detectors504, which may implement or be similar to detector104, and a plurality of light sources506, which may be implemented by or be similar to light source110. It will be recognized that in some alternative embodiments, head-mountable component502may include a single detector504and/or a single light source506. Brain interface system500may be used for controlling an optical path to the brain and for transforming photodetector measurements into an intensity value that represents an optical property of a target within the brain. Brain interface system500allows optical detection of deep anatomical locations beyond skin and bone (e.g., skull) by extracting data from photons originating from light source506and emitted to a target location within the user's brain, in contrast to conventional imaging systems and methods (e.g., optical coherence tomography (OCT)), which only image superficial tissue structures or through optically transparent structures. Brain interface system500may further include a processor508configured to communicate with (e.g., control and/or receive signals from) detectors504and light sources506by way of a communication link510. Communication link510may include any suitable wired and/or wireless communication link. Processor508may include any suitable housing and may be located on the user's scalp, neck, shoulders, chest, or arm, as may be desirable. In some variations, processor508may be integrated in the same assembly housing as detectors504and light sources506. As shown, brain interface system500may optionally include a remote processor512in communication with processor508. For example, remote processor512may store measured data from detectors504and/or processor508from previous detection sessions and/or from multiple brain interface systems (not shown). Power for detectors504, light sources506, and/or processor508may be provided via a wearable battery (not shown). In some examples, processor508and the battery may be enclosed in a single housing, and wires carrying power signals from processor508and the battery may extend to detectors504and light sources506. Alternatively, power may be provided wirelessly (e.g., by induction). In some alternative embodiments, head mountable component502does not include individual light sources. Instead, a light source configured to generate the light that is detected by photodetector504may be included elsewhere in brain interface system500. 
For example, a light source may be included in processor508and coupled to head mountable component502through optical connections. Optical measurement system100may alternatively be included in a non-wearable device (e.g., a medical device and/or consumer device that is placed near the head or other body part of a user to perform one or more diagnostic, imaging, and/or consumer-related operations). Optical measurement system100may alternatively be included in a sub-assembly enclosure of a wearable invasive device (e.g., an implantable medical device for brain recording and imaging). Optical measurement system100may be modular in that one or more components of optical measurement system100may be removed, changed out, or otherwise modified as may serve a particular implementation. Additionally or alternatively, optical measurement system100may be modular such that one or more components of optical measurement system100may be housed in a separate housing (e.g., module) and/or may be movable relative to other components. Exemplary modular multimodal measurement systems are described in more detail in U.S. patent application Ser. No. 17/176,460, filed Feb. 16, 2021, U.S. patent application Ser. No. 17/176,470, filed Feb. 16, 2021, U.S. patent application Ser. No. 17/176,487, filed Feb. 16, 2021, U.S. Provisional Patent Application No. 63/038,481, filed Jun. 12, 2020, and U.S. patent application Ser. No. 17/176,560, filed Feb. 16, 2021, which applications are incorporated herein by reference in their respective entireties. To illustrate,FIG.6shows an exemplary wearable module assembly600(“assembly600”) that implements one or more of the optical measurement features described herein. Assembly600may be worn on the head or any other suitable body part of the user. As shown, assembly600may include a plurality of modules602(e.g., modules602-1through602-3). While three modules602are shown to be included in assembly600inFIG.6, in alternative configurations, any number of modules602(e.g., a single module up to sixteen or more modules) may be included in assembly600. Moreover, while modules602are shown to be adjacent to and touching one another, modules602may alternatively be spaced apart from one another (e.g., in implementations where modules602are configured to be inserted into individual slots or cutouts of the headgear). Moreover, while modules602are shown to have a hexagonal shape, modules602may alternatively have any other suitable geometry (e.g., in the shape of a pentagon, octagon, square, rectangular, circular, triangular, free-form, etc.). Assembly600may conform to three-dimensional surface geometries, such as a user's head. Exemplary wearable module assemblies comprising a plurality of wearable modules are described in more detail in U.S. Provisional Patent Application No. 62/992,550, filed Mar. 20, 2020, which application is incorporated herein by reference in its entirety. Each module602includes a source604and a plurality of detectors606(e.g., detectors606-1through606-6). Source604may be implemented by one or more light sources similar to light source110. Each detector606may implement or be similar to detector104and may include a plurality of photodetectors (e.g., SPADs) as well as other circuitry (e.g., TDCs). As shown, detectors606are arranged around and substantially equidistant from source604. 
In other words, the spacing between a light source (i.e., a distal end portion of a light source optical conduit) and the detectors (i.e., distal end portions of optical conduits for each detector) is maintained at the same fixed distance on each module to ensure homogeneous coverage over specific areas and to facilitate processing of the detected signals. The fixed spacing also provides consistent spatial (lateral and depth) resolution across the target area of interest, e.g., brain tissue. Moreover, maintaining a known distance between the light emitter and the detector allows subsequent processing of the detected signals to infer spatial (e.g., depth localization, inverse modeling) information about the detected signals. Detectors606may be alternatively disposed as may serve a particular implementation. FIG.7shows an exemplary photodetector array702and a corresponding TDC array704that may be included in an optical measurement system (e.g., optical measurement system100). As shown, photodetector array702may include a plurality of photodetectors (e.g., photodetector706). Each photodetector may be similar to any of the photodetectors described herein. For example, each photodetector may be implemented by a SPAD. In some examples, photodetector array702is included in a detector (e.g., one of detectors606shown inFIG.6) located on a module (e.g., one of modules602shown inFIG.6). TDC array704may include a plurality of TDCs (e.g., TDC708). Each TDC may be similar to any of the TDCs described herein and may correspond to a different one of the photodetectors included in photodetector array702. For example, TDC708corresponds to photodetector706(i.e., TDC708is configured to detect photon arrival times for photodetector706). To detect a photon arrival time, TDC708may be configured to detect a photodetector output pulse generated by photodetector706when photodetector706detects a photon and, in response, record a timestamp symbol when the photodetector output pulse is detected. The timestamp symbol may, for example, be a multi-bit data sequence or code that represents an amount of elapsed time between when a light pulse including the photon is emitted and when TDC708detects the photodetector output pulse. A propagation time between when photodetector706detects a photon and TDC708records a corresponding timestamp symbol is represented inFIG.7by line710. This propagation time may be different for different TDC/photodetector pairs due to different timing uncertainties associated with each individual photodetector and TDC. The systems, circuits, and methods described herein may be configured to characterize (e.g., determine electrical characteristics of) a timing uncertainty of any of the photodetectors included in photodetector array702, any of the TDCs included in TDC array704, and/or any other component included in the optical measurement system100. In this manner, the systems, circuits, and methods described herein may characterize all or a portion of a propagation time (e.g., the propagation time represented by line710) between when a photodetector detects a photon and a corresponding TDC records a timestamp symbol. FIG.8shows an exemplary configuration800that may be used to characterize a timing uncertainty of a component802included in an optical measurement system. Component802may include a photodetector, a TDC, and/or any other circuit of interest within the optical measurement system.
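To make the timestamping of photon arrivals described above concrete, the following is a minimal Python sketch of how multi-bit timestamp symbols might be converted to arrival times and accumulated into a histogram over many light pulses. The 10-bit code width and 50 ps step are illustrative assumptions, not values taken from this disclosure:

    # Illustrative sketch only: the code width and LSB are assumed values.
    TDC_BITS = 10        # assumed width of the timestamp symbol
    TDC_LSB_PS = 50.0    # assumed elapsed time represented by one code step (ps)

    def code_to_time_ps(code):
        """Map a multi-bit timestamp symbol to elapsed time since the light pulse."""
        return code * TDC_LSB_PS

    def build_histogram(codes):
        """Accumulate timestamp symbols recorded over many light pulses."""
        bins = [0] * (1 << TDC_BITS)
        for code in codes:
            bins[code] += 1
        return bins

    hist = build_histogram([100, 101, 100])    # three detected photons
    print(code_to_time_ps(100), hist[100])     # -> 5000.0 (ps), 2 counts

In an actual system the codes would be produced by the TDC hardware; here they are supplied as plain integers.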
As shown, configuration800may include a signal generator804and a processing unit806in communication one with another. Signal generator804may be implemented by any suitable circuitry configured to generate a signal (e.g., an electrical signal or an optical signal) that may be applied to component802. Illustrative implementations of signal generator804are described herein. Processing unit806may be implemented by processor108, controller112, control circuit204, and/or any other suitable processing and/or computing device or circuit. For example,FIG.9illustrates an exemplary implementation of processing unit806in which processing unit806includes a memory902and a processor904configured to be selectively and communicatively coupled to one another. In some examples, memory902and processor904may be distributed between multiple devices and/or multiple locations as may serve a particular implementation. Memory902may be implemented by any suitable non-transitory computer-readable medium and/or non-transitory processor-readable medium, such as any combination of non-volatile storage media and/or volatile storage media. Exemplary non-volatile storage media include, but are not limited to, read-only memory, flash memory, a solid-state drive, a magnetic storage device (e.g., a hard drive), ferroelectric random-access memory (“RAM”), and an optical disc. Exemplary volatile storage media include, but are not limited to, RAM (e.g., dynamic RAM). Memory902may maintain (e.g., store) executable data used by processor904to perform one or more of the operations described herein. For example, memory902may store instructions906that may be executed by processor904to perform any of the operations described herein. Instructions906may be implemented by any suitable application, program (e.g., sound processing program), software, code, and/or other executable data instance. Memory902may also maintain any data received, generated, managed, used, and/or transmitted by processor904. Processor904may be configured to perform (e.g., execute instructions906stored in memory902to perform) various operations described herein. For example, processor904may be configured to perform any of the operations described herein as being performed by processing unit806. Returning toFIG.8, as shown, processing unit806may be configured to transmit a command to signal generator804that directs signal generator804to apply a signal to component802. Component802may be configured to generate a response to the applied signal. Exemplary responses are described herein. Processing unit806may detect the response and generate, based on the response, characterization data representative of a timing uncertainty associated with component802. Based on the characterization data, processing unit806may perform an action associated with component802. Examples of this are described herein. FIG.10shows an exemplary implementation1000of configuration800in which component802is implemented by a TDC1002and signal generator804is implemented by a phase locked loop (PLL) circuit1004and a precision timing circuit1006. PLL circuit1004and precision timing circuit1006together constitute a PLL circuit based architecture1008. Implementation1000further includes a photodetector1010, a multiplexer1012, and an output buffer1014. TDC1002may be similar to any of the TDCs described herein and may correspond to photodetector1010, which may be similar to any of the photodetectors described herein. 
For example, TDC1002may be configured to measure a time difference between an occurrence of a light pulse and an occurrence of a photodetector output pulse generated by photodetector1010, where the photodetector output pulse indicates that photodetector1010has detected a photon from the light pulse after the light pulse is scattered by a target. Multiplexer1012is configured to selectively pass, to TDC1002(e.g., by way of output buffer1014), output pulses generated by precision timing circuit1006or a photodetector output pulse output by photodetector1010. Processing unit806may control multiplexer1012by providing a MUX selector command to multiplexer1012. For example, the MUX selector command may cause multiplexer1012to selectively pass the output pulses generated by precision timing circuit1006to TDC1002when it is desired to characterize a timing uncertainty of TDC1002(e.g., during a calibration mode). As shown, output buffer1014is in series with an output of multiplexer1012. In this configuration, the output of multiplexer1012is passed to TDC1002by way of output buffer1014. In some alternative configurations, output buffer1014is omitted such that the output of multiplexer1012is passed directly to TDC1002. PLL circuit1004is configured to have a PLL feedback period. The output pulses generated by precision timing circuit1006may have programmable temporal positions within the PLL feedback period. These programmable temporal positions may be specified by a timing command provided by processing unit806. In this manner, as described herein, the output pulses may be used to characterize a timing uncertainty of TDC1002(e.g., one or more nonlinearities of TDC1002). FIG.11illustrates an exemplary implementation of PLL circuit based architecture1008. PLL circuit based architecture1008may be configured to generate and set a temporal position (e.g., of a rising edge and/or of a falling edge) of a timing pulse that may be used to set a temporal position of one or more output pulses described herein. As shown, architecture1008includes PLL circuit1004communicatively coupled to precision timing circuit1006. PLL circuit1004includes a voltage-controlled oscillator (VCO)1106, a feedback divider1108, a phase detector1110, a charge pump1112, and a loop filter1114connected in a feedback loop configuration. Phase detector1110may receive a reference clock as an input such that PLL circuit1004has a PLL feedback period defined by the reference clock. The reference clock may have any suitable frequency, such as any frequency between 1 MHz and 200 MHz. VCO1106may be implemented by any suitable combination of circuitry (e.g., a differential multi-stage gated ring oscillator (GRO) circuit) and is configured to lock to the reference clock (i.e., to a multiple of a frequency of the reference clock). To that end, VCO1106may include a plurality of stages configured to output a plurality of fine phase signals each having a different phase and uniformly distributed in time. In some examples, each stage may output two fine phase signals that have complementary phases. VCO1106may include any suitable number of stages configured to output any suitable number of fine phase signals (e.g., eight stages that output sixteen fine phase signals). The duration of a fine phase signal pulse depends on the oscillator frequency of VCO1106and the total number of fine phase signals.
For example, if the oscillator frequency is 1 gigahertz (GHz) and the total number of fine phase signals is sixteen, the duration of a pulse included in a fine phase signal is (1/1 GHz)/16, which is 62.5 picoseconds (ps). As described herein, these fine phase signals may provide precision timing circuit1006with the ability to adjust a phase (i.e., temporal position) of a timing pulse with relatively fine resolution. Feedback divider1108is configured to be clocked by a single fine phase signal included in the plurality of fine phase signals output by VCO1106and have a plurality of feedback divider states during the PLL feedback period. The number of feedback divider states depends on the oscillator frequency of VCO1106and the frequency of the reference clock. For example, if the oscillator frequency is 1 gigahertz (GHz) and the reference clock has a frequency of 50 MHz, the number of feedback divider states is equal to 1 GHz/50 MHz, which is equal to 20 feedback divider states. As described herein, these feedback divider states may provide precision timing circuit1006with the ability to adjust a phase (i.e., temporal position) of a timing pulse with relatively coarse resolution. Feedback divider1108may be implemented by any suitable circuitry. In some alternative examples, feedback divider1108is at least partially integrated into precision timing circuit1006. As shown, the fine phase signals output by VCO1106and state information (e.g., signals and/or data) representative of the feedback divider states within feedback divider1108are input into precision timing circuit1006. Precision timing circuit1006may be configured to generate a timing pulse and set, based on a combination of one of the fine phase signals and one of the feedback divider states, a temporal position of the timing pulse within the PLL feedback period. For example, if there are N total fine phase signals and M total feedback divider states, precision timing circuit1006may set the temporal position of the timing pulse to be one of N times M possible temporal positions within the PLL feedback period. To illustrate, if N is 16 and M is 20, and if the duration of a pulse included in a fine phase signal is 62.5 ps, the temporal position of the timing pulse may be set to be one of 320 possible positions in 62.5 ps steps. The timing pulse generated by precision timing circuit1006may be used within optical measurement system100in any suitable manner. For example, the timing pulse may be configured to trigger a start (e.g., a rising edge) of an output pulse used by a component within optical measurement system100. Alternatively, the timing pulse may be configured to trigger an end (e.g., a falling edge) of an output pulse used by a component within optical measurement system100. Alternatively, the timing pulse itself may be provided for use as an output pulse used by a component within optical measurement system100. In some examples, precision timing circuit1006may generate multiple timing pulses each used for a different purpose within optical measurement system100. PLL circuit based architecture1008is described in more detail in U.S. Provisional Patent Application No. 63/027,011, filed May 19, 2020, and incorporated herein by reference in its entirety. In some examples, processing unit806may use the output pulses generated by precision timing circuit1006to characterize a timing uncertainty of TDC1002by using the output pulses to characterize one or more nonlinearities of TDC1002. This may be performed in any suitable manner.
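The coarse/fine arithmetic in the preceding paragraphs can be checked numerically. The sketch below uses the figures quoted in the text (1 GHz oscillator, sixteen fine phase signals, 50 MHz reference clock); the function and variable names are illustrative assumptions:

    VCO_HZ = 1e9       # oscillator frequency from the example above
    N_FINE = 16        # fine phase signals per VCO period
    REF_HZ = 50e6      # reference clock frequency

    fine_step_s = (1.0 / VCO_HZ) / N_FINE    # 62.5 ps fine step
    m_states = int(VCO_HZ / REF_HZ)          # 20 feedback divider states
    n_positions = N_FINE * m_states          # 320 programmable positions

    def temporal_position_s(divider_state, fine_phase):
        """Pulse position within the PLL feedback period: coarse placement by
        divider state (1 ns steps here), fine placement by phase index."""
        assert 0 <= divider_state < m_states and 0 <= fine_phase < N_FINE
        return divider_state / VCO_HZ + fine_phase * fine_step_s

    print(fine_step_s, m_states, n_positions)    # 6.25e-11, 20, 320
    print(temporal_position_s(3, 5))             # 3 ns + 312.5 ps = 3.3125e-09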
For example, processing unit806may direct precision timing circuit1006to apply the output pulses to TDC1002by directing multiplexer1012to pass the output pulses to TDC1002by way of output buffer1014and by directing precision timing circuit1006to sweep the output pulses across a plurality of temporal positions. Processing unit806may generate characterization data for TDC1002by generating, based on timestamp symbols recorded by TDC1002in response to the output pulses, data representative of a transfer curve that represents a characterization of one or more nonlinearities of TDC1002. In some examples, this characterization may be performed while optical measurement system100is operating in a high resolution mode in which precision timing circuit1006is configured to sweep the output pulses across a plurality of relatively finely spaced temporal positions. FIG.12shows an exemplary transfer curve1200that may be generated by processing unit806and that represents a characterization of one or more nonlinearities of TDC1002. As shown, transfer curve1200represents a plot of actual temporal positions (the x-axis labeled “Time”) of the output pulses versus timestamp symbols recorded by TDC1002(the y-axis labeled “Code”). Ideally, the actual temporal positions of the output pulses and the timestamp symbols have a one-to-one correspondence, thereby resulting in a linear transfer curve1200, as shown inFIG.12. However, one or more nonlinearities of TDC1002may cause transfer curve1200to have one or more irregularities (e.g., one or more nonlinearities). Based on characterization data that indicates that transfer curve1200has one or more irregularities, processing unit806may perform one or more suitable actions. For example, processing unit806may compensate for the one or more irregularities in transfer curve1200. To illustrate, processing unit806may program one or more digital offsets into timestamp symbols recorded by TDC1002to cause transfer curve1200to be linear. Additionally or alternatively, processing unit806may use the output pulses generated by precision timing circuit1006to characterize a timing uncertainty of TDC1002by using the output pulses to characterize an impulse response function (e.g., an electrical impulse response function, also referred to as jitter) of TDC1002.FIG.13shows an exemplary impulse response function1302that may be associated with TDC1002(or any other component within optical measurement system100that is being characterized). A time spread1304of impulse response function1302represents timing uncertainty, which adds uncertainty to an overall histogram measurement. In other words, the more time uncertainty in an impulse response function, the broader the histogram will be, independent of the actual signal being measured. Processing unit806may characterize an impulse response function of TDC1002in any suitable manner. For example, processing unit806may be configured to direct precision timing circuit1006to generate output pulses each having the same programmable temporal position within the PLL feedback period of PLL circuit1004. Processing unit806may further direct precision timing circuit1006to apply the output pulses to TDC1002(e.g., by way of output buffer1014). Any suitable plurality of output pulses may be applied to TDC1002as may serve a particular implementation.
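Before continuing with the impulse-response procedure, the nonlinearity sweep described above can be sketched as follows. The helper names, the averaging of repeated pulses, and the ideal line fitted through the curve endpoints are assumptions made for illustration; the disclosure itself only calls for sweeping the pulses and deriving digital offsets:

    def measure_transfer_curve(apply_pulse_at, n_positions, pulses_per_pos=100):
        """apply_pulse_at(pos) is assumed to fire one output pulse at the given
        temporal position and return the timestamp symbol the TDC records."""
        curve = []
        for pos in range(n_positions):
            codes = [apply_pulse_at(pos) for _ in range(pulses_per_pos)]
            curve.append(sum(codes) / len(codes))   # mean code at this position
        return curve

    def digital_offsets(curve):
        """Per-position offsets that map the measured curve onto the ideal
        straight line through its endpoints, linearizing the response."""
        n = len(curve)
        slope = (curve[-1] - curve[0]) / (n - 1)
        return [curve[0] + slope * i - curve[i] for i in range(n)]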
Processing unit806may generate characterization data for TDC1002by determining a variation in timestamp symbols recorded by TDC1002in response to the output pulses and generating, based on the determined variation, data representative of a characterization of an impulse response function of TDC1002. Based on characterization data representative of the impulse response function of TDC1002, processing unit806may perform one or more suitable actions. For example, processing unit806may rate TDC1002based on the characterization data. This rating may be compared to the ratings of other TDCs (e.g., in the same TDC array). In some examples, if the rating of TDC1002is outside a predetermined range of values (e.g., if the rating value deviates too much from a range of rating values for other TDCs in the same TDC array), thereby indicating that TDC1002has a relatively poor impulse response function, TDC1002and its corresponding photodetector may be disabled so as not to skew an overall histogram generated by optical measurement system100. TDC1002and its corresponding photodetector may be disabled in any suitable manner. For example, processing unit806may disable a power supply for TDC1002and/or its corresponding photodetector, transmit a control signal to TDC1002and/or its corresponding photodetector that turns off or disables TDC1002and/or its corresponding photodetector, and/or abstain from transmitting a gate-on pulse to the photodetector. Processing unit806may be configured to additionally or alternatively characterize a timing uncertainty (e.g., an impulse response function, also referred to as jitter) of a photodetector included in optical measurement system100. For example,FIG.14shows an exemplary implementation1400of configuration800in which component802is implemented by photodetector1010and signal generator804is implemented by a light source1412configured to generate a plurality of light pulses. Implementation1400further includes output buffer1014and TDC1002. In some alternative embodiments, implementation1400further includes multiplexer1012so that processing unit806may selectively switch between characterizing photodetector1010and TDC1002. In implementation1400, processing unit806may be configured to direct light source1412(e.g., by transmitting a control signal to light source1412) to output a plurality of light pulses. Photodetector1010is configured to output a photodetector output pulse each time photodetector1010detects a photon from the light pulses. TDC1002is configured to record a timestamp symbol each time TDC1002detects an occurrence of the photodetector output pulse. Processing unit806may generate characterization data for photodetector1010by determining a variation in the timestamp symbols recorded by TDC1002and generating, based on the variation in the timestamp symbols recorded by TDC1002, data representative of a characterization of an impulse response function of photodetector1010. Based on characterization data representative of the impulse response function of photodetector1010, processing unit806may perform one or more suitable actions. For example, processing unit806may rate photodetector1010based on the characterization data. This rating may be compared to the ratings of other photodetectors (e.g., in the same photodetector array).
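A minimal sketch of this rating step follows. Using the standard deviation of the recorded timestamp symbols as the jitter figure, and flagging components relative to the array median, are assumptions; the disclosure requires only a comparison against a predetermined range of values:

    import statistics

    def jitter(codes):
        """Spread of timestamps recorded for identical, fixed-position pulses."""
        return statistics.pstdev(codes)

    def flag_outliers(jitter_by_component, tolerance=3.0):
        """Indices of TDCs (or photodetectors) whose jitter deviates too far
        from the array median; these pairs could be disabled."""
        med = statistics.median(jitter_by_component.values())
        return [i for i, j in jitter_by_component.items() if j > tolerance * med]

    print(jitter([100, 101, 99, 100]))                       # -> ~0.71
    print(flag_outliers({0: 1.1, 1: 0.9, 2: 1.0, 3: 9.5}))   # -> [3]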
In some examples, if the rating of photodetector1010is outside a predetermined range of values (e.g., if the rating value deviates too much from a range of rating values for other photodetectors in the same photodetector array), thereby indicating that photodetector1010has a relatively poor impulse response function, photodetector1010may be disabled so as not to skew an overall histogram generated by optical measurement system100. Photodetector1010may be disabled in any suitable manner. For example, processing unit806may disable a power supply for photodetector1010, transmit a control signal to photodetector1010that turns off or disables photodetector1010, and/or abstain from transmitting a gate-on pulse to photodetector1010. In configurations in which outputs of many TDCs (e.g., each TDC included in an array of TDCs) are combined into a single histogram, processing unit806may be configured to isolate each TDC/photodetector pair so that only one TDC/photodetector pair is active at any given time. This may allow a timing uncertainty of each individual TDC and/or photodetector to be characterized. For example, while processing unit806characterizes a timing uncertainty of TDC1002and/or photodetector1010, processing unit806may disable other TDCs within an array of TDCs (e.g., TDC array704) of which TDC1002is a part and other photodetectors within an array of photodetectors (e.g., photodetector array702) of which photodetector1010is a part. This may be performed in any suitable manner. The timing uncertainty characterizations described herein may be performed by processing unit806at any suitable time. For example, processing unit806may be configured to place optical measurement system100in a calibration mode (e.g., during a startup procedure for optical measurement system100) and perform one or more timing uncertainty characterizations described herein while optical measurement system100is in the calibration mode. The systems, circuits, and methods described herein may be used to characterize a timing uncertainty of any other component included in optical measurement system100as may serve a particular implementation. For example,FIG.15illustrates an exemplary implementation1500of configuration800in which processing unit806is configured to use output pulses generated by precision timing circuit1006to characterize a timing uncertainty of a circuit of interest1502. Implementation1500is similar to implementation1000, except that in implementation1500, multiplexer1012is configured to selectively pass output pulses, an output of circuit of interest1502, or an output of photodetector1010to TDC1002by way of output buffer1014. Circuit of interest1502may include any suitable circuit and/or electrical path within a detector that includes a photodetector. As mentioned, optical measurement system100may be at least partially wearable by a user. For example, optical measurement system100may be implemented by a wearable device configured to be worn by a user (e.g., a head-mountable component configured to be worn on a head of the user). The wearable device may include one or more photodetectors, modules, and/or any of the other components described herein. In some examples, one or more components (e.g., processing unit806, processor108, controller112, etc.) may not be included in the wearable device and/or may be included in a separate wearable device from the wearable device in which the one or more photodetectors are included.
In these examples, one or more communication interfaces (e.g., cables, wireless interfaces, etc.) may be used to facilitate communication between the various components. FIGS.16-21illustrate embodiments of a wearable device1600that includes elements of the optical detection systems described herein. In particular, the wearable devices1600shown inFIGS.16-21include a plurality of modules1602, similar to the modules shown inFIG.6as described herein. For example, each module1602may include a source (e.g., source604) and a plurality of detectors (e.g., detectors606-1through606-6). The wearable devices1600may each also include a controller (e.g., controller112) and a processor (e.g., processor108) and/or be communicatively connected to a controller and processor. In general, wearable device1600may be implemented by any suitable headgear and/or clothing article configured to be worn by a user. The headgear and/or clothing article may include batteries, cables, and/or other peripherals for the components of the optical measurement systems described herein. FIG.16illustrates an embodiment of a wearable device1600in the form of a helmet with a handle1604. A cable1606extends from the wearable device1600for attachment to a battery or hub (with components such as a processor or the like).FIG.17illustrates another embodiment of a wearable device1600in the form of a helmet showing a back view.FIG.18illustrates a third embodiment of a wearable device1600in the form of a helmet with the cable1606leading to a wearable garment1608(such as a vest or partial vest) that can include a battery or a hub. Alternatively or additionally, the wearable device1600can include a crest1610or other protrusion for placement of the hub or battery. FIG.19illustrates another embodiment of a wearable device1600in the form of a cap with a wearable garment1608in the form of a scarf that may contain or conceal a cable, battery, and/or hub.FIG.20illustrates additional embodiments of a wearable device1600in the form of a helmet with a one-piece scarf1608or two-piece scarf1608-1.FIG.21illustrates an embodiment of a wearable device1600that includes a hood1610and a beanie1612, which contains the modules1602, as well as a wearable garment1608that may contain a battery or hub. In some examples, a non-transitory computer-readable medium storing computer-readable instructions may be provided in accordance with the principles described herein. The instructions, when executed by a processor of a computing device, may direct the processor and/or computing device to perform one or more operations, including one or more of the operations described herein. Such instructions may be stored and/or transmitted using any of a variety of known computer-readable media. A non-transitory computer-readable medium as referred to herein may include any non-transitory storage medium that participates in providing data (e.g., instructions) that may be read and/or executed by a computing device (e.g., by a processor of a computing device). For example, a non-transitory computer-readable medium may include, but is not limited to, any combination of non-volatile storage media and/or volatile storage media. Exemplary non-volatile storage media include, but are not limited to, read-only memory, flash memory, a solid-state drive, a magnetic storage device (e.g., a hard disk, a floppy disk, magnetic tape, etc.), ferroelectric random-access memory (“RAM”), and an optical disc (e.g., a compact disc, a digital video disc, a Blu-ray disc, etc.).
Exemplary volatile storage media include, but are not limited to, RAM (e.g., dynamic RAM). FIG.22illustrates an exemplary computing device2200that may be specifically configured to perform one or more of the processes described herein. Any of the systems, units, computing devices, and/or other components described herein may be implemented by computing device2200. As shown inFIG.22, computing device2200may include a communication interface2202, a processor2204, a storage device2206, and an input/output (“I/O”) module2208communicatively connected one to another via a communication infrastructure2210. While an exemplary computing device2200is shown inFIG.22, the components illustrated inFIG.22are not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device2200shown inFIG.22will now be described in additional detail. Communication interface2202may be configured to communicate with one or more computing devices. Examples of communication interface2202include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface. Processor2204generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor2204may perform operations by executing computer-executable instructions2212(e.g., an application, software, code, and/or other executable data instance) stored in storage device2206. Storage device2206may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device. For example, storage device2206may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device2206. For example, data representative of computer-executable instructions2212configured to direct processor2204to perform any of the operations described herein may be stored within storage device2206. In some examples, data may be arranged in one or more databases residing within storage device2206. I/O module2208may include one or more I/O modules configured to receive user input and provide user output. I/O module2208may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module2208may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module2208may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module2208is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. 
FIG.23illustrates an exemplary method2300that may be performed by a processing unit as described herein. WhileFIG.23illustrates exemplary operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown inFIG.23. Each of the operations shown inFIG.23may be performed in any of the ways described herein. In operation2302, a processing unit causes a signal to be applied to a component (e.g., a TDC and its corresponding photodetector, photodetector(s) and/or any other circuit of interest) within an optical measurement system. In operation2304, the processing unit generates, based on a response of the component to the signal, characterization data representative of a timing uncertainty associated with the component. In operation2306, the processing unit performs, based on the characterization data, an action associated with the component. The action may include compensating for the timing uncertainty, rating the component, disabling the component, and/or any other suitable action. An exemplary optical measurement system includes a signal generator configured to generate a signal and a processing unit configured to direct the signal generator to apply the signal to a component within the optical measurement system, generate, based on a response of the component to the signal, characterization data representative of a timing uncertainty associated with the component, and perform, based on the characterization data, an action associated with the component. An exemplary optical measurement system includes a phase-locked loop (PLL) circuit having a PLL feedback period, a precision timing circuit configured to generate output pulses having programmable temporal positions within the PLL feedback period, and a processing unit configured to use the output pulses to generate characterization data representative of a characterization of a timing uncertainty associated with a component of the optical measurement system and perform, based on the characterization data, an action associated with the component. An exemplary system includes a memory storing instructions and a processor communicatively coupled to the memory and configured to execute the instructions to cause a signal to be applied to a component within an optical measurement system; generate, based on a response of the component to the signal, characterization data representative of a timing uncertainty associated with the component; and perform, based on the characterization data, an action associated with the component. In the preceding description, various exemplary embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the scope of the invention as set forth in the claims that follow. For example, certain features of one embodiment described herein may be combined with or substituted for features of another embodiment described herein. The description and drawings are accordingly to be regarded in an illustrative rather than a restrictive sense.
DETAILED DESCRIPTION

This disclosure relates generally to an apparatus and related method to facilitate testing via a computing device, such as can be utilized to administer a test for the evaluation of cognitive and/or neuromotor function. By way of example, the apparatus is configured to attach to a tablet computing device for performing diagnostic tests on patients, such as manual dexterity (e.g., peg test), cognitive, and/or neuromotor tests. The device can include a moveable platform that provides a test fixture configured to move into and out of physical contact with a surface of the touch screen of the computing device. In some examples, the platform is pivotably connected to the computing device, such as by a test fixture base that attaches the apparatus to the housing of the computing device. An arrangement of receptacles can be formed in the platform to receive and retain contact members (e.g., pegs) within the receptacles so as to render the contact members detectable by the computing device while received in the receptacles. For example, the receptacles can be a two-dimensional array (e.g., a grid) of apertures extending through the platform, which has a contact surface to mechanically and electrically connect the device with the computing device. As a further example, the base can include a housing to surround or otherwise attach to a perimeter portion of the computing device to enable the platform to move into an overlaying contact test position with a predetermined portion of the screen. The platform further may be moved away from the touch screen via the pivot (e.g., greater than about 180 degrees of rotation about the pivot) to a support position in which the platform operates as a kickstand to support the housing and the computing device when positioned on a surface (e.g., a table or desk). In some embodiments, the platform may be rotated around the tablet, flush with the other side, so as to permit the user to lay the tablet flat on a surface. Thus, the platform, receptacles and touch screen can be employed by a user to perform a manual dexterity test when in the contact position, such as disclosed herein, and enable full access to the touch screen when in the support position. In some examples, the apparatus includes a hinge that pivotally connects the platform with respect to the test fixture housing, and the hinge can be employed as part of an electrical circuit to provide an electrical path between the receptacles of the platform and an electrical ground of the computing device. When the platform engages the touch screen (e.g., in the test position), the electrical connection that is maintained between the receptacles and the chassis of the computing device enables the computing device (e.g., having a capacitive touch screen) to detect the presence and absence of individual contact members at each respective receptacle—with as well as without human contact with the contact members. As a result, the computing device can be programmed to measure individual peg insertion and removal time (e.g., as part of a manual function and/or neuromotor test). This disclosure also provides systems and methods that can be utilized to implement a performance test to assess various aspects of a patient's neurological and cognitive function. The patient can have a neurological condition that affects cognitive and motor performance, such as multiple sclerosis (MS) or other neurological disorders (e.g., Parkinson's, essential tremor, stroke, concussion, etc.).
For example, the performance test can be used to determine the severity of the neurological condition in the patient. Although the systems and methods are described herein with respect to MS and the MS performance test (MSPT), it will be understood that patients with a neurological disorder other than MS can also benefit from the cognitive-motor performance assessment described herein. Such testing can include preprogrammed tests that include use of the apparatus in conjunction with the computing device. The approach to assessing cognitive-motor performance according to the systems and methods described herein can be easily implemented outside of clinical settings by patients themselves or family members. For example, the systems and methods can be executed using a portable computing device, such as a tablet computer or smart phone, which is configured with one or more sensors, including, but not limited to, timers, accelerometers and gyroscopes. The portable computing device can be programmed to execute a set of test modules configured to assess cognitive-motor performance, such as a manual function test module, a cognitive processing speed test module, and a movement assessment test module (and other test modules that can be used to assess the cognitive-motor performance). The set of modules can also include a collection module to aggregate test data from the manual function test module, the cognitive processing speed test module, and the movement assessment test module (as well as other test modules that can be used to assess the cognitive-motor performance). The tests can be implemented to measure neurological function and/or neuropsychological function of a subject. For example, the tests can be employed as a test for MS severity as part of a clinical trial or other research protocol, or for patient monitoring for clinical assessment and care. FIG.1depicts an example of a system10that can be employed for testing and analysis of one or more patients. The system10can include one or more computing apparatuses (also referred to as testing apparatuses)12programmed to execute a plurality of tasks based on instructions stored in memory14. The computing apparatus12can be implemented in some embodiments as a portable computer, such as a tablet computer or smart phone. As such, the device may include a display/touch screen28that provides a human-machine interface (HMI) that a user, such as a patient, can employ to interact with the computing apparatus12. As used herein, a patient can refer to a living subject (e.g., adult, child or animal) in need of treatment by a physician, physician assistant, advanced practice registered nurse, veterinarian, or other health care provider, or the subject may be a healthy subject that is to be tested for other reasons. In some examples, a user can perform a series of tasks that involve physical interaction between the patient (e.g., using one or more fingers) and the touch screen28directly to manipulate one or more graphical objects displayed on the screen. In other examples, a user can perform certain tasks through interaction with an external input device32that can be communicatively coupled with the system10(e.g., via physical or wireless connection with a corresponding port of the apparatus12). The interaction may involve contact between the external input device32and the display28or otherwise be responsive to the instructions and/or graphical elements presented on the display.
In still other examples, the apparatus12can include one or more sensors30(e.g., one or more timers, accelerometers, gyrometers or gyroscopes) that can collect data in two or three dimensions responsive to patient movement and interactions during selected tasks. By configuring the testing apparatus (e.g., a tablet computing device) to perform a plurality of different test modules (e.g., stored in memory14), the overall testing process is facilitated for patients, and a rich set of test data can be recorded for evaluation of cognitive and neuromotor function for such patients. As an example, the sensor30can include one or more three-axis accelerometers. The one or more accelerometers can be configured to measure acceleration of the apparatus along one or more axes, such as to provide an indication of acceleration (e.g., an acceleration vector) of the apparatus in three dimensions. The one or more accelerometers can measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion or shock. Additionally, the one or more accelerometers can possess a high resolution (4 mg/LSB) that enables measurement of inclination changes of less than 1.0° (see the numeric check following this passage), for example. The one or more accelerometers may provide various sensing functions, such as activity and inactivity sensing to detect the presence or lack of motion, direction of motion, the smoothness of motion, and whether the acceleration on any axis exceeds a user-defined level. The one or more accelerometers can also sense tapping (e.g., single and double taps) on a surface such as a touch screen as well as sense free-fall if the device is falling. These and other sensing functions can provide output data. An example accelerometer is the ADXL345 digital accelerometer available from Analog Devices. Of course, other accelerometers could be utilized. As another example, the sensor30can include a three-axis gyroscope (e.g., gyrometer) that can be configured to sense orientation of the device along three orthogonal axes. The gyroscope can provide output data corresponding to orientation of the apparatus12along three orthogonal axes. The gyroscope can be implemented as a 3-axis MEMS gyro IC, such as including three 16-bit analog-to-digital converters (ADCs) for digitizing the gyro outputs, a user-selectable internal low-pass filter bandwidth, and a Fast-Mode I2C (400 kHz) interface. The gyroscope30can also include an embedded temperature sensor and a 2% accurate internal oscillator. An example gyroscope that can be utilized is the ITG-3200 IC available from InvenSense, Inc. Other gyroscopes could be utilized in other examples. In the example ofFIG.1, the system10can include input/output (I/O) circuitry26configured to communicate data with various input and output devices coupled to the system10. In the example ofFIG.1, the I/O circuitry26is connected to communicate with the display/touch screen28, the sensor30, the external input device32and a communication interface34. For example, the communication interface34can include a network interface that is configured to provide for communication with a corresponding network36, such as can include a local area network or a wide area network (WAN) (e.g., the internet or a private WAN) or a combination thereof. As a further example, the communication interface34can send task data and/or analysis data derived from task data to a database38.
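The numeric check promised above: with a 4 mg/LSB accelerometer and the tilt relation a = g*sin(theta), one code step near level corresponds to roughly 0.23° of inclination, consistent with the sub-1.0° figure quoted for the ADXL345. The function name and baseline angles below are illustrative:

    import math

    LSB_G = 0.004    # 4 mg per least significant bit, in units of g

    def tilt_step_deg(baseline_deg=0.0):
        """Inclination change produced by a one-LSB change in the measured
        gravity component, starting from the given tilt angle."""
        a0 = math.sin(math.radians(baseline_deg))
        return math.degrees(math.asin(min(1.0, a0 + LSB_G))) - baseline_deg

    print(round(tilt_step_deg(0.0), 3))     # ~0.229 degrees near level
    print(round(tilt_step_deg(45.0), 3))    # ~0.324 degrees; coarser off level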
The database38stores test results data, such as obtained for a plurality of patients (e.g., from one or more health institutions) based on testing using any of the modules disclosed herein. For instance, the system10can be programmed to upload and transfer such data to the remote database38, such as an electronic health record (EHR) for the patient. Such transfer of data can be HIPAA compliant and provided over a secure tunnel (e.g., HTTPS or the like). The transfer of task data and/or analysis data can be automated to occur upon completion of one or more tests. Since the testing is performed via a computing device (e.g., tablet), the test results data can also include metadata associated with the testing environment (e.g., time, geographic location, temperature or the like) and the patient (e.g., demographic information, medical history or the like) to facilitate analysis of patient data. For instance, the data provided by the apparatus12can further be analyzed by an external analysis system39. The analysis system39can access the database38directly (e.g., within a firewall where the database38resides) or it may access the database via the network36over a secure link. Results data acquired for one or more modules for different patient cohorts can be aggregated together based on the testing metadata and assessed (e.g., by statistical processing) for a variety of purposes (e.g., clinical research and diagnosis). A provider may also employ an EHR system or other interface to access the test results stored in the database38. In this way, statistical analysis of a large patient population can be performed based on data collected from a plurality of different apparatuses, which can be distributed across a state, region, country or even the world. Moreover, since the set of tasks can be performed by patients using a portable computing apparatus (e.g., tablet computer, smartphone)12in the absence of a trained healthcare professional, a single provider or team of providers can monitor and service the needs of a much larger patient population than would otherwise be possible for traditional MS testing, which typically requires that each patient travel to and visit a testing site for evaluation. Additionally, the approach disclosed herein can provide a patient-centric neurological and neuropsychological performance self-assessment system. By implementing such testing in the system as part of a self-administered testing platform, related scoring and analysis can be generated by the computer automatically because data is collected by such computer, obviating the need for human involvement, and allowing error-free score generation. Further, the data collected is objective and as accurate as the sensors and collection system, thus providing for more reliable data and statistics. As mentioned above, the analysis and scoring can relate to evaluation of a patient's neurological function, neuromotor function and/or neuropsychological function. The computing apparatus12can also include a processing unit (also referred to as processor)16and memory14. The memory14can include one or more non-transitory memory devices configured to store machine readable instructions and/or data. The memory14could be implemented, for example, as volatile memory (e.g., RAM), nonvolatile memory (e.g., a hard disk, flash memory, a solid state drive or the like) or a combination of both.
The processing unit16(e.g., a processor core) can be configured in the system for accessing the memory14and executing the machine-readable instructions. A user may enter commands and information into the computing apparatus12through one or more external input devices, such as the touch screen28or other user input devices (e.g., a force transducer and stylus apparatus, pegs, a microphone, a joystick, a game pad, a scanner, or the like)32. Such external devices could be coupled to the computing system via the I/O circuitry26. By way of example, the memory14can store a variety of machine readable instructions and data, including an operating system18, one or more application programs20, other program modules22, and program data24. The operating system18can be any suitable operating system or combination of operating systems, which can depend on the manufacturer and vary from system to system. In some examples, the application programs and program modules for implementing the functions of the test apparatus disclosed herein can be downloaded and/or updated and stored in the memory14for execution by the processor16. The application programs20, other program modules22, and program data24can cooperate to provide motor and cognitive testing via the computing apparatus12, such as disclosed herein. Additionally, application programs20, other program modules22, and program data24can be used for computing an indication of motor, cognitive or a combination of motor and cognitive functions of a patient based on the task data acquired during testing, such as disclosed herein. As a further example, the application programs20can be programmed to implement a battery of tests designed to gather task data for evaluation of a patient's MS condition. For example, the system10can include the following test modules programmed to collect data24, including a manual function test module, a cognitive processing speed test module, a nine-hole peg test, and a movement assessment test module (and other test modules that can be used to assess the cognitive-motor performance). The movement assessment test module can include one or both of a balance test module and a gait assessment module. The data24can be analyzed to characterize the patient's cognitive and motor performance, individually or both simultaneously, to provide a quantitative assessment of the patient's MS condition. The data24can be analyzed separately for each of a plurality of individual tests to compute a score for each test. Additionally or alternatively, the data24for the set of tests can be aggregated to compute an overall score for the patient, which can also be stored in the memory14as part of the data24. The analysis of the data24can be performed at the apparatus12, which is programmed to execute such testing. In other examples, the analysis of the data24can be performed remotely, such as by the remote analysis system39in response to the data being uploaded from the apparatus12to the remote database38. Regardless of whether the analysis is performed by the apparatus12, by the remote analysis system39or a combination thereof, since the analysis of the data can be performed by a computer according to test results data, the analysis can provide a more robust characterization of the neurological, neuropsychological and cognitive functioning.
As a result, the approach disclosed herein can in turn ascertain more useful information in distinguishing MS or other conditions from expected norms, and further distinguish severity within a condition and over time for each patient, such as based on a historical analysis of test data over a period of time (e.g., one or more years). Additionally, such data can be automatically entered into clinical or research databases, thereby eliminating the need for manual entry of data by a human, and allowing error-free data entry. Further, the data may be saved in a format that makes longitudinal and/or population comparisons more efficient. FIGS.2and3depict examples of respective applications (e.g., stored in memory as machine readable instructions)40,50that can be used to produce the test results data that can be used to evaluate a patient's neurological and cognitive function. Each of the applications40,50can be stored in the memory14ofFIG.1and be executed by the processor16ofFIG.1, for example. The applications40,50each include machine readable instructions for an MS performance test (MSPT) and corresponding data that can be programmed to test and evaluate MS status and/or condition of a patient. The applications40,50each include modules that can employ a plurality of discrete tasks that capture corresponding data. In the examples ofFIGS.2and3, the modules include a manual function test module42,52; a cognitive processing speed test module44,54; a movement assessment test module46,56; and a collection module48,58. The applications40,50can also include one or more additional function test modules47,57. Application50also includes a scoring module60. The manual function test module42,52can evaluate the manual dexterity of a given patient in response to a first set of user inputs (FUI) based on a manual dexterity test executed by the manual function test module42,52. The manual function test module42,52can store corresponding manual dexterity test data (MDTD) in the memory based on the first set of user inputs (FUI) indicative of a measure of the given patient's manual dexterity. The cognitive processing speed test module44,54can evaluate a cognitive function of the given patient in response to a second set of user inputs (SUI) based on a cognitive processing speed test. The cognitive processing speed test module can store corresponding cognitive function test data (CFTD) in the memory based on the second set of user inputs (SUI) indicative of the given patient's cognitive function. The movement assessment test module46,56can evaluate center-of-gravity movement of the given patient in response to motion test data (MTD) acquired during a physical activity (PAI) of the given patient. The movement assessment test module46,56can store the motion test data (MTD) in the memory indicative of the center-of-gravity movement of the given patient. The collection module48,58can aggregate test data (TD) based on the manual dexterity test data (MDTD), the cognitive function test data (CFTD) and the motion test data (MTD). The collection module48,58can also aggregate data (AFTD) from any additional function test module47,57into the test data (TD). The modules of applications40,50can execute tests (also referred to as tasks or trials) that provide outputs that can be utilized to characterize the cognitive and motor state of the patient. The tasks can be programmed to provide and/or coordinate with a graphical user interface (GUI) that displays graphics corresponding to the test.
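As a rough illustration of the collection module's role, the sketch below aggregates per-module results into a single test data record. The field names follow the abbreviations above (MDTD, CFTD, MTD, AFTD, TD), but the record layout itself is an assumption made for illustration rather than the patent's data format:

    from dataclasses import dataclass, field

    @dataclass
    class TestData:                 # the aggregated test data (TD)
        mdtd: dict                  # manual dexterity test data
        cftd: dict                  # cognitive function test data
        mtd: dict                   # motion test data
        aftd: list = field(default_factory=list)   # additional function test data

    def collect(mdtd, cftd, mtd, extra=()):
        """Aggregate results from the individual test modules into one record."""
        return TestData(mdtd=mdtd, cftd=cftd, mtd=mtd, aftd=list(extra))

    td = collect({"total_time_s": 41.2}, {"correct": 55}, {"sway_rms": 0.8})
    print(td.mdtd["total_time_s"])   # -> 41.2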
The modules and/or tests can be programmed to collect data in response to user inputs and user interactions during the test. The data acquired during testing can vary based on the test being performed, the test module being executed, and the input devices activated to provide input data. The arrangement of this data and specificity can depend on application requirements and user preferences. Each of the applications40,50can sample active input devices for each test module and test combination, along with related data (e.g., identifying timing, test ID, module ID) to facilitate analysis thereof. The sample rate for a given input source further can vary depending on the input device operating parameters and the information being collected. Examples of input data that can be collected can include clock data, accelerometer data, gyroscope data, GUI data, UI device data and analysis data. The accelerometer data can be acquired by sampling an output of one or more accelerometers (e.g., sensors30ofFIG.1) to provide an indication of acceleration along one or more orthogonal axes. The gyroscope data can be acquired by sampling an output of a gyroscope (also referred to as a gyrometer). The GUI data can represent user interactions received in response to user input (e.g., as can be made via display/touch screen28ofFIG.1) during a respective test. Text and graphical objects can be visualized on a touch screen to instruct the user for performing the various tests for each respective test module. The GUI data can also include graphical and other information that is provided as part of the test and results of the test responsive to user interactions. For example, the results and other information in the GUI data can include timing information obtained during the test, based on a system clock (e.g., of the computing apparatus12ofFIG.1) to provide timing information for when user inputs are received. Analysis and meaning attributed to the GUI data depending on the context of the test and test module being executed can also be stored, such as forming part of the GUI data or the analysis data. The data can also include user input (UI)/device data that includes data collected from one or more user input devices (e.g., from external device32ofFIG.1) during a respective test. For example, the user input device can include a single axis or multi-axis force (torque) transducer that can be utilized to measure a gripping force and associated coordination of a given patient under test. The device can be in the form of a cone-shaped or cylindrical structure to be gripped by the user and includes a force transducer to measure the user's gripping force. Other force sensors may include, but are not limited to, the use of springs, strain gauges, piezoelectric materials, and electromagnetic transducers. In some examples, the gripping structure can be utilized to engage graphical objects presented on a display (e.g., a touch screen) via user interactions. The interactions can be detected via the touch screen to provide corresponding GUI data. Thus, it is understood that the input data recorded for a given test can involve more than one type of data from one or more different input sources. In some examples, the input device can also include other sensors (e.g., accelerometers and a gyroscope) such as to provide additional information associated with movement of the gripping structure by the user during the test.
Depending on the capabilities of the UI/device and test requirements, the UI/device data can also include other information relevant to tests or the test environment, such as timing information (e.g., a timestamp applied to other data), temperature, altitude, user inputs received at the device and the like. Thus, the input data can include a combination of data from disparate and separate devices (e.g., from a gripping device, clock, and from the touch screen) that can be utilized to perform each respective test. The type of movement and interactions requested can vary from test to test. In the example ofFIG.2, the analysis of the test data (TD) can be performed by a remote analysis system, while in the example ofFIG.3, the analysis of the test data (TD) can be performed by a scoring module60and a disability score (DS) can be provided to the remote database. The scoring module60can, for example, characterize the cognitive and motor abilities of the given patient based on percentiles of neurological normal function for the manual dexterity test data, the cognitive function test data and the motion test data. It will be appreciated that the scoring function and/or scoring module60can use another means to determine the cognitive and motor abilities of the patient with respect to neurological normative values that gives an understanding of the patient's disease state and/or progression. The scoring module60can compute one or more scores that can be used to evaluate the cognitive and motor abilities of the patient. The score can be a score for a given test, such as implemented by each of the test modules52-58. In other examples, the score can be a combined score based on result data collected based on tasks executed for two or more of the test modules. In yet other examples, individual tasks of a given test can also be analyzed to compute a respective score. Each of the scores, regardless of the manner computed, can be stored in memory as part of the analysis data. As mentioned, the scoring function can be programmed to compute each score automatically based on the test data acquired by each respective test module. Scoring may also take into account patient longitudinal data, i.e., data taken during similar tests on the same patient during different sessions over a period of time. Additionally, since each of the tests can be implemented according to respective test modules, each respective module can be updated independently as new data and testing paradigms might become available. Thus, the MSPT application is scalable and extensible. Examples of the manual performance test module that can be used to evaluate a patient's manual dexterity are shown inFIGS.4-6.FIG.4depicts an example of a manual performance test module62that can be used to evaluate a patient's manual dexterity.FIG.5depicts a schematic example70of a standard nine-hole peg test that can be used in conjunction with a touch screen computing device to evaluate the patient's manual dexterity.FIG.6depicts an example flow of the execution of a manual function test module80. FIG.4depicts an example of a manual performance test module62that can be used to evaluate the patient's manual dexterity. The user actions can be prompted by graphical and/or audible indicators to initiate the test. At element64, the first set of user inputs can be received, each in sequence, by the computing device (e.g., a tablet computer or a smart phone).
The user inputs can be, for example, a touch by a user's finger or a peg device to a touch screen of the mobile computing device. At element66, the total time for the given patient to complete the first set of user inputs can be calculated. Other parameters can also be calculated (e.g., force, time for individual tasks, and the like). The total time (and other parameters) can be an output and/or a result of the manual function test module that is part of the test data and scored by a scoring function. FIG.5depicts a schematic illustration of an example implementation of a testing apparatus70corresponding to a computer-implemented (e.g., electronic) analog of a nine-hole peg test that can be used to evaluate the patient's manual dexterity. A platform constituting a test fixture72can be placed in a test position on a touch-sensitive screen human-machine interface of the testing apparatus (e.g., a tablet computer)70. As disclosed herein, the test fixture72can be pivotably connected to a base of the apparatus70, which is attached to the computing device, to provide for rotational movement of the test fixture with respect to the touch screen of the computing device between the test position and a support position in which the test fixture is operative to support the base and the computing device. The test fixture72includes a plurality of receptacles (or holes)74a-ifor receiving contact members that, when placed in the receptacles while the test fixture is in the test position, enable interaction that is detectable by the computing device even in the absence of direct contact by the user. The test module (e.g., module62or80) is also programmed to expose a GUI (based on executing preprogrammed MSPT instructions stored in memory of the tablet computer) on the touch screen73for instructing the user during the test. The instructions, including graphical indicators at locations to place the contact members, can be viewable through the receptacles and, in some examples, the test fixture. During testing, the manual function test module (module62) stores the test data corresponding to user inputs in response to placing the contact members into the receptacles while the test fixture is in the test position during the execution thereof. As mentioned, the test module calculates time values associated with moving respective contact members from the start position to the identified locations on the screen (predetermined locations that align with the receptacles), and stores the computed time values as part of the test data. The contact members can be electrically conductive pegs (e.g., metal pegs) that can be removed from starting receptacles78a-iand inserted into the receptacles74a-i, and the touch screen interface can detect when the pegs are in contact with the screen. Examples of testing apparatuses are disclosed herein with respect toFIGS.24-33. Other examples of a testing apparatus70that includes a test fixture72and contact members78that can be utilized in conjunction with a computing device having a capacitive touch screen are disclosed in the above-incorporated U.S. patent application Ser. No. 14/503,928 filed Oct. 1, 2014 and entitled OBJECT RECOGNITION BY TOUCH SCREEN, which published as US Pat. Pub. 20150094621 and is incorporated herein by reference.
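A simplified Python sketch of the timing computation described for the electronic nine-hole peg test follows. It assumes, hypothetically, that the touch screen interface reports "down" and "up" contact events with coordinates; the time for each move is then measured from release of a peg at a start position to its insertion at a target receptacle. The event model and tolerance are assumptions for illustration.

    import math

    TOLERANCE = 20.0  # px radius around each receptacle center (assumed)

    def nearest(pos, centers):
        # Index of the receptacle whose center is within tolerance, or None.
        for i, c in enumerate(centers):
            if math.dist(pos, c) <= TOLERANCE:
                return i
        return None

    def peg_move_times(events, start_row, target_holes):
        # events: list of (t, kind, (x, y)) with kind in {"up", "down"}.
        # Returns elapsed times from lifting a peg out of the start row
        # to seating it in a target receptacle.
        lift_time, times = None, []
        for t, kind, pos in events:
            if kind == "up" and nearest(pos, start_row) is not None:
                lift_time = t                    # peg removed from start row
            elif kind == "down" and nearest(pos, target_holes) is not None:
                if lift_time is not None:
                    times.append(t - lift_time)  # move completed
                    lift_time = None
        return times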
As disclosed herein, when a contact member (e.g., one of the conductive pegs) engages or otherwise is capacitively coupled with the surface of the touch screen (e.g., a capacitive touch screen) with or without human contact, an electrically conductive circuit is established with the touch-sensitive surface, which includes an electrical path from the contact member to an electrical ground of the computing device. The path can establish a sufficient flow of electrons to enable the electrical characteristics (e.g., capacitance) of the touch screen to change so that the engagement can be detected even in the absence of human contact. Since the contact member can be detected by the touch-sensitive surface in the absence of contact by the subject, based on the electrically conductive path that is established when a given peg is inserted into a receptacle of the test fixture overlying the touch screen surface, each peg can be detected by the touch screen interface during the test even after it is released by the user. The manual function test module (e.g., module62or80) can track data related to the nine-hole peg test, including, but not limited to: a position of at least one peg, as well as various times, including the time to complete the nine-hole peg test, a time for peg insertion, a time for peg removal, and/or a force used to insert or remove the peg. Pegs can have any shape such as elongated cylindrical members (e.g., having circular or other cross-sectional shapes). In one example of the test, the test is initiated with the pegs inserted in a row at the bottom of the screen, as demonstrated inFIG.5. Thus, each peg is detected by the touch screen in the row, resulting in a graphical indicator being displayed on the screen at the location corresponding to each peg. The test ends when the user returns all of the pegs to their starting positions in the row. The timing for moving each peg from the row to one of the nine holes can be computed automatically by the computing device and utilized for assessing the manual dexterity of the user. In a second example of the test, designed to more closely simulate a traditional 9-hole peg test, the pegs are placed in the center bowl (such as may reside between the test receptacles74and the starting receptacles78). The test ends after the pegs have been inserted into and removed from all the holes and all pegs are returned to the discard area or starting position. Various instructions75can be visible through the housing and/or adjacent to the housing (in an uncovered portion of the screen73) to help guide the user through one or more tests. Instructions can also be rendered as audio that can be provided via speakers (e.g., external speakers of the device or headphones connected to an audio jack). FIG.6shows an example flow of the execution of the manual function test module80that can quantify manual dexterity during the performance of an upper extremity task. The manual function test module80can include a plurality of sub-modules, each of which can include respective functions. As shown inFIG.6, the sub-modules can include a setup module82, a data collection module84, a data processing module86and a data analysis module88.FIG.6is described with respect to a tablet computer and the electronic analog of the nine-hole peg test ofFIG.5, but it will be appreciated that other mobile computing devices and/or other types of tests can be implemented by the manual function test module80.
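Before the individual sub-modules are described, their composition can be illustrated with the following organizational sketch; the class structure and placeholder method bodies are assumptions and show only one plausible way the setup, collection, processing, and analysis stages ofFIG.6might be chained.

    class ManualFunctionTest:
        # Hypothetical composition of the sub-modules of FIG. 6.

        def setup(self):
            # Confirm the test fixture and pegs are positioned (cf. module 82).
            return {"fixture_placed": True, "pegs_staged": True}

        def collect(self):
            # Sample the touch screen for peg events (cf. module 84).
            return []  # placeholder for a raw event list

        def process(self, raw):
            # Filter noise such as "peg bounce" and segment phases (cf. module 86).
            return raw

        def analyze(self, clean):
            # Compute times and statistics for scoring (cf. module 88).
            return {"total_time_s": None}

        def run(self):
            self.setup()
            return self.analyze(self.process(self.collect()))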
The setup module82can facilitate setting up the manual function test, such as can include data90specifying that the housing of the nine-hole peg test has been positioned on the touch screen, which can be automatically detected by the touch screen or in response to a user input. Additional setup data92can be provided to specify that the pegs of the nine-hole peg test have also been positioned at their respective starting positions, which can be detected automatically or in response to a user input responding to a query. In an example, the mobile computing device executing the test module80can be a tablet computer (e.g., an iPad tablet computer available from Apple, Inc. or another computer having a touch screen interface). The housing of the test apparatus (housing72ofFIG.5; see alsoFIGS.24-33) can be positioned on the touch screen such that the holes in the housing can correspond to GUI input points on the touch screen. The pegs can be positioned in a row or in the discard tray or adjacent storage container depending on the test process and configuration of the housing of the testing apparatus. The pegs can be of a diameter smaller than the diameter of the holes, for example, to allow ease of fit, and a length greater than the distance between the touch screen and the holes in the housing, for example, to allow a user to place or remove a peg from the housing. The data collection module84can collect data related to the nine-hole peg test. The data collection module84can record a position of each peg (e.g., in the X and Y direction) on the screen94. The data collection module can sample the touch screen (e.g., via a touch screen API) to detect position data96representing a location of each of the pegs at a predefined sample rate (e.g., about 60 Hz or a higher or lower rate). At each sampling interval, the time associated with any insertion and/or removal event of a peg can be recorded and stored in memory as insertion or removal data98. The data processing module86can be configured to process input data for subsequent analysis. For example, the data processing module can include a filter100to remove noise and artifacts from the collected data. For example, the filter can operate to remove artifacts due to "peg bounce" from data collected from the touch screen. The data processing module86can also be configured to identify a phase shift102from insertion of the peg to removal of the peg with respect to the test fixture that is overlying the screen. The data processing module can also include a timing monitor (e.g., clock)103to track timing associated with data collected during execution of the test module80. For instance, the timing monitor103can determine factors, such as the total time to complete one cycle of insertion and removal of all 9 pegs. The timing monitor103for example can associate a time stamp with all input data, including position data96from the touch screen and force information from a force transducer. Additionally, the timing monitor103can operate in conjunction with the touch screen interface to indicate a time of insertion and removal of each peg relative to its removal from the storage tray or home row, and the difference in time to complete the tasks. In another example, the data collection module84can include a force calculator101programmed to compute force during a series of tasks for measuring the patient's manual dexterity.
The manual function test module80can execute instructions, for example, to display a series of GUI objects on a display with which the user is to interact by employing one or more gripping apparatuses (e.g., the external user input device32mentioned with respect toFIG.1). As one example, the user can be instructed to select an appropriate gripping device and move an end of the device into engagement with a GUI object displayed on the touch screen. Different shapes and sizes of devices can be used or a single generic gripping device can be used. In addition to measuring gripping force during the test, the force calculator101can compute other movement and force related information (e.g., force variability) based on the output of a force transducer with which the user interacts and/or interaction with the touch screen. For example, detected data from the force transducer can be communicated to the computer (e.g., via a wired or wireless link) and the force calculator can convert the data into a force measurement. The manual function test module80can also record other test information, such as timing based on the timing monitor103and other information attributes based on how the user moves the gripping device and how the user interacts with the touch screen during each task. The data analysis module88can analyze the data and create the output data (e.g., MDTD) that is aggregated as part of the test data (e.g., TD) for future scoring. The data analysis module88can analyze one or more time parameters104. The time parameters104can include a total time to complete the test, an insertion time for a peg, and/or a removal time for a peg. The time can also be computed as a time difference between any two sequential events. Statistical data (e.g., mean and standard deviation) related to the time values can also be computed and stored in memory. The data analysis module88can also measure a learning or fatigue effect106with the peg insertion or removal time, such as based on an analysis of how timing changes between subsequent peg insertions and/or removals during execution of a given session of the manual function test module80, such as when the same set of tasks is repeated as part of the manual function test or if different tests are performed. Examples of a standard cognitive processing speed test module that can be used to evaluate a patient's cognitive processing speed are shown inFIGS.7-10.FIG.7depicts an example of a cognitive processing speed test module110that can be used to evaluate a patient's cognitive processing speed.FIGS.8and9depict schematic example screen shots of interactive GUIs for cognitive processing speed tests116and124, respectively, which can be generated on a touch screen by the cognitive test module to evaluate a patient's cognitive processing speed.FIG.10depicts an example flow diagram demonstrating the execution of the cognitive processing speed test module130. FIG.7depicts an example of a cognitive processing speed test module110that can be used to evaluate a patient's cognitive processing speed. The cognitive processing speed test module110can include a symbol generator, a key generator, a timing monitor and an analysis function. At element112, each input of a set of user inputs can be received. The set of user inputs can be received from a user via a user interface, such as a touch screen of a mobile computing device (e.g., a tablet computer or a smart phone). At element114, the time between each input can be determined.
Also at element114, whether the input is a correct or incorrect response to a prompt can be determined based on the user selection. The time and accuracy can be stored in memory. A score can be determined based on a number of correct responses in a time period for a speed test trial. The number of correct responses during the time period can be aggregated as part of the test data (TD). Additionally or alternatively, the score can be evaluated relative to pre-test data (from a control group, longitudinal patient data, and/or acquired during an un-timed pre-test). As an example, overall test control can employ the cognitive processing speed test module54to implement a test (e.g., using the computing apparatus12ofFIG.1) to require that a user repeatedly associate a symbol (e.g., a digit 1-6 ofFIG.8) provided by the symbol generator with a random or pseudorandom key (e.g., S1-S6ofFIG.9) generated by the key generator. Examples of the different symbols that can be associated with different numbers for the cognitive processing speed test module are shown inFIG.9, which depicts an example screen shot showing a GUI124for implementing a processing speed test. As shown inFIG.8, the GUI can provide a key (e.g., randomly generated) and a sequence of characters that a user is to match during the testing118. The randomly generated key can provide random number/symbol pairings for each administration. The participant records responses by using the keyboard at the bottom of the screen122. The middle section of the screen120is replaced with a new set of symbols when a response is recorded to the last symbol. The testing can record data indicative of both accuracy and speed for each phase of such testing. The processing speed test demonstrates psychometric properties comparable to the more traditionally used symbol digit modalities test. The cognitive processing speed test module110can also be programmed to provide additional measures beyond a simple measure of accuracy. The timing monitor can record the time to complete each task and the test as a whole. The timing monitor can also be employed to supply a time base for interactions during the test. For example, if the user is dragging a graphical object (e.g., with a finger or stylus), timing can be utilized to compute acceleration and deceleration effects for such user interactive dragging events. Other cognitive functions tested by the cognitive processing speed test module110can include memory recall, attention and mental fatigue. FIG.10depicts an example flow of the execution of the cognitive processing speed test module130that can be stored in memory and executed to evaluate a cognitive function of the given patient. The cognitive processing speed test module130can include a plurality of sub-modules, each of which can include one or more respective functions. As shown inFIG.10, the sub-modules can include a setup module132, a data collection module134, a data processing module136and a data analysis module138.FIG.10is described with respect to a tablet computer and in the context of the corresponding symbol digit modalities test shown inFIGS.8and9, but it will be appreciated that other mobile computing devices and/or other types of tests can be implemented by the cognitive processing speed test module130.
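The matching task itself can be illustrated with a short Python sketch: generate a random symbol-to-digit key for each administration, then score each keyed response for correctness and latency. The placeholder symbol names and simulated responses are assumptions and do not correspond to the actual test assets.

    import random

    SYMBOLS = ["S1", "S2", "S3", "S4", "S5", "S6"]  # placeholder symbols

    def make_key():
        # Random symbol-to-digit pairings, regenerated per administration.
        digits = random.sample(range(1, 7), 6)
        return dict(zip(SYMBOLS, digits))

    def score_responses(responses, key):
        # responses: list of (symbol, answered_digit, response_time_s).
        # Returns the correct count and per-response latencies.
        correct = sum(1 for sym, ans, _ in responses if key[sym] == ans)
        latencies = [rt for _, _, rt in responses]
        return correct, latencies

    key = make_key()
    # Simulated responses in which the user answers each prompted symbol.
    resp = [(s, key[s], random.uniform(0.8, 2.0)) for s in random.choices(SYMBOLS, k=10)]
    n_correct, latencies = score_responses(resp, key)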
The setup module132can present an instructional tutorial140on the mobile computing device to establish test competency. The data collection module134can collect data related to the cognitive processing speed test. The data collection module134can record each response with a time stamp142, sampling for responsive inputs at a suitable sample rate (e.g., about 60 Hz or a higher or lower rate)144. The responsive inputs can also be recorded with respect to test parameters146(e.g., key and symbol layout). The data processing module136can include a time calculator148to calculate the time between the individual input responses. The data processing module136can also include a function150to determine whether each individual input response is correct or incorrect. The data analysis module138thus can analyze the data and store corresponding output data (e.g., CPSTD) that is aggregated as part of the test data (e.g., TD) for subsequent overall test scoring. The data analysis module138can determine the total score correct in the time period152. The data analysis module138can also be programmed to identify any inter-trial learning or fatigue effect (and correct for these effects). Additionally or alternatively, a score can be evaluated relative to pre-test data (from a control group, longitudinal patient data and/or acquired during an un-timed pre-test). Examples of the movement assessment test module that can be used to evaluate a patient's center-of-gravity movement are shown inFIGS.11-19.FIG.11depicts an example of a movement assessment test module160that can be used to evaluate a patient's center-of-gravity movement.FIG.12depicts a schematic example168of a computing device (e.g., mobile computer apparatus)169attached to a patient's body for conducting a movement assessment test.FIG.13depicts another example of a movement assessment test module170that includes a balance test module172and a gait test module174.FIG.14depicts an example of a balance test module180that can evaluate a patient's balance based on measuring a patient's center-of-gravity movement.FIG.15depicts an example186of a balance test that can be used to evaluate a patient's balance.FIG.16depicts an example flow of the execution of the balance test module190.FIG.17depicts an example of a gait test module230that can evaluate a patient's gait based on a center-of-gravity movement.FIG.18depicts a schematic example of calculators used by the gait test module240to evaluate a patient walking a predetermined distance based on the patient's center-of-gravity movement.FIG.19depicts an example flow of the execution of the gait test module250. InFIG.11, the movement assessment test module160includes instructions executed to evaluate a center-of-gravity movement of the given patient in response to motion test data acquired during a physical activity (static or dynamic). The movement assessment test module160can receive accelerometer data (e.g., multi-axial accelerometer data associated with a movement162) and gyrometer data (e.g., multi-axial gyrometer data associated with the movement164). The accelerometer data and gyrometer data can be sampled from an accelerometer and gyroscope of the computing device and stored in memory during a respective task. The tasks can include a balance task (e.g., provided by the balance test module172of the movement assessment test module170ofFIG.13) and/or a gait test (e.g., provided by the gait test module174ofFIG.13).
To complete the tasks, the patient can wear or hold the portable computing device during a static test (e.g., balance test) or a dynamic test (e.g., gait test). For example, the movement assessment test module160ofFIG.11can be executed by a computing device169while attached to the patient, such as demonstrated inFIG.12.FIG.12demonstrates a mobile computing device (e.g., tablet computer or smart phone)169fixed on the patient's lower back at or approximating the sacral level. For instance, one or more straps or a belt171can be secured to the device and used to hold the computing device169, for example, on the patient's lower back during execution of the movement assessment test module160ofFIG.11. In some further embodiments, the computing device may be attached, for example, with Velcro, snaps, buttons, pockets, elastic material or ties. In some embodiments, the patient may hold the computing device. In some embodiments, the computing device may be attached to the head, back, chest, abdomen, arms and/or legs. This testing configuration can be used for both static testing (e.g., balance test) and/or dynamic testing (e.g., gait test). InFIG.11, at element166, the center-of-gravity movement can be calculated based on the acceleration data and the gyrometer data for the patient. The acceleration data and the gyrometer data can be acquired by one or more accelerometers and gyrometers in the computing device169. An angular displacement can also be computed based on the gyrometer data, which can be part of the center-of-gravity movement computed by the test module160at166. The movement assessment test module160can be programmed to translate the acceleration data and gyrometer data to the patient's center of gravity based on placement of the computing apparatus at a predetermined position during execution of the test module160. FIG.14depicts an example of a balance test module180that can be configured to evaluate a patient's balance based on a static center-of-gravity movement. The balance test module180can determine a volume of an ellipsoid in three-dimensional space corresponding to the center-of-gravity movement of the patient, demonstrated as function182. A center-of-gravity movement during a static balance test corresponds to a lack of balance. The center-of-gravity movement is analyzed for balance data under different conditions, demonstrated as function184. An example of the different conditions is shown inFIG.15, which depicts an example screen shot186showing a GUI for one type of balance test. In this example, instructions are provided to the user on how to implement the test, such as can include a plurality of tests for a predetermined duration. Data from sensors (e.g., one or more accelerometers, magnetometers and a gyroscope) can be collected during each test and a corresponding score can be computed based on such results. FIG.16depicts an example flow of the execution of the balance test module190that can evaluate a balance function of the given patient. The balance test module190can include a plurality of sub-modules, each of which can include respective functions. As shown inFIG.16, the sub-modules can include a setup module192, a data collection module194, a data processing module196and a data analysis module198.FIG.16is described with respect to a tablet computer and the electronic analog of the balance test shown inFIG.15, but it will be appreciated that other mobile computing devices and/or other types of tests can be implemented by the balance test module190.
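The core computations attributed to the movement assessment and balance test modules above, integration of gyroscope samples to angular displacement and an ellipsoid bound on per-axis center-of-gravity (COG) movement (elaborated in the flow that follows), can be outlined with the Python sketch below; sample handling, filtering, and the 95% interval approximation are simplified assumptions.

    import math

    def integrate_gyro(gyro_samples, dt):
        # Trapezoidal integration of angular rate (rad/s) per axis to
        # angular displacement (rad).
        disp = [0.0, 0.0, 0.0]
        for k in range(1, len(gyro_samples)):
            for ax in range(3):
                disp[ax] += 0.5 * (gyro_samples[k][ax] + gyro_samples[k - 1][ax]) * dt
        return disp

    def ci95_halfwidth(series):
        # Half-width of an approximate 95% interval (1.96 standard deviations).
        n = len(series)
        mean = sum(series) / n
        var = sum((x - mean) ** 2 for x in series) / n
        return 1.96 * math.sqrt(var)

    def log_ellipsoid_volume(cog_x, cog_y, cog_z):
        # Volume of the ellipsoid enclosing the per-axis 95% interval of
        # COG movement, log normalized as described for the balance test.
        a, b, c = ci95_halfwidth(cog_x), ci95_halfwidth(cog_y), ci95_halfwidth(cog_z)
        vol = (4.0 / 3.0) * math.pi * a * b * c
        return math.log(vol) if vol > 0 else float("-inf")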
The setup module192can position200the testing apparatus on the patient's back and configure the time interval for the balance test (e.g., 30 second trials202). The data collection module194can collect data from the accelerometer204and the gyroscope206, each sampled at, for example, 100 Hz. The data processing module196can normalize208the data for initial apparatus orientation and placement, perform a low pass filter210operation on the data, integrate212the gyroscope data to resolve angular displacement and calculate time-series center-of-gravity (COG) movement214from accelerometer, gyroscope, and angular displacement data. The data analysis module198can analyze the data and create the output data that is aggregated as part of the test data (e.g., TD) for future scoring. The data analysis module198can determine a 95% confidence interval (CI) of time-series center-of-gravity movement per axis216; a volume of an ellipsoid that encompasses the 95% CI; a log normalized volume220; and a per-axis analysis for the effect of eyes open and eyes closed222conditions. FIGS.17and18each depict examples of a gait test module230,240that can be programmed to evaluate a dynamic condition (e.g., walking speed in a 25-foot walk test) for the patient. The evaluation can be based on the accelerometer data and gyroscope data, which can be used in the computation of a walking speed, a cadence, a stride length, a direction, and a variability in one or more of the other computed measures or other variations that might be determined from the acceleration and gyroscope data. FIG.17depicts a gait test module230that can determine a volume of an ellipsoid corresponding to a center-of-gravity movement of the patient232and analyze the center-of-gravity movement for gait data under walking conditions234. The analysis can be completed using the components ofFIG.18, an efficiency calculator242and a quality calculator244. The efficiency calculator242can compute a measure of gait efficiency for each axis based on the center-of-gravity movement determined along each axis during a gait trial where the patient is walking a predetermined distance. For example, efficiency can be based on a comparison of a measure of movement in the direction of locomotion relative to movement that is not in the direction of locomotion (e.g., anterior-posterior versus medial-lateral motion), such as can be derived from accelerometers and gyrometers attached to the patient during testing. The quality calculator244can compute a measurement of gait quality for each axis based on the center-of-gravity movement determined along each axis during the gait trial and based on the time for the patient to walk the predetermined distance. Gait quality, for example, can include efficiency as well as gait symmetry (e.g., a difference between left and right side motions) and jerk/accelerations that might occur during testing, also based on measurements from accelerometers and gyrometers attached to the patient during testing. Gait data can be compared against controls, patient populations and longitudinal patient data. FIG.19depicts an example flow of the execution of the gait test module250that can include instructions executed to evaluate a dynamic motion task of the patient. The gait test module250can include a plurality of sub-modules.
As shown inFIG.19, the sub-modules can include a setup module252, a data collection module254, a data processing module256and a data analysis module258.FIG.19is described with respect to a tablet computer, but it will be appreciated that other mobile computing devices and/or other types of tests can be implemented by the gait test module250. The setup module252can ensure that the apparatus is positioned on the patient's lower back260, establish parameters for a 25-foot walking trial262, and set a duration dependent on the time to complete the 25-foot walk. The data collection module254can collect accelerometer data264(e.g., three dimensional accelerometer data from the apparatus) and gyroscope data266(e.g., three dimensional gyroscope data from the apparatus), both sampled at, for example, 100 Hz. The data collection module254can also determine a time for the patient to complete the 25-foot walk268. The data processing module256can normalize270the data for the initial position (orientation and placement) of the apparatus, low pass filter the data272, integrate274the gyroscope data to resolve angular displacement, and calculate the time-series center-of-gravity movement276from accelerometer, gyroscope, and angular displacement data. As an example, the data analysis module258can determine a 95% confidence interval (CI) of the time-series center-of-gravity movement per axis278, determine a volume of an ellipsoid that encompasses the 95% CI280, log normalize the volume282, and perform a per-axis analysis for measures of gait efficiency and quality284. An example of an additional function test module (e.g., module47inFIG.2or module57inFIG.3) is a visual acuity test module. The visual acuity test module can include instructions programmed to evaluate visual function of the patient in response to user inputs, which can be stored in memory as the UI device data. The visual acuity test module can include a contrast control, such as to provide tests for both static and dynamic visual acuity. For example, a first part of the test can establish baseline static acuity data for the patient. Following the static visual acuity test, the contrast control can vary the contrast in a dynamic manner for a plurality of tests. The data between static and dynamic visual acuity can be analyzed to ascertain an indication of patient visual acuity. The data can include an accuracy level for the test as well as a time to complete each phase of the test. Examples of the visual acuity test module that can be used to evaluate a patient's visual acuity are shown inFIGS.20-23.FIG.20depicts an example flow of the execution of the visual acuity test module290.FIGS.21-23depict schematic examples of a visual acuity test that can be used to evaluate a patient's visual acuity. FIG.20depicts an example flow that can include instructions executed by the visual acuity test module290. The visual acuity test module290can include a plurality of sub-modules, each of which can include one or more respective functions. As shown inFIG.20, the sub-modules can include a setup module292, a data collection module294, a data processing module296and a data analysis module298.FIG.20is described with respect to a tablet computer, but it will be appreciated that other mobile computing devices and/or other types of tests can be implemented by the visual acuity test module290. The setup module292can set the screen to full brightness301and position the apparatus302(e.g., 5 feet from the patient at eye level).
The data collection module294can collect data regarding the line size, letters displayed, and gradient levels303, as well as the number of correct responses305recorded per line (e.g., of a possible 5). The data processing module296can determine a per-line logMAR score306that is calculated based on the line size and the number of correct responses. The data analysis module298can determine the smallest readable letter at a given gradient level309. The smallest readable letter can be aggregated as part of the test data (TD). FIGS.21-23demonstrate examples of GUIs corresponding to different visual acuity tests that can be implemented for assessing a patient's visual function. In the examples ofFIGS.21-23, different levels of visual contrast are provided, such as can correspond to 100% contrast, 2.5% contrast and 1.25% contrast. Other levels of contrast can be provided for testing a range of visual acuity. The testing can record data indicative of accuracy for the test as well as speed for such testing in response to user inputs indicating each respective letter via a corresponding user input device (e.g., keypad or keyboard). FIGS.24-29illustrate an example testing apparatus300similar to the testing apparatus70ofFIG.5. The apparatus300includes a housing portion (e.g., constituting an enclosure)370for holding, storing, and transporting a computing device310in a compact, reliable manner while being lightweight and cost-efficient to produce. The computing device310is programmed with instructions executable (e.g., by one or more hardware processors) to perform one or more test modules to evaluate a patient's condition that affects cognitive and/or motor performance, such as disclosed herein. The housing370can be made by a number of different manufacturing techniques including, but not limited to, CNC machining, die casting, extrusion, laser-sintering (rapid manufacturing), 3D printing, silicone compression molding, thermoforming or laser-cutting and/or EVA foam molding. The housing370can be configured to be ergonomic and user-friendly by, for example, including one or more handles extending from the side(s) of the housing. The housing370should be easy and safe to carry while storing and protecting the computing device310and test fixture330. For instance, the housing370can be formed from a lightweight, durable material, such as a polymer or plastic. The housing370can be translucent and may be clear or colored. The housing370includes a base332and a platform constituting a test fixture330. In the example ofFIGS.24-29, the housing370has a rectangular shape and extends from a first end372to a second end374, which ends extend between and space apart opposing edges375and376. The base332of the housing370further can include lower and upper housing portions381and382. The perimeter of the base332may be covered in a rubberized material to facilitate gripping and increase the surface roughness along its perimeter. The housing370includes an interior space378for receiving the computing device310therein (see, e.g.,FIGS.28A and28B). One side of the base332includes a notch380extending into the interior space378. The platform330is pivotally attached to the base332via a hinge337positioned within the notch380to provide for rotation of the platform330with respect to the base332and a computing device310attached within the base. The platform330is sized and shaped to fit within and be readily accessed through the interior space378of the housing370.
This construction enables the platform330to pivot away from the computing device310and out of the interior space378by rotation of the platform in the direction R about the axis338. As a result, the test fixture330is movable relative to the computing device310between a testing position overlaying the touch screen within the interior space (see, e.g.,FIG.29) and a support position extending out of the housing to support the apparatus when placed on a surface (see, e.g.,FIG.24). In one example, the platform330can pivot through an arc of about 270° relative to the computing device310in the direction R. In other embodiments, the platform330can rotate up to 360° around the hinge to lie flat on the side opposite the screen or interface. The hinge337can be a friction hinge, such as biased by one or more springs347(seeFIG.27E). In some examples, such as shown in the example ofFIGS.24,25,26A and26B, the platform330may be T-shaped and include a perimeter334. A contact surface339of the platform330can be planar and have a shape and size designed to fit within the interior space and onto the screen312of the computing device310in an overlaying relationship, such as corresponding to a testing position shown inFIGS.25and29(e.g., for implementing a manual function test). A plurality of receptacles340(demonstrated as340aand340b) are formed in the platform330, such as to receive contact members (e.g., pegs) as disclosed herein. The receptacles340can be formed as a plurality of apertures extending through the platform330to provide access to the screen312of the computing device when the platform is in the testing position to provide a corresponding test fixture. The apertures340can extend completely through the platform330but, in other examples, alternatively can be blind, i.e., not extend entirely through the platform. The receptacles340can be arranged in one or more predetermined patterns according to testing requirements. As shown in the examples ofFIGS.24-29, one set of receptacles340aare arranged in a 3×3 array of evenly spaced rows and columns. This configuration is similar or identical to the apertures74a-iinFIG.5. The receptacles340amay extend completely through the platform330or partially therethrough to form receptacles for receiving contact members therein. In some examples, another set of apertures340bextends into the platform330and is arranged in a predetermined pattern in an area spaced from the array of apertures340a. As shown inFIG.29, nine apertures340bare arranged in a linear array along an edge of the platform330, with recesses extending between adjacent pairs of apertures. The receptacles340bcan be similar or identical to the shaded apertures shown inFIG.5and described herein. By way of further example, the apertures340a,340bare configured to releasably receive contact members (e.g., electrically conductive pegs)400such as corresponding to the pegs discussed with respect toFIG.5. The pegs400can have any shape but are circular cylindrical in this example. Consequently, the receptacles340a,340bare likewise cylindrical. Different cross-sectional shapes of pegs and receptacles can be utilized in other examples, such as for conducting different tests. The apertures340a,340bcan be countersunk or chamfered to facilitate insertion of the pegs400.
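Because the receptacles overlie predetermined portions of the touch screen, a test program can map the 3×3 array to expected contact coordinates and match detected touches to particular receptacles. The following sketch uses purely illustrative coordinates and tolerance; the actual layout would depend on the fixture and device.

    def grid_targets(origin, pitch, rows=3, cols=3):
        # Expected touch coordinates under the 3x3 receptacle array, given
        # the array's on-screen origin and hole-to-hole spacing (pitch).
        ox, oy = origin
        return [(ox + c * pitch, oy + r * pitch)
                for r in range(rows) for c in range(cols)]

    def match_touch(pos, targets, tol=20.0):
        # Index of the receptacle a detected contact falls within, or None.
        for i, (tx, ty) in enumerate(targets):
            if (pos[0] - tx) ** 2 + (pos[1] - ty) ** 2 <= tol ** 2:
                return i
        return None

    targets = grid_targets(origin=(300, 200), pitch=120)  # illustrative layout
    hole_index = match_touch((422, 318), targets)         # center receptacle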
An interior sidewall of the apertures340aand340bcan be electrically connected to the housing of the computing device310via a corresponding electrically conductive material342aand342bthat extends from a location within the recesses between receptacles to the hinge337, which is electrically coupled to electrical ground of the computing device310. For example, the inner sidewall surface of the receptacles includes electrically conductive material342to contact pegs that are insertable therein, connecting the pegs via a corresponding electrically conductive path in the recesses between receptacles, which can include the hinge337. The hinge337further can complete an electric circuit between electrically conductive sidewall portions of respective receptacles of the platform330and the housing of the tablet computing device310. As shown inFIG.26A, for example, electrically conductive material342can be provided as a sheet between the contact surface339and the opposing surface341within the platform330. The conductive material342can include apertures that align with each of the receptacles340. For instance, the conductive material342can extend along an interior sidewall of the receptacles340aand340b, such as may be in the form of a bushing, or other electrically conductive traces can be disposed along an interior sidewall of the receptacles340. The conductive material342can be electrically coupled to the hinge337via an arrangement of electrically conductive traces or wires342adisposed in the body or along a surface of the test fixture platform330. Since the conductive material at or near the sidewall of the apertures340ais electrically connected to the housing of the computing device310, an electrically conductive path can be established from the touch-sensitive surface, through the sidewall, through the hinge to the housing of the computing device310. The hinge337also electrically connects the test fixture330to the computing device310, such as by forming part of an electrically conductive path. The path can establish a sufficient flow of electrons to enable the electrical characteristics (e.g., capacitance) of the touch screen to change so that the engagement between the contact member and the touch screen can be detected even in the absence of human contact. Since the contact member can be detected by the touch-sensitive surface in the absence of contact by the subject, based on an electrically conductive path that is established when a given contact member is inserted into a respective aperture to contact the touch-sensitive surface, each individual contact member can be detected at a corresponding location during the test even after it is released by the user. FIGS.27A,27B,27C,27D and27Eillustrate an example of the hinge337and other parts constituting the electrical path for connecting the platform330to the chassis (electrical ground) of the computing device310. The hinge337includes a portion that is integrally formed with the platform330and includes an electrical contact341electrically connected to the electrically conductive material342within the apertures. A terminal block343is positioned within the base332of the housing adjacent the notch380. The terminal block343is electrically connected to the computing device310via a terminal lead351and includes an electrical contact345. A spring347extends between the contacts341,345and electrically connects the same, as well as provides mechanical bias for friction during rotation of the platform relative to the base332.
The contacts341,345and spring347are aligned along the axis338of the hinge337. Consequently, the spring347forms part of the path to establish electrical contact between the electrical ground of the computing device310and the test fixture330. The housing shields the connection therein. As shown inFIGS.27D and27E, an electrically conductive element (e.g., a wire)353can be connected to the terminal lead351via a screw or other fastener (bolt, conductive adhesive, solder or the like). The conductive element353can terminate in a plug354that is insertable into an audio or other jack of the computing device310for completing the path to electrical ground of the device (e.g., the jack includes a device ground connection). In some examples, the conductive element353can include a splitter356that can be used to provide an additional auxiliary jack358to enable use of the audio jack while the testing apparatus is in use. The jack358thus can be exposed and accessible from external to the housing during operation. FIGS.28A and28Bdemonstrate assembly views of the apparatus300showing attachment between lower and upper housing portions381and382. InFIG.28A, the computing device310is positioned within the lower housing portion381, such as within a receptacle dimensioned and configured to receive the computing device therein. InFIG.28A, the platform is rotatably attached to the lower housing portion381via the hinge, as mentioned above. InFIG.28B, the upper housing portion382is placed over the lower portion381, such as to sandwich the computing device therein. The lower and upper housing portions can be connected together via snap fit, adhesive, ultrasonic welding or the like to provide the assembled apparatus300, such as shown inFIG.29. In use, the apparatus300is removed from a storage area and carried to/placed on a table or surface. A soft protective cover (if applicable) is removed. The test fixture330is pivoted about the hinge337in the direction R away from the touch screen312to access the entire touch screen and initiate the desired test. When the test is ready, the test fixture330is pivoted about the hinge337in the direction R to a position overlying the touch screen312. This places the apertures340aand/or the apertures340bin positions overlying predetermined portions of the touch-sensitive screen312(e.g., corresponding to the test position). For example, the computing device can be programmed to generate an interactive graphical user interface that includes interactive GUI elements aligned with one or more of the apertures340a, such as during a given test, such as disclosed herein. One or more of the pegs400can be removed from the aperture(s)340bor the chamber385and inserted into one of the apertures340a, allowing the pegs to extend entirely through the platform330into proximity with the touch screen312. The touch screen312detects and determines when any of the pegs400are in contact with the GUI. The first end372of the upper housing portion382includes recessed chambers384,385accessible by a door386that is pivotably connected to the front of the housing370. The chambers384and385can include a series of parallel slots or other containing features, such as can be used for receiving and storing the pegs when not in use. The door386can securely lock (e.g., snap-fit) with the remainder of the housing370to ensure the door remains closed during storage, transport, and manipulation of the apparatus300.
Due to the construction of the hinge337, the test fixture330can also function as a leg or kickstand to support the housing and computing device310in a generally upright orientation without leaning against another object or the aid of a person. As shown inFIG.24, the test fixture330can be rotated in the direction R out of the interior space378to a position extending behind the computing device310. When the angle between the rotated test fixture330and housing370approaches, for example, 90°, the test fixture is released. The friction hinge337maintains the desired angle between the rotated test fixture330and housing370while the rubberized perimeter334on the base332grips the surface on which the apparatus300is placed, e.g., countertop, table top, etc. In some embodiments, the hinge may comprise a ratchet or a locking device, such as a pin through the housing preventing rotation or a magnet, in addition to or in lieu of the friction fit. Consequently, the apparatus300has a desired viewing orientation for the user that is maintained by the friction hinge337and increased friction between the perimeter334and contact surface, thereby facilitating reading the touch screen312. In some alternate embodiments, the test fixture can swing fully around to and lie flush with the back side of the tablet (opposite the screen). This ability would allow a user to lay the device flat on its back on a surface. As disclosed herein, the computing device310can be programmed to execute a neuromotor test program, e.g., a manual function test, for measuring individual peg400insertion and removal time in any of the apertures340a,340b, i.e., the 9-hole peg test (9HPT). In other examples, the same test program can include other test modules, such as for testing visual acuity by holding the computing device310at the correct angle for performing the test, and performing a timed 25-foot walk. In one testing example, the door386is opened to access the pegs400, which are removed from the chamber385and placed in the row of apertures340bin the test fixture330. The pegs400are then moved by hand from the row of apertures340bto the grid of apertures340a, with instructions provided on the touch screen312. The user can access a help button (not shown) on the computing device310if needed during the test. To this end, portions of the touch screen312can be accessible by the user while the test fixture330overlies the touch screen. Moreover, the test fixture330can be transparent to enable viewing of the touch screen312through the downwardly pivoted test fixture. Once the test is completed, the pegs400are placed back into the chamber385and the door386closed. The test fixture330is pivoted away from the touch screen312to complete any remaining tests. Upon completion of all tests, the test fixture330is again pivoted to a position overlying the touch screen312. The protective cover is replaced and the apparatus300carried back to storage. The housing370is advantageous in that it helps protect both the computing device310and the test fixture330. The housing370is semi-permanent and covers/protects nearly the entire computing device310, aside from the touch screen312, which remains at least partially accessible. The housing370also maintains easy access to the power button390and provides a convenient means of storing the pegs400when not in use. The periphery of the housing370is advantageously provided with notches, openings, etc.
(not shown) to maintain access to all ports and buttons on the computing device310when stored therein, e.g., headphone jack, volume buttons, USB port, etc. For instance, the splitter356may be provided to enable the patient to listen to audio instructions while simultaneously performing the prescribed test(s). The splitter356can constitute an off-the-shelf splitter, a custom OEM external splitter or use connectors and wire assemblies built into the housing370. As a result, the housing370provides a protective cover for the computing device310that allows the computing device to be used efficiently with the manual dexterity test and any other assessment or questionnaire deliverable via the computing device. The patient can therefore readily listen to and/or visually see instructions provided by the computing device310. The housing370can also be made ergonomic to facilitate grasping, manipulation, and feel for the patient. FIGS.30-33illustrate alternative configurations for housings to be used with the test fixture330and computing device310described herein. InFIG.30, the housing470is a sliding type in which the computing device310is laterally slid into the interior space474of the housing. The housing has a U-shaped sidewall472defining the interior space474and including a series of recessed portions473contoured to the shape of the computing device310. The computing device310is slid laterally into the interior space474, sliding along the recessed portions473. The contour of the recessed portions473helps retain/lock the computing device310within the housing470. The test fixture330, which is shown with a generally rectangular configuration, can be secured to the computing device before or after the computing device is slid into the housing470. The housing470can include a handle480at the open end of the sidewall472(or any other place along the sidewall) for grasping/manipulating the apparatus. InFIG.31, the housing570has a drop frame configuration having a rectangular sidewall572defining an interior space576. A recess574extends into portions of the sidewall572to form a ledge that receives the computing device310and test fixture330. One or more feet580can be secured to the bottom of the housing570. A splitter600for headphones can be connected to the computing device310. Similarly, inFIG.32, the housing670has a deep drop frame configuration having a rectangular sidewall672defining an interior space676. A recess674extends into portions of the sidewall672to form a ledge that receives the computing device310and test fixture330. FIG.33illustrates a housing770for a computing device310constituting a laptop. The housing770includes a rectangular sidewall772, a handle780extending from the sidewall, and an interior space776for receiving the computing device310. A panel774secured to the frame772pivots in the direction R to more fully enclose the computing device310within the interior space776for protection, or to access the interior space. Routine collection of clinical data from neurological assessments is impaired by the requirement for qualified staff to administer tests and record the data in a consistent and timely manner. Collection and aggregation of this data over time is important in assessing disease progression and response to treatment in MS patients, as reflected in the widespread use of the traditional forms of these neurological assessments in clinical trials.
The apparatus disclosed herein allows the patient to self-administer neurological tests that are widely accepted by the neurological community, yet not routinely used in clinical practice due to time and resource constraints. The immediate need for this design is therefore to allow a reduction in clinical staff required to administer functional tests, which is accomplished by having largely self-administered tests. The autonomous nature of the apparatus would also allow for use by patients that are ambulatory, e.g., in-home assessment by the patients themselves. This would allow for greater resolution in functional data when making clinical decisions. By reducing the workload on the clinical staff, a greater amount of data can be captured for each patient. The availability of this data to clinical staff will enhance the care of MS patients by providing routine, quantitative measures of function that are currently not captured. Further, acquisition of data by a computer based system allows for more reliable, standardized and objective data and easier storage, retrieval and analysis of the data, including analysis with respect to patient populations and longitudinal data. In view of the foregoing, it will be appreciated that the data collected via the approach disclosed herein facilitates automated assessment of a plurality of tests. For example, because the approach provides a patient-centered neurological performance system, it can be used in non-medical settings (autonomously by the patient at home or another remote location) as well as in medical settings typically not equipped to provide certain types of healthcare, such as rural hospitals. The data collected for each given patient for a test session can be used for patient evaluation as well as for management of the patient's condition. Additionally, since the cost of the test system is low compared to many existing systems, the systems and methods disclosed herein facilitate clinical research projects, including clinical trials. The testing can be implemented, for example, via a tablet computer, and can employ a graphical user interface on a portable computing device to implement one or more neurological and neuropsychological performance test methods. For instance, the test method(s) can be utilized to help characterize a patient's multiple sclerosis or other neurological disorder (e.g., Parkinson's disease or essential tremor). As disclosed herein, the method can be self-administered by the patient himself/herself (as opposed to traditional clinician supervised testing, which needs to be done by a trained technician). Thus, the approach disclosed herein facilitates distance-based monitoring such as through telemedicine. Additionally, since the testing can be self-administered, it enables a care provider (e.g., a physician) to monitor the patient's condition over time to determine the course of disease and the effect of intervention for each of a plurality of patients. The care provider can access a database to retrieve test results for a plurality of different patients that conducted the test at different remote locations, via a tablet computer where a test was implemented or a remote computer (e.g., smart phone, desktop PC or the like). As a further example, the test results can be communicated to one or more providers. This can be done by simply reviewing the results on the computing device or the results can be sent to the provider(s) via a network connection, as disclosed herein.
The test results for one or more subjects, for example, can be stored in a database in a server for further analysis and comparison. For instance, test data can be aggregated for a plurality of patients, such as for clinical research (e.g., in MS), including clinical trials and other forms of clinical research. Such test results for multiple tasks completed over different time intervals (e.g., over a period of a day or a given week) can be evaluated to set one or more stimulation parameters. As will be appreciated by those skilled in the art, portions of the devices, systems and methods disclosed herein may be embodied as a data processing system or computer program product. Accordingly, such features may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, portions of the invention may comprise a computer program product on a computer-usable storage medium having computer readable program code on the medium. Any suitable computer-readable medium may be utilized including, but not limited to, static and dynamic storage devices, hard disks, optical storage devices, and magnetic storage devices. FIG.34shows an example of various testing modules of an example MSPT assessment system820. For example, the modules can include a treatment module822, a quality of life module824, a processing speed module826, a manual dexterity module828, a contrast sensitivity module830, and a walking speed module832. The modules can be implemented according to the examples disclosed herein with respect toFIGS.2-23. Results of the MSPT can be visualized on the display834. Additionally, aspects of each of the modules822-830can be displayed on the display834in a user-interactive manner. For example, the display834can be an input device, an output device, and/or an input/output device that can allow user input and/or display a resulting visualization. In some examples, the display can be part of a computing device that includes one or more processing units and memory, which can execute instructions corresponding to the modules822-830and store data in the memory to document results of user interactions and measurements via the respective modules. As an example, the treatment module822is stored in memory as executable instructions to provide one or more questionnaires, such as a questionnaire related to upper extremity function, a questionnaire related to lower extremity function, a questionnaire related to sleep, a questionnaire related to fatigue, a questionnaire related to anxiety, a questionnaire related to depression, a questionnaire related to stigma, a questionnaire related to positive affect and well being, a questionnaire related to applied cognition, a questionnaire related to executive function, a questionnaire related to the ability to participate in social roles, a questionnaire related to satisfaction with social roles, and/or a questionnaire related to emotional and behavioral dyscontrol. A score can be provided per question as an integer value indicating the patient's response (e.g., 1 Never, 2 Almost never, 3 Sometimes, 4 Often, 5 Almost always). The quality of life module824is stored in memory as executable instructions to ask patients (e.g., in a graphical fashion via a GUI) to rate their quality of life for various questions related to neurological function.
The questions can be broken into sub-tests based on domains of function that can include, for example: upper extremity, lower extremity, sleep, fatigue, anxiety, depression, stigma, positive affect and well being, cognitive function, satisfaction with social roles, and emotional and behavioral dyscontrol. The sub-tests can be independent of each other and may be administered serially with the results not affecting subsequent tests. A score can be provided per question as an integer value indicating the patient's response (e.g., 1 Never, 2 Almost never, 3 Sometimes, 4 Often, 5 Almost always). The processing speed module826is stored in memory as executable instructions to provide a symbol-digit matching test in which subjects can be given an answer key, displaying correct symbol-digit pairings. Then the subject can be presented with symbols and blank spaces beneath them and can be required to select the number that corresponds with each symbol based on the answer key. The test can be scored based on the total number of correctly matched symbol-digit pairs in two minutes. In some instances, the score can be additionally based on the response time per symbol and the number of incorrect responses. The manual dexterity module828is stored in memory as executable instructions to enable user interaction with the testing apparatus by allowing patients to manipulate physical pegs into a grid overlay with their dominant hand and their non-dominant hand in sequence (e.g., 2 trials per hand, 60 seconds per trial). The module828can correspond to the manual function test module disclosed with respect toFIGS.4-6. This test can be implemented using any of the example housings disclosed herein, such as demonstrated inFIGS.24-33, with corresponding graphics appearing on the touch screen through the test fixture (e.g., constituting a grid overlay) of the testing apparatus and with instructions that specify which hand to use during each trial. A score can be calculated based on the number of pegs correctly placed, a time to place the pegs, a number of pegs dropped, and the like. For example, time to place pegs can be calculated as the time between the touch screen interface detecting removal of a peg from its starting position and insertion of the given peg into the correct peg hole. In some examples, the manual dexterity test can be implemented (e.g., including with the housing and pegs ofFIGS.24-33) according to a workflow corresponding to instructions executed by the tablet computer of the testing apparatus. The contrast sensitivity module830can apply a low contrast visual sensitivity test such as disclosed herein. In some examples, the contrast sensitivity module830can apply the low contrast letter acuity test, which can show the patient lines consisting of a plurality of different optotypes (e.g., about 5 optotypes) of a fixed contrast level and size. Additionally or alternatively, the walking speed module832can have functionality to enable patients to measure the time it takes for them to walk a specified distance. A score can be based on the time taken to walk the specified distance. In this module832, prior to starting any trials, a patient may first answer questions provided by the tablet computing device regarding their utilization of any walking aids or Ankle and Foot Orthoses (AFOs).
Once these questions are answered, the patient is presented with instructions explaining how to successfully complete the module. Part of the instructions is testing the Low Energy Bluetooth (LE-BT) remote to ensure it is properly paired to the device. Other possible remote triggers include infrared, Near Field Communications, sound activation, light or laser activation, motion sensors, force sensors, accelerometers and so forth. After the instructions, the test phase begins. The patient makes their way to a prescribed testing course. Once at the starting line, they press the remote once to begin the trial. Upon crossing the finish line, they press the remote again to stop the trial. The patient then returns to the device to confirm that the trial was completed successfully. In the event the trial was not successful, the patient has the ability to repeat the trial. Repeating a trial stores the previous trial data but marks it as invalid. The patient repeats this cycle for every trial that is administered. An alternate administration method may involve an administrator or other person (e.g., friend or family member) tapping the iPad screen to start and stop a trial; this is to be used in place of or in conjunction with the LE-BT remote. The method of starting and stopping a trial will be recorded. In the event that a trial reaches maximum duration, the trial may be scored as having the maximum time and stored as successfully completed with a TIMEOUT=TRUE flag. The apparatus and computing device enable one or more of such tests to be readily self-administered by the subject, as opposed to by a trained technician; however, a trained technician can also administer such tests, if desired. This is enabled because the application of each test module and the associated scoring are automated by executable instructions programmed to process testing data acquired during each of the tests and to score the tests on the computer via which the tests are administered. In some examples, the data from these tests can be aggregated at the computing device and transmitted to a provider database via a network. This process of sending the test data can also be automated. The test data can be collected (e.g., in a database) for many patients for a variety of evaluative purposes, such as to facilitate patient monitoring, provide population statistics, and support drug development. As an example, the tests and associated instructions can be stored and executed on a server (e.g., database38ofFIG.1, such as on a web server) and accessed at another remote device (e.g., a computing device) for user interaction, such as via a web browser or other interface that is programmed to interact with the user interface that is generated. In some cases, the functionality can be distributed between the server and the remote device in which certain instructions are executed by the remote device and other instructions are executed by the server. In other examples, the instructions and data can be stored and executed locally in a computing device (e.g., a portable or mobile device), such as a tablet computer. FIG.35is a flow diagram depicting an example of a method900for performing testing for evaluation of cognitive and/or neuromotor function, such as for the MSPT. The method900can be implemented using a mobile computing apparatus, such as disclosed herein. The method begins at902by providing a computing device having a touch screen interface.
The computing device includes memory to store instructions corresponding to at least a manual function test module (e.g.,62,80,828). As disclosed herein, the computing device can be used to store instructions to perform other test modules, including one or more of a cognitive processing speed test module (e.g.,110,130and/or826), a gait test module (e.g.,230,240,250,832), a balance test module (e.g.,160,170,190), and a visual acuity or contrast sensitivity test module (e.g.,290,830). At904, the method includes placing a test fixture (e.g., platform330) in a test position in which the test fixture is in an overlying position with the touch screen (see, e.g.,FIGS.5or25). As disclosed herein, the test fixture is pivotably connected to a base, which is attached to the computing device (e.g.,310). The connection provides for rotational movement of the test fixture with respect to the touch screen interface of the computing device between the test position (see, e.g.,FIGS.25and29) and a support position (see, e.g.,FIG.24) in which the test fixture is operative to support the base and the computing device. The test fixture includes a plurality of receptacles for receiving a plurality of contact members that, when placed in the receptacles while the test fixture is in the test position, enable interaction that is detectable by the touch screen in the absence of direct contact by the user. At906, the method includes executing the manual function test module and storing test data corresponding to user inputs in response to placing the contact members into the receptacles while the test fixture is in the test position during the execution thereof. The manual function test module calculates time values associated with the placing of the contact members and stores the time values as part of the test data in memory. At908, a determination can be made whether the testing method is complete. If additional testing modules are to be performed, the method proceeds to910. At910, the method further includes executing the next test module and storing other test data in the memory based on user interactions with the computing device during the execution thereof. In connection with performing the additional testing, the user can move the test fixture from the testing position for the manual function test to the support or another position to provide desired access to the touch screen (e.g., the apparatus can lie flat or be supported by the test fixture acting as a kickstand). From910, the method returns to908. At908, if the determination is that the testing is complete, the method ends at912. Certain embodiments of the invention are described herein with reference to flowchart illustrations of methods, systems, and computer program products. It will be understood that blocks of the illustrations, and combinations of blocks in the illustrations, can be implemented by computer-executable instructions. These computer-executable instructions may be provided to one or more processors of a general purpose computer, special purpose computer, or other programmable data processing apparatus (or a combination of devices and circuits) to produce a machine, such that the instructions, which execute via the processor, implement the functions specified in the block or blocks.
These computer-executable instructions may also be stored in computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory result in an article of manufacture including instructions which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. What have been described above are examples. It is, of course, not possible to describe every conceivable combination of components or methodologies, but one of ordinary skill in the art will recognize that many further combinations and permutations are possible. Accordingly, the invention is intended to embrace all such alterations, modifications, and variations that fall within the scope of this application, including the appended claims. As used herein, the term “includes” means includes but not limited to, the term “including” means including but not limited to. The term “based on” means based at least in part on. Additionally, where the disclosure or claims recite “a,” “an,” “a first,” or “another” element, or the equivalent thereof, it should be interpreted to include one or more than one such element, neither requiring nor excluding two or more such elements. | 98,006 |
11857350 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS FIG.1is a schematic view of a continuous analyte monitoring system120embodying aspects of the present invention. In some embodiments, the system120may include one or more of an analyte sensor100, a transceiver101, a display device102, and an activity tracker110. In some embodiments, the sensor100and transceiver101may include one or more of the structural and/or functional features described in one or more of U.S. Patent Application Publication No. 2013/0241745; U.S. Patent Application Publication No. 2013/0211213; U.S. application Ser. No. 14/453,078; and U.S. Patent Application Publication No. 2014/0018644; all of which are incorporated by reference in their entireties. In some embodiments, system120may include one or more accelerometers (e.g., accelerometer105and/or accelerometer111), which may be, for example, 3D accelerometers. In some embodiments, the one or more accelerometers may be configured to generate acceleration data corresponding to movement of the accelerometer. In some embodiments, the continuous analyte monitoring system120may use the one or more accelerometers to keep track of near continuous movements of a user's upper arm, wrist, leg, or core, for example. In some embodiments, the one or more accelerometers may be located in or part of one or more of the analyte sensor100, transceiver101, display device102, and activity tracker110. In some non-limiting embodiments, as illustrated inFIG.1, the transceiver101may include an accelerometer105. In some non-limiting embodiments, as illustrated inFIG.1, the system120may include an activity tracker110(e.g., an on-body activity tracker such as, for example and without limitation, a fitbit), which may include an accelerometer111in addition to or as an alternative to the accelerometer105. In some non-limiting embodiments, the analyte sensor100and/or display device102may additionally or alternatively include an accelerometer. In some embodiments, the transceiver101may be configured to generate activity information based on acceleration data generated by one or more accelerometers and may be configured to generate one or more alerts based on the activity information. In some embodiments, the transceiver101may be configured to additionally or alternatively receive activity information (e.g., from display device102or activity tracker110) that was generated (e.g., by display device102or activity tracker110) based on acceleration data generated by one or more accelerometers. In some embodiments, the transceiver101and/or display device102may communicate the one or more alerts to a user. In some embodiments, the alerts may be visual, audible, and/or vibratory alerts. In some embodiments, the transceiver101may be configured to receive data signals from the analyte sensor100and may be configured to generate analyte concentration values based on the received data signals. In some embodiments, the transceiver101may be configured to generate one or more alerts based on the analyte concentration values and the activity information. In some embodiments, the system120may include one or more displays. For example, in one non-limiting embodiment, the system120may include a display device102, and the display device102may include a display108configured to display (e.g., in a plot or graph) one or more of the analyte concentration values and the activity information with respect to time. 
In some embodiments, the transceiver101may additionally or alternatively include a display configured to display one or more of the analyte concentration values and the activity information with respect to time. In some embodiments, the analyte sensor100may include sensor elements112and a transceiver interface device103. The sensor elements112may be configured to detect the presence and/or concentration of an analyte (e.g., glucose, oxygen, cardiac markers, low-density lipoprotein (LDL), high-density lipoprotein (HDL), or triglycerides). In some non-limiting embodiments, the sensor elements112may include one or more of an analyte indicator, a light source configured to emit excitation light to the analyte indicator, and a photodetector configured to receive light from the analyte indicator. The amount of light received by the photodetector may be indicative of the presence or concentration of the analyte. Although, in some embodiments, as described above, the analyte sensor100may be an electro-optical sensor, this is not required, and, in some alternative embodiments, the analyte sensor100may be another type of analyte sensor, such as, for example and without limitation, an electro-chemical sensor. In some embodiments, the transceiver interface device103may be configured to (i) receive a power signal and generate power for powering the sensor elements112and/or (ii) convey data signals generated by the sensor elements112. In some non-limiting embodiments, the transceiver interface device103may be configured to wirelessly convey data signals, and the transceiver interface device103may include an inductive element (e.g., an antenna or coil). In some alternative embodiments, the transceiver interface device103may be configured to convey data signals via a wired connection to an external device (e.g., transceiver101), and the transceiver interface device103may include the wired connection. In some non-limiting embodiments, the analyte sensor100may be an implantable sensor or a transcutaneous sensor. In some embodiments, the sensor100may be implanted or inserted in a living animal (e.g., a living human). The sensor100may be implanted or inserted, for example, in a living animal's arm, wrist, leg, abdomen, peritoneum, intravenously, or other region of the living animal suitable for sensor implantation or insertion. For example, in one non-limiting embodiment, the sensor100may be implanted beneath the skin (i.e., in the subcutaneous or peritoneal tissues) such that no portion of the sensor100protrudes from the skin, and the transceiver interface device103may convey data signals wirelessly. Although, in some embodiments, the analyte sensor100may be a fully implantable sensor, this is not required, and, in some alternative embodiments, the sensor100may be a transcutaneous sensor having a wired connection to the transceiver101. For example, in some alternative embodiments, the sensor100may be located in or on a transcutaneous needle (e.g., at the tip thereof). In these embodiments, instead of wirelessly communicating (e.g., using inductive elements), the sensor100and transceiver101may communicate using one or more wires connected between the transceiver101and the transcutaneous needle that includes the sensor100. For another example, in some alternative embodiments, the sensor100may be located in a catheter (e.g., for intravenous blood glucose monitoring) and may communicate (wirelessly or using wires) with the transceiver101. 
In some embodiments, the transceiver101may be an electronic device that communicates with the sensor100to power the sensor100and/or receive data signals (e.g., photodetector and/or temperature sensor readings) from the analyte sensor100. In some embodiments, the transceiver101may include a sensor interface device104. The sensor interface device104of the transceiver101may be configured to receive the data signals from the analyte sensor100using wireless (e.g., inductive) and/or wired communication. In some embodiments where data signals are wirelessly conveyed between the transceiver101and the analyte sensor100, the communication between the transceiver101and the analyte sensor100may be, for example and without limitation, near field communication. In some embodiments (e.g., embodiments in which the sensor100is a fully implantable sensor), the transceiver101may implement a passive telemetry for communicating with the implantable sensor100via an inductive magnetic link for one or more of power and data transfer. In some embodiments, the magnetic transceiver-sensor link can be considered a "weakly coupled transformer" type. The magnetic transceiver-sensor link may provide energy and a link for data transfer using amplitude modulation (AM). Although in some embodiments, data transfer is carried out using AM, in alternative embodiments, other types of modulation may be used. In some non-limiting embodiments, the analyte monitoring system may use a frequency of 13.56 MHz, which can achieve high penetration through the skin and is a medically approved frequency band, for power and/or data transfer. However, this is not required, and, in other embodiments, one or more different frequencies may be used for powering and communicating with the sensor100. In some non-limiting embodiments, the transceiver101may be a handheld transceiver or a body-worn transceiver (e.g., a transceiver held in place by a wristwatch, an armband, belt, or adhesive). For example, in some embodiments where the transceiver101is an on-body/wearable device, the transceiver101may be held in place by a band (e.g., an armband or wristband) and/or adhesive (e.g., as part of a biocompatible patch), and the transceiver101may convey (e.g., periodically, such as every two minutes, and/or upon user initiation) measurement commands (i.e., requests for measurement information) to the sensor100. In some embodiments where the transceiver101is a handheld device, positioning (i.e., hovering or swiping/waving/passing) the transceiver101within range over the sensor implant site (i.e., within proximity of the sensor100) may cause the transceiver101to automatically convey a measurement command to the sensor100and receive a reading from the sensor100. In some embodiments, the transceiver101may calculate analyte concentrations from the analyte data received from the sensor100. However, it is not required that the transceiver101perform the analyte concentration calculations itself, and, in some alternative embodiments, the transceiver101may instead convey/relay the analyte data received from the sensor100to another device (e.g., display device105) for calculation of analyte concentrations (e.g., by a mobile medical application executing on the display device105). In some non-limiting embodiments, the analyte concentration calculation may include one or more features described in U.S. Patent Application Publication No. 2014/0018644, which is incorporated by reference in its entirety.
In some embodiments, the transceiver101may include a display interface device106configured to convey information (e.g., alerts and/or analyte concentrations) to one or more display devices102. In some embodiments, a display device102may be a portable and/or handheld device. In some embodiments, the display device102may be a smartphone. However, this is not required, and, in alternative embodiments, the display device102may be a laptop computer, tablet, notebook, personal data assistant ("PDA"), personal computer, or a dedicated analyte monitoring display device. In some embodiments, the display device102may include a transceiver interface device107, which may be configured to communicate with the display interface device106of the transceiver101through a wired or wireless connection. In some embodiments, the display device102may include a processor109, and the display device102may have a mobile medical application installed thereon. In some non-limiting embodiments, the processor109may execute the mobile medical application. FIG.2is a schematic view of a transceiver101according to a non-limiting embodiment. In some embodiments, the display interface device106of the transceiver101may include an antenna of a wireless communication integrated circuit (IC)910and/or a connector902. In some non-limiting embodiments, the display interface device106may additionally include the wireless communication IC910and/or a connector IC904. In some embodiments, the connector902may be, for example, a Micro-Universal Serial Bus (USB) connector. The connector902may enable a wired connection to an external device, such as a display device102(e.g., a smartphone or a personal computer). The transceiver101may exchange data to and from the external device through the connector902and/or may receive power through the connector902. The connector IC904may be, for example, a USB-IC, which may control transmission and receipt of data through the connector902. The transceiver101may also include a charger IC906, which may receive power via the connector902and charge a battery908(e.g., lithium-polymer battery). In some embodiments, the battery908may be rechargeable, may have a short recharge duration, and/or may have a small size. In some embodiments, the transceiver101may include one or more connectors in addition to (or as an alternative to) Micro-USB connector902. For example, in one alternative embodiment, the transceiver101may include a spring-based connector (e.g., Pogo pin connector) in addition to (or as an alternative to) Micro-USB connector902, and the transceiver101may use a connection established via the spring-based connector for wired communication to a display device105(e.g., a smartphone or a personal computer) and/or to receive power, which may be used, for example, to charge the battery908. In some embodiments, the wireless communication IC910may enable wireless communication with an external device, such as, for example, one or more display devices105(e.g., a smartphone or personal computer). In one non-limiting embodiment, the wireless communication IC910may employ one or more wireless communication standards to wirelessly transmit data. The wireless communication standard employed may be any suitable wireless communication standard, such as an ANT standard, a Bluetooth standard, or a Bluetooth Low Energy (BLE) standard (e.g., BLE4.0). In some non-limiting embodiments, the wireless communication IC910may be configured to wirelessly transmit data at a frequency greater than 1 gigahertz (e.g., 2.4 or 5 GHz).
In some embodiments, the wireless communication IC910may include an antenna (e.g., a Bluetooth antenna). In some embodiments, the transceiver101may include voltage regulators912and/or a voltage booster914. The battery908may supply power (via voltage booster914) to radio-frequency identification (RFID) reader IC916, which uses an inductive element919to convey information (e.g., commands) to the sensor100and receive information (e.g., measurement information) from the sensor100. In some non-limiting embodiments, the sensor100and transceiver101may communicate using near field communication (NFC) (e.g., at a frequency of 13.56 MHz). In the illustrated embodiment, the inductive element919is a flat antenna. In some non-limiting embodiments, the antenna may be flexible. However, the inductive element919of the transceiver101may be in any configuration that permits adequate field strength to be achieved when brought within adequate physical proximity to the inductive element114of the sensor100. In some embodiments, the transceiver101may include a power amplifier918to amplify the signal to be conveyed by the inductive element919to the sensor100. The transceiver101may include a peripheral interface controller (PIC) microcontroller920and memory922(e.g., Flash memory), which may be non-volatile and/or capable of being electronically erased and/or rewritten. The PIC microcontroller920may control the overall operation of the transceiver101. For example, the PIC microcontroller920may control the connector IC904or wireless communication IC910to transmit data via wired or wireless communication and/or control the RFID reader IC916to convey data via the inductive element919. The PIC microcontroller920may also control processing of data received via the inductive element919, connector902, or wireless communication IC910. In some embodiments, the transceiver101may include a sensor interface device, which may enable communication by the transceiver101with a sensor100. In some embodiments, the sensor interface device may include the inductive element919. In some non-limiting embodiments, the sensor interface device may additionally include the RFID reader IC916and/or the power amplifier918. However, in some alternative embodiments where there exists a wired connection between the sensor100and the transceiver101(e.g., transcutaneous embodiments), the sensor interface device may include the wired connection. In some embodiments, the transceiver101may include a display924(e.g., liquid crystal display and/or one or more light emitting diodes), which the PIC microcontroller920may control to display data (e.g., glucose concentration values). In some embodiments, the transceiver101may include a speaker926(e.g., a beeper) and/or vibration motor928, which may be activated, for example, in the event that an alarm condition (e.g., detection of a hypoglycemic or hyperglycemic condition) is met. The transceiver101may also include one or more additional sensors930, which may include an accelerometer (e.g., accelerometer105) and/or a temperature sensor, that may be used in the processing performed by the PIC microcontroller920. In some embodiments, the transceiver101may be a body-worn transceiver that is a rechargeable, external device worn over the sensor implantation or insertion site. The transceiver101may supply power to the proximate sensor100, calculate analyte concentrations from data received from the sensor100, and/or transmit the calculated analyte concentrations to a display device105(seeFIGS.1A,1B, and5).
Power may be supplied to the sensor100through an inductive link (e.g., an inductive link of 13.56 MHz). In some embodiments, the transceiver101may be placed using an adhesive patch or a specially designed strap or belt. The external transceiver101may read measured analyte data from a subcutaneous sensor100(e.g., up to a depth of 2 cm or more). The transceiver101may periodically (e.g., every 2 minutes) read sensor data and calculate an analyte concentration and an analyte concentration trend. From this information, the transceiver101may also determine if an alert and/or alarm condition exists, which may be signaled to the user (e.g., through vibration by vibration motor928and/or an LED of the transceiver's display924and/or a display of a display device105). The information from the transceiver101(e.g., calculated analyte concentrations, calculated analyte concentration trends, alerts, alarms, and/or notifications) may be transmitted to a display device105(e.g., via Bluetooth Low Energy with Advanced Encryption Standard (AES)-Counter CBC-MAC (CCM) encryption) for display by a mobile medical application on the display device105. In some non-limiting embodiments, the mobile medical application may provide alarms, alerts, and/or notifications in addition to any alerts, alarms, and/or notifications received from the transceiver101. In one embodiment, the mobile medical application may be configured to provide push notifications. In some embodiments, the transceiver101may have a power button (e.g., button208) to allow the user to turn the device on or off, reset the device, or check the remaining battery life. In some embodiments, the transceiver101may have a button, which may be the same button as a power button or an additional button, to suppress one or more user notification signals (e.g., vibration, visual, and/or audible) of the transceiver101generated by the transceiver101in response to detection of an alert or alarm condition. In some embodiments, the transceiver101may provide on-body alerts to the user in a visual, audible, and/or vibratory manner, regardless of proximity to a display device105. In some non-limiting embodiments, as illustrated inFIG.2, the transceiver101may include one or more notification devices (e.g., display924, beeper926, and/or vibration motor928) that generate visual, audible, and/or vibratory alerts. In some embodiments, the transceiver101may be configured to vibrate and/or generate an audio or visual signal to prompt the user about analyte readings outside an acceptable limit, such as hypo/hyperglycemic alerts and alarms in the case where the analyte is glucose. In some embodiments, the transceiver101may store the measurement information received from the sensor100(e.g., in memory922). The measurement information received from the sensor100may include one or more of: (i) a signal channel measurement with a light source on, (ii) a reference or second signal channel measurement with the light source on, (iii) a light source current source voltage measurement, (iv) a field current measurement, (v) a diagnostic measurement, (vi) an ambient signal channel measurement with the light source off, (vii) an ambient reference or second signal channel measurement with the light source off, and (viii) a temperature measurement. In some embodiments, the transceiver101may additionally store (e.g., in memory922) other data with the measurement information received from the sensor100.
In some non-limiting embodiments, the other data may include one or more of: (i) an analyte concentration (e.g., in mg/dL, such as, for example, within a range of 20.0 to 400.0 mg/dL) calculated by the transceiver101from the measurement information, (ii) the date and time that the analyte measurement was taken, (iii) accelerometer values (e.g., x, y, and z) taken from an accelerometer (e.g., accelerometer105of the transceiver101and/or accelerometer111of the activity tracker110), and/or (iv) the temperature of the transceiver101as measured by a temperature sensor of the transceiver101. In some embodiments, the transceiver101may keep track of the date and time and, as noted above, store the date and time along with the received analyte measurement information and/or analyte concentrations. In embodiments where the transceiver101includes an accelerometer, the accelerometer will enable tracking of activity levels of the subject that is wearing the transceiver101. This activity level may be included in an event log and incorporated into various algorithms (e.g., for analyte concentration calculation, trending, and/or contributing to potential dosing levels for the subjects). In some embodiments, the transceiver101may store (e.g., in memory922) any alert and/or alarm conditions detected based on the calculated analyte concentrations. In some embodiments, the continuous analyte monitoring system120may keep track of an analyte concentration (e.g., blood glucose concentration) in the user. In some embodiments, the continuous analyte monitoring system may facilitate user entry of one or more measurable physiological parameters (e.g., insulin and/or meal bolus and/or exercise regimen) on a mobile medical application. The mobile medical application may be running/executed on (a) the transceiver101and/or (b) the display device102(e.g., smartphone, receiver, laptop, tablet, notebook, or personal computer) in communication with the transceiver101, analyte sensor100, and/or activity tracker110. For example, in one non-limiting embodiment, the mobile medical application may be executed by a processor109of the display device102. In some embodiments having a smartphone or other display device102in communication with the transceiver, the communication between the transceiver101and the smartphone or other display device102may be wireless communication (e.g., using the Bluetooth wireless communication standard) or wired communication, and the smartphone or other display device102may receive information (e.g., analyte concentrations) from the transceiver101. In some embodiments, the continuous analyte monitoring system120may combine analyte concentration information with measurable physiological parameters to provide alerts regarding a current or projected physiological condition. For example, in some embodiments where the analyte is glucose, the continuous glucose monitoring system120may fuse glucose concentration information with measurable physiological parameters to provide alerts regarding a current or projected hypoglycemic condition. In some embodiments, the alerts may include predictive alerts that could be used to prevent a projected physiological condition. For instance, in some non-limiting embodiments, predictive alerts could be used to prevent hypoglycemic episodes (e.g., night time hypoglycemic episodes or in cases of hypoglycemia unawareness) that are preceded by extended duration of activity detected using the accelerometer. 
For example, the continuous analyte monitoring system may provide alerts to the mobile medical application which would trigger a text alert on the smartphone or other display device102such as "No meal was taken following extended activity. Please eat before going to sleep." FIG.3is a flow chart illustrating an alerting process300embodying aspects of the present invention. In some embodiments, the alerting process300may include a step301of generating acceleration data corresponding to movement of an accelerometer. In some embodiments, the acceleration data may be generated by one or more accelerometers (e.g., accelerometer105and/or accelerometer111), which may be located in one or more of the analyte sensor100, transceiver101, display device102, and activity tracker110. In some embodiments, the alerting process300may include a step302of generating activity information based on the acceleration data. In some embodiments, the activity information may be generated by one or more of the transceiver101(e.g., by the microcontroller920of transceiver101), the display device102(e.g., by a mobile medical application being executed on the processor109of the display device102), and the activity tracker110. In some embodiments, the alerting process300may include a step303of generating analyte concentration values based on data signals received from the analyte sensor100. In some embodiments, the analyte concentration values may be generated by one or more of the transceiver101(e.g., by the microcontroller920of transceiver101) and the display device102(e.g., by a mobile medical application being executed on the processor109of the display device102). Although, in the embodiment illustrated inFIG.3, the step303of generating analyte concentration values occurs after steps301and302, this is not required, and, in some alternative embodiments, the step303may occur before or concurrently with one or more of steps301and302. In some embodiments, the alerting process300may include a step304of generating an alert for the current or projected physiological condition or reaction based on one or more of the activity information and the analyte concentration values. In some embodiments, the alert may be generated by one or more of the transceiver101(e.g., by the microcontroller920of transceiver101) and the display device102(e.g., by a mobile medical application being executed on the processor109of the display device102). In some embodiments, an accelerometer (e.g., accelerometer105) may be placed inside a printed circuit board (PCB) of the transceiver101, and the accelerometer may be used to detect the movements of the user's body (e.g., the user's upper body in an embodiment where the transceiver is worn on the arm or the user's core when the transceiver is worn on the abdomen). In some embodiments, outputs of the accelerometer may correspond to accelerations along the X, Y and Z axes. In some embodiments, the outputs may be analog values that have been converted to digital values by an analog-to-digital converter (ADC). The accelerometer outputs may be used to interpret and quantify the movements of the user(s) wearing the transceiver. FIG.4is a flow chart illustrating an activity information generation process400embodying aspects of the present invention. In some non-limiting embodiments, the process400may be performed in step302of the alerting process300. In some embodiments, the activity information generation process400may include a step401of converting acceleration data into scalar acceleration values.
In some embodiments, the conversion of step401may include converting raw data (e.g., digital values) from the accelerometer to g's (m/sec2). In some embodiments, the conversion of step401may include converting the resultant data to a scalar value. In some non-limiting embodiments, the resultant data may be converted to scalar values using the formula shown below: AccScalar=sqrt(X*X+Y*Y+Z*Z) (Formula 1) In some embodiments, circuitry (e.g., microcontroller920) of the transceiver101may carry out the conversion of the accelerometer raw data to g's and from the g's to the scalar values. However, this is not required, and in other embodiments a portion or all of the conversions may be carried out elsewhere, such as, for example and without limitation, by the display device102(e.g., a smartphone or personal computer) and/or by the activity tracker110.FIG.5is a graph showing a non-limiting example of scalar acceleration values (black line) of an accelerometer over time. In some embodiments, the process400may include a step402of comparing scalar acceleration values to one or more activity thresholds. That is, in some embodiments, the continuous analyte monitoring system120may use the scalar acceleration values to detect abrupt changes in acceleration based on one or more cutoff thresholds. These instances of high changes in acceleration typically correspond to changes in the position of the body and, thus, activity. Accordingly, in some non-limiting embodiments, the continuous analyte monitoring system120may detect activity by detecting when a scalar acceleration value exceeds the cutoff threshold (i.e., an activity threshold).FIG.5shows detected activities (stars) based on an activity threshold of 0.18 in the non-limiting example. Although an activity threshold of 0.18 is used in the embodiment shown inFIG.5, this is not required, and alternative embodiments may use one or more different activity thresholds. In some embodiments, the process400may include a step403of generating activity information based on the frequency of acceleration values that exceed the activity threshold. In some non-limiting embodiments, the step403may use the frequency of detected activities (e.g., within a time window) to characterize the activity in the acceleration information into three main categories: no activity/sedentary, moderate activity, and high activity.FIG.6is a graph showing an exemplary classification of the detected activity in the acceleration information into no activity, moderate activity, and high activity categories. Accordingly,FIG.6shows the intensity and duration of the physical activity of the user. In some embodiments, this information about the activity may be (i) provided to the user independently as a graph generated by the mobile medical application and displayed on a display (e.g., display108of the display device102) and/or (ii) fused with the analyte data from the continuous analyte monitoring system for providing predictive alerts of possible health condition episodes (e.g., hypoglycemic episodes) before they happen.FIG.7is a graph showing an example of analyte measurements (e.g., continuous glucose monitoring system (CGMS) measurements) together with activity monitoring data. The graph may be displayed, for example and without limitation, by the display108of the display device102and/or by the display924of the transceiver101.
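To illustrate the computation just described, the following is a minimal sketch of steps401-403: the Formula 1 scalar conversion, threshold detection, and frequency-based classification. The 0.18 threshold follows the non-limiting example ofFIG.5; the window handling, the category cutoff counts, and the assumption that gravity has already been removed from the samples (so that rest reads near zero) are illustrative assumptions, not taken from the disclosure.

    # Minimal sketch of process 400: convert 3-axis samples (already in
    # g's, with gravity removed -- an assumption) to scalar values
    # (Formula 1), detect activities against a cutoff threshold, and
    # classify a time window by the frequency of detected activities.
    # The moderate/high cutoff counts are illustrative assumptions.
    import math

    ACTIVITY_THRESHOLD = 0.18  # per the non-limiting example of FIG. 5

    def acc_scalar(x, y, z):
        # Formula 1: AccScalar = sqrt(X*X + Y*Y + Z*Z)
        return math.sqrt(x * x + y * y + z * z)

    def classify_window(samples, moderate_count=5, high_count=20):
        # Count detected activities: samples whose scalar value exceeds
        # the activity threshold within this time window.
        detections = sum(
            1 for (x, y, z) in samples
            if acc_scalar(x, y, z) > ACTIVITY_THRESHOLD
        )
        if detections >= high_count:
            return "high activity"
        if detections >= moderate_count:
            return "moderate activity"
        return "no activity/sedentary"

Classifying successive windows this way yields the intensity-and-duration view ofFIG.6, which can then be plotted alongside the analyte concentration values as inFIG.7.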
In some embodiments, the continuous analyte monitoring system120may additionally or alternatively use the activity information to detect sleep patterns and/or step counts, which may be provided to the user using graphs and/or bar plots. In some non-limiting embodiments, the graphs and/or plots may be, for example and without limitation, generated by a mobile medical application running on processor109of the display device102and displayed by display108of the display device102. In some non-limiting embodiments, the continuous analyte monitoring system120may use galvanic or physiologic measurements (e.g., heart rate, heartbeat, and/or sweating) generated by one or more on-body or within-body sensors in conjunction with the acceleration data generated by the one or more accelerometers to generate the activity information and generate the alerts regarding a current or projected physiological condition or reaction. Embodiments of the present invention have been fully described above with reference to the drawing figures. Although the invention has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions could be made to the described embodiments within the spirit and scope of the invention. | 32,720 |
11857351 | DETAILED DESCRIPTION OF THE INVENTION Referring toFIG.1, a robotic surgical system100is illustrated. The robotic surgical system100generally includes a multi-axis robot2, a tool4with an effector attachment device5on a distal end thereof, and an operator station6. Such a multi-axis robot2is disclosed in our Patent Application Nos. 62/616,673, 62/681,462, 62/423,651, 62/616,700, and Ser. No. 15/816,861 to Peter L. Bono. These applications disclose robotic surgical systems usable with the present system. The entireties of these disclosures are incorporated herein by reference. The tool4couples to the robot2via a tool changer9as described below. The tool4couples to and powers an effector in the form of a fastener driver7for rotation, as more fully described below. The multi-axis robot2includes a plurality of axes about which the tool4can be precisely maneuvered and oriented for surgical procedures. In a preferred, but non-limiting, illustrated embodiment, the multi-axis robot2includes seven axes of movement. The axes of movement include the base axis202, or first axis, which is generally centered within the base200and about which the first arm204rotates. The second axis206is substantially perpendicular to the first axis202and is the axis about which the second arm208rotates. The second arm208includes the third axis210about which the third arm212rotates. The third arm212includes the fourth axis of rotation214, which is oriented substantially perpendicular with respect to the first axis202and substantially parallel to the second axis206. The fourth arm216rotates about the fourth axis214. The fourth arm216includes the fifth axis218about which the fifth arm220rotates. The fifth arm220includes the sixth axis222, which provides the most available rotation for the wrist224of the robot. The wrist224carries the tool4and effector attachment device5, and has a seventh axis of rotation228for the driver7of the tool4. The wrist224is at the distal end of the fifth arm220. It should be noted that each axis of rotation provides an additional freedom of movement for manipulation and orientation of the tool4and hence driver7. It should also be noted that while the multi-axis robot2is only illustrated with the tool4, the preferred embodiment is capable of changing the effector to a variety of tools that are used to complete a particular surgery. Drives, not shown, are utilized to move the arms into their desired positions and orientations. The drives may be electric, hydraulic or pneumatic, or combinations thereof, without departing from the scope of the invention. Rotational position can be signaled to a computer230, as with an encoder (not shown), or the like, associated with each arm204,208,212,216,220, and other components having an axis of rotation. In the preferred embodiment, the drives are in electrical communication with the computer230, and may further be combined with a telemanipulator, or pendant (not shown). The computer230is programmed to control movement and operation of the robot(s)2through a controller portion231, and can utilize a software package such as that disclosed in U.S. Patent Application No. 62/616,700 to the present inventor. Alternatively, other software programming may be provided without departing from the scope of the invention.
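For illustration, a minimal sketch of how the seven axes and their encoder-reported positions might be represented inside the controller portion231follows. The data structure and names are hypothetical; this is not the disclosed software package.

    # Minimal, hypothetical representation of the robot's seven
    # rotational axes; encoder readings update each joint's position.
    from dataclasses import dataclass

    @dataclass
    class Joint:
        member: str        # arm or wrist carrying the axis
        axis: int          # first (202) through seventh (228) axis
        angle_deg: float   # rotational position signaled by the encoder

    arm_chain = [
        Joint("first arm 204 about base axis 202", 1, 0.0),
        Joint("second arm 208 about axis 206", 2, 0.0),
        Joint("third arm 212 about axis 210", 3, 0.0),
        Joint("fourth arm 216 about axis 214", 4, 0.0),
        Joint("fifth arm 220 about axis 218", 5, 0.0),
        Joint("wrist 224 about axis 222", 6, 0.0),
        Joint("driver 7 about axis 228", 7, 0.0),
    ]

    def update_from_encoders(chain, encoder_angles_deg):
        # Record the rotational position signaled by each encoder.
        for joint, angle in zip(chain, encoder_angles_deg):
            joint.angle_deg = angle

Keeping the chain in base-to-wrist order mirrors the description above, so each entry's predecessor is the member on which it is mounted.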
The computer230can have a primary storage device (commonly referred to as memory) and/or a secondary storage device that can be used to store digital information such as images. Primary and secondary storage are herein referred to as storage collectively, and can include one or both of primary and secondary storage. The system100may further include sensors positioned along various places on the multi-axis robot2, which provide tactile feedback to the operator or surgeon232. The computer230is electrically connected or coupled to the multi-axis robot2in a manner that allows for operation of the multi-axis robot2, ranging from positions adjacent the robot to thousands of miles away. The computer230is preferably capable of accepting, retaining and executing programmed movements of the multi-axis robot2in a precise manner. In this manner, skilled surgeons can provide surgical care in areas, such as battlefields, while the surgeon is out of harm's way. The controller231can include a movement control input device233, such as a joystick, keyboard, mouse or electronic screen306that can be touch activated. The screen306can be part of the monitor234. Tool change and selection commands can be input using the screen306. As seen inFIG.1, the robotic surgical system100includes a magazine301for holding one or more tools4and one or more effectors, such as a secondary tool like the fastener driver7, in positions for pickup and replacement by the robot2. The tool4is mounted to the robot2using a tool changer, which is designated generally as9. Such tool changers are known in the art, such as those made by ATI as models MC-16R, QC-11, and QC-21. As seen inFIGS.1,4and5, the magazine301includes a stand303positioned adjacent the robot2and a platform305that has one or more cradles mounted thereon. The cradles may include V-notches, clamps or the like that allow the tool4to be positioned repeatedly in a similar position to be picked up or dropped off by the robot. At least one effector cradle307is mounted on the platform305and is adapted to hold one or more effectors7(fastener driver), and preferably a fastener8coupled to each effector. As seen, the cradle307has a plurality of spaced apart open-end slots309, each adapted to receive therein and hold in position a respective effector7. In a preferred embodiment, an effector7has a shank311which projects generally vertically when stored in a cradle307. However, it is to be understood that a magazine could be provided where the shanks311project in different directions than generally vertically. In the case of a generally vertical shank311, removal of the effector7(driver) is effected by moving the effector7generally horizontally (laterally) out of a respective slot309. In a preferred embodiment, a shank311has one or more circumferentially extending grooves312recessed in an outer surface for a purpose later described. The magazine301also includes at least one cradle321configured for releasably holding one or more tools4in a manner and position to be extracted from the cradle for use and reinserted for storage. As shown, there are a plurality of cradles321that are substantially identical in shape, size and construction. A cradle321has a pair of spaced apart arms323with an open end space325between distal ends thereof. A through opening327is also provided between a pair of arms323and is in communication with the open space325. This configuration allows a tool4to be removed or inserted by vertical and/or horizontal movement of the tool4.
As shown, the arms323of a cradle321are connected by a bight329. Means is provided to releasably retain a tool4mounted in a respective cradle321while retaining the tool4in a known position so that the robot2can reliably locate the tool4for pickup for use and reinsertion after use for storage. As shown, upwardly facing surfaces of each of the arms323are provided with a plurality of upwardly opening V-shaped notches331. The use of at least three notches331will define a plane so that the orientation and position of the tool4while mounted in the cradle321is known to the robot system100to facilitate coupling and decoupling of the tool4to the robot2via the tool changer9. It should be noted that while V-shaped notches are illustrated, other shapes suitable for repeatably locating the tools can be utilized without departing from the scope of the invention. A suitable effector is shown inFIG.5. The effector7is in the form of a fastener driver that has a coupling shank311adapted for being releasably secured to the tool4with the attachment device5that is shown as including a chuck351(FIG.3) associated with the tool4at its distal end353. A positioning flange355extends laterally outwardly of the shank311for helping retain the driver7releasably mounted in its cradle307and limit its longitudinal movement into the chuck351. In the illustrated embodiment, the fastener8is shown as a pedicle screw that has a tulip357, as is known in the art. The tulip357is mounted to the screw portion359of the fastener8. The tulip357has an internally threaded portion that threadably engages an externally threaded portion361of a terminal end portion363of the effector7. The thread handedness of the threaded portion361is the same as the thread handedness of the screw portion359, e.g., right handed. After the fastener8is installed, the tool4can effect reverse rotation, backing the threaded portion361out of the threaded portion of the tulip357. It is to be noted here that the terminal end portion363can be constructed to break if a predetermined tightening torque is exceeded by the tool4. If this occurs, the surgeon can then manually extract the remaining threaded portion361from the tulip357. The tool4is best seen inFIGS.2,3. The tool4includes a first coupling component401. The coupling component401is configured to be releasably gripped by a second coupling component403that is mounted to the distal end wrist224. The coupling components401,403combine to form the tool changer9, which is well known in the art, and allow for releasable mounting of tools and effectors to robot(s)2. As shown, the coupling component401includes a plurality of outwardly projecting arms407that are each sized and shaped to be received in a respective notch331for releasable mounting of the tool4to a respective cradle321. The coupling component401can be configured for supplying a compressed fluid, such as air from a source (not shown) of compressed fluid, through the robot2to a flow conduit411for a purpose later described. The coupling component401is mounted to a connecting arm414, which in turn connects the coupling component401to a tool head415. The chuck351is mounted in the tool head415at its distal end. The chuck351is constructed to releasably retain the shank311of the effector7within a socket portion417of the chuck351. The chuck351can be provided with a ball detent arrangement418, such as those found in air hose chucks, that cooperates with the grooves312to releasably retain a shank311in the chuck351.
A powered rotary driver 421, such as an electric motor or air motor, is mounted in the tool head 415 and is coupled to the chuck 351 to effect driving of the effector 7 upon command from a control component of the surgical system 100, such as the operator station 6, which in turn can be controlled by the appropriate medical personnel or programming as described above. The rotary driver 421 can include a transmission 425 between a motor and the chuck 351 to effect a gear reduction, providing reduced rotational speed and increased torque at the chuck 351. The surgical system 100 can include a torque sensor associated with the rotary driver 421 and be operable to limit the torque applied to the effector 7 and/or fastener 8 to reduce the risk of unwanted breakage. The rotary driver 421 is provided with a suitable source of energy, for example, a compressed fluid or electrical energy. In the illustrated structure, the driver 421 is an electric motor and is provided with electricity through an electrical conductor 426 that is operable to receive electricity through a connector 427 that is coupled to the robot 2, which has means for conducting electricity from a source (not shown) to the connector 427. The tool 4 is provided with a chuck operator 431 that is operable on command to accomplish mounting of an effector 7 to the tool 4 and demounting of an effector 7 from the tool 4. As shown, a housing 435 is secured to the arm 414. The motor 421, transmission 425 and chuck 351 are all mounted in the housing 435. As shown, the housing 435 is generally cylindrical along at least a majority of its length and contains the motor 421 and transmission 425 in the interior of the housing 435. The chuck operator 431 is preferably powered and controlled remotely by components of the system 100, either by programming of the computer 230 and/or by medical personnel. Means is provided to effect powered remote operation of the chuck 351 for gripping and releasing the effector 7. The chuck 351 includes a hood 441 that is sleeved onto the housing 435 and movable longitudinally relative thereto. The hood 441 selectively engages the ball detent arrangement 418 to effect selective gripping of an effector shank 311. It is to be understood that the shank 311 could be provided with longitudinal flats to prevent relative rotation between the shank 311 and the chuck 351. As shown, the chuck operator 431 includes a link 443 that is secured to the hood 441 and couples the hood to an operator engine 447, such as a linear actuator. In the illustrated structure, the engine 447 includes a linear reciprocating device 449, such as a reciprocating fluid powered piston or an electric solenoid. In the illustrated structure, the device 449 is a fluid powered piston in a cylinder that is connected to a source of compressed fluid through the conduit 411 as described above. Upon command, the engine 447 will have a component reciprocate to effect movement of the hood 441 to either open the chuck 351 for receipt of a shank 311 therein or for the release of a shank 311 therefrom.
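The interplay between the gear reduction of the transmission 425 and the torque limit enforced via the torque sensor can be sketched as follows. This Python fragment is illustrative only: the motor and sensor interfaces, the gear ratio, and the torque threshold are assumptions, and in the described embodiment over-torque protection may additionally be provided mechanically by the frangible terminal end portion 363.

def drive_fastener(motor, torque_sensor, gear_ratio=64.0, torque_limit_nm=5.0):
    """Run the rotary driver 421 and stop at a commanded torque limit.

    With a reduction of gear_ratio in the transmission 425, chuck speed is
    motor speed divided by gear_ratio, while (neglecting losses) chuck
    torque is motor torque multiplied by gear_ratio.
    """
    motor.start(direction="forward")
    while True:
        applied = torque_sensor.read_nm()    # torque applied at the chuck 351
        if applied >= torque_limit_nm:       # stop before unwanted breakage
            motor.stop()
            return applied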
All patents and publications mentioned in this specification are indicative of the levels of those skilled in the art to which the invention pertains. It is to be understood that while a certain form of the invention is illustrated, it is not to be limited to the specific form or arrangement herein described and shown. It will be apparent to those skilled in the art that various changes may be made without departing from the scope of the invention, and the invention is not to be considered limited to what is shown and described in the specification and any drawings/figures included herein. One skilled in the art will readily appreciate that the present invention is well adapted to carry out the objectives and obtain the ends and advantages mentioned, as well as those inherent therein. The embodiments, methods, procedures and techniques described herein are presently representative of the preferred embodiments, are intended to be exemplary, and are not intended as limitations on the scope. Changes therein and other uses will occur to those skilled in the art which are encompassed within the spirit of the invention and are defined by the scope of the appended claims. Although the invention has been described in connection with specific preferred embodiments, it should be understood that the invention as claimed should not be unduly limited to such specific embodiments. Indeed, various modifications of the described modes for carrying out the invention, which are obvious to those skilled in the art, are intended to be within the scope of the following claims. | 15,052 |
11857352 | DETAILED DESCRIPTION OF THE INVENTION The present invention provides a composition comprising the compound having the structure: or a salt of the compound; 4-amino-2-[19F]-fluorobenzoic acid or a salt of 4-amino-2-[19F]-fluorobenzoic acid; and at least one acceptable carrier. The present invention provides a composition comprising the compound having the structure: or a salt of the compound; and 4-amino-2-[19F]-fluorobenzoic acid or a salt of 4-amino-2-[19F]-fluorobenzoic acid. In some embodiments, the ratio of the 4-amino-2-[18F]-fluorobenzoic acid to the 4-amino-2-[19F]-fluorobenzoic acid is in a range from 1:90 to 1:550. In some embodiments, the ratio of the 4-amino-2-[18F]-fluorobenzoic acid to the 4-amino-2-[19F]-fluorobenzoic acid is in a range from 1:99 to 1:500. In some embodiments, the ratio of the compound, i.e. the 4-amino-2-[18F]-fluorobenzoic acid, to the 4-amino-2-[19F]-fluorobenzoic acid is in a range from 1:324 to 1:500. In some embodiments, the ratio of the compound, i.e. the 4-amino-2-[18F]-fluorobenzoic acid, to the 4-amino-2-[19F]-fluorobenzoic acid is in a range from 1:300 to 1:500. In some embodiments, the ratio of the compound, i.e. the 4-amino-2-[18F]-fluorobenzoic acid, to the 4-amino-2-[19F]-fluorobenzoic acid is in a range from 1:99 to 1:324. In some embodiments, the ratio of the compound, i.e. the 4-amino-2-[18F]-fluorobenzoic acid, to the 4-amino-2-[19F]-fluorobenzoic acid is in a range from 1:90 to 1:100. In some embodiments, the ratio of the compound to the 4-amino-2-fluorobenzoic acid is about 1:99. In some embodiments, the ratio of the compound to the 4-amino-2-fluorobenzoic acid is about 1:100. In some embodiments, the ratio of the compound to the 4-amino-2-fluorobenzoic acid is about 1:134. In some embodiments, the ratio of the compound to the 4-amino-2-fluorobenzoic acid is about 1:500. In some embodiments, the radiochemical purity of the 4-amino-2-[18F]-fluorobenzoic acid is at least 90%. In some embodiments, the radiochemical purity of the 4-amino-2-[18F]-fluorobenzoic acid is at least 93.5%. In some embodiments, the radiochemical purity of the 4-amino-2-[18F]-fluorobenzoic acid is at least 95%. In some embodiments, the radiochemical purity of the 4-amino-2-[18F]-fluorobenzoic acid is at least 97.5%. In some embodiments, the radiochemical purity of the 4-amino-2-[18F]-fluorobenzoic acid is at least 99%. In some embodiments, the composition further comprises 2-(fluoro-18F)-4-nitrobenzoic acid or a salt of 2-(fluoro-18F)-4-nitrobenzoic acid. In some embodiments, the composition further comprises 4-amino-2-(fluoro-18F)benzonitrile or a salt of 4-amino-2-(fluoro-18F)benzonitrile. The present invention provides a process for preparing the composition of the present invention comprising admixing at least one carrier with an amount of a compound having the structure: or a salt of the compound.
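The 18F:19F ratios recited above can be connected to a measured specific activity by a short worked calculation. The derivation below is illustrative only and is not part of the specification; it assumes the theoretical maximum molar activity of carrier-free 18F, about 1.71 x 10^3 Ci/umol, and a molecular weight of about 155.1 g/mol for F-PABA:

\[
\frac{N_{^{18}\mathrm{F}}}{N_{^{19}\mathrm{F}}} \approx \frac{A_{\mathrm{meas}}}{A_{\mathrm{max}}},
\qquad A_{\mathrm{max}}\!\left({}^{18}\mathrm{F}\right) \approx 1.71 \times 10^{3}\ \mathrm{Ci/\mu mol}.
\]

For example, a specific activity of 34 mCi/ug corresponds to 34 x 155.1, or about 5.3 x 10^3 mCi/umol = 5.3 Ci/umol, giving a ratio of roughly 5.3/1710, i.e. about 1:324, consistent with the 1:324 endpoint recited above; a specific activity of 19 mCi/ug similarly corresponds to roughly 1:580.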
In some embodiments, a process for preparing the compound having the structure: which comprises: (a) reacting 2,4-dinitrobenzonitrile with a [18F] fluorinating agent to obtain the compound having the structure: (b) hydrolyzing the nitrile group in the compound obtained in step (a) to obtain the compound having the structure: (c) reacting the compound obtained in step (b) with a reducing agent to obtain the compound having the structure: In some embodiments, a process for preparing the compound having the structure: which comprises: (a) reacting 2,4-dinitrobenzonitrile with a [18F] fluorinating agent to obtain the compound having the structure: (b) reacting the compound obtained in step (a) with a reducing agent to obtain the compound having the structure: and (c) hydrolyzing the nitrile group in the compound obtained in step (b) to obtain the compound having the structure: In some embodiments, the process wherein the [18F] fluorinating agent is potassium [18F] fluoride or tetra-n-butylammonium [18F] fluoride. In some embodiments, the process wherein step (a) further comprises a chelating agent. In some embodiments, the process wherein the chelating agent is a crown ether. In some embodiments, the process wherein the chelating agent is 4,7,13,16,21,24-Hexaoxa-1,10-diazabicyclo[8.8.8]hexacosane. In some embodiments, the process wherein step (a) further comprises a base. In some embodiments, the process wherein the base is potassium carbonate. In some embodiments, the process wherein step (a) is performed at room temperature. In some embodiments, the process wherein in step (b) the hydrolysis is facilitated by aqueous solution of a base. In some embodiments, the process wherein the base is potassium hydroxide. In some embodiments, the process wherein step (b) is performed at a temperature of 80-110° C. In some embodiments, the process wherein step (b) is performed at a temperature of about 105° C. In some embodiments, the process wherein in step (c) the hydrolysis is facilitated by aqueous solution of a base. In some embodiments, the process wherein the base is potassium hydroxide. In some embodiments, the process wherein step (c) is performed at a temperature of 80-110° C. In some embodiments, the process wherein step (c) is performed at a temperature of about 105° C. In some embodiments, the process wherein in step (c) the reducing agent is palladium-on-carbon, platinum (IV) oxide, nickel, nickel-aluminium alloy, spongy nickel, tin(II) chloride, titanium(III) chloride, iron metal or zinc metal. In some embodiments, the process wherein in step (c) the reducing agent is zinc metal. In some embodiments, the process wherein step (c) is performed at a temperature of 80-110° C. In some embodiments, the process wherein step (c) is performed at a temperature of about 105° C. In some embodiments, the process wherein steps (a) and (b) are conducted in the same pot; and step (c) is conducted in a separate pot. In some embodiments, the process wherein in step (b) the reducing agent is palladium-on-carbon, platinum (IV) oxide, nickel, nickel-aluminum alloy, spongy nickel, tin(II) chloride, titanium(III) chloride, iron metal or zinc metal. In some embodiments, the process wherein in step (b) the reducing agent is zinc metal. In some embodiments, the process wherein step (b) is performed at a temperature of 80-110° C. In some embodiments, the process wherein step (b) is performed at a temperature of about 105° C. 
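For orientation, the reaction conditions recited in the embodiments above can be restated in a compact form. The following Python data structure is only a summary of what the text already discloses (shown for the hydrolysis-then-reduction ordering; the specification also describes reducing before hydrolyzing), not executable process control.

# Compact restatement of the disclosed three-step route to [18F]F-PABA.
ROUTE_18F_F_PABA = (
    {"step": "a", "reaction": "SNAr [18F]fluorination of 2,4-dinitrobenzonitrile",
     "fluorinating_agent": "K[18F]F or n-Bu4N[18F]F",
     "optional": ["crown-ether chelator (e.g., Kryptofix 2.2.2)", "K2CO3 base"],
     "temperature": "room temperature"},
    {"step": "b", "reaction": "nitrile -> carboxylic acid hydrolysis",
     "reagent": "aqueous KOH", "temperature": "80-110 C, about 105 C"},
    {"step": "c", "reaction": "nitro -> amine reduction",
     "reagent": "Zn (alternatives: Pd/C, PtO2, Ni, Ni-Al alloy, SnCl2, TiCl3, Fe)",
     "temperature": "80-110 C, about 105 C"},
)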
In some embodiments, the process wherein steps (a) and (b) are conducted in the same pot; and step (c) is conducted in a separate pot. In some embodiments, a composition comprising the compound having the structure: or a salt of the compound; 4-amino-2-[19F]-fluorobenzoic acid or a salt of 4-amino-2-[19F]-fluorobenzoic acid; and at least one acceptable carrier, wherein the compound is prepared by any of the processes disclosed above. The present invention further provides a method of detecting the presence of an infectious bacteria in a subject which comprises determining if an amount of the compound having the structure: is present in the subject at a period of time after administration of the compound or salt thereof to the subject, thereby detecting the presence of the infectious bacteria based on the amount of the compound determined to be present in the subject. The present invention further provides a method of detecting the location of bacteria cells in a subject afflicted with an infection of the bacteria which comprises determining where an amount of the compound having the structure: is present in the subject at a period of time after administration of the compound or salt thereof to the subject, thereby detecting the location of the bacteria cells based on the location of the compound determined to be present in the subject. In some embodiments, the method further comprises quantifying the amount of the compound in the subject and comparing the quantity to a predetermined control. In some embodiments, the method further comprises determining the level of infection in the subject based on the amount of the compound in the subject. In some embodiments, the method wherein the determining is performed by a Positron Emission Tomography (PET) device. In some embodiments, the method wherein the bacteria cells express dihydropteroate synthase (DHPS). In some embodiments, the method wherein the subject is afflicted with a gram-negative bacterial infection other than Enterococcus faecalis. In some embodiments, the method wherein the subject is afflicted with a gram-positive bacterial infection. In some embodiments, the method wherein the subject is afflicted with a Mycobacterium tuberculosis bacterial infection. In some embodiments, the method wherein the subject is afflicted with a Methicillin-sensitive Staphylococcus aureus bacterial infection. In some embodiments, the method wherein the subject is afflicted with a Methicillin-resistant Staphylococcus aureus bacterial infection. In some embodiments, the method further comprises subjecting the subject to antibiotic treatment when the presence of an infectious bacteria or the location of bacteria cells is detected. In some embodiments, the method wherein the antibiotic is a penicillin, a cephalosporin, a macrolide, a fluoroquinolone, a tetracycline, a carbapenem or an aminoglycoside antibiotic. In some embodiments, a method of imaging bacteria cells in a subject afflicted with an infection of the bacteria which comprises: (i) administering to the subject a composition comprising the compound having the structure: or a salt of the compound, and at least one acceptable carrier; (ii) imaging at least a portion of the subject; (iii) detecting in the subject the location of the compound, thereby determining the location of the bacteria cells present in the subject based on the location of the compound in the subject; and (iv) obtaining an image of the location of the bacteria cells.
In some embodiments, the above method further comprises: (v) repeating steps (i)-(iv) one or more times. In some embodiments, a method of determining the location of bacteria cells in a subject afflicted with an infection of the bacteria which comprises: (i) administering to the subject a composition comprising the compound having the structure: or a salt of the compound, and at least one acceptable carrier; (ii) allowing a sufficient period of time for bacteria cells in the subject to take up the compound; (iii) imaging at least a portion of the subject; and (iv) detecting in the subject the location of the compound, thereby determining the location of the bacteria cells present in the subject based on the location of the compound in the subject. In some embodiments of any of the disclosed methods, the compound accumulates in the cells of the bacteria. In some embodiments of any of the disclosed methods, the compound accumulates in the cells of the bacteria by incorporation into the folate biosynthesis pathway of the bacteria. 4-Amino-2-[18F]-fluorobenzoic acid ([18F]F-PABA) has the following structure: The present invention also provides a composition comprising [18F]F-PABA. The present invention also provides a composition comprising [18F]F-PABA and a pharmaceutically acceptable carrier. In some embodiments, a method for the detection of bacteria cells in a subject comprising: (i) administering to the subject an effective amount of the composition of the present invention; (ii) allowing a sufficient period of time for the bacteria cells to take up the [18F]F-PABA in the composition; and (iii) determining whether the bacteria cells are present in the host by detecting the [18F]F-PABA in the subject. In some embodiments, a method of imaging bacteria cells in a subject afflicted with an infection of the bacteria which comprises: (i) administering to the subject an effective amount of the composition of the present invention; (ii) imaging at least a portion of the subject; (iii) detecting in the subject the location of the [18F]F-PABA, thereby determining the location of the bacteria cells present in the subject based on the location of the [18F]F-PABA in the subject; (iv) obtaining an image of the location of the bacteria cells; and optionally the following step: (v) repeating steps (i)-(iv) one or more times. In some embodiments, a method of determining the location of bacteria cells in a subject afflicted with an infection of the bacteria which comprises: (i) administering to the subject an effective amount of the composition of the present invention; (ii) allowing a sufficient period of time for bacteria cells in the subject to take up the [18F]F-PABA; (iii) imaging at least a portion of the subject; and (iv) detecting in the subject the location of the [18F]F-PABA, thereby determining the location of the bacteria cells present in the subject based on the location of the [18F]F-PABA in the subject. In some embodiments of any of the disclosed methods, the bacteria lacks a folate salvage pathway. In some embodiments of any of the disclosed methods, the bacteria is methicillin-sensitive S. aureus (MSSA), methicillin-resistant S. aureus (MRSA), the Gram-negative bacterium E. coli, the Gram-negative bacterium Klebsiella pneumoniae, or Mycobacterium tuberculosis. In some embodiments of any of the disclosed methods, the [18F]F-PABA accumulates in the cells of the bacteria by incorporation into the folate biosynthesis pathway. In some embodiments of any of the disclosed methods, the [18F]F-PABA is a substrate for DHPS.
In some embodiments of any of the disclosed methods, DHPS catalyzes the condensation of 6-hydroxymethyl-7,8-dihydropteridine pyrophosphate with the [18F]F-PABA to form 4-(((2-amino-4-oxo-3,4-dihydropteridin-6-yl)methyl)amino)-2-[18F]fluorobenzoic acid. In some embodiments, the infectious bacteria is a Gram-negative bacteria or a Gram-positive bacteria. In some embodiments, the infectious bacteria is Mycobacterium tuberculosis. In some embodiments, the Gram-negative bacteria infection is drug-resistant or multi-drug resistant. In some embodiments, the Gram-positive bacteria infection is drug-resistant or multi-drug resistant. In some embodiments, the Mycobacterium tuberculosis infection is drug-resistant or multi-drug resistant. In some embodiments, the Gram-negative bacteria cells are drug-resistant or multi-drug resistant. In some embodiments, the Gram-positive bacteria cells are drug-resistant or multi-drug resistant. In some embodiments, the method wherein the Mycobacterium tuberculosis cells are drug-resistant or multi-drug resistant. In some embodiments, the Gram-negative bacteria is Escherichia coli, Klebsiella pneumoniae, Burkholderia cepacia, Pseudomonas aeruginosa or Acinetobacter baumannii. In some embodiments, the Gram-negative bacteria is other than Enterococcus faecalis. In some embodiments, the Gram-positive bacteria is Enterococcus faecalis, Enterococcus faecium, Staphylococcus aureus, Staphylococcus epidermidis, Staphylococcus haemolyticus, Staphylococcus lugdunensis, Staphylococcus saprophyticus, Staphylococcus hominis, Staphylococcus capitis, Streptococcus intermedius, Streptococcus anginosus, Streptococcus constellatus, Streptococcus pneumoniae, Streptobacillus moniliformis, Streptococcus pyogenes, Streptococcus agalactiae, Actinomyces israelii, Arcanobacterium haemolyticum, Bacillus anthracis, Bacillus cereus, Bacillus subtilis, Clostridium difficile, Clostridium perfringens, Clostridium tetani, Corynebacterium diphtheriae, Corynebacterium jeikeium, Corynebacterium urealyticum, Erysipelothrix rhusiopathiae, Listeria monocytogenes, Nocardia asteroides, Nocardia brasiliensis, Propionibacterium acnes or Rhodococcus equi. In one embodiment, the antibiotic includes, but is not limited to, fluoroquinolones, tetracyclines, macrolides, glycopeptides, sulfonamides, aminoglycosides, cephalosporins and/or penicillins. In one embodiment, the antibiotic is selected from the group consisting of ampicillin, piperacillin, penicillin G, ticarcillin, imipenem, meropenem, azithromycin, erythromycin, aztreonam, cefepime, cefotaxime, ceftriaxone, ceftazidime, ciprofloxacin, levofloxacin, clindamycin, doxycycline, gentamycin, amikacin, tobramycin, tetracycline, tigecycline, rifampicin, vancomycin and polymyxin. In one embodiment, the antibiotic is selected from the group consisting of gentamicin, amikacin, tobramycin, ciprofloxacin, levofloxacin, ceftazidime, cefepime, cefoperazone, cefpirome, ceftobiprole, carbenicillin, ticarcillin, mezlocillin, azlocillin, piperacillin, meropenem, imipenem, doripenem, polymyxin B, colistin and aztreonam. In one embodiment, the subject is afflicted with osteomyelitis or endocarditis. In one embodiment, the subject is afflicted with diabetes. In one embodiment, a process for manufacturing a composition which comprises obtaining the compound having the structure: and combining the compound with a carrier so as to thereby manufacture the composition.
In one embodiment, a process for manufacturing a composition which comprises obtaining the compound having the structure: by any one of the processes disclosed herein and combining the compound with a carrier so as to thereby manufacture the composition. As used herein, a "symptom" associated with a disease or disorder includes any clinical or laboratory manifestation associated with the disease or disorder and is not limited to what the subject can feel or observe. As used herein, "treating", e.g. of an infection, encompasses inducing prevention, inhibition, regression, or stasis of the disease or a symptom or condition associated with the infection. The compounds of the present invention include all hydrates, solvates, and complexes of the compounds used by this invention. If a chiral center or another form of an isomeric center is present in a compound of the present invention, all forms of such isomer or isomers, including enantiomers and diastereomers, are intended to be covered herein. Compounds containing a chiral center may be used as a racemic mixture or an enantiomerically enriched mixture, or the racemic mixture may be separated using well-known techniques and an individual enantiomer may be used alone. The compounds described in the present invention are in racemic form or are individual enantiomers. The enantiomers can be separated using known techniques, such as those described in Pure and Applied Chemistry 69, 1469-1474, (1997) IUPAC. In cases in which compounds have unsaturated carbon-carbon double bonds, both the cis (Z) and trans (E) isomers are within the scope of this invention. The compounds of the subject invention may have spontaneous tautomeric forms. In cases wherein compounds may exist in tautomeric forms, such as keto-enol tautomers, each tautomeric form is contemplated as being included within this invention, whether existing in equilibrium or predominantly in one form. In the compound structures depicted herein, hydrogen atoms are not shown for carbon atoms having less than four bonds to non-hydrogen atoms. However, it is understood that enough hydrogen atoms exist on said carbon atoms to satisfy the octet rule. This invention also provides isotopic variants of the compounds disclosed herein, including wherein the isotopic atom is 2H and/or wherein the isotopic atom is 13C. Accordingly, in the compounds provided herein hydrogen can be enriched in the deuterium isotope. It is to be understood that the invention encompasses all such isotopic forms. It is understood that the structures described in the embodiments of the methods hereinabove can be the same as the structures of the compounds described hereinabove. It is understood that where a numerical range is recited herein, the present invention contemplates each integer between, and including, the upper and lower limits, unless otherwise stated. Except where otherwise specified, if the structure of a compound of this invention includes an asymmetric carbon atom, it is understood that the compound may occur as a racemate, a racemic mixture, or an isolated single enantiomer. All such isomeric forms of these compounds are expressly included in this invention. Except where otherwise specified, each stereogenic carbon may be of the R or S configuration. It is to be understood accordingly that the isomers arising from such asymmetry (e.g., all enantiomers and diastereomers) are included within the scope of this invention, unless indicated otherwise.
Such isomers can be obtained in substantially pure form by classical separation techniques and by stereochemically controlled synthesis, such as those described in "Enantiomers, Racemates and Resolutions" by J. Jacques, A. Collet and S. Wilen, Pub. John Wiley & Sons, NY, 1981. For example, the resolution may be carried out by preparative chromatography on a chiral column. The subject invention is also intended to include all isotopes of atoms occurring on the compounds disclosed herein. Isotopes include those atoms having the same atomic number but different mass numbers. By way of general example and without limitation, isotopes of hydrogen include tritium and deuterium. Isotopes of carbon include C-13 and C-14. It will be noted that any notation of a carbon in structures throughout this application, when used without further notation, is intended to represent all isotopes of carbon, such as 12C, 13C, or 14C. Furthermore, any compounds containing 13C or 14C may specifically have the structure of any of the compounds disclosed herein. It will also be noted that any notation of a hydrogen in structures throughout this application, when used without further notation, is intended to represent all isotopes of hydrogen, such as 1H, 2H, or 3H. Furthermore, any compounds containing 2H or 3H may specifically have the structure of any of the compounds disclosed herein. Isotopically-labeled compounds can generally be prepared by conventional techniques known to those skilled in the art using appropriate isotopically-labeled reagents in place of the non-labeled reagents employed. In the compounds used in the method of the present invention, the substituents may be substituted or unsubstituted, unless specifically defined otherwise. It is understood that substituents and substitution patterns on the compounds used in the method of the present invention can be selected by one of ordinary skill in the art to provide compounds that are chemically stable and that can be readily synthesized by techniques known in the art from readily available starting materials. If a substituent is itself substituted with more than one group, it is understood that these multiple groups may be on the same carbon or on different carbons, so long as a stable structure results. In choosing the compounds used in the method of the present invention, one of ordinary skill in the art will recognize that the various substituents, i.e. R1, R2, etc., are to be chosen in conformity with well-known principles of chemical structure connectivity. It is understood that substituents and substitution patterns on the compounds of the instant invention can be selected by one of ordinary skill in the art to provide compounds that are chemically stable and that can be readily synthesized by techniques known in the art, as well as those methods set forth below, from readily available starting materials. If a substituent is itself substituted with more than one group, it is understood that these multiple groups may be on the same carbon or on different carbons, so long as a stable structure results. In choosing the compounds of the present invention, one of ordinary skill in the art will recognize that the various substituents, i.e. R1, R2, etc., are to be chosen in conformity with well-known principles of chemical structure connectivity.
The various R groups attached to the aromatic rings of the compounds disclosed herein may be added to the rings by standard procedures, for example those set forth in Advanced Organic Chemistry: Part B: Reaction and Synthesis, Francis Carey and Richard Sundberg, (Springer), 5th Edition (2007), the content of which is hereby incorporated by reference. The compounds used in the method of the present invention may be prepared by techniques well known in organic synthesis and familiar to a practitioner ordinarily skilled in the art. However, these may not be the only means by which to synthesize or obtain the desired compounds. The compounds used in the method of the present invention may be prepared by techniques described in Vogel's Textbook of Practical Organic Chemistry, A. I. Vogel, A. R. Tatchell, B. S. Furnis, A. J. Hannaford, P. W. G. Smith, (Prentice Hall), 5th Edition (1996), March's Advanced Organic Chemistry: Reactions, Mechanisms, and Structure, Michael B. Smith, Jerry March, (Wiley-Interscience), 5th Edition (2007), and references therein, which are incorporated by reference herein. However, these may not be the only means by which to synthesize or obtain the desired compounds. Another aspect of the invention comprises a compound used in the method of the present invention as a pharmaceutical composition. In some embodiments, a pharmaceutical composition comprising the compound of the present invention and a pharmaceutically acceptable carrier. As used herein, the term "pharmaceutically active agent" means any substance or compound suitable for administration to a subject and furnishing biological activity or another direct effect in the treatment, cure, mitigation, diagnosis, or prevention of disease, or affecting the structure or any function of the subject. Pharmaceutically active agents include, but are not limited to, substances and compounds described in the Physicians' Desk Reference (PDR Network, LLC; 64th edition; Nov. 15, 2009) and "Approved Drug Products with Therapeutic Equivalence Evaluations" (U.S. Department of Health and Human Services, 30th edition, 2010), which are hereby incorporated by reference. Pharmaceutically active agents which have pendant carboxylic acid groups may be modified in accordance with the present invention using standard esterification reactions and methods readily available and known to those having ordinary skill in the art of chemical synthesis. Where a pharmaceutically active agent does not possess a carboxylic acid group, the ordinarily skilled artisan will be able to design and incorporate a carboxylic acid group into the pharmaceutically active agent where esterification may subsequently be carried out, so long as the modification does not interfere with the pharmaceutically active agent's biological activity or effect. The compounds used in the method of the present invention may be in a salt form. As used herein, a "salt" is a salt of the instant compounds which has been modified by making acid or base salts of the compounds. In the case of compounds used to treat an infection or disease caused by a pathogen, the salt is pharmaceutically acceptable. Examples of pharmaceutically acceptable salts include, but are not limited to, mineral or organic acid salts of basic residues such as amines, and alkali or organic salts of acidic residues such as phenols. The salts can be made using an organic or inorganic acid.
Such acid salts are chlorides, bromides, sulfates, nitrates, phosphates, sulfonates, formates, tartrates, maleates, malates, citrates, benzoates, salicylates, ascorbates, and the like. Phenolate salts are the alkali metal salts, e.g., sodium, potassium or lithium. The term "pharmaceutically acceptable salt" in this respect refers to the relatively non-toxic, inorganic and organic acid or base addition salts of compounds of the present invention. These salts can be prepared in situ during the final isolation and purification of the compounds of the invention, or by separately reacting a purified compound of the invention in its free base or free acid form with a suitable organic or inorganic acid or base, and isolating the salt thus formed. Representative salts include the hydrobromide, hydrochloride, sulfate, bisulfate, phosphate, nitrate, acetate, valerate, oleate, palmitate, stearate, laurate, benzoate, lactate, phosphate, tosylate, citrate, maleate, fumarate, succinate, tartrate, naphthylate, mesylate, glucoheptonate, lactobionate, and laurylsulphonate salts and the like. (See, e.g., Berge et al. (1977) "Pharmaceutical Salts", J. Pharm. Sci. 66:1-19). The compounds of the present invention may also form salts with basic amino acids such as lysine, arginine, etc. and with basic sugars such as N-methylglucamine, 2-amino-2-deoxyglucose, etc. and any other physiologically non-toxic basic substance. As used herein, "administering" an agent may be performed using any of the various methods or delivery systems well known to those skilled in the art. The administering can be performed, for example, orally, parenterally, intraperitoneally, intravenously, intraarterially, transdermally, sublingually, intramuscularly, rectally, transbuccally, intranasally, liposomally, via inhalation, vaginally, intraocularly, via local delivery, subcutaneously, intraadiposally, intraarticularly, intrathecally, into a cerebral ventricle, intraventricularly, intratumorally, into cerebral parenchyma or intraparenchymally. The compounds used in the method of the present invention may be administered in various forms, including those detailed herein. The treatment with the compound may be a component of a combination therapy or an adjunct therapy, i.e. the subject or patient in need of the drug is treated or given another drug for the disease in conjunction with one or more of the instant compounds. This combination therapy can be sequential therapy, where the patient is treated first with one drug and then the other, or the two drugs can be given simultaneously. These can be administered independently by the same route or by two or more different routes of administration depending on the dosage forms employed. As used herein, a "pharmaceutically acceptable carrier" is a pharmaceutically acceptable solvent, suspending agent or vehicle for delivering the instant compounds to the animal or human. The carrier may be liquid or solid and is selected with the planned manner of administration in mind. Liposomes are also a pharmaceutically acceptable carrier, as are slow-release vehicles.
The dosage of the compounds administered in treatment will vary depending upon factors such as the pharmacodynamic characteristics of a specific chemotherapeutic agent and its mode and route of administration; the age, sex, metabolic rate, absorptive efficiency, health and weight of the recipient; the nature and extent of the symptoms; the kind of concurrent treatment being administered; the frequency of treatment; and the desired therapeutic effect. A dosage unit of the compounds used in the method of the present invention may comprise a single compound or mixtures thereof with additional antitumor agents. The compounds can be administered in oral dosage forms as tablets, capsules, pills, powders, granules, elixirs, tinctures, suspensions, syrups, and emulsions. The compounds may also be administered in intravenous (bolus or infusion), intraperitoneal, subcutaneous, or intramuscular form, or introduced directly, e.g. by injection, topical application, or other methods, into or topically onto a site of disease or lesion, all using dosage forms well known to those of ordinary skill in the pharmaceutical arts. The compounds used in the method of the present invention can be administered in admixture with suitable pharmaceutical diluents, extenders, excipients, or carriers, such as the novel programmable sustained-release multi-compartmental nanospheres (collectively referred to herein as a pharmaceutically acceptable carrier), suitably selected with respect to the intended form of administration and as consistent with conventional pharmaceutical practices. The unit will be in a form suitable for oral, nasal, rectal, topical, intravenous or direct injection or parenteral administration. The compounds can be administered alone or mixed with a pharmaceutically acceptable carrier. This carrier can be a solid or liquid, and the type of carrier is generally chosen based on the type of administration being used. The active agent can be co-administered in the form of a tablet or capsule, liposome, as an agglomerated powder or in a liquid form. Examples of suitable solid carriers include lactose, sucrose, gelatin and agar. Capsules or tablets can be easily formulated and can be made easy to swallow or chew; other solid forms include granules and bulk powders. Tablets may contain suitable binders, lubricants, diluents, disintegrating agents, coloring agents, flavoring agents, flow-inducing agents, and melting agents. Examples of suitable liquid dosage forms include solutions or suspensions in water, pharmaceutically acceptable fats and oils, alcohols or other organic solvents, including esters, emulsions, syrups or elixirs, suspensions, and solutions and/or suspensions reconstituted from non-effervescent granules and effervescent preparations reconstituted from effervescent granules. Such liquid dosage forms may contain, for example, suitable solvents, preservatives, emulsifying agents, suspending agents, diluents, sweeteners, thickeners, and melting agents. Oral dosage forms optionally contain flavorants and coloring agents. Parenteral and intravenous forms may also include minerals and other materials to make them compatible with the type of injection or delivery system chosen.
Techniques and compositions for making dosage forms useful in the present invention are described in the following references: 7 Modern Pharmaceutics, Chapters 9 and 10 (Banker & Rhodes, Editors, 1979); Pharmaceutical Dosage Forms: Tablets (Lieberman et al., 1981); Ansel, Introduction to Pharmaceutical Dosage Forms, 2nd Edition (1976); Remington's Pharmaceutical Sciences, 17th ed. (Mack Publishing Company, Easton, Pa., 1985); Advances in Pharmaceutical Sciences (David Ganderton, Trevor Jones, Eds., 1992); Advances in Pharmaceutical Sciences, Vol. 7 (David Ganderton, Trevor Jones, James McGinity, Eds., 1995); Aqueous Polymeric Coatings for Pharmaceutical Dosage Forms (Drugs and the Pharmaceutical Sciences, Series 36) (James McGinity, Ed., 1989); Pharmaceutical Particulate Carriers: Therapeutic Applications (Drugs and the Pharmaceutical Sciences, Vol. 61) (Alain Rolland, Ed., 1993); Drug Delivery to the Gastrointestinal Tract (Ellis Horwood Books in the Biological Sciences, Series in Pharmaceutical Technology; J. G. Hardy, S. S. Davis, Clive G. Wilson, Eds.); Modern Pharmaceutics (Drugs and the Pharmaceutical Sciences, Vol. 40) (Gilbert S. Banker, Christopher T. Rhodes, Eds.). All of the aforementioned publications are incorporated by reference herein. Tablets may contain suitable binders, lubricants, disintegrating agents, coloring agents, flavoring agents, flow-inducing agents, and melting agents. For instance, for oral administration in the dosage unit form of a tablet or capsule, the active drug component can be combined with an oral, non-toxic, pharmaceutically acceptable, inert carrier such as lactose, gelatin, agar, starch, sucrose, glucose, methyl cellulose, magnesium stearate, dicalcium phosphate, calcium sulfate, mannitol, sorbitol and the like. Suitable binders include starch, gelatin, natural sugars such as glucose or beta-lactose, corn sweeteners, natural and synthetic gums such as acacia, tragacanth, or sodium alginate, carboxymethylcellulose, polyethylene glycol, waxes, and the like. Lubricants used in these dosage forms include sodium oleate, sodium stearate, magnesium stearate, sodium benzoate, sodium acetate, sodium chloride, and the like. Disintegrators include, without limitation, starch, methyl cellulose, agar, bentonite, xanthan gum, and the like. The compounds used in the method of the present invention may also be administered in the form of liposome delivery systems, such as small unilamellar vesicles, large unilamellar vesicles, and multilamellar vesicles. Liposomes can be formed from a variety of phospholipids such as lecithin, sphingomyelin, proteolipids, protein-encapsulated vesicles or from cholesterol, stearylamine, or phosphatidylcholines. The compounds may be administered as components of tissue-targeted emulsions. The compounds used in the method of the present invention may also be coupled to soluble polymers as targetable drug carriers or as a prodrug. Such polymers include polyvinylpyrrolidone, pyran copolymer, polyhydroxylpropylmethacrylamide-phenol, polyhydroxyethylaspartamidephenol, or polyethyleneoxide-polylysine substituted with palmitoyl residues. Furthermore, the compounds may be coupled to a class of biodegradable polymers useful in achieving controlled release of a drug, for example, polylactic acid, polyglycolic acid, copolymers of polylactic and polyglycolic acid, poly(epsilon-caprolactone), polyhydroxybutyric acid, polyorthoesters, polyacetals, polydihydropyrans, polycyanoacrylates, and crosslinked or amphipathic block copolymers of hydrogels.
Gelatin capsules may contain the active ingredient compounds and powdered carriers, such as lactose, starch, cellulose derivatives, magnesium stearate, stearic acid, and the like. Similar diluents can be used to make compressed tablets. Both tablets and capsules can be manufactured as immediate release products or as sustained release products to provide for continuous release of medication over a period of hours. Compressed tablets can be sugar-coated or film-coated to mask any unpleasant taste and protect the tablet from the atmosphere, or enteric coated for selective disintegration in the gastrointestinal tract. For oral administration in liquid dosage form, the oral drug components are combined with any oral, non-toxic, pharmaceutically acceptable inert carrier such as ethanol, glycerol, water, and the like. Examples of suitable liquid dosage forms include solutions or suspensions in water, pharmaceutically acceptable fats and oils, alcohols or other organic solvents, including esters, emulsions, syrups or elixirs, suspensions, and solutions and/or suspensions reconstituted from non-effervescent granules and effervescent preparations reconstituted from effervescent granules. Such liquid dosage forms may contain, for example, suitable solvents, preservatives, emulsifying agents, suspending agents, diluents, sweeteners, thickeners, and melting agents. Liquid dosage forms for oral administration can contain coloring and flavoring to increase patient acceptance. In general, water, a suitable oil, saline, aqueous dextrose (glucose), and related sugar solutions and glycols such as propylene glycol or polyethylene glycols are suitable carriers for parenteral solutions. Solutions for parenteral administration preferably contain a water-soluble salt of the active ingredient, suitable stabilizing agents, and, if necessary, buffer substances. Antioxidizing agents such as sodium bisulfite, sodium sulfite, or ascorbic acid, either alone or combined, are suitable stabilizing agents. Also used are citric acid and its salts and sodium EDTA. In addition, parenteral solutions can contain preservatives, such as benzalkonium chloride, methyl- or propyl-paraben, and chlorobutanol. Suitable pharmaceutical carriers are described in Remington's Pharmaceutical Sciences, Mack Publishing Company, a standard reference text in this field. The compounds used in the method of the present invention may also be administered in intranasal form via use of suitable intranasal vehicles, or via transdermal routes, using those forms of transdermal skin patches well known to those of ordinary skill in that art. To be administered in the form of a transdermal delivery system, the dosage administration will generally be continuous rather than intermittent throughout the dosage regimen. Parenteral and intravenous forms may also include minerals and other materials such as solutol and/or ethanol to make them compatible with the type of injection or delivery system chosen. The compounds and compositions of the present invention can be administered in oral dosage forms as tablets, capsules, pills, powders, granules, elixirs, tinctures, suspensions, syrups, and emulsions. The compounds may also be administered in intravenous (bolus or infusion), intraperitoneal, subcutaneous, or intramuscular form, or introduced directly, e.g. by topical administration, injection or other methods, to the afflicted area, such as a wound, including ulcers of the skin, all using dosage forms well known to those of ordinary skill in the pharmaceutical arts.
Specific examples of pharmaceutically acceptable carriers and excipients that may be used to formulate oral dosage forms of the present invention are described in U.S. Pat. No. 3,903,297 to Robert, issued Sep. 2, 1975. Techniques and compositions for making dosage forms useful in the present invention are described in the following references: 7 Modern Pharmaceutics, Chapters 9 and 10 (Banker & Rhodes, Editors, 1979); Pharmaceutical Dosage Forms: Tablets (Lieberman et al., 1981); Ansel, Introduction to Pharmaceutical Dosage Forms, 2nd Edition (1976); Remington's Pharmaceutical Sciences, 17th ed. (Mack Publishing Company, Easton, Pa., 1985); Advances in Pharmaceutical Sciences (David Ganderton, Trevor Jones, Eds., 1992); Advances in Pharmaceutical Sciences, Vol. 7 (David Ganderton, Trevor Jones, James McGinity, Eds., 1995); Aqueous Polymeric Coatings for Pharmaceutical Dosage Forms (Drugs and the Pharmaceutical Sciences, Series 36) (James McGinity, Ed., 1989); Pharmaceutical Particulate Carriers: Therapeutic Applications (Drugs and the Pharmaceutical Sciences, Vol. 61) (Alain Rolland, Ed., 1993); Drug Delivery to the Gastrointestinal Tract (Ellis Horwood Books in the Biological Sciences, Series in Pharmaceutical Technology; J. G. Hardy, S. S. Davis, Clive G. Wilson, Eds.); Modern Pharmaceutics (Drugs and the Pharmaceutical Sciences, Vol. 40) (Gilbert S. Banker, Christopher T. Rhodes, Eds.). All of the aforementioned publications are incorporated by reference herein. The active ingredient can be administered orally in solid dosage forms, such as capsules, tablets, powders, and chewing gum; or in liquid dosage forms, such as elixirs, syrups, and suspensions, including, but not limited to, mouthwash and toothpaste. It can also be administered parenterally, in sterile liquid dosage forms. Solid dosage forms, such as capsules and tablets, may be enteric-coated to prevent release of the active ingredient compounds before they reach the small intestine. Materials that may be used as enteric coatings include, but are not limited to, sugars, fatty acids, proteinaceous substances such as gelatin, waxes, shellac, cellulose acetate phthalate (CAP), methyl acrylate-methacrylic acid copolymers, cellulose acetate succinate, hydroxypropyl methyl cellulose phthalate, hydroxypropyl methyl cellulose acetate succinate (hypromellose acetate succinate), polyvinyl acetate phthalate (PVAP), and methyl methacrylate-methacrylic acid copolymers. The compounds and compositions of the invention can be coated onto stents for temporary or permanent implantation into the cardiovascular system of a subject. Variations on those general synthetic methods will be readily apparent to those of ordinary skill in the art and are deemed to be within the scope of the present invention. Each embodiment disclosed herein is contemplated as being applicable to each of the other disclosed embodiments. Thus, all combinations of the various elements described herein are within the scope of the invention. This invention will be better understood by reference to the Experimental Details which follow, but those skilled in the art will readily appreciate that the specific experiments detailed are only illustrative of the invention as described more fully in the claims which follow thereafter. EXPERIMENTAL DETAILS Example 1. Synthesis of [18F]F-PABA 2,4-Dinitrobenzonitrile is used as the starting material.
The ortho-nitro group is replaced by 18F through a nucleophilic aromatic substitution reaction, followed by hydrolysis of the nitrile group to a carboxylic acid and then reduction of the p-nitro group to an amine (Scheme 1). The radiosynthesis, including purification and formulation, is accomplished in 120 min with a typical decay-corrected yield of 37.0% and a radiochemical purity of ~97.5%. Typical specific activity of the final tracer is 19 mCi/μg, which may range from ~5 mCi of tracer. In a GMP facility, 100 mCi of [18F]F-PABA may be produced.
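Decay-corrected and isolated yields can be interconverted through the 18F half-life of about 109.8 minutes. The relation and example below are an illustrative calculation, not part of the specification:

\[
Y_{\mathrm{dc}} = Y_{\mathrm{isolated}} \cdot 2^{\,t/t_{1/2}},
\qquad t_{1/2}\!\left({}^{18}\mathrm{F}\right) \approx 109.8\ \mathrm{min}.
\]

For the 120-minute preparation above, the decay factor is 2^(120/109.8), approximately 2.13, so the 37.0% decay-corrected yield corresponds to an isolated (non-corrected) yield of roughly 17%.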
Three-Step Synthesis (See Scheme 2) Step 1: [18F]Fluoride dissolved in ddH2O was transferred to a reaction vial containing Kryptofix 2.2.2 (8 mg) and potassium carbonate (1 mg). The solution was dried by azeotropic distillation, adding acetonitrile portionwise. The solid residue was re-solubilized with 0.2-0.3 ml of DMSO containing the required amount of the precursor 2,4-dinitrobenzonitrile (compound 1, 2 mg). The reaction mixture was stirred in the sealed vial for 10 min at RT. The color of the reaction mixture changed from yellow to maroon. The reaction mixture was then diluted with 10 ml H2O and run through a Waters Oasis plus HLB cartridge followed by a Waters C18 Sep-Pak. The loading procedure was carried out by air pushing or vacuum drawing at a flow rate around 0.8 mL/min. The air that drives the solvent through the cartridges was kept running for another minute after the elution procedure to ensure all solvent was pushed out. The two cartridges were then eluted with 3 ml MeOH. The MeOH solution was directed back into the original reactor and was dried under vacuum. Step 2: 1 ml of 2M KOH solution was added to the dried reaction vial. The reaction mixture was heated to 105° C. and stirred for 10 min. The reaction was quenched with 2 ml of 2M acetic acid solution and an additional 5 ml H2O. The reaction mixture was run through another Waters Oasis plus HLB cartridge followed by a Waters C18 Sep-Pak. It was then eluted with 2×1.2 ml MeCN into the second reactor, which contained 10 mg Zn and 45 mg NH4Cl. The MeCN was dried under vacuum. Step 3: 1 ml H2O was added to dried reaction vial 2. The reaction mixture was heated to 105° C. and stirred for 5 min in the presence of the 10 mg zinc powder and ~45 mg ammonium chloride. The product was then filtered through a venting 0.22 μm filter, and the reactor/filter was washed with 4 ml of water. This mixture was then loaded onto a 250×10 mm C18 column and eluted with 5% ethanol, 0.5% acetic acid at 4 ml/min. Purified [18F]F-PABA elutes at around 18 minutes. [18F]F-PABA can be conveniently prepared with a commercial remote chemistry unit, such as the GE Tracerlab FXN pro. The FXN chemistry unit is reconfigured such that reservoirs 7 and 8 are connected with a cross piece to the line running between reservoir 6 and valve VX4. For the synthesis, [18F]fluoride is trapped on a Waters Sep-Pak light Accell plus QMA cartridge. The [18F]fluoride is then eluted with 1 mL of potassium carbonate (4 mg/mL)/Kryptofix® [2.2.2] (14.4 mg/mL) in 96% acetonitrile into the reaction vessel. This solution is evaporated to dryness under a stream of nitrogen before being heated to 100° C. for one minute. The reaction vial is cooled to 40° C. and 2.0 mg of 2,4-dinitrobenzonitrile in 1 mL DMSO is added. The reaction vessel is sealed and stirred for 6 minutes. The mixture is then diluted with 8 mL of water before being loaded onto conditioned Oasis HLB and Sep-Pak light C18 cartridges in series. The cartridges are then back-flushed with 3 mL acetonitrile to elute the desired 2-[18F]fluoro-4-nitrobenzonitrile, which is returned to the original reaction vessel. The cartridges are eluted with 8 mL of water so that they can be reused after the second reaction. The acetonitrile is removed under a stream of nitrogen gas at 40° C. for 3 minutes. One milliliter of 2 M potassium hydroxide is added to the residue. The reaction vessel is sealed and heated to 105° C. for 10 minutes. The reaction vessel is then cooled to 40° C. before 2 mL of 2M acetic acid and 5 mL of water are added. The mixture is stirred before being passed over the previously used HLB and C18 cartridges. The cartridges are then back-flushed with 1.5 ml acetonitrile to elute 2-[18F]fluoro-4-nitrobenzoic acid to a second reaction vessel containing 10 mg of zinc powder. The acetonitrile is removed under a stream of nitrogen at 60° C. Sixty milligrams of ammonium chloride in 1 mL of water and 0.1 mL of 2M acetic acid are then added to the second reaction vessel. The reaction vessel is then sealed and heated to 105° C. for five minutes. The reaction vessel is cooled and the contents flushed through a 0.22 μm filter using 4.5 mL of water. This filtered solution is mixed and purified by HPLC (Phenomenex Luna 10 μm C18(2) 100 Å, 250×10 mm) using an eluent of 0.5% acetic acid/5% ethanol at a flow rate of 5 mL/min. The desired product, 4-amino-2-[18F]fluorobenzoic acid, elutes at 18 minutes. The overall synthesis time is 85 minutes, with a mean decay-corrected yield of 30% (n=6). Starting with 400-1000 mCi of 18F, the mean specific activity of [18F]F-PABA is 34 mCi/μg. The mean radiochemical purity is 99.1%. Example 2. F-PABA is a Substrate for DHPS and is not Toxic to Either Bacterial or Mammalian Cells Dihydropteroate synthase (DHPS) from S. aureus (saDHPS) was cloned, expressed and purified. DHPS is the enzyme that installs PABA (p-aminobenzoic acid) in the folate biosynthesis pathway (Scheme 2). It has been demonstrated that the PABA analog PAS (2-aminosalicylate) is incorporated into folic acid in M. tuberculosis (Chakraborty, S. et al. 2013), suggesting that PAS is a substrate for DHPS. Using a coupled assay, the kinetic parameters for saDHPS with PABA, PAS and F-PABA were determined. Importantly, all three compounds have similar kcat and Km values, indicating that F-PABA is an alternative substrate for saDHPS. Since PAS is an antibacterial compound whose mechanism of action may be related to the ability of this compound to compete with PABA for DHPS, we determined the antibacterial activity and cytotoxicity of F-PABA for several bacterial species as well as Vero cells. In each case no growth inhibition was observed up to 200 μg/ml. Unlike PAS, 2-F-PABA has no antibacterial activity (Table 1).

TABLE 1
                    MIC (μg/ml)
                    2-F-PABA    PAS
M. tuberculosis     >100        0.08
S. aureus           >200        >200
E. coli             >200        >200
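The kinetic comparison in Example 2 rests on standard steady-state enzyme kinetics. For reference (the specific parameter values are not recited here), alternative substrates are compared through

\[
v = \frac{k_{\mathrm{cat}}\,[\mathrm{E}]_{0}\,[\mathrm{S}]}{K_{m} + [\mathrm{S}]},
\qquad \text{specificity constant} = \frac{k_{\mathrm{cat}}}{K_{m}},
\]

so similar kcat and Km values for PABA, PAS and F-PABA imply that saDHPS processes the fluorinated analog with comparable turnover and comparable affinity.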
Example 3. [18F]F-PABA is Taken Up by S. aureus, E. coli and K. pneumoniae, but not by E. faecalis The ability of different bacterial species to take up [18F]F-PABA was studied. The radiotracer accumulated in both methicillin-sensitive S. aureus (MSSA, Newman) and methicillin-resistant S. aureus (MRSA), as well as the Gram-negative bacteria E. coli and Klebsiella pneumoniae. In the case of MSSA we also demonstrated that heat-killed cells were unable to take up [18F]F-PABA (FIG. 1). In contrast, [18F]F-PABA was not taken up by Enterococcus faecalis. E. faecalis has a folate salvage pathway and can take up folate from the environment. Thus, folic acid biosynthesis is dispensable in this organism, which also explains why sulfonamides are not used to treat infection by E. faecalis. These studies suggest that F-PABA uptake depends on the de novo biosynthesis of folate. Example 4. [18F]F-PABA Accumulates at the Site of S. aureus Infection in Rat Triceps and Mouse Thigh Infection Models Initial in vivo studies focused on soft tissue models of MSSA infection. These included a mouse thigh infection model and a rat triceps model. FIG. 2 shows data for the accumulation of [18F]F-PABA in the triceps of an infected rat. Fifty μL of 10^9 CFU of Newman S. aureus BHI culture was injected into the right triceps of a rat. After 10-15 hr the rats were imaged following iv administration of 0.8-1.2 mCi of [18F]F-PABA. The images clearly show the accumulation of radioactivity in the right but not the left triceps. In addition to monitoring the time course of [18F]F-PABA biodistribution, we also quantified tracer levels by postmortem ex vivo counting. While the [18F]F-PABA distributed to all tissues and organs with the exception of the brain, significant tracer accumulation was only observed in the right triceps, as well as the kidney, bladder and GI tract due to tracer clearance. At 60 min, tracer levels were 5.4× higher in the infected right triceps compared to the uninfected left triceps. This compares favorably with other tracers. Example 5. [18F]F-PABA does not Accumulate at the Site of Sterile Inflammation One of the main limitations of using FDG to image infection is that FDG accumulates in the mammalian cells involved in the inflammatory response to infection. How inflammation affected the biodistribution of [18F]F-PABA was analyzed by generating an inflammatory response using 50 μL of 10^12 CFU of heat-killed Newman S. aureus bacteria. FIG. 3 shows a comparison of levels of [18F]F-PABA in the triceps of a rat in which the right triceps is the site of bacterial infection whereas the left triceps is the site of sterile inflammation. Significantly, radiotracer levels are 10-fold higher at the site of infection compared to the site of sterile inflammation, indicating that the accumulation of [18F]F-PABA at the site of infection is likely not due to uptake by cells involved in the inflammatory response. Example 6. [18F]F-PABA can be Used to Monitor the Change in Bacterial Load Caused by Antibiotic Treatment A key goal is to identify a tracer that can be used to quantify bacterial load and monitor the change in bacterial load during and following antibiotic treatment. In FIG. 4 it is shown that the accumulation of radiotracer in the triceps of an infected rat correlates with bacterial load following administration of vancomycin. The bacterial burden of the infected triceps before treatment was 10.8±0.7 log10 CFU and showed accumulation of 0.051±0.008 %ID/cc. After 3 doses of vancomycin, the bacterial burden decreased by almost 3 logs to 8.1±0.3 log10 CFU, resulting in about a 3-fold decrease in tracer levels (0.015±0.005 %ID/cc). A further 3 doses of vancomycin treatment resulted in an additional 1 log decrease in bacterial burden, to 7.0±0.9 log10 CFU, and similar levels of tracer accumulation (0.013±0.002 %ID/cc). In contrast, the tracer level in the uninfected triceps was 0.007±0.001 %ID/cc. These data show that [18F]F-PABA can be used to monitor the response to drug treatment and indicate that the limit of detection in this particular model is 7 log10 CFU. The bacterial burden found in soft tissue infections in humans (of which S. aureus is the leading cause, accounting for over 60% of all cases) averages 8.3 log10 CFU, showing that 2-[18F]F-PABA is sufficiently sensitive to detect clinically relevant infections.
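The target-to-background figures quoted in Example 6 follow directly from the reported %ID/cc values. The short Python sketch below merely reproduces that arithmetic from the numbers given in the text; it is an illustration of the calculation, not additional data.

# Fold-change arithmetic for the %ID/cc values reported in Example 6.
uptake_pct_id_per_cc = {
    "infected, pre-treatment": 0.051,
    "infected, after 3 doses": 0.015,
    "infected, after 6 doses": 0.013,
    "uninfected triceps":      0.007,
}

pre = uptake_pct_id_per_cc["infected, pre-treatment"]
# ~7.3x infection-to-background contrast before treatment
print(f"infection vs background: {pre / uptake_pct_id_per_cc['uninfected triceps']:.1f}x")
# ~3.4x drop after 3 doses of vancomycin (the 'about 3-fold decrease' above)
print(f"drop after 3 doses:      {pre / uptake_pct_id_per_cc['infected, after 3 doses']:.1f}x")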
S. aureus is the leading cause, accounting for over 60% of all the cases) averages 8.3 Log10 CFU, showing that 2-[18F]F-PABA is sufficiently sensitive to detect clinically relevant infections.
Example 7. Bacterial Infections
An amount of a composition comprising the compound having the structure: or a salt of the compound; 4-amino-2-[19F]-fluorobenzoic acid or a salt of 4-amino-2-[19F]-fluorobenzoic acid; and at least one acceptable carrier, is administered to a subject. The location of the composition is detected to determine the presence of infectious bacteria in the subject. An amount of a composition comprising the compound having the structure: or a salt of the compound; 4-amino-2-[19F]-fluorobenzoic acid or a salt of 4-amino-2-[19F]-fluorobenzoic acid; and at least one acceptable carrier, is administered to a subject afflicted with a bacterial infection. The location of the composition is detected to determine the location of the infectious bacteria in the subject.
Discussion
The composition, and method of synthesizing the same, described herein contains 2-[18F]fluoro-4-aminobenzoic acid ([18F]F-PABA), a PET tracer which shows high selectivity for bacterial imaging in soft-tissue rodent models of MSSA, MRSA and E. coli infection. Compared to FDG and other recently developed bacterial infection tracers, [18F]F-PABA has several advantages. It is specific for bacterial infection and capable of differentiating infection from sterile inflammation. It shows a high signal-to-background ratio in animal infection models. It is able to quantify bacterial burden. It can be produced with high yield using a rapid radiosynthesis method. [18F]F-PABA may also find applications in bacterial infection diagnosis. As a bacterial infection tracer which can potentially diagnose infections caused by a broad spectrum of pathogens including S. aureus, E. coli and Klebsiella pneumoniae, [18F]F-PABA has great commercialization potential. Some infectious conditions caused by S. aureus can serve as good illustrations of the commercialization potential. S. aureus is the leading cause of many different clinically important infections including skin and soft tissue infections, osteomyelitis and infectious endocarditis. Osteomyelitis is the infectious condition of bone, which leads to inflammation and bone necrosis. With an incidence of 21.8 per 100,000 person-years in the United States, osteomyelitis is a serious infectious disease that can result in limb amputation and even death. Indeed, osteomyelitis is the leading cause of non-traumatic amputation in the US and worldwide. Osteomyelitis is closely associated with diabetes, a disease that affects 7% of the world's population. Approximately 15% of diabetic patients will develop foot ulcers in their lifetime, with an annual incidence of 1 to 4%. Among the diabetic patients with foot ulcers, over 50% will become infected and develop diabetic foot infections. Infective endocarditis (IE) is the infection of the endocardial surface of the heart. The estimated incidence rate of IE is 30 to 100 per million person-years, and S. aureus is the causative agent in over half the cases. Despite medical advances, the in-hospital mortality rate of IE is 9.6 to 26%, partly because a definitive diagnosis cannot be reached at an early stage of infection due to the lack of a sensitive diagnostic method. Currently, the diagnosis of IE depends on a combination of microbiological tests and echocardiography together with clinical signs of infection.
However, none of these methods provides sufficient sensitivity or specificity to make a rapid, one-step definitive diagnosis. Patients who are at high risk of these infections are all potential markets for [18F]F-PABA. Moreover, the potential of [18F]F-PABA is not limited to S. aureus infections. Since our data have already shown that [18F]F-PABA is also able to diagnose E. coli and Klebsiella pneumoniae infections, patients who are suspected of infections caused by these two bacterial species are also a potential patient pool. In addition, because the folate biosynthesis pathway, which incorporates [18F]F-PABA into bacterial cell components, exists and is essential in various bacterial species including Mycobacterium tuberculosis (M. tb), [18F]F-PABA also has commercialization potential as a diagnostic tool for such infections. The composition and method described herein provide a fluorine-18-labeled analog of p-aminobenzoic acid (2-fluoro-4-aminobenzoic acid, F-PABA). F-PABA is a non-toxic substrate (MIC >100 μg/ml) for DHPS and is not toxic to either bacterial or mammalian cells. It has several advantages over FDG and other reported bacterial infection tracers:
1. [18F]F-PABA is selectively taken up by live bacteria (MRSA, E. coli and M. tuberculosis) but not mammalian cells;
2. It is specific for bacterial infection, and capable of differentiating infection from inflammation;
3. It can be produced using a rapid radiosynthesis method with high radiochemical yield;
4. It accumulates in a wide range of bacteria, including E. coli, S. aureus and Klebsiella pneumoniae;
5. It shows a very good signal-to-background ratio in in vivo infection models; and
6. It is capable of quantifying bacterial burden, and therefore can be used to monitor drug treatment efficacy and to assist new antibacterial agent development.
REFERENCES
Bettegowda, C. et al. (2005) Imaging bacterial infections with radiolabeled 1-(2′-deoxy-2′-fluoro-beta-D-arabinofuranosyl)-5-iodouracil. Proc Natl Acad Sci USA 102, 1145-1150.
Chakraborty, S. et al. (2013) Para-Aminosalicylic Acid Acts as an Alternative Substrate of Folate Metabolism in Mycobacterium tuberculosis. Science 339(6115), 88-91.
Gowrishankar, G. et al. (2014) Investigation of 6-[18F]-Fluoromaltose as a Novel PET Tracer for Imaging Bacterial Infection. PLoS One 9(9), e107951.
Li, Z. B. et al. (2008) The synthesis of 18F-FDS and its potential application in molecular imaging. Mol Imaging Biol 10, 92-98.
Namavari, M. et al. (2015) Synthesis of [18F]-labelled Maltose Derivatives as PET Tracers for Imaging Bacterial Infection. Mol Imaging Biol 17(2), 168-176.
Weinstein, E. A. et al. (2014) Imaging Enterobacteriaceae infection in vivo with 18F-fluorodeoxysorbitol positron emission tomography. Sci Transl Med 6(259), 259ra146. | 60,893
11857353 | In conjunction with the following detailed description of various aspects of some embodiments of the invention, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
DETAILED DESCRIPTION OF THE INVENTION
Introductory Overview:
The present invention, in some of its embodiments, relates to apparatus and methods of tomography in the field of nuclear medicine (sometimes referred to herein as “N-M tomography”), and more particularly to N-M tomography systems as a whole, to detector configurations for N-M tomography systems, and to methods of using and/or upgrading such systems and detector arrangements. In some embodiments, the invention also relates to methods of upgrading existing N-M tomography systems and detectors. In an exemplary embodiment of the invention, there is provided a single system which is easily (e.g., field) upgradable and also capable of dual-mode functionality, e.g., having the capability of selectably operating in either SPECT or PET mode (or both) without materially sacrificing performance in either mode. In an exemplary embodiment of the invention, there is provided a single detector that is selectably operable in either a SPECT or a PET mode with good resolution and sensitivity. In an exemplary embodiment of the invention, there is provided a system capable of operating simultaneously in both a SPECT and a PET mode. Some embodiments of the present invention are directed to the issues discussed above, and/or to other aspects of N-M tomography technology. Aspects of some embodiments of the invention are equally applicable to single function SPECT and PET systems, and to dual function SPECT and PET systems, unless otherwise indicated. An aspect of some embodiments of N-M tomography systems according to the present invention resides in detector systems comprised of multiple detector heads (for example, 3-18 heads), each head including multiple individual detector elements (for example, 4-10 or more individual detector elements). The detector units are arranged to form a bore defining a space within which the patient as a whole or a part of the patient's body, i.e., an ROI, is examined. In some of such embodiments, the detector heads are moveable (e.g., during imaging and/or as a setup for an imaging session) relative to the patient carrier and/or to each other and/or to the gantry in various ways allowing adjustment of the size and/or shape of the bore according to the particular ROI without obstruction or collision of adjacent detector heads. In some embodiments, the systems are operable to adjust the size and shape of the bore during a scan. Optionally, rapid reconfiguration (e.g., faster than 2 hours, 1 hour, 30 minutes, 10 minutes, 5 minutes per detector head) of the detector heads from one position and/or orientation to another for step and shoot operation is facilitated by a light-weight design of the detector heads and/or by counterbalancing.
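By way of a rough illustration of the bore-size/collision constraint described above, the following Python sketch estimates how far equally spaced heads can be extended before adjacent flat faces touch. All numbers and the simplifying assumptions (equal spacing, flat faces of fixed width) are illustrative only and are not taken from the disclosure:

```python
import math

def max_extension_without_collision(n_heads, gantry_radius_mm, head_width_mm):
    """Smallest bore radius reachable by extending all heads equally.

    At radius r, adjacent head centers are separated by a chord of
    2*r*sin(pi/n). Heads with flat faces of width w collide when that
    chord drops below w, so the minimum collision-free radius satisfies
    2*r*sin(pi/n) = w.
    """
    r_min = head_width_mm / (2.0 * math.sin(math.pi / n_heads))
    extension = gantry_radius_mm - r_min
    return r_min, extension

# Illustrative numbers only (not from the disclosure):
for n in (6, 9, 12, 18):
    r_min, ext = max_extension_without_collision(
        n, gantry_radius_mm=400.0, head_width_mm=100.0)
    print(f"{n:2d} heads: min bore radius {r_min:6.1f} mm, "
          f"max radial extension {ext:6.1f} mm")
```

Under these toy numbers, twelve 100 mm heads cannot close below a roughly 193 mm bore radius without touching, which is one way to motivate the measures discussed below: extending only every other head, rotating or tilting heads, or staggering them axially.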
Optionally, in some embodiments of the invention, the various degrees of freedom can also be implemented in a horizontal system, in which the patient stands or sits, e.g., such that the system axis (as well as the main patient axis) is vertical or at an angle to both the vertical and the horizontal, and the gantry is relatively horizontal (or at an angle, such as between 20 and 80 degrees, to the horizontal). In some embodiments of the invention, the system provides single-mode functionality, in which case the detector heads are comprised of only SPECT detectors, or only PET detectors. In some embodiments, the system provides dual-mode functionality, in which case the detector system includes separate detector heads carrying PET and SPECT detectors, or detector heads that include both PET and SPECT detectors. Optionally, the detector heads are constructed of detectors that can selectably operate as either PET or SPECT detectors, or as both SPECT and PET detectors simultaneously. Optionally, such detector heads may include the detector elements, collimators, and/or circuitry that can operate, for example, in single photon detection mode and in coincidence mode. Optionally, timing circuitry is provided in a detector head and coincidence circuitry in a more central location such as a CPU. Alternatively, coincidence circuitry as well is provided in association with a detector head (e.g., a module, possibly a head on an arm), the head optionally receiving data on other detections from another head, for example, via a system bus interconnecting heads. In some embodiments, PET detector heads are T-shaped, or L-shaped (e.g., with the wider part facing the bore), and SPECT detector heads are rod or I-shaped. The detector heads include elongated stems that serve as the axis for extension/retraction, with the detector elements arrayed on the ends of the stems in various polygonal configurations including square or rectangular or triangular, or in circular or arc-shaped configurations, or combinations thereof. Optionally, in rest positions, the detector elements extend longitudinally in the direction of the system axis. Optionally, the detector heads are comprised of a single detector element or a plurality of detector elements forming a pixilated detector. For convenience, the terms “detector arrangement” or “detector system” or “detector unit” will sometimes be used in reference to multiple and single-detector heads, and to heads that carry SPECT and/or PET detectors, and without distinction as to detector head shape. An aspect of some embodiments of N-M tomography systems according to the present invention resides in variable-geometry detector systems in which the detector heads are moveable relative to each other and to the gantry in various ways to improve sensitivity and spatial resolution for imaging different ROIs. An aspect of some embodiments resides in variable-geometry detector systems in which the detector units are non-uniformly arranged on the gantry with (possibly) large gaps between them, wherein the adjustability of the detector heads still provides full 360 degree detector coverage, possibly without loss (or with improvement) of sensitivity and/or spatial resolution for differently sized and shaped ROIs and/or at different positions along the body of a subject under examination. In some embodiments of the present invention, such configurations possibly reduce the overall number of detectors needed for a given level of spatial resolution and sensitivity, and thus reduce the overall system cost.
The variable geometry features may allow trading off cost and imaging efficiency. The detectors represent a major component in the cost of a system, so being able to vary the bore size and shape according to the size of the patient and/or the location and/or the size of the ROI may allow generating good images with a smaller number of detectors than would be needed in conventional fixed-geometry systems. However, for a larger ROI or a larger patient, longer imaging times may result from reducing the number of detectors. Nevertheless, this may allow a majority of studies to be performed fast and/or at a lower cost. In some embodiments, the detector heads are extended and retracted by a linear extension and retraction mechanism. Optionally, for operation in a PET mode, the detector heads are extended individually or in opposed pairs. Optionally, the opposed pairs are diametrically opposed. Optionally, multiple detector heads are mounted on and/or moved together along a single arm for in-out and/or lateral motion. Optionally, at least some of the detector heads are not moved and data for imaging is optionally collected from both moved and unmoved detector heads. Optionally, a detector head includes electronic circuitry that supports more than one set of separate detector elements. For example, one detector head may include circuitry to support processing of signals from two detector heads which are connected by a data cable. In another example, a detector head is set up to support multiple types of detectors and/or collimators, which may be selectively mounted thereon. Optionally, an RFID code or other machine-readable indicator on the detector and/or collimator serves to indicate to the processing circuitry (e.g., in the arm or in the main machine) which type of detection is available and/or to guide data acquisition, acquisition planning and/or reconstruction, according to the ability of the detector. Optionally, the indicator, or a different storage location, includes a table or a set of parameters matching the type of detector, and parameters or software for using the detector. If the size and/or shape of the individual detector heads (particularly but not exclusively in the case of PET detector heads) do not permit sufficient extension to reduce the bore size to a desired degree without collision or interference between adjacent detector heads, several options are available in accordance with some embodiments of the invention. In some embodiments, only some of the detector heads are extended. For example, every other detector head (i.e., one-half the total number of detector heads) is extended, and the others remain un-extended. Optionally, the un-extended detector heads are not used during the scan. Optionally, the un-extended detector heads and the extended detector heads are used during the scan. Alternatively, the angular orientation of at least some or all of the detector heads may be varied relative to the axes of extension of the respective detectors to increase the amount that the detectors can be extended without collision. This can be advantageous since allowing a greater range of bore size adjustability can, potentially, better accommodate differently sized and shaped ROIs and ROIs at different locations along the subject's body. Optionally, the angular orientation of the detector heads can be varied either in pairs or individually. Further optionally, one or more of the heads that are connectable to the same arm can be angularly oriented independently or differently than other heads.
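The machine-readable indicator and parameter table mentioned above could, in one possible software realization, amount to a simple registry keyed by the detector's type code. The sketch below is an assumption-laden illustration: the type codes, field names, and energy windows are hypothetical and not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DetectorProfile:
    modality: str                 # "SPECT", "PET", or "DUAL"
    energy_window_kev: tuple      # (low, high) acquisition window
    supports_coincidence: bool

# Hypothetical registry keyed by a machine-readable type code
# (e.g., read from an RFID tag on the detector or collimator).
DETECTOR_REGISTRY = {
    "SPECT-STD": DetectorProfile("SPECT", (126.0, 154.0), False),
    "PET-STD":   DetectorProfile("PET",   (435.0, 587.0), True),
    "DUAL-CZT":  DetectorProfile("DUAL",  (40.0, 600.0),  True),
}

def configure_channel(type_code: str) -> DetectorProfile:
    """Look up acquisition parameters for a newly attached head."""
    try:
        return DETECTOR_REGISTRY[type_code]
    except KeyError:
        raise ValueError(f"Unknown detector type code: {type_code!r}")

# Example: a dual-capability head is recognized and configured.
print(configure_channel("DUAL-CZT"))
```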
According to some embodiments of the invention, at least some of the detector heads are angularly adjustable to a desired orientation in a plane perpendicular to a longitudinal axis of the individual detectors. (This feature is generally referred to below as “rotation” relative to the longitudinal axis.) Typically, but not necessarily, depending on the shape of the detector head, the longitudinal axis corresponds to an axis of extension and retraction. Optionally, the rotational orientation can be varied from a rest position by up to 90 degrees (or more) such that, for some angles and detector dimensions, some or all of the detector heads overlap. This may not only facilitate extending at least some of the detectors to obtain a smaller bore size, but may also result in obtaining good 360 degree coverage with a smaller number of detectors. According to some embodiments of the invention, at least some of the detector heads are angularly adjustable to a desired orientation that is not in a plane perpendicular to a longitudinal axis of the detector heads. (This feature is generally referred to below as “tilting” relative to the longitudinal axis.) Optionally, the desired tilt angle is achieved by rotation around an axis parallel to an axis of elongation of the overall system. According to some embodiments of the invention, increased reduction in bore size is achieved by axially spacing the detector heads on the gantry. Optionally, the detector heads are axially spaced on one side of a single ring on the gantry. Alternatively, the detector heads are arranged on opposite sides of a single ring. Alternatively, the detector heads are arranged on one or both sides of two separately spaced rings comprised in the gantry. Optionally, for example in the case of a PET system, adjacent detector head pairs are located at different axial spacings to avoid interference between adjacent detector heads. Extension in combination with one of the orientation options may allow a reduced number of detector heads and/or detector head-pairs in the detector array while still providing good 360 degree coverage. This can significantly reduce the cost of the detector array. In addition, reducing the number of detector pairs allows the gantry to be constructed with open spaces between the detectors along the periphery of the gantry, which facilitates upgradability as described below. It should also be noted that extension of the detector heads to create a smaller bore size generally has the effect of positioning the detector heads closer to the ROI; consequently, each detector head subtends a larger solid angle around the ROI and is able to collect more photons emitted from the ROI. The result is that overall system sensitivity may be improved and/or a smaller number and/or size of detectors may be used. Such close positioning is optionally used in PET and/or in non-tomographic imaging modes, such as planar imaging. Positioning the detector heads closer to the ROI may also improve the spatial resolution by decreasing nonlinearity as discussed below. An aspect of some embodiments of the invention resides in N-M tomography systems in which the detector heads include detector positioning arrangements that are operable to extend and retract and to angularly orient desired combinations of detectors. Optionally, the positioning arrangements are comprised of a first mechanism associated with each detector head to effect extension and retraction, and a separate mechanism to angularly orient the detector heads.
Optionally, a single mechanism for extending and retracting and angularly orienting the detector heads is associated with each of the detector heads. An aspect of some embodiments of the invention resides in N-M tomography systems in which the individual detector heads are translatable, e.g., movable laterally or circumferentially on the gantry, continuously during the scan and/or in steps, so that the spacing between the detector heads can be changed. For example, each detector head can be translated 5, 10, 15, or 20 degrees, or greater, lesser, or intermediate amounts from a nominal equally spaced configuration. Optionally, each detector head is movable independently from the others, or jointly with one or some or all of the others. Optionally, in combined PET-SPECT dual function systems, either the PET or the SPECT detector heads, or both, are circumferentially movable. Optionally, the PET and SPECT detectors are located at (and/or attached to the gantry at) different axial positions, for example, on one or both sides of a single rotor disc, or on separate rotor discs, optionally to provide mechanical clearance. As used herein, the term “circumferential movement” refers to rotation of the gantry ring or rings, and also includes movement of the detector heads on the gantry. Likewise, circumferential movement includes translational movement of each of the detector heads individually, i.e., independently, or in groups within the ring, and/or movement of an entire ring relative to other rings. Translation of the detector heads can be advantageous in various situations. For example, on a gantry having a small number of detector heads, e.g., as purchased by a customer with limited resources, there may be large gaps between detector heads. Similarly, when the system is constructed of a segmented gantry (optionally with each segment including more than one detector head), the segments can be moved radially outward so the bore is expandable when necessary (e.g., to accommodate an obese patient). In either case, the gaps between the heads may degrade data acquisition. Circumferential movement effectively shifts the gaps and allows capturing data from the gaps to complete the missing views. Optionally, according to some embodiments, the gaps can be closed by rotation of the gantry, or alternatively by relative rotation among the detectors, for example, by translating the detector heads circumferentially on a single-ring gantry, or, in the case of a multiple-ring gantry, by rotating one or more of the rings relative to the others, to complete the full set of angles between pairs of detectors. Effectively, the bore is expanded radially and gaps are created; the circumferential motion (if any) allows the gaps to be filled. Optionally, the system is constructed so translation can occur during a scan or in steps for step and shoot operation. In some embodiments, gaps (e.g., between 1 and 30 degrees, for example, between 2 and 10 or 20 degrees) in an axial direction and/or in a circumferential direction are tolerated. Optionally, reconstruction weights certain directions (e.g., for sensitivity) according to the presence and/or size of gaps therein. A feature of some embodiments of N-M tomography systems according to the present invention resides in a gantry that is slidable and/or rotatable laterally, to capture an image from a “body slice” which is orthogonal or not orthogonal to the main body-axis, and/or to move along the body of the patient, for example to capture “slice by slice”.
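To illustrate the gap-filling rotation described above, the following minimal simulation counts how much of the azimuth is seen after step-and-shoot rotation of a sparse array. The head count, arc widths, and step size are invented numbers, chosen only to show that a step size matched to the gap size completes the missing views:

```python
import numpy as np

def coverage_after_rotation(head_centers_deg, head_arc_deg, step_deg, n_steps):
    """Fraction of azimuthal angles (1-degree bins) seen by at least one
    head after rotating the gantry in n_steps increments of step_deg."""
    bins = np.arange(360)
    covered = np.zeros(360, dtype=bool)
    for k in range(n_steps):
        for c in head_centers_deg:
            center = (c + k * step_deg) % 360
            # circular angular distance from each bin to the head center
            dist = np.minimum(np.abs(bins - center), 360 - np.abs(bins - center))
            covered |= dist <= head_arc_deg / 2.0
    return covered.mean()

# Illustrative: 6 heads, each spanning 20 deg, equally spaced -> 40 deg gaps.
heads = [i * 60 for i in range(6)]
for steps in (1, 2, 3):
    frac = coverage_after_rotation(heads, head_arc_deg=20, step_deg=20, n_steps=steps)
    print(f"{steps} gantry position(s): {frac:.0%} azimuthal coverage")
```

With these numbers, one position covers about a third of the azimuth, and three 20-degree steps (matching the 40-degree gaps) reach full coverage, consistent with basing the step size on the gap size.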
In an exemplary embodiment of the invention, a controller can selectively move one or more of the detector heads alone, independently, in groups and/or separately but optionally in synchrony with other detector heads. An aspect of some embodiments of N-M tomography systems according to the present invention resides in matching the detector bore to the ROI by making the extension and/or angular orientation adjustable while the scan is being performed. A related aspect of some embodiments resides in detector arrays per se having detectors that are extendable and/or angularly adjustable during a scan. By way of summary, adjustment of the size and shape of the bore, as well as improved sensitivity and resolution, is achieved by optionally providing one or more of the degrees of freedom listed below.
For the gantry (e.g., in either SPECT or PET operation unless otherwise noted):
(a) The gantry can be a full circle, or a partial circle.
(b) The gantry can include one ring or multiple rings. The planes defined by the rings may be parallel to each other, or non-parallel. In a PET mode, the gantry may be rotated circumferentially, either continuously during generation of data for a slice, or in steps (referred to herein as “step and shoot” operation).
(c) The gantry may move vertically relative to the system axis, or can be tilted (e.g., using a motor and/or a gear) to one or more non-vertical orientations, and/or to one or more orientations that are non-orthogonal to the system axis, to obtain views that can overcome attenuation or other obstruction or scatter, or to obtain additional complementary information that helps stabilize the image reconstruction process.
(d) Optionally, some or all of the adjustments mentioned above can be performed manually. Optionally or alternatively, these adjustments can be motorized and controlled by the system controller.
For the Detector Heads (e.g., in SPECT or PET operation):
(e) Some or all of the heads can move in and out, radially, i.e., extend and retract, to increase or decrease the bore size. The extension/retraction can be the same for all the detector heads, or may be different depending on the location of a particular ROI in the body;
(f) The detector heads can move laterally relative to each other and/or to the gantry. The movement of the detector heads can be linear or along a non-straight-line path, for example, a curved or piece-wise linear path selected to avoid or reduce collisions between adjacent detector heads;
(g) The detector heads can be rotated around an axis which is substantially orthogonal to the system axis, e.g., around the axis of extension/retraction;
(h) The detector heads can be tilted in one or more planes relative to the axis of extension/retraction. Tilting can be effected by rotating the head around an axis which is substantially parallel to the system axis, or by rotation around an axis which is non-parallel to the system axis;
(i) The system controller can be programmed to move the detector heads in a manner that prevents collision during movement, for example, by calculating dynamics to predict a collision and slow down movement as needed (see the sketch after this list). Optionally or alternatively, sensors are provided (e.g., IR or ultrasonic proximity sensors) at the sides of the detectors, to detect imminent collisions.
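A minimal sketch of the collision-prediction idea in item (i) above, under simplifying assumptions (constant angular velocities, a fixed minimum separation, and a fixed look-ahead horizon; all numbers are illustrative, not from the disclosure):

```python
def time_to_collision(theta1, omega1, theta2, omega2, min_sep_deg):
    """Predict when two heads moving at constant angular velocity
    (degrees, degrees/s) close to the minimum allowed separation.
    Returns None if they are not converging."""
    gap = (theta2 - theta1) % 360.0
    closing_rate = omega1 - omega2      # positive -> head 1 catching up
    if gap <= min_sep_deg:
        return 0.0                      # already too close
    if closing_rate <= 0:
        return None                     # not converging
    return (gap - min_sep_deg) / closing_rate

def safe_speed(theta1, omega1, theta2, omega2, min_sep_deg, horizon_s=2.0):
    """Scale head 1's commanded speed down if a collision is predicted
    within the look-ahead horizon."""
    ttc = time_to_collision(theta1, omega1, theta2, omega2, min_sep_deg)
    if ttc is None or ttc >= horizon_s:
        return omega1
    return omega1 * max(ttc / horizon_s, 0.0)

# Illustrative: head 1 at 0 deg moving +5 deg/s toward head 2 at 12 deg.
print(safe_speed(0.0, 5.0, 12.0, 0.0, min_sep_deg=6.0))  # prints 3.0
```

In practice such a predictor would complement, not replace, the proximity sensors mentioned in item (i).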
An aspect of some embodiments of the invention resides in PET systems, or in dual purpose systems operating in a PET mode, in which the detector heads are arranged around less than the full 360 degrees of the gantry and in which the gantry rotates, e.g., as in typical SPECT systems. Optionally, the scan is performed at a succession of axial slices. Optionally, the rotation is continuous at each axial position. Alternatively, the gantry rotates in steps of less than 360 degrees and is temporarily stationary at each step. Optionally, the size of the steps is based on the size of the gaps between the detectors. Optionally, the continuous or step-wise gantry rotation during a PET scan is repeated for each of a succession of axial slice positions. Optionally, in embodiments in which PET detectors are mounted on more than one axially spaced rotor on the gantry, the rotational speed of each rotor may be the same or different. An aspect of some embodiments of N-M tomography systems according to the present invention resides in the detector arrays being positionable at one or more desired distances from the patient's body. Optionally, positioning may be done before or during a scan, optionally continuously, or between axial slices. An aspect of some embodiments of the present invention resides in N-M tomography systems that include proximity detection capability to prevent contact between the detector array and the body of the patient. Optionally, proximity detection capability is provided by contact sensors, or by acoustic sensors, or by IR sensors, or by optical sensors. An aspect of some embodiments of the present invention resides in N-M tomography systems that include proximity detection capability to prevent contact between a detector and an adjacent detector and/or to prevent contact and/or pinching of body parts between detectors (e.g., if the patient moves his arm into harm's way). For example, proximity detection capability is provided by contact sensors, or by acoustic sensors, or by IR sensors, or by optical sensors. In an exemplary embodiment of the invention, contact sensors are acceptable, because detector motion uses low forces, detectors are low-weight and/or covered with a soft layer, and/or detectors can be quickly stopped (e.g., using brakes or suitable motor/actuator action). An aspect of some embodiments of N-M tomography systems according to the present invention resides in detector arrays that are allowed to make contact with the patient's body, but with such a low contact force and/or velocity that injury to the patient does not occur. In some such embodiments, the detector arrays are counterbalanced on the linear actuator arms so the force needed to extend the detector arrays is acceptably small (e.g., using a stepper motor which generates a force of only up to 3 kg). Optionally, the extension force is small enough that the patient can move the detector array away from his or her body. Optionally, the actuator allows such back driving, for example, using gears which can be back driven or by a linear actuator which can be overridden by patient-applied force. Further, because of the small mass of the individual detector heads, impact with the body is optionally small. Also because of the low mass, the velocity is easily reduced before impact. An aspect of some embodiments of the present invention resides in N-M tomography systems having modular and/or scalable detector arrays. A related aspect of some embodiments of the invention resides in modular or scalable detector arrays per se.
Modularity can allow initial assembly of N-M tomography systems having detector arrays with a desired number of individual detector heads according to a particular customer's initial needs, and facilitates subsequent upgrading. This can give a customer the option, both at the time the system is acquired and/or at the time of an upgrade, to trade off cost versus quality, for example, as described herein. For example, three, four, six, eight, twelve, or an intermediate or greater number of detector heads can be provided initially, and more added later as the needs and/or financial resources of the customer change. Optionally or alternatively, detectors can be replaced with different and/or better detectors. In an exemplary embodiment of the invention, either by way of identifying data provided by the added detector heads, or by information provided manually to the system controller, the software knows what detectors have been installed, and the information can be used in the course of data acquisition and/or image reconstruction. In some embodiments, the detector arrays as originally assembled are for single-mode SPECT or PET systems. Optionally, the detector arrays as originally assembled include both SPECT and PET detector heads allowing dual-mode system functionality. Optionally, as part of an upgrade, existing detectors may be replaced by better or improved detectors, for example, having faster circuitry, a larger detection area, and/or better energy and/or spatial resolution. Optionally, detectors can be added and/or replaced when upgrading either a single-mode or a dual-mode system to improve the functionality of the system. Optionally, an upgrade can convert a single-mode system into a dual-mode system. Optionally, x-ray CT capability may also be provided in new and/or upgraded single and dual-mode systems. Optionally, features contributing to modularity according to some embodiments include, without limitation, one or more of the following:
(a) The gantry which carries the detector array is rotatably mounted in the initially assembled system, whether it is single-mode (SPECT or PET) or dual-mode;
(b) Connection of the detectors to the coincidence processing electronics is through a rotatable coupling arrangement;
(c) The image reconstruction electronics and/or the coincidence processing electronics are adapted to recognize the number and type (SPECT or PET) of detector heads in the detector array, either automatically, or by programming at the time of assembly or upgrading of an existing system;
(d) The detector heads provide identification as to connection and/or type for auto-recognition by the electronics sub-systems;
(e) A rotary drive system is easily installed as part of an upgrade;
(f) The electronic sub-systems are modular to facilitate converting a single-mode system into a dual-mode system.
Various ways to implement some of the above-described features will be apparent to those skilled in the art, especially in view of the exemplary methods of implementation described below. In some embodiments of the invention, PET detectors are used, i.e., detectors used for detecting high energy photon pairs travelling in opposite directions by identifying the locations at which two photons hit two detectors simultaneously (up to photon travel time and detection time), thus enabling the orientation from which the photons were emitted to be identified with much finer precision.
For example, such embodiments allow detection along a line with a width of about 4-6 mm, taking into account the distance the positron travels until annihilated (about 2-4 mm) and the pixel width in each detector (for example 2-3 mm). Optionally, however, in some embodiments, acquisition of PET (high energy) photons can be done without coincidence detection using “SPECT methodology” (detecting each photon separately) by providing thick collimators and detector heads capable of detecting typical PET and SPECT photons. Furthermore, coincidence detection circuitry can optionally include time-of-flight analysis circuitry to determine where along the estimated emission line the positron was emitted, for example at a longitudinal resolution of about 1-5 cm, for example 2-3 cm (a numerical sketch follows below). For example, an optional system clock can be shared by the detector heads. As an optional alternative, processing is in a central location which pre-calibrates the travel time of signals from each detector. In some embodiments, the electronic circuitry connected to some or all of the detector heads includes one or more of the following optional capabilities:
(a) Photon characterization by energy level (for example, anywhere within the range between 40 keV and 511 keV or more);
(b) Detection time with resolution sufficient to determine coincidence with photon detection in another detector;
(c) Detection time with resolution sufficient to determine time of flight for obtaining high longitudinal resolution along the detected coincidence line;
(d) Detection of count rate in case of a high flux of photons, for example when a high intensity radiation source, such as an X-ray source, is activated;
(e) Optionally, the electronics include multiple separate channels that allow independent amplification and front-end processing for each detector or small group of detectors (e.g., 1-5 detectors) and/or a small number of pixels (e.g., between 10 and 1000, for example 100 pixels). A potential advantage is that malfunction of one or more pixels or detectors and/or blinding of one or more pixels or detectors by a “hot spot” (high intensity source) desirably will not prevent other detectors from functioning properly and detecting photons emitted from other regions, for example as described in US patent publication 2008-0230702-A1.
(f) Optionally, the processing channels may also be modular, for example, being field replaceable and/or including their own housings.
An aspect of some embodiments of the invention pertains to a method of using N-M tomography systems including detector arrays having some or all of the adjustability features described herein, that involves preparing the patient in the normal manner, setting up the system for an examination by adjusting the bore size and/or shape, and then scanning the region of interest, optionally axially. Optionally, the adjustment is achieved by extending at least some of the detector heads and, if necessary, angularly orienting at least some of the detector heads in the detector array according to the size and/or shape of the ROI and/or the axial position of the ROI along the body of the patient. Optionally, the angular orientation is adjusted by rotation of at least parts of some of the detector heads. Optionally, the angular orientation is adjusted by tilting at least some of the detector heads. Optionally, the adjustments can be made during scanning, for example in response to a change in body cross-section or imaging mode. Optionally, the method applies to single and dual-purpose systems.
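The time-of-flight figures quoted above follow directly from the speed of light: the annihilation point is offset from the midpoint of the line of response by half the arrival-time difference times c. A minimal sketch of that arithmetic (the example timing values are illustrative assumptions, not specifications from the disclosure):

```python
C_MM_PER_PS = 0.299792458  # speed of light in mm per picosecond

def tof_offset_mm(dt_ps):
    """Offset of the annihilation point from the midpoint of the line of
    response, given the arrival-time difference dt = t1 - t2 (ps).
    A positive dt means detector 1 fired later, so the event occurred
    closer to detector 2."""
    return 0.5 * C_MM_PER_PS * dt_ps

def tof_resolution_mm(timing_fwhm_ps):
    """Spatial FWHM along the line of response for a given coincidence
    timing resolution."""
    return 0.5 * C_MM_PER_PS * timing_fwhm_ps

print(f"offset for dt = 100 ps: {tof_offset_mm(100):.1f} mm")
# A ~200 ps timing resolution localizes the event to ~30 mm FWHM,
# consistent with the 2-3 cm longitudinal resolution mentioned above.
print(f"resolution at 200 ps FWHM: {tof_resolution_mm(200):.1f} mm")
```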
In a system providing dual-mode functionality, the method further optionally includes selectably operating the system in a SPECT or in a PET mode. In some embodiments, the method involves providing a plurality of additional detector units that include built-in mechanisms for extension/retraction and angular orientation. Optionally, the additional detector units are mounted in alternating relationship with the existing detector units on the gantry. Optionally, if the pre-existing system does not provide automatic detection of the number and type of detector units, the method further includes adding automatic detection or programming the emission detector subsystem according to the number and type of detector units in the upgraded system. Optionally, the position adjusting arrangement provided is operable to extend and retract the detector units, and/or to alter the angular orientation of the detector units by rotating and/or tilting at least some of the detector units. Optionally, the position adjusting arrangement includes separate extension/retraction and angular orientation mechanisms. Optionally, each added detector unit includes its own mechanisms for extension and retraction, and/or for rotating or tilting. Optionally, the detector units of the upgraded system are mounted so that some of them in the upgraded detector array are axially spaced from others, but on one side of a gantry ring. Optionally, some of the detector units in the upgraded detector array are mounted on opposite sides of a detector carrier ring. Optionally, some of the detector units in the upgraded system are mounted on axially offset detector carrier rings. It should be understood that upgradability as described herein is feasible in systems that do not contain scalable detector arrays, but the benefits may be attenuated since the entire preexisting detector array may need to be replaced, and the emission processing and/or image reconstruction sub-systems may have to be reprogrammed or even replaced. An aspect of some embodiments of the present invention resides in N-M tomography systems in which the detector units are capable of responding to photons in a range of energies including both PET and SPECT ranges. Optionally, the detectors and associated emission data processing systems are selectably responsive to PET or SPECT photons, or simultaneously responsive to PET and SPECT photons, to generate or reconstruct visual 3D images of regions of interest (ROI) of a patient being examined. In such embodiments, and also in other exemplary embodiments described herein, parts of the data processing systems are optionally contained within or mounted on the detector units. In exemplary embodiments described herein, the radioactive emission detector is comprised of a scintillator optically coupled to an array of photomultipliers. Alternatively, the detector is a direct conversion semiconductor array, or a silicon photomultiplier (SiPM) (see: Roncali et al., supra). Optionally, in the exemplary embodiments described herein, the emission detector elements are pixilated. Alternatively, at least some detector elements are non-pixilated.
Optionally, in the exemplary embodiments described herein, the emission detector elements are formed of a known material including, but not limited to, Lutetium Oxyorthosilicate (LSO), Lutetium Yttrium Oxyorthosilicate (LYSO), Cadmium Zinc Telluride (CZT), Cadmium Telluride (CdTe), Cesium Telluride (CsTe), Cesium Iodide (CsI), or of any other suitable and desired material presently known or hereafter discovered or created. An aspect of some embodiments of the present invention resides in detector units including collimators that permit selectable operation in either a PET or SPECT mode, optionally without physical reconfiguration thereof. Optionally, the detector units are operable to simultaneously produce PET and SPECT images. Optionally, in such embodiments and in other exemplary embodiments described herein, the collimators are formed of a material that effectively blocks photons having energy in the range used for SPECT imaging, but is relatively transparent to photons having energy in the range used for PET imaging. Optionally, the material forming the collimators blocks no more than about 50% of incident PET photons. Optionally, the collimators are formed of Tungsten, or Tungsten Carbide or Lead or Gold or depleted Uranium, or a combination of these materials. Optionally, the amount of blocking for PET detection is selected so that at angles or directions where a higher sensitivity is desired, there is less blocking. An aspect of some embodiments of the present invention resides in detector units including collimators having adjustable geometry that permits changing the spatial resolution of the detectors. Optionally, such adjustment permits the detector units to be selectably operated either as PET or SPECT detectors, or optionally to simultaneously produce PET and SPECT images. Optionally, according to some embodiments, the collimator geometry can be varied in one or more of the following ways:
a) increasing the length of the septa forming the collimator cells (the term “length” referring to the height of the collimator perpendicular to the plane of the detector module);
b) increasing the spacing of the septa (i.e., the pitch) in one and/or both directions relative to a surface of the detector element (optionally, by removing one or more septa);
c) tilting some of the septa in one and/or both directions relative to a surface of the plane of the detector element;
d) increasing or decreasing the pitch of the septa (i.e., the distance between adjacent septa walls) in one and/or both directions relative to a surface of the detector element; the pitch can be decreased, for example, by forming the collimator of two or more relatively moveable parts parallel to the plane of the detector element;
e) forming the collimator with a shutter to adjust the effective size of the area exposed to incoming photons; optionally the shutter is slidable or tiltable or in the form of an iris.
Optionally, the detector element has a planar surface relative to which the septa are moveable. In some embodiments, these modifications are carried out while the collimator is attached to the detector. In some embodiments, the collimator is removed, modified and reattached. In some embodiments, the reconfiguration is provided in a laboratory and/or during manufacture. Optionally, the collimator-detector pair is pre-configured at multiple collimator states.
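The geometric effect of septa length and pitch, and the energy-dependent transparency claimed above, can be illustrated with two standard first-order formulas: the acceptance half-angle is roughly atan(pitch/length), and a photon crossing one septum at angle θ traverses a path of about t/sin(θ). The attenuation coefficients below are rough literature-style values used purely for illustration, not values from the disclosure:

```python
import math

def acceptance_half_angle_deg(hole_pitch_mm, septa_length_mm):
    """Approximate geometric acceptance half-angle of a parallel-hole
    collimator: photons more oblique than this must cross a septum."""
    return math.degrees(math.atan(hole_pitch_mm / septa_length_mm))

def septal_transmission(mu_per_mm, septa_thickness_mm, incidence_deg):
    """Fraction of photons surviving one septum crossed at the given
    angle from the collimator axis (path ~ thickness / sin(angle))."""
    path = septa_thickness_mm / math.sin(math.radians(incidence_deg))
    return math.exp(-mu_per_mm * path)

# Illustrative values (assumptions): tungsten linear attenuation of
# roughly 3.7/mm at 140 keV (SPECT) versus roughly 0.26/mm at 511 keV (PET).
print(f"half-angle, 2 mm pitch / 30 mm septa: "
      f"{acceptance_half_angle_deg(2.0, 30.0):.1f} deg")
for label, mu in (("140 keV (SPECT)", 3.7), ("511 keV (PET)", 0.26)):
    t = septal_transmission(mu, septa_thickness_mm=0.3, incidence_deg=20.0)
    print(f"{label}: one-septum transmission at 20 deg = {t:.2f}")
```

Under these toy numbers an oblique SPECT photon is almost entirely absorbed (about 4% transmission) while a 511 keV photon on the same path survives about 80% of the time, consistent with a single collimator being opaque at SPECT energies yet relatively transparent to PET photons.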
In an exemplary embodiment of the invention, a pressure clamp and/or a screw clamp and/or a locking rod (through the septa) mechanism is used to hold the septa in place relative to a body of the collimator. An aspect of some embodiments of the present invention resides in detector units including collimators that have a first set of leaves in a first arrangement and a second set of leaves arranged to intersect with the first set. Optionally, an average height perpendicular to a detector surface and/or an average thickness of the leaves of the first and second sets are different. This may result in different viewing angles in different directions and/or different amounts of PET sensitivity in different directions. An aspect of some embodiments of the present invention resides in a method of imaging comprising using an N-M tomography system to collect PET and/or SPECT data from an ROI using a single set of detectors. Optionally, PET and SPECT data are collected simultaneously. Optionally, when PET and SPECT data are collected simultaneously, PET and SPECT images are generated simultaneously using separate electronic subsystems. In an exemplary embodiment of the invention, the amount of axial, circumferential and/or radial overlap is at least 5%, 10%, 20%, 30%, or an intermediate or greater percentage of the dimension and/or detector area of the detector head. Optionally or alternatively, the overlap is less than 90%, 80%, 50%, 40%, or smaller or intermediate percentages thereof. Rotation around an axis of a detector can be, for example, 10, 30, 40, or 70 degrees, or smaller or intermediate values.
Optional Features of Some Embodiments of the Invention:
The discussion below concerns further optional features of some embodiments of the invention according to the aspects of the invention discussed above. It should be understood that one or more of these features may be combined with any embodiments of the detector units and methods described herein above and/or below and/or provided with other systems, unless otherwise clearly stated:
a. Multiple detector heads arranged around a gantry which are moveable relative to the patient carrier and/or to each other and/or to the gantry to create a variable-geometry bore over a wide range of sizes without obstruction or collision of adjacent detector heads.
b. Adjustability of the bore geometry during a scan, optionally between steps of a step-and-shoot scan, or between axial positionings of the gantry for acquisition of data for a succession of axial slices. Optionally, adjustment may be performed both between step-and-shoot positions and between axial positions, or “on the fly” according to a pre-set program for a specific patient scan, either during a scan or in a sequenced scan scenario.
c. Rapid reconfiguration of detector geometry facilitated by light-weight design of the detector heads and/or by counterbalancing the detector units.
d. Variable bore geometry implemented in conventionally configured systems (with the patient lying on a horizontal carrier) and also in systems in which the patient stands or sits such that the system axis (as well as the main patient axis) is vertical, and the gantry is relatively horizontal.
e. PET detector heads that are T-shaped, or L-shaped, and SPECT detector heads that are rod or I-shaped.
f. Detector heads including elongated stems that serve as an axis for extension/retraction with the detector elements arrayed on the ends of the stems in various polygonal configurations including square or rectangular or triangular, or in circular or arc-shaped configurations, or combinations thereof. Optionally, in rest positions, the detector elements extend longitudinally in the direction of the system axis.
g. Detector heads that are comprised of a single detector element or of a plurality of detector elements, for example, pixilated detectors. For convenience, the terms “detector arrangement” or “detector system” will sometimes be used in reference to multiple and single-detector heads, and to heads that carry SPECT and/or PET detectors, and without distinction as to detector head shape.
h. Variable-geometry detector systems in which the detector units are non-uniformly arranged on a gantry with large gaps between them, wherein the adjustability of the detector heads still provides full 360 degree detector coverage without loss of sensitivity and spatial resolution for differently sized and shaped ROIs and at different positions along the body of a subject under examination. In some embodiments of the present invention, such configurations possibly reduce the overall number of detectors needed for a given level of spatial resolution and sensitivity, and potentially reduce the overall system cost.
i. Detector heads that are extended and retracted by linear actuators. Optionally, in a PET mode, detector heads that are extended individually or in opposed pairs. Optionally, the opposed pairs are diametrically opposed. Optionally, multiple detector heads are mounted on and/or moved together along a single arm for in-out and/or lateral motion.
j. Detector arrays in which some of the detector heads are not moved and data for imaging is optionally collected from both moved and unmoved detector heads.
k. Detector heads that include electronic circuitry that supports more than one set of separate detector elements, e.g., of different types and/or geometric location.
l. Variable geometry detector arrays in which the detector heads (particularly but not exclusively in the case of PET detector heads) are moveable in ways that permit bores small enough for efficient imaging of relatively small organs such as the brain, the throat or an extremity, without collision or interference between adjacent detector heads. Options include:
(i) making only some of the detector heads, for example, every other detector head, extensible (with the un-extended detector heads optionally used or not used during the scan),
(ii) making the angular orientation of at least some of the detector heads adjustable relative to the axes of extension of the respective detectors to increase the amount that the detectors can be extended without collision. Optionally, the angular orientation of the detector heads can be varied either in pairs or individually. Further optionally, one or more heads that are connectable to the same arm can be angularly oriented independently or differently than other heads,
(iii) making at least some of the detector heads rotatable or tiltable.
It should also be noted that extension of the detector heads to create a smaller bore size has the effect of positioning the detector heads closer to the ROI; consequently, each detector head subtends a larger solid angle around the ROI and is able to collect more photons emitted from the ROI. The result is that overall system sensitivity may be improved.
m. Making detector heads translatable, i.e., movable circumferentially on the gantry, continuously during the scan or in steps, so that the spacing between the detector heads can be changed. For example, each detector head can be translated 5, 10, 15, or 20 degrees, or greater, lesser, or intermediate amounts from a nominal equally spaced configuration. Optionally, each detector head is movable independently from the others, or jointly with one or some or all of the others. Optionally, in combined PET-SPECT dual function systems, either the PET or the SPECT detector heads, or both, are circumferentially movable. Optionally, the detector heads can be moved in one or more straight line segments or along a curve.
n. Locating translatable PET and SPECT detectors at different axial positions, for example, on one or both sides of a single rotor disc, or on separate rotor discs, to provide mechanical clearance.
o. Configuring the detector arrays so that gaps between them can be effectively closed by rotation of the gantry (as used herein, the term “circumferential movement” also refers to rotation of the gantry ring or rings, continuously and/or in steps), as well as by translating the detector heads circumferentially on the gantry to complete the full set of angles between adjacent detector heads. Optionally, the system is constructed so translation can occur during a scan, or in circumferential steps (for step and shoot operation), or between axial positions.
p. Making the gantry slidable and/or rotatable laterally, to capture an image from a “body slice” which is orthogonal or not orthogonal to the main body-axis, and/or to move along the body of the patient, for example to capture “slice by slice”.
q. Providing for selective movement of one or more of the detector heads alone, independently, in groups and/or separately but in synchrony with other detector heads.
r. The gantry can be fully circular, or a partial circle, as well as other shapes.
s. The planes defined by multiple gantry rings may be parallel to each other, or non-parallel.
t. The gantry may move vertically relative to the system axis, or can be tilted to one or more non-vertical orientations (e.g., be mounted on a motorized axle), and/or to one or more orientations that are non-orthogonal to the system axis, to obtain views that can overcome attenuation or other obstruction or scatter, or to obtain additional complementary information that helps stabilize the image reconstruction process.
u. Gantry rotation is continuous at each axial position for both PET and SPECT imaging. Alternatively, the gantry rotates in steps, the size of which is optionally based on the size of the gaps between the detectors. Optionally, in embodiments in which PET detectors are mounted on more than one axially spaced rotor on the gantry, the rotational speed of each rotor may be the same or different.
v. Providing proximity and/or side detection capability to prevent contact between the detector array and the body of the patient. Optionally, proximity detection capability is provided by contact sensors, or by acoustic sensors, or by IR sensors, or by optical sensors, or in any other suitable and desired manner.
w. Permitting the detector heads to make contact with the patient's body, but with such a low contact force and/or velocity that injury to the patient does not occur.
x. Counterbalancing the detector heads so the force needed to extend the detector heads is acceptably and safely small.
Optionally, the force is small enough so that a patient can easily resist the force or can move the detector array away from his or her body by hand if necessary. The optional small effective mass of the individual detector heads allows the velocity to be easily reduced before impact.
y. The detector head counterbalancing mechanisms include adaptive motion feedback capability for safety control and acquisition continuation if the detector head is touched or pushed back, for example, by the patient.
z. Configuring the system so that the detector arrays are modular or scalable.
aa. Either by way of identifying data provided by the detector heads added as part of an upgrade, or by information provided manually to the system controller, the system software is made aware of what detectors have been installed, and the information can be used in the course of data acquisition and image reconstruction.
bb. The gantry which carries the detector array may be rotatably mounted in the initially assembled system, whether it is single-mode (SPECT or PET) or dual-mode;
cc. The detector heads are connected to the processing electronics through a rotatable coupling arrangement;
dd. The gantry is configured so it requires only minimal disassembly for upgrading, including to facilitate installation of a rotary drive system, thereby helping to permit on-site upgrading;
ee. The electronic sub-systems are modular to facilitate converting a single-mode system to a dual-mode system.
ff. Coincidence detection circuitry for PET imaging can include time-of-flight analysis circuitry to determine where along the estimated emission line the positron was emitted, for example at a longitudinal resolution of about 1-5 cm, for example 2-3 cm. For example, an optional system clock can be shared by the detector heads. As an optional alternative, processing is in a central location which pre-calibrates the travel time of signals from each detector.
gg. The electronic circuitry connected to some or all of the detector heads includes one or more of the following optional capabilities:
(i) Photon characterization by energy level (for example, anywhere within the range between 40 keV and 511 keV);
(ii) Detection time resolution sufficient to determine coincidence with photon detection in another detector;
(iii) Detection time resolution sufficient to determine time of flight for obtaining high longitudinal resolution along the detected coincidence line;
(iv) Detection of count rate in case of a high flux of photons, for example when a high intensity radiation source, such as an X-ray source, is activated.
hh. The electronics include multiple separate channels that allow independent amplification and front-end processing for each detector or small group of detectors and/or a small number of pixels (e.g., between 10 and 1000, for example 100), such that any malfunction of one or more pixels or detectors and any blinding of one or more pixels or detectors by a “hot spot” (high intensity source) do not prevent other detectors from functioning properly and detecting photons emitted from other regions. Optionally, the processing channels may also be modular.
An aspect of some embodiments of the invention pertains to a method of using an N-M tomography system including detector arrays optionally having some or all of the adjustability features described herein. In some embodiments, both PET and SPECT imaging can be performed, either sequentially or simultaneously.
Optionally, the method involves preparing the patient in the normal manner, setting up the system for an examination by adjusting the bore size and/or shape, then scanning the ROI. Optionally, the adjustment is achieved by extending at least some of the detector heads and, if necessary, angularly orienting at least some of the detector heads in the detector array according to the size and/or shape of the ROI and/or the axial position of the ROI along the body of the patient. In a system providing CT capability, the latter modality may optionally also be employed as part of a unified examination. An aspect of some embodiments of N-M tomography systems according to the present invention resides in the use of collimated detectors for PET imaging, with image reconstruction software that compensates for reduced off-axis sensitivity resulting from photon absorption by the collimator septa, for example, by weighting of photon counts according to their direction of impact. Exemplary System Features: Referring again toFIG.1A, in some exemplary embodiments of the invention, gantry12is mounted so that in addition to conventional functionality by which it moves along the length of patient carrier16(or vice versa) to capture emission data from a succession of "slices" orthogonal to the length of the patient's body (i.e., the body-axis), it is optionally also constructed so it can slide transversely (e.g., on a rail) or tilt relative to the body-axis (e.g., the rail is mounted on an axle, or an actuator is provided at either end to raise/lower a side of the rail), to capture emission data in planes that are not orthogonal to the body-axis. This capability can be advantageous, for example, if it is desired to acquire photons from viewing angles with less attenuation and scatter due to bones (e.g., taking different viewing angles to overcome attenuation and scatter by the ribs, etc.), or to improve uniformity, quality, and stability of the image reconstruction process by providing to the reconstruction algorithm information from additional viewing angles. Modularity and Upgrade: An aspect of some embodiments of the invention is modularity of the detector arrays. Referring now toFIGS.3A-3D, several implications of this are described in the context of a detector array300for a SPECT system according to some embodiments of the invention. In the illustrated context, it should be understood that modularity applies to the design of detector array300such that it may be assembled from a desired number of individual detector heads302a,302b, etc. according to the specification of the customer. FIG.3Aillustrates a detector array300with three individual detector heads302(two being a practical lower limit in conventional systems).FIGS.3B-3Drespectively illustrate systems employing detector arrays having four, six and 12 individual detector heads302. Exemplary configurations of detector heads embodying features of the present invention are described below. FIGS.3A-3Dalso illustrate the trade-off resulting when increasing numbers of detector heads are provided: performance is increased in terms of spatial resolution and/or sensitivity and/or speed of image data acquisition for a range of ROI sizes and shapes and longitudinal (axial) locations along the body of the subject of an examination, but at potentially significantly increased cost for the detector heads.
Another implication of modularity is illustrated inFIGS.4A and4B: a system can initially be assembled with a detector array400providing single-mode functionality (here illustrated as SPECT functionality) and can conveniently be upgraded into a dual-mode system that provides both SPECT and PET functionality. Thus,FIG.4Ashows an as-built detector array400for a SPECT system having six detector heads402with considerable open space404between the individual detectors.FIG.4Bshows a detector array406, for example, after an upgrade of detector array400. Here, six PET detector heads408have been installed in alternating relationship with SPECT detector heads402. As will be understood,FIG.4Bcan represent an as-built configuration or an upgraded detector array comprised only of 12 PET detectors408or only of 12 SPECT detectors402. In an exemplary embodiment of the invention, a modular attachable/detachable component includes a housing suitable for exterior viewing/environment, for example, with suitable paint and/or markings. In an exemplary embodiment of the invention, the attachment of the component to the rest of the system includes separate electrical, data, and mechanical connectors. For example, plug-socket connectors may be used for power and data, and a mechanical interlock used for mechanical connection. In some embodiments, the component will interlock to a movable part of the system. In an exemplary embodiment of the invention, two interlocks are used: one interlock providing alignment between the detector and the system (or gantry), for example, a plurality of pins matching recesses and/or other geometries, and a second interlock providing interference to prevent retraction, for example, using one or more screws, bolts, or a locking rod. Optionally, a separate element (from the alignment geometry), such as a rectangular rod, is used to convey forces between the system and a removable detector. Basic Detector Head Configurations: FIG.5shows exemplary details of SPECT and PET detector heads according to some embodiments of the invention mounted on one side of a single gantry500. SPECT detector heads502are shown as rod or I-shaped with arcuate, e.g., approximately cylindrical, photon-collecting surfaces510extending into the plane of the figure. PET detector heads504are shown as T-shaped with a stem portion506extending radially toward the system axis at the center of gantry500and a detector-carrying portion508oriented tangentially to the periphery of the bore. It will be appreciated that other external configurations are also within the scope of some embodiments. For example, both the SPECT and PET detector heads can be L-shaped or otherwise have different shapes. Optionally, different detectors with different abilities have different shapes and/or sizes. Optionally and preferably, in some embodiments, the detector-carrying portions508of PET detector heads504are configured as plates that extend in the direction of the system axis, i.e., into the plane of the figure. This can be advantageous, for example, in that it allows a desired degree of overlap between slices as the emission data is being collected, or a wider slice width, to perform a faster body scan (in the case of a multiple-slice scan).
The large detector head configuration for the PET detector heads504can be advantageous because, for optimal and uniform PET image reconstruction, pairs of PET detectors sometimes need to cover as much as possible of the whole 360 degrees of possible photon emission from each location in the ROI, and/or with a sufficient axial extent. Having large detector surfaces for the PET detector heads may minimize gaps in which coincidence lines are not covered and/or otherwise increase sensitivity, and may avoid reduced uniformity and/or sensitivity. SPECT detectors, on the other hand, can acquire 180 degrees around the ROI in several positions at different times, so, in some embodiments, they can be narrower and move to obtain the necessary viewing angles. In an exemplary embodiment of the invention, care is taken, however, that the PET detectors are not so large that they obscure the view of the SPECT detectors when only the latter are in use. As described herein, the dynamically variable geometry of the detector heads according to some embodiments of the invention facilitates optimizing or near-optimizing the size and/or shape of both the PET and SPECT detectors. Potentially contributing to optimization is the placement of the detectors on one or more gantry rings which can move and operate independently as discussed herein. In an exemplary embodiment of the invention, the system controller30(seeFIGS.2A-2B) may be programmed to plan and/or control the motion of the detector heads according to a desired (e.g., optimal) data collection from the desired ROI, while optionally preventing adjacent detector heads from colliding and/or obscuring each other's field of view. In some exemplary embodiments, the SPECT detector heads include detectors (e.g., radiation sensitive elements thereof) that cover overall about 1-40 cm, or 1-20 cm, for example, about 2-8 cm along the circumferential dimension, i.e., along the gantry, for example about 4 cm. In the axial direction, i.e., in the direction of the system axis, the SPECT detectors optionally cover overall about 10-40 cm, for example about 12-32 cm, or 15-30 cm, or 16-28 cm, or about 16 cm, or about 20 cm, or about 24 cm, or about 28 cm, or intermediate sizes. In some exemplary embodiments, collimators extend radially, i.e., toward the patient's body, a distance of a few cm (e.g., 1-20 cm, 1-4 cm, for example about 2-3 cm). In some embodiments, the SPECT detectors (with the collimator) of each such detector head are rotatable, for example, around an axis parallel to the system axis. In some embodiments, the overall space required to enable such free rotation of the head along the circumferential dimension can be about 6-15 cm wide, for example 7-12 cm, for example about 10 cm. In an example, the PET detector heads include detectors that cover overall about 2-50 cm along the circumferential dimension, for example about 2-40 cm, or 2-35 cm, or 3-30 cm, or 5-28 cm, or 10-28 cm, or 15-25 cm, or 20-25 cm, and cover overall about 2-35 cm along the axial dimension, for example about 2-30 cm, or 2-25 cm, or 2.5-20 cm, or 2.5-17 cm, or 2.5-15 cm, or 3-10 cm, or 3-9 cm, or 3-8 cm, or 3-5 cm, for example less than 15 cm, and optionally less than 10 cm, or for example about 5 to 9 cm, or for example 7 to 8 cm. Dynamically Variable Bore Geometry: Conventionally, in both PET and SPECT systems, the bore size is not adjustable, and therefore the sensitivity and/or spatial resolution vary according to the particular ROI and body location.
The conventional solution has been to provide a bore adequate to accommodate a full-body scan and accept degradation in performance when a smaller bore would have been preferable for a particular ROI, as explained below. Another conventional solution has been special small-bore systems for brain or neck scans. According to some embodiments of the present invention, N-M tomography systems are provided with a dynamically variable-geometry bore. Several ways to achieve this are implemented by providing the degrees of freedom for gantry configurations and/or detector head extension/retraction and/or angular orientation. In an exemplary embodiment of the invention, these are pre-selectable, i.e., before performance of a scan, and/or adjustable during the scan, either continuously or in steps. FIG.4Bshows a 12 detector head system with alternating PET and SPECT detector heads on a circular gantry in which the heads are all in a fully retracted configuration providing a maximum bore size for both PET and SPECT operation. Optionally, a non-one-to-one relationship between the numbers of PET and SPECT detector heads is possible. In the configuration shown inFIG.5, all the PET detector heads504have been extended to provide a reduced bore size for PET operation. Alternatively, only the SPECT detectors are extended. It may be seen thatFIG.5represents approximately the maximum extension possible without collision of adjacent heads, for the illustrated size and configuration of the PET detector heads. FIG.5also shows exemplary extension/retraction mechanisms510and512respectively for the PET and SPECT detector heads, described more fully below. FIGS.6A-6Dillustrate the spatial resolution improvement that can be achieved when the bore size is increased or decreased to take account of the particular ROI and body location. The example shows a SPECT-only system with 12 detector heads602on one side of a single-ring circular gantry600, but the same benefits can be achieved in PET-only systems, and in systems capable of both SPECT and PET operation. FIG.6Ashows detector heads602in a retracted configuration to provide a large bore, for example, a conventional 90 cm bore, for performing a full body scan.FIG.6Billustrates detector heads602fully extended to provide a small bore, for example, 20 cm, as it would be used, for example, when performing a brain scan or a scan of the neck. It should be appreciated that the cross-section of an ROI will generally not be circular, particularly in the case of a body scan, so the possibility of non-uniform resolution and/or sensitivity may exist. According to some embodiments of the invention, if desired (or for other reasons), this can be alleviated in some instances by varying the orientation of the detection surfaces dynamically during the scan. For the arrangement illustrated inFIGS.6A and6B, this can be achieved by rotating the detector-carrying portions604of detector heads602around an axis parallel to the system axis during a scan. The detectors can optionally be rotated individually or together. Since such rotation can effectively fill gaps between adjacent detector heads, it may also allow obtaining good sensitivity and resolution with a smaller number of detectors, and thereby result in a less costly system. It should also be noted that in an exemplary embodiment of the invention, only the detector element bearing parts of the detector heads need to be moved. Since these are not heavy, rapid dynamic changes in orientation are practical.
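The collision limit noted above in connection withFIG.5can be illustrated with a simple, non-limiting geometric sketch (the 12-head count and 10 cm head width below are assumed example values): treating each detector-carrying portion as a flat tangential plate of circumferential width w, adjacent plates on an N-head array touch when the chord between neighboring head centers equals w, giving a smallest collision-free radius of approximately w/(2*sin(pi/N)).

import math

def min_bore_diameter_cm(n_heads, head_width_cm):
    # Smallest bore diameter before adjacent flat, tangentially oriented
    # detector plates of the given circumferential width touch.
    r_min = head_width_cm / (2.0 * math.sin(math.pi / n_heads))
    return 2.0 * r_min

# 12 heads of 10 cm width cannot close below a ~39 cm bore without
# rotating, tilting, or axially staggering the heads, as described below.
print(round(min_bore_diameter_cm(12, 10.0), 1))  # -> 38.6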
One potential effect (which may be beneficial) resulting from matching the bore size to the ROI being examined is an increase in the acceptance angle for incoming and/or scattered photons. This is illustrated inFIG.6C. Here, a small ROI604is assumed to be centered in a bore606athat is large compared to the ROI, or in a bore606bthat is closely matched to the size of the ROI. In an exemplary embodiment of the invention, an emission event605is assumed to be centered in ROI604. As a result of scattering, the two photons travel along angularly displaced paths:607afor the un-scattered photon and607bfor the scattered photon, instead of path607c. The acceptance angle α, i.e., the angular error between paths607band607c, is a function of the bore size and the detector pixel size according to the relationship: α=2*arctan(pixel_size/bore_size). In an exemplary embodiment of the invention, an acceptance angle α1 for the large bore can be increased to an angle α2 of, for example, up to about three or four times α1, without degrading or decreasing resolution, by reducing the bore size to match the ROI. Another potentially beneficial effect which may result from matching the bore size to the ROI being examined is illustrated inFIG.6D. This relates to reduction of the so-called "non-co-linearity effect" resulting from residual momentum of the electron and positron at annihilation. A positron event609produces emission of a photon pair whose photons do not travel in exactly opposite directions. One photon travels along a path611ainstead of611band is detected by a detector head608ainstead of608b, while the other travels along path611cand is detected by detector head608caligned with detector head608b. By decreasing the bore size from 90 cm to 20 cm, the photon on path611ais detected by detector head608b, whereby degradation of resolution due to non-co-linearity can be reduced by a factor of up to, for example, 2 or 3 or more with a smaller bore size. In general, the improvement factor is dependent on the starting and ending bore sizes; in the illustration, the bore is reduced from 90 cm to 20 cm. The angle error due to non-co-linearity is about ±0.25 degree. This error is estimated, for example, based on the energy range of the residual momentum of the electron and positron at annihilation. The resolution degradation corresponds to the shift between the theoretical event position (when co-linearity is perfect) and the actual event position. The error can be estimated with simple trigonometry as Err=tan(alpha)*(Bore_size/2). The error can therefore be reduced by a factor of up to about 4.5 when going from a 90 cm bore to a 20 cm bore. Adjustment of bore size by extension and/or retraction of the individual detector heads may be achieved in various ways. A linear motion mechanism can be implemented, for example, with a DC or AC motor, by a linear actuator, by a stepper motor, or hydraulically. Position detectors, for example, limit switches, resettable counters, or digital or analog encoders, may be used to provide position feedback and/or avoid over-extension and/or collision. The scope of the invention is not necessarily limited by these methods, and other ways will be apparent to those skilled in the art as well. FIGS.6E-6Gillustrate an arrangement generally designated at620for extending and retracting a detector head622, according to some exemplary embodiments of the invention.FIG.6Eis a top and side perspective view,FIG.6Fis a side elevation, andFIG.6Gillustrates an exemplary linear actuator.
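Before turning to the extension mechanism ofFIGS.6E-6G, the two relations above can be checked numerically with a short, non-limiting sketch (the 2.5 mm pixel size is an assumed example value):

import math

def acceptance_angle_deg(pixel_size_cm, bore_size_cm):
    # alpha = 2*arctan(pixel_size/bore_size), returned in degrees
    return 2.0 * math.degrees(math.atan(pixel_size_cm / bore_size_cm))

def non_colinearity_err_cm(bore_size_cm, alpha_deg=0.25):
    # Err = tan(alpha)*(bore_size/2), with alpha ~ 0.25 degree
    return math.tan(math.radians(alpha_deg)) * (bore_size_cm / 2.0)

# Reducing the bore from 90 cm to 20 cm enlarges the acceptance angle and
# shrinks the non-co-linearity error, each by the 90/20 = 4.5 factor
# noted above:
for bore_cm in (90.0, 20.0):
    print(bore_cm,
          round(acceptance_angle_deg(0.25, bore_cm), 3),   # 0.318 -> 1.432 deg
          round(non_colinearity_err_cm(bore_cm), 3))       # 0.196 -> 0.044 cm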
Sub-assembly620includes a detector head622located at one end624of an arm626, which is movably mounted on a linear rail628extending radially on a gantry630. Within, or (optionally) attached to the outside of, arm626is a linear actuator634(FIG.6G). Control of extension and retraction is provided by system controller30(seeFIGS.2A-2B). A weight632, chosen to balance the weight of detector head622and arm626, is moveably mounted on rail628and attached to arm626, for example, by a suitable pulley arrangement and belt or cable, or in any other suitable and/or desired manner. For example, the weight can be between 0.5 Kg and 30 Kg, for example, between 3 Kg and 20 Kg, for example, 7 Kg. Optionally, the moving part of the detector weighs between 1 and 30 Kg. Optionally, the entire arm module weighs between 2 and 50 Kg, for example, between 5 and 30 Kg. Referring toFIG.6G, in which some parts are omitted in the interest of clarity, in the illustrated example, actuator634includes a driving member636, for example, a sprocket, and a driven member638which may also be a sprocket. Input power is provided by a rotary actuator such as a motor described above (not shown) attached to sprocket636. A chain (not visible inFIG.6G) is carried by sprockets636and638. Attached to the chain are travelers642and640, which respectively carry detector head622and its arm626, and counterweight632. As will be appreciated, in an exemplary embodiment of the invention, when the chain is driven, the detector head and counterweight move in opposite directions, as indicated by arrows644and646. Such an arrangement allows use of a very small force for extension and retraction, while gravitation does not produce any motion or resistance, since the counterweight provides a balancing counter-force equal to the projection of the total force (vector) along the path of linear motion. In some exemplary embodiments, for a detector head weighing about 20 kg, counter-balance can be provided by a weight of approximately 19.5 kg, so a force of only about 0.5 kg needs to be applied to move the detector head. Such a very gentle force potentially reduces the risk of patient injury in case of a collision. Moreover, in case of a collision, the patient can typically easily resist such a gentle force and/or move the arms away regardless of their orientation. Additional or alternative collision avoidance protection may be provided by proximity sensors of various types. Some options include pressure sensors, acoustic (e.g., ultrasound) sensors, and optical sensors mounted on the detector units and coupled to control the actuator motor in a suitable manner. In one example, a controller receives an alert when the distance is below a certain threshold. In another example, the motor is stopped (or a brake activated or the motor disengaged) by a dedicated electrical circuit reading the sensor. In some embodiments, the detector heads are brought into close proximity to the patient's body, e.g., within less than 20 cm, or less than 10 cm, or less than 5 cm, or within 1-2 cm, or in substantial contact, or larger or smaller or intermediate distances. Optionally, in some embodiments, contact with the patient's body is allowed, but the allowable force on the body/skin is limited, for example, to less than 1000 g, or less than 200 g, or less than 50 g, or less than 10 g, or intermediate contact forces. In an exemplary embodiment of the invention, the contact area is at least 1-10 cm squared, optionally by configuring the detector so that it does not have sharp edges in the direction of the body.
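The counterbalancing arithmetic above can be illustrated with a minimal, non-limiting sketch (using the 20 kg and 19.5 kg example masses from the text):

G_M_PER_S2 = 9.81  # standard gravity

def residual_force_newtons(head_mass_kg, counterweight_kg):
    # With the head and counterweight coupled to move in opposite
    # directions, the drive only has to overcome the residual imbalance
    # between the two masses.
    return abs(head_mass_kg - counterweight_kg) * G_M_PER_S2

# 20 kg head vs. 19.5 kg counterweight -> about 0.5 kgf (roughly 4.9 N),
# a force gentle enough for a patient to resist by hand.
print(residual_force_newtons(20.0, 19.5))  # -> 4.905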
In an exemplary embodiment of the invention, consideration is given to assuring that adjacent heads do not interfere mechanically or operationally.FIGS.7A and7B,8A-8D,9A-9C,10A and10B, and11A-11Cillustrate exemplary design solutions according to some embodiments of the invention. FIGS.7A and7Billustrate a dual-mode system700including a detector array706comprised of six PET detector heads702a-702farranged in three pairs702a,d,702b,e, and702c,f, and six SPECT detectors704. System700is illustrated as being used in PET mode. InFIG.7A, the PET detectors comprised in array706are in an extended operating configuration, for example suitable for performing full body scans. However,FIG.7Aalso represents about the minimum practical extension that is achievable, due to the size and shape of the PET detector heads. In an exemplary embodiment of the invention, the controller includes a geometry engine which simulates the space filling properties of the detectors so as to plan and/or monitor motion in a manner which avoids interference between moving (and/or unmoving) components. Optionally or alternatively, the geometry engine also calculates lines of sight and/or obstructions. In an alternative embodiment, allowable paths and/or positions (e.g., for one or more detectors) are pre-calculated and provided to the system, which uses such paths and/or positions. To achieve a smaller bore size with detector array706, only some, for example one-half of, or any subgroup of, the detector heads, for example, detector heads702b,702d, and702f, are extended, as illustrated inFIG.7B. The other detector heads702a,702c, and702eremain in a fully retracted position. In some embodiments, only the extended detector heads are used. In other embodiments, all the detector heads are used, but the sensitivity and/or resolution of the extended detector heads may not be the same as for the un-extended detector heads. This is optionally accounted for in the course of processing the emission data. FIGS.8A-8Dillustrate other ways to achieve smaller bores, one of which is again shown in a dual-mode system being operated in a PET mode.FIG.8Ashows the detector element carrying portions of PET detector heads806a,806c, and806erotated 90 degrees around their respective longitudinal axes (i.e., in a plane perpendicular to the longitudinal axes). Compared withFIG.7A, it may be seen that a smaller bore is achieved. FIG.8Bshows a configuration in which all the PET detector heads802have been rotated by 90 degrees, allowing the greatest reduction in bore size. FIG.8Crepresents a situation in which all of the PET detector heads802have been rotated by less than 90 degrees, for example, 45 degrees. Such rotation may take place, for example, before, during, or after extension, and/or during a scan as described below. FIG.8Dillustrates the effect of rotating the PET detector heads806a-806fto an angle between 0 degrees and 90 degrees relative to the orientation inFIG.8A, followed by extension. As may be seen fromFIG.8D, such rotation can result in overlap of adjacent detector heads. In some embodiments of the invention, as the angle of rotation increases toward 45 degrees, the overlap increases, but decreases as the angle of rotation increases further toward 90 degrees; at that point, the limiting case of no overlap is reached, as illustrated inFIG.8B. The rotation shown inFIGS.8A-8Dcan be implemented in various ways.
For example, small bidirectional motors (not shown) may be mounted on gantry814and connected through a position tracking arrangement, for example, using encoders and/or limit switches coupled to detector shafts808a, etc. Optionally, each detector head unit may include a built-in rotation actuator. Other mechanical arrangements may also be used, for example, a hydraulic arrangement, or manual adjustment may also be employed. Optionally, for this and/or for actuators for extension and retraction, the motors are controlled by the system controller. Another way of reducing bore size is illustrated inFIGS.9A-9C.FIG.9Aillustrates a system900in which the individual detector heads902a-902fin a detector array906are partially extended, for example, for a full-body PET scan, and also are oriented in planes perpendicular to the respective axes of elongation of the detectors, as in the other embodiments described up to now. However, it should be noted that the extension illustrated inFIG.9Ais about the greatest possible extension, since attempted further extension is blocked by peripheral contact between the detector heads. In an exemplary embodiment of the invention, to create clearance for further extension and reduction of the bore size, the detector heads are tilted out of the perpendicular planes by pivoting them around axes that run parallel to the longitudinal axis of the system (and/or perpendicular thereto, and/or around other axes that are in the plane of the detector), e.g., in the direction parallel to the direction of relative movement by which the successive axial slices are produced. As a consequence, as illustrated inFIG.9B, adjacent detector heads, e.g., heads904aand904b, overlap peripherally, allowing a degree of extension of detectors902a-902fnot achievable with the heads in the respective perpendicular planes as illustrated inFIG.9A. Optionally, to achieve yet further reduction in bore size, heads904a-904fare tilted even further, as illustrated inFIG.9C, thereby increasing the overlap and allowing additional extension. The tilting shown inFIGS.9A-9Ccan be implemented in various ways similar to those employed in the embodiments ofFIGS.8A-8C, as will be understood by persons skilled in the art. It is noted, in these and other embodiments, that the detector plane need not be flat. For example, it may be curved, being a section of a cylinder, a sphere, and/or another conic section, and/or another curved and/or piecewise-linear shape. FIGS.10A and10Billustrate a detector array1000employing a further way of achieving a wide range of bore sizes, which is optionally used together with other methods as described herein. Here, detector array1000includes six PET detector heads1002a-1002fand six SPECT detector heads1004a-1004fmounted on a gantry ring1006. To accommodate the shape of the PET detectors, detector head pairs1002a,1002dand1002c,1002fare located in a first plane on the gantry, while the intervening detector head pair1002b,1002eis located in a second plane spaced axially from the plane of the other two pairs. FIG.10Aillustrates extension of the PET detectors to achieve a first desired bore size. To achieve a smaller bore size, the detectors are extended further, as shown inFIG.10B. Because of the axial spacing, even after the detectors have been extended sufficiently that they would come into peripheral contact if they were situated in a single plane, further extension is possible, because the axial spacing allows the tip of one detector, for example1002a, to pass behind the adjacent tip of the next detector1002b.
As will be appreciated, suitable modification of the programming of the coincidence detection sub-system may be made to account for the axial spacing. In some embodiments the programming does not need to be modified (only parameters or look-up tables), since it just needs to note the non-uniform viewing when determining relative counts/normalizing from different areas (e.g., using sensitivity maps and/or other calibration maps). FIGS.11A-11Cillustrate another arrangement for achieving a wide range of bore sizes involving axial spacing, which may be used, for example, alone or with other methods described herein.FIG.11Ais an end view of assembled detector array1100.FIG.11Bis a perspective view with the top portion cut away, andFIG.11Cis a perspective view rotated 90 degrees from that ofFIG.11Bto show internal construction details. Here, detector array1100is comprised of six PET detectors1102and six SPECT detector heads1104. To accommodate the shape of the PET detectors, detector pairs1102a,1102dand1102c,1102fare located on a first ring1106of a gantry1108, while the intervening detector pair1102b,1102eis located on a second ring1110spaced axially from the ring of the other two pairs. Optionally, since ring1110carries only one PET detector pair, all the SPECT detectors may be mounted on that ring. Optionally, more than two rings may be provided.FIG.11Dshows an embodiment in which the detector heads are arranged in three layers, either on one ring or on three separate rings. Additional rings, for example, 3, 4, 5, 6 or more, may be provided (and optionally added in a modular manner). In an exemplary embodiment of the invention, the detectors on different rings are of different types, sizes and/or qualities. Detector Head Movement on the Gantry and Gantry Rotation: A potential benefit of the variable geometry aspect of some embodiments of the invention is the possibility of obtaining good resolution and sensitivity around the entire ROI with a reduced number of detector units that are laterally or circumferentially moveable on a gantry.FIG.12Aillustrates schematically one way of translating detector heads on a gantry. Here, eight detector heads1202a-1202hare slidably mounted on a track assembly1204comprised of separated track segments1206a-1206h. In an exemplary embodiment of the invention, heads1202may be moved along their respective track segments by a linear motion arrangement similar to that described in connection withFIG.6G, or by any other suitable and desired arrangement. Various movement options may be provided, as will be apparent to those skilled in the art in light of the disclosure herein, for example, but not limited to: (a) prepositioning, (b) steps of a step and shoot regimen, (c) adjustment of position between axial slices, and (d) "on the fly" adjustment during gantry rotation or a spiral scan. It should be recognized that combinations of the indicated or other options are also contemplated. Further, it should be recognized that the detector heads may be positioned either uniformly or non-uniformly around the gantry. Movement over any desired range is possible, depending on the number of detector heads, for example, 20 degrees (i.e., ±10 degrees from a central position). The described arrangements are applicable to PET and SPECT detector heads, as well as to dual purpose heads as described below. FIG.12Billustrates an alternative arrangement in which the detector heads move circumferentially on the gantry.
Here, a detector array1208mounted on a gantry1209includes six detector heads1210a-1210f. With respect to at least some designs, the concepts being described are equally applicable to PET and SPECT detectors. As in other embodiments described, the detector heads can be extended and/or retracted, and/or optionally rotated or tilted, to vary the bore size. In an exemplary embodiment of the invention, for example, to achieve a variation in spatial resolution around the ROI, some of the detector heads, for example detector heads1210a,1210c, and1210e, are moveable circumferentially on gantry1209, as indicated by arrows1212a,1212c, and1212e. As a result, in the embodiments of bothFIGS.12A and12B, there can be more detectors in some areas, and in other areas there can be fewer detectors. Optionally, detector heads can be concentrated in the wide areas to provide enhanced resolution in those areas. Optionally or alternatively, different detector head qualities are used in different areas, for example, to support non-uniform data collection protocols. A possible benefit of the translational embodiment ofFIG.12Ais that it may be easier to implement. Arcuate motion, or translation over a range of about ±15 to 30 degrees, for example, ±20 degrees from a central position, can give good results. Another way to enhance resolution around an ROI with a reduced number of detector heads is to provide gantry rotation for PET operation, as in SPECT operation. This concept is illustrated inFIG.13. In a non-rotating system, reducing the bore size from the conventional 90 cm to 30 cm for a small ROI, for a particularly configured detector head, results in a larger angle of acceptance α2 as compared to α1, but may decrease the angular resolution, e.g., as previously noted. However, if the gantry is continuously rotated, or continuously rotated during imaging of successive axial slices, e.g., with the same number of detector heads and a bore size of 30 cm, the angle of acceptance α3 can be made smaller, for example, smaller than even α1, and the lost angular resolution can be recovered. Optionally, the speed of rotation is selected according to the desired acceptance angle. In an exemplary embodiment of the invention, with a rotating gantry, the detector array can be configured with a number of detector heads arranged over less than the full 360 degrees around the ROI, for example, over between 180 and 320 degrees. In a static system, that would result in a potentially large gap in coverage. Rotation assures that emission events from all parts of the ROI will be detected as the detector array rotates, even though generation of the image data may require more time. A rotating gantry can also be provided in embodiments in which the detector heads are mounted on axially spaced gantry rings and/or are translatable on the gantry, yielding the benefits of both a wide range of bore configuration adjustability and increased angular resolution with fewer detector heads. Optionally, the two gantry rings can be arranged to rotate at different speeds. FIG.14illustrates a potential benefit of varying the detector geometry during a full-body scan. For a normal patient1402, a small bore is used for the slices in the regions1402aand1402bof the lower legs, neck, and head, and a larger bore size for the region1402cof the upper legs and the torso. In contrast, for an obese patient1404, a larger number of bore sizes, for example four, gives better results.
Thus, for regions1404aand1404e, covering the lower legs, the head, and the neck, a first bore size is used. For regions1404band1404c, covering the upper legs and the upper torso, a second, larger bore size is used. For region1404d, covering the lower torso and the chest, a third bore size, even larger than those for the other regions, is used. A further option is to vary the bore size on-the-fly as the scan proceeds, resulting in a bore size that dynamically follows the contour of the patient's body. It should also be noted that for an ROI that is relatively small, decreasing the bore size can be facilitated by making the patient carrier transversely adjustable or with a part that is narrower than the overall width. Exemplary Method of Use: While it is believed that the method of use of the various embodiments described above should be apparent to those skilled in the art from the foregoing description, it may be summarized in conjunction with the flow diagrams ofFIGS.15and16. For purposes of discussion, it is assumed that a PET procedure or a SPECT procedure is to be performed, either in a single-mode or dual-mode system, but it should be understood that the discussion is also applicable to simultaneous performance of PET and SPECT procedures. As shown, at1502, suitable preparation of the patient, including injection of the radioactive tracer, is optionally undertaken. At1504, optionally while (and/or before, and/or after) the tracer is circulating through the bloodstream to the ROI, the size and shape of the ROI is determined, for example by a conventional transmission CT. Optionally, the CT may be performed using a CT capability (e.g., using an x-ray or radiation source) included in the N-M tomography system itself, or by use of a separate CT system. At1506, the bore geometry and, if necessary, the configuration of the collimator septa are adjusted to accommodate the size, shape, and location of the ROI and the desired spatial resolution according to the nature of the procedure being performed. Depending on the required size of the bore, the detectors are extended as needed. If the detectors cannot be extended sufficiently to provide a small enough bore, only some of the detectors are extended. Alternatively, according to the features of the particular system, the detectors are rotated and/or tilted to the required angular orientation, and optionally the detectors are then extended. At1508, after sufficient time (and/or during this time) for the tracer to travel through the patient's bloodstream to the ROI, the scan is performed, and at1510, the image reconstruction is performed. FIG.16illustrates a more complex scan procedure, again applicable in general to a PET or a SPECT procedure, or to simultaneous performance of both PET and SPECT procedures. Here,1602and1604are the same as1502and1504, but at1606, a scan regimen is programmed for in-scan detector and collimator geometry variation. This may include one or more of the following features (one hypothetical representation is sketched after the list):
a) Preliminary bore size adjustment;
b) Continuous gantry rotation during the scan or at each axial slice;
c) Gantry rotation speed adjustment;
d) Continuous bore size adjustment (both extension and angular orientation of the detector heads) during the scan at particular axial positions, or continuously over the entire scan;
e) Variable positioning of the detector heads on the gantry (initially or over the course of the scan);
f) Variable collimator configuration initially and/or over the course of the scan.
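Purely as a hypothetical illustration of how a regimen programmed at1606might be represented (every field name below is invented for this sketch; the invention does not prescribe any particular data format):

regimen = {
    "bore_diameter_cm": 90.0,            # a) preliminary bore size
    "gantry_rotation": "continuous",     # b) or "step_and_shoot"
    "rotation_speed_deg_s": 6.0,         # c) rotation speed adjustment
    "per_slice_overrides": [             # d) in-scan bore adjustment,
        {"axial_pos_cm": 0.0,  "bore_diameter_cm": 90.0},  # sorted by
        {"axial_pos_cm": 60.0, "bore_diameter_cm": 30.0},  # axial position
    ],
    "head_positions_deg": [i * 30.0 for i in range(12)],   # e)
    "collimator_septa_extension_mm": 0.0,                  # f)
}

def bore_for_axial_position(reg, z_cm):
    # Pick the most recent bore override at or before axial position z.
    bore = reg["bore_diameter_cm"]
    for entry in reg["per_slice_overrides"]:
        if entry["axial_pos_cm"] <= z_cm:
            bore = entry["bore_diameter_cm"]
    return bore

print(bore_for_axial_position(regimen, 70.0))  # -> 30.0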
At1608, after sufficient time for the tracer to travel through the patient's bloodstream to the ROI, the scan is performed according to the programmed regimen, and at1610, the image reconstruction is performed. Detector Unit Arrangements and Configurations: Referring again toFIGS.2A-2B, it will be recalled that a conventional SPECT detector unit is comprised of an emission detector element32in the form of a scintillator that provides an optical (or electrical) pulse in response to impingement of a gamma-ray photon, an array of photomultiplier tubes (PMTs)34that convert the optical pulses into electrical signals from which the images are reconstructed, and a collimator arrangement42with openings aligned with the PMTs to provide a narrow acceptance angle for each PMT, i.e., to ensure that photons striking the detectors do so at a relatively narrow range of angles. Alternatively, the scintillators and the PMTs can be replaced by a direct conversion semiconductor detector, for example, a SiPM as described herein. Conventional PET detector units can be similarly configured, but do not include a collimator array, because incidence angle information is extracted based on coincident detection of two photons emitted by a single radioactive decay. Scintillation detector elements are typically formed as unitary structures, and such structures are also employed in some embodiments of the invention. Alternatively, the detector elements according to some embodiments are pixilated, e.g., formed of an array of discrete small detector pixels. This can be advantageous, particularly for PET imaging, in that it enables identifying the orientation from which the photon has been emitted at a much finer precision, for example, along a line with a width of about 4-6 mm between two locations of coincident detections, taking into account the distance of positron travel before annihilation and the pixel width. In an exemplary embodiment of the invention, dual use (PET and SPECT) detector systems are provided. In an exemplary embodiment of the invention, pixilation may in some cases facilitate optionally providing time-of-flight analysis circuitry for PET operation to determine where along the estimated emission line the positron was emitted, for example at a longitudinal resolution of about 1-5 cm, for example 2-3 cm (e.g., by measuring such time per pixel or group of pixels). Optionally, the electronic circuitry connected to some or all of the detector units can also provide photon energy characterization (e.g., energy level) within the entire SPECT and PET range of about 40 KeV to 511 KeV, detection of count rate in case of high flux of photons, for example when a high intensity radiation source such as an X-ray source is activated, and/or detection time resolution sufficient for coincidence and time-of-flight detection. Optionally, the signal processing electronics can also include multiple separate channels that allow independent amplification and front-end processing for each detector or small group of detectors and/or a small number of pixels (e.g., between 10 and 1000, for example 100), such that any malfunction of one or more pixels or detectors, and any blinding of one or more pixels or detectors by a "hot spot" (high intensity source), do not prevent other detectors from properly functioning and detecting photons emitted from other regions. One suitable way to achieve this is shown in commonly assigned U.S. Pat. No.
8,445,851, the content of which is hereby incorporated herein in its entirety as if fully set forth. The detector pixels can be arranged in various configurations according to embodiments of the invention. In an example, the detector can be configured as a symmetrical matrix of 8×8 pixels, 10×10 pixels, 12×12 pixels, 16×16 pixels, 20×20 pixels, 32×32 pixels, or larger or smaller or intermediate sized matrices. Alternatively, the pixels may be arranged in asymmetric configurations, for example 16×32 pixels, 16×64 pixels, 8×16 pixels, and other larger, smaller, or intermediate sized configurations. In a non-limiting example, detector pixels have dimensions in the range of about 0.1-20 mm, for example, about 0.2-15 mm, for example 0.5-10 mm, for example 1-5 mm, for example 1-2 mm or 2-3 mm or 2-4 mm. While in an exemplary embodiment of the invention the pixels are symmetric (e.g., square), this need not be the case; for example, the pixels may be elongate in a certain direction, for example, having a factor of between 1.1 and 4 or more between two orthogonal dimensions thereof. In another non-limiting example, the pixel pitch (i.e., the spacing between pixels) is symmetrical in two directions, for example, about 2.5 mm or about 1.25 mm or about 1 mm, or about 2 mm or about 3 mm, or larger, or smaller, or intermediate values. In another example, the pixel dimension in one direction is different than the dimension in another direction, for example 2×3 mm, 1.5×2.5 mm, 2×2.5 mm, etc. In some exemplary embodiments, the detectors have dimensions in the range of about 1 cm to about 15 cm, for example in the range of 2 to 8 cm, for example in the range of 3 to 5 cm, for example 4 cm. Exemplary Reconstruction Variations: In some exemplary embodiments of the present invention, photons are detected by solid state detectors and electronic circuitry that are configured for acquisition of single photons of typical SPECT energy levels, and/or single photons of high energy such as 511 KeV, and/or coincidence detection of pairs of photons received as a result of a single positron emission from the radiopharmaceutical. In exemplary embodiments of the invention, detector modules are capable of detecting in more than one of these modes, for example detecting single photons over a wide range of energies, such as from 40 KeV to 511 KeV. In other exemplary embodiments of the invention, detector modules are capable of detecting 511 KeV photons both as single events (if coincidence of a pair of photons was not detected) and as coincidence photon detections. In an exemplary embodiment of the present invention, a collimator is used on the detector which provides a wide collection angle for 511 KeV photons, but with some preferred orientation of detection (for example, about 20%, or 30%, or 50%, or 70%, or 100%, or 150%, or 200%, or 300% higher probability in a certain direction (e.g., having an angular aperture of between 0.001 and 10 degrees in a largest dimension) compared with most other directions, for example about 100% higher, which is about twice the probability, in a main direction).
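Returning briefly to the pixel matrix examples above, a non-limiting cross-check: a square n×n matrix at pitch p spans roughly n*p per side (a simplification that ignores edge margins).

def detector_side_cm(n_pixels, pitch_mm):
    # Approximate side length of a square pixel matrix: pixel count
    # times pitch, converted from mm to cm.
    return n_pixels * pitch_mm / 10.0

# A 16 x 16 matrix at the 2.5 mm pitch mentioned above spans 4 cm,
# matching the "for example 4 cm" detector dimension given in the text.
print(detector_side_cm(16, 2.5))  # -> 4.0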
In an example, information from photons that are detected as part of a coincidence is processed by the reconstruction algorithm as being probably received from a location along a line of sight between the two detection locations, and information from photons that were detected only in one location, with no detection of the coincidence, can be either ignored or processed in a SPECT-like probability analysis based on the detection probability function ("functional", detection probability map), which depends on the collimator properties and its preferred orientation. In an example, the reconstruction algorithm combines information from both SPECT-like analysis and coincidence-like analysis. In an example, the analysis based on individually detected photons (as a single detected photon, and/or as each of a pair of coincidence photons) is used to form interim information, for example an interim reconstruction of the radiopharmaceutical distribution in a 3D volume, and that interim information is used as prior information for a further 3D image reconstruction based on coincidence analysis. In this approach, for example, analysis based on one approach is either fully integrated with, and/or iteratively integrated with, and/or used as prior information for, analysis based on the other approach. In an example, PET-like analysis serves as prior knowledge for SPECT-like analysis. In an example, SPECT-like analysis serves as prior knowledge for PET-like analysis. In an example, the SPECT-like analysis and the PET-like analysis are iteratively performed, either one providing preprocessing or prior knowledge for the other, or one serving for post-filtering of the other, or the algorithms being merged with one another, or any combination thereof. In an exemplary embodiment of the invention, when reconstructing data from multiple energies, an a priori probability of correlation between two energies is optionally used. For example, for a given body structure, the a priori probability of a SPECT event (or an event at one energy) may be different from the a priori probability of a SPECT or PET event at a different part of the structure and/or at the same part. Optionally, a previous image is used as an input to indicate the differences in radiopharmaceutical distributions at the different detected energies. In one example, heart muscle is detected using one energy and a diseased location in the heart is detected using another energy. Reconstruction of the shape and/or location of the heart using the first energy may be used to limit (e.g., anchor) the other energy to fit within the boundaries of the heart as reconstructed by the first energy, or as matching a model sized and shaped using the first energy. In an exemplary embodiment of the present invention, the reconstruction algorithm is adaptive to take into account the variable location of the detectors. Unlike conventional PET, where the algorithm assumes that detector positions and orientations are known and fixed in advance, and in particular that the relative location and orientation of the detectors is known (one detector relative to the other detectors), in an example of the present invention the algorithm forms probability distribution maps that factor in the de-facto location of the detectors during the acquisition, as customized per patient and/or instant of time.
In an example, the distance between the detectors and the body, the location of the detectors in space, and the relative position of the detectors (one relative to others) vary from one detector to another and from one patient to another. Moreover, in an example, the algorithm is configured differently than conventional PET-reconstruction algorithms, in that the probability maps for photons to be detected by a detector are calculated based on the location and orientation of the detector during the acquisition. The probability of obtaining a coincidence detection changes as the detectors move and are positioned closer to the body, as the line of sight between any two detectors becomes different than that which was pre-fixed in conventional ring-based detectors. In an example, a 3D image reconstruction algorithm calculates a probability function ("the functional") for a radiopharmaceutical in a voxel (a small volume in a certain 3D location in space) to emit a positron that converts to 2 photons (following annihilation of the positron) that would be detected as a coincidence event by a pair of pixels (one from each of two opposing detectors), taking into account the position of the detectors and their orientation as a result of the detector motion in-out towards the body. In an example, the probability of detection as a single photon event is calculated too, taking into account, for example, the position of the detectors as a result of the detector motion in-out towards the body and/or the collimator or detector design. In an example, the orientation of the detectors is also used as part of the calculation of the probability function. In an example, the detectors also move laterally, for example by motion of the gantry and/or linear motion and/or rotation thereof, and the reconstruction algorithm forms the probability functions taking into account the gantry lateral motion and its effect on the position and orientation of the detectors. In some examples, such motion is done before photon acquisition begins, and the algorithm accounts for it. In another example, such motion occurs during the photon acquisition process, and the algorithm accounts for it by having the probability function calculated to take into account the dynamic changes due to the relative motion during the scan. In an example, the detectors rotate around one or more local axes, for example by rotating around an axis of rotation per detector structure (or per group of detectors), for example, independently rotating detectors (or groups) around an axis which is more-or-less parallel to the main axis of the patient body. In this example, the reconstruction algorithm forms the probability functions taking into account the detector rotation. In some examples, such rotation is done before photon acquisition begins, and the algorithm accounts for it. In another example, such rotation occurs during the photon acquisition process, and the algorithm accounts for it by having the probability function calculated to take into account the dynamic changes due to the orientation changes during the scan. In an example, a combination of some or all of the above components of the reconstruction algorithm is used to enable adaptation of the reconstruction algorithm and use of the probability functions taking into account the ability of the detectors to move before and/or during the scan. For example, the adaptation is provided using a sensitivity or energy correction and/or by modifying the model of the detectors used in reconstruction.
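A deliberately simplified, non-limiting stand-in for such a probability functional (the patent does not give a closed form; the solid-angle model and the numbers below are assumptions of this sketch) is the fractional solid angle subtended by a pixel of area A at distance r from a voxel, reduced by the cosine of the tilt θ of the pixel normal to the line of sight:

import math

def geometric_detection_probability(pixel_area_cm2, r_cm, tilt_deg):
    # Fractional solid angle A*cos(theta)/(4*pi*r^2) of a small pixel as
    # seen from a voxel; recomputing this from the de-facto detector
    # pose is what makes the probability map adaptive.
    return (pixel_area_cm2 * math.cos(math.radians(tilt_deg))
            / (4.0 * math.pi * r_cm ** 2))

# Halving the voxel-to-pixel distance (e.g., by extending the head
# toward the body) quadruples this geometric term for a 2.5 mm square
# pixel (area 0.0625 cm^2):
print(geometric_detection_probability(0.0625, 45.0, 0.0))
print(geometric_detection_probability(0.0625, 22.5, 0.0))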
As noted herein, in some cases planning or RT acquisition is modified, for example, to ensure a sufficient photon count from a desired location, to ensure a desired quality, to avoid data collection from undesired regions, and/or to ensure stability of reconstruction. In another example of the present invention, the system is capable of simultaneously acquiring high energy (e.g., PET, 511 KeV) and lower energy (SPECT, X-Ray) photons. In an example, such simultaneous acquisition of photons from multiple energy levels allows simultaneous image acquisition of multiple radiopharmaceuticals. For example, simultaneous imaging of radiation from a PET isotope (e.g., a radiopharmaceutical based on one or more of F-18, C-11, N-13, O-15, Rb-82, Cu-62, Ga-68, Iodine), and from a SPECT isotope (e.g., a radiopharmaceutical based on one or more of Tc99m, Tl201, I123, In111). In an example, the acquisition of photons emitted by two or more radiopharmaceuticals is simultaneous, and the image reconstruction algorithm generates 3D images of the distribution of the two or more radiopharmaceuticals within the ROI, thus avoiding problems associated with registration of images from different sources that are acquired by different systems and/or at different times. While the term PET has been used for convenience, other coincidence detection methods may be used. Similarly, other single photon detection methods than SPECT may be used. Such a system as described herein may be selectively operated, for example, in a single mode (coincidence or non-coincidence) and/or in a dual mode. In an exemplary embodiment of the invention, for coincidence detection, time stamps per photon are obtained at about microsecond or sub-microsecond resolution, and time-of-flight processing is optionally obtained if time stamps are obtained at sub-nanosecond resolution. In these cases, a processing means, such as a central CPU (in this or other embodiments), can identify the matching photons and analyze the coincidence emission line, and, if available, also the estimated position along the line based on a time-of-flight calculation. Collimator Arrangements and Configurations: FIGS.17-22Dillustrate exemplary collimator configurations, including, for example, designs that are adjustable to provide high and low spatial resolution. This capability may be advantageous for SPECT imaging using N-M tomography systems providing adjustable bore geometry as discussed below, and also to permit selectably using the same detector units for PET or SPECT imaging, or for both PET and SPECT imaging simultaneously. In some embodiments, no physical adjustment is applied, for example, using software correction to adapt the received signal for different desired collimation conditions. Referring first toFIG.17, there is shown a representation of an exemplary basic adjustable collimator designated1700. For reference purposes, the Z-direction is taken as perpendicular to the plane of the sensor element, and the Y-direction is the direction parallel to the machine or patient axis. The X-direction is orthogonal to the Y and Z axes, and with the Y-axis defines a plane parallel to the plane of the sensor element. It is noted that non-rectangular collimators may be used, for example, hexagonal. As illustrated inFIG.17, collimator1700is formed of two orthogonal sets of septa1702and1704. Septa1702lie in X-Z planes (spaced in the Y-direction). Septa1704lie in Y-Z planes (spaced in the X-direction). Septa1702are formed with slits1706at spaced intervals in the X direction.
Septa1704are formed with slits1708at spaced intervals in the Y direction. The spacing between slits1706corresponds to the X-direction spacing (or multiples thereof) of septa1704, while the spacing between slits1708corresponds to the Y-direction spacing (or multiples thereof) of septa1702. Septa1702and1704fit together as shown to form an "egg-crate" array of collimator cells1710that can easily be assembled. As illustrated, the spacing of septa1702(in the Y direction) and septa1704(in the X direction) is the same, so that cells1710are square. However, it should be understood that the septa spacing can be different in the X and Y directions, so that cells1710are rectangular. Another variation is to make the slit spacing and/or the spacing of septa1704non-uniform, allowing different size cells at different locations in the collimator. These variations are simple to achieve with the illustrated design. A plastic frame (not shown, optionally positioned between the collimator and the ROI) is optionally used to press the septa together and against the detector, for example, using one or more screws to provide the pressure. Other attachment mechanisms may be used as well, for example connectors which interconnect the septa and/or attach a flange at the end of one or more septa to the detector. FIGS.18A-22Dillustrate exemplary (but non-limiting) ways to vary the resolution of collimators according to some embodiments of the invention. Before proceeding to a description of some of these embodiments, the following points should be noted:
1.FIG.18Ashows a fragment1800of a collimator formed by spaced Y-Z plane septa1802and X-Z plane septa1804. These define collimator cells1806(four of which are shown) that provide photon travel paths to the detector element. For simplicity,FIGS.18A-22Dshow only part of a single row of the collimator. The full collimator generally includes multiple (e.g., 4, 10, 20, 30 or intermediate or greater numbers of) parallel rows like1800. Also, while only four cells1806are shown, the actual number of cells (as well as the number of parallel rows1800) will depend on the overall dimensions of the emission detector element and the pitch of the septa (e.g., the spacing between septa or septa centers).
2. In some embodiments, the septa pitch in the Y-direction is the same as that in the X-direction, whereby the individual collimator cells1806are square. Alternatively, the septa pitch in the Y-direction is different from that in the X-direction, whereby the individual collimator cells1806are rectangular. Alternatively or additionally, the pitch can be variable in the X and/or Y directions. In some embodiments the septa are not straight. For example, the arrangement may be of radially arranged (e.g., extending from one or more points) fixed septa and circular septa mounted thereon. Optionally or alternatively, one or more septa may be inclined away from a perpendicular to the detector.
3. In some embodiments, in addition to or instead of different pitch in the X and/or Y directions, and/or variable pitch in the X and Y directions, the septa thickness may be different in the X and/or Y directions. As an additional or alternative option, the septa thickness may be variable in one or both of the X and Y directions. As another alternative or additional option, the septa length may be different in the X and Y directions. As yet another alternative or additional option, the septa length may be variable in one or both of the X and Y directions.
The variation within a detector collimator for one or more of these parameters may be, for example, within a factor of 1.1, 2, 3, 4, or intermediate or greater factors. Thus, some embodiments provide non-uniformly sized and/or shaped collimator cells.4. Collimator configurations and geometry variations as illustrated inFIGS.18A-22Dcan be used with both pixelated and with non-pixelated detectors.5. Conventional PET detectors do not include collimators. However, it has been found that certain materials, for example, tungsten and the others mentioned above, more efficiently absorb photons having energy in the 40-250 KeV range emitted by tracers used for SPECT imaging than the high energy (511 KeV) PET photons. Forming the collimator septa of such materials helps make it possible to use detectors having collimators as described herein for both PET and SPECT imaging with only slightly less efficiency (i.e., sensitivity, e.g., 30%, 20%, 10% or intermediate or smaller reduction of sensitivity) in the PET mode, but with a wider effective viewing angle (i.e., angle of acceptance), since the high energy photons are able to pass multiple septa, for example, 3, 4, 5 or 6 septa. Stated differently, despite the decrease in detection probability (resulting from decreased sensitivity as the angle of acceptance increases), useful SPECT detectors can still provide high effective sensitivity for PET detection.6. In exemplary embodiments, blockage for PET energies is less than 80%, 60%, 50%, 20% or intermediate percentages depending, for example, on the acceptance angle, septa design, and/or material as described herein. As a specific non-limiting example, the collimator may be formed of tungsten septa about 0.2 mm in thickness, and having 1.03 mm square cells, with a pitch of 1.23 mm and height in the Z-direction of 14.5 mm, and overall horizontal septa length of 10.8 mm (see the illustrative sketch following these numbered points).7. Use of wide-angle reconstruction techniques for SPECT imaging, for example, the ML-EM algorithm, allows a single adjustable-geometry collimator to be used for SPECT imaging for multiple distances between the detector and the target area, e.g., a range factor of 1.5, 2, 3, 5 or intermediate or greater values. Suitable known reconstruction techniques for PET imaging include, without limitation, FBP (filtered back projections), iterative algorithms with or without PSF (point spread function), and modeling. These may also permit collimated detector units formed, for example, of tungsten septa to be used for PET imaging to detect photons over a range of detector angular extent along the system axis, for example, a range factor of 1.5, 2, 3, 5 or intermediate or greater factors.8. It should further be noted that conventional SPECT algorithms assume a collection angle and a detection probability map (resulting from the collimation and perhaps other factors) that is part of the algorithm. In PET imaging, as the detectors are exposed to the entire imaged volume, the coincidence line is used for the reconstruction process. However, in some of the exemplary embodiments, the variable septa geometry affects PET reconstruction to some extent, in steps—sensitivity being changed (and generally reduced in steps) as the detection angle increases.
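As an illustration of points 5 and 6 above, the following minimal sketch computes the Beer-Lambert transmission through the 0.2 mm tungsten septa of point 6 for a typical SPECT energy and for the 511 KeV PET energy. The attenuation coefficients are approximate textbook values assumed here for illustration; they are not specified by the description above.

```python
import math

# Beer-Lambert transmission through k septa crossings:
#   T = exp(-mu * t * k)
# with mu the linear attenuation coefficient of tungsten and
# t the septum thickness (0.2 mm, per point 6 above).
T_SEPTUM_CM = 0.02

# Approximate linear attenuation coefficients for tungsten
# (density ~19.3 g/cm^3); assumed illustrative values.
MU_PER_CM = {
    "SPECT 140 KeV (Tc-99m)": 36.0,
    "PET 511 KeV": 2.6,
}

for label, mu in MU_PER_CM.items():
    print(label)
    for k in (1, 3, 6):
        transmission = math.exp(-mu * T_SEPTUM_CM * k)
        print(f"  transmission through {k} septa: {transmission:.3f}")
```

With these assumed coefficients, a 140 KeV photon is largely absorbed after one or two septa, while roughly three quarters of 511 KeV photons survive even six crossings, consistent with the wide effective acceptance angle for PET described in point 5.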
In an exemplary embodiment of the invention, the algorithms suitable for PET image reconstruction advantageously take account of the septa configuration to attribute a detection probability to photons coming from a particular coincidence line, such that the probability is factored into the calculation: lower probability means that the source in that direction is actually "hotter"/"brighter" than it looks. Therefore, the value attributed to radiation received by a particular pixel is related to the counts received from that direction divided by the probability of detection from that location. This concept is valid for both SPECT and PET reconstruction because the probability of detection varies at different angles owing to the existence of the collimator and the septa configuration, the fact that different angles of detection encounter different effective thicknesses of septa, the fact that collimation may be variable, and/or the variable geometry of the detector heads as previously described. FIGS.18A-18Cillustrate an embodiment in which spatial resolution is enhanced by lengthening the Y-Z plane septa1802and/or the X-Z septa1804, thereby narrowing the angle of photon acceptance. InFIG.18A, collimator fragment1800is shown in an un-elongated configuration which provides a wide angle of acceptance for SPECT imaging, for example, ranging from about 1 to about 30 degrees, and advantageously, between about 5 and about 15 degrees.FIG.18Bshows elongation of the collimator in the Z-direction to reduce the angle of acceptance and thereby improve the spatial resolution.FIG.18Bshows extension of only the X-Z plane septa1804, whileFIG.18Cshows elongation of both the X-Z septa and the Y-Z septa. It should be understood that, alternatively, for the situation illustrated inFIG.18B, the Y-Z plane septa1802can be extended instead of X-Z plane septa1804. Extension and retraction of septa1802and1804may be achieved by any suitable and desired mechanism. This is shown schematically as a coupling rod1808for septa1804. A small motor, for example a stepper motor or a multiple-position relay (not shown), attached to coupling rod1808provides for step-wise movement. Alternatively, continuous adjustment can be provided, for example, by a suitable motor and position sensors. The septa can also be extended and retracted manually. Extending either septa1802or1804as described can be useful, for example, to increase resolution in a certain orientation—while possibly reducing sensitivity in that direction. Retracting the septa does the opposite. In an exemplary embodiment of the invention, tilting of septa (e.g., by moving a bore-side thereof while using the detector side as a pivot) is used when the detector is rotated out of plane during arrangements as described above. A louver-like mechanism may be used. In an exemplary embodiment of the invention, the reconstruction algorithm is adapted for the collimator configuration, for example, based on a look-up table with different parameters for different collimator configurations. Optionally, a sensor which reports the collimator configuration, or the command for collimator adjustment, is used to calculate such parameters and/or as input to a reconstruction algorithm which uses such measurement to modify the reconstruction (e.g., sensitivity correction and desired count rate for stability of reconstruction and/or image quality).
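The detection-probability weighting described in the first paragraph above reduces, in its simplest form, to dividing the counts received from a direction by the probability of detection from that direction. A minimal sketch, with made-up sensitivity values standing in for the calibration-derived look-up table mentioned above:

```python
import numpy as np

def sensitivity_corrected(raw_counts, detection_prob, eps=1e-6):
    """Divide raw counts by the per-direction detection probability,
    so directions that the collimator 'sees' poorly are not
    under-weighted; detection_prob would come from a look-up table
    indexed by the current collimator configuration."""
    p = np.clip(detection_prob, eps, 1.0)  # guard against division by ~0
    return raw_counts / p

# Same true activity in two directions, but the second direction is
# detected with only 40% probability (assumed illustrative values).
raw = np.array([1000.0, 400.0])
prob = np.array([1.0, 0.4])
print(sensitivity_corrected(raw, prob))  # -> [1000. 1000.]
```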
In an exemplary embodiment of the invention, planning of data acquisition takes into account possible collimator variations, for example, by calculating desired configurations and/or by comparing options with different collimator arrangements. In an exemplary embodiment of the invention, when acquisition depends on previous acquisition, collimator and/or detector configurations are changed in a manner which preferentially provides data (photon detections) from a desired location and/or time and/or preferentially blocks data from a certain location and/or time. FIGS.19A and19Billustrate an embodiment in which the pitch of the septa is varied.FIG.19Ashows a collimator fragment1900comprised of X-Z plane septa1902and Y-Z plane septa1904defining collimator cells1906.FIG.19Ashows a configuration in which the X and Y dimensions of the collimator elements are equal.FIG.19Bshows X-Z plane septa1902shifted in the Y-direction by one-half the septa pitch. Alternatively, Y-Z plane septa1904may be shifted (in the X-direction). In either case, the cross-sectional area of the opening in collimator cells1906is reduced by, for example, 25-75 percent (e.g., 50 percent), thereby decreasing the acceptance angle and increasing the spatial resolution. In some embodiments, both (some or all thereof) the X-Z and Y-Z plane septa1902and1904can be shifted, thereby reducing the area of the collimator cell openings, for example, by 75 percent, and further decreasing the acceptance angle and correspondingly increasing the spatial resolution. Moving septa1902and/or1904can be effected using an arrangement similar to that described in connection withFIGS.18A-18C(but adapted to provide Y-direction movement) or in any other suitable and desired manner. FIGS.20A and20Billustrate another embodiment in which the pitch of the septa is varied. InFIG.20A, a collimator fragment2000is formed by X-Z plane septa2002and Y-Z plane septa2004defining collimator cells2006. As inFIG.19A, the septa pitch is the same for septa2002and2004. FIG.20Bshows the upper ends of X-Z plane septa2002tilted in the Y-direction by one-half the septa pitch. Alternatively, Y-Z plane septa2004may be tilted (in the X-direction). In either case, the area of the opening in collimator cells2006is reduced by, for example, 25-75 percent (e.g., 50 percent), thereby decreasing the acceptance angle and increasing the spatial resolution. The tilting may be achieved in generally the same manner as in the embodiments ofFIGS.18A-18C and19A-19B, except that the bottom ends2008of septa2002are pivotally mounted. In some embodiments, tilting provides a parallel collimation. In other embodiments, it provides a fan-in collimation or a fan-out collimation. Different adjustments and adjustment types may be provided for different detectors and/or for different parts of a same detector, for example, during a same session or even simultaneously, for example, responsive to ROI location, ROI type, time from injection of tracer and/or arm and/or bore geometry. FIGS.21A-21Dillustrate resolution adjustment using an arrangement of layered or vertically tandem collimator sub-units or parts, in accordance with some embodiments of the invention.FIGS.21A and21Bare respectively an end elevation and a side perspective view of a two-part collimator fragment2100comprised of a first sub-unit2102and a second sub-unit2104located above sub-unit2102. In this context, the word "above" is to be understood as meaning closer to the detector element.
Therefore, sub-unit2102is positioned closer to the source of radiation than sub-unit2104. Sub-unit2102is formed of a first set of septa2106extending in an X-Z plane, and a second set of septa2108extending in a Y-Z plane. Sub-unit2104, in contrast, is formed only of one set of septa2110extending in the X-Z plane. Alternatively, sub-unit2104can be formed of two sets of septa like sub-unit2102. Also, while sub-unit2104is above sub-unit2102, alternatively, the sub-units can be reversed so that sub-unit2102is above sub-unit2104. FIG.21Bshows two ways the collimator cells of collimator2100may be decreased in size to provide a smaller acceptance angle, and therefore higher resolution for SPECT imaging: septa2108can be moved in the X-direction and septa2110can be moved in the Y-direction. If only septa2108are moved, the area of the cell openings is decreased by up to 50 percent. If both sets of septa2108and2110are moved, the cell opening area can be decreased by up to an additional 25 percent. FIG.21Cillustrates an alternative collimator configuration2112comprised of an upper sub-unit2114formed by X-Z plane septa2118and a lower sub-unit2116formed by X-Z plane septa2120and Y-Z plane septa2122. In this embodiment, sub-unit2114includes a second set of X-Z plane septa2124positioned between septa2118, and sub-unit2116includes a second set of Y-Z plane septa, one of which is shown at2126positioned between septa2122. In this configuration, only the intermediate septa2124and2126are moveable. FIG.21Dshows a further alternative embodiment2128in which the septa are configured as in the embodiment ofFIG.21Bexcept that the septa2122and2130are tiltable rather than slidable in the Y and X directions, respectively. An un-illustrated variation of collimator2112ofFIG.21Chas three sub-units or parts in vertically tandem relationship. The three parts may be of equal length in the Z-direction, or any other desired proportion, for example, 1:2:1 (i.e., so that the middle part is twice as long as the top and bottom parts and thereby provides one-half the length of the collimator). Other proportions are also possible. Optionally, collimator resolution in this embodiment is increased by moving the Y-Z plane septa forming the middle part in the X and/or Y directions as in the embodiment ofFIG.21C. Optionally or additionally, septa forming the top and bottom parts are moved. Another un-illustrated three-part collimator is similar to collimator2128ofFIG.21D, in which resolution is increased by tilting the septa of the central part. The three-layer configurations may be desirable in that they may provide a more symmetrical high-resolution pattern in the X and Y directions. FIGS.22A-22Dillustrate embodiments in which the opening area of the collimator cells is changed by shutter-like mechanisms. InFIGS.22A and22B, collimator fragment2200is formed by X-Z plane septa2202and Y-Z plane septa2204that define collimator cells2206. At the upper ends of cells2206are pairs of cooperating triangular shutter leaves2208and2210. Shutter leaves2208may be fixed in place while leaves2210are slidable in the X direction, for example, by an actuator rod2212. In the open position illustrated inFIG.22A, leaves2210substantially overlie leaves2208. In the closed position illustrated inFIG.22B, leaves2210have been shifted to partially close the tops of cells2206. FIG.22Cillustrates an embodiment in which the tops of the collimator cells2206are closed by rotating flaps or leaves2214on a mechanism shown schematically as an actuator rod2216.
FIG.22Dillustrates an embodiment in which the tops of the collimator cells2206are closed by an iris-like shutter2218. This embodiment may be advantageous in that it allows achievement of very small area openings for the collimator cells. Actuation of shutter2218may be by a conventional rotating mechanism, as in a photographic camera, or in any other suitable and desired way. The dimensions of the collimators illustrated inFIGS.18A-22Dmay be varied over ranges, for example, as indicated in the non-limiting examples given below.1. The pitch of the collimator septa may be in the range of about 1 mm to about 3 mm. Larger or smaller pitch values are also possible.2. Typical septa thickness may be in the range of about 0.2 mm to about 0.3 mm. Again, larger or smaller thicknesses are also possible.3. The height of the septa (in the Z-direction) may be in the range of about 13 mm to about 25 mm, or larger or smaller values.4. The length of the septa (in the X and Y directions) may range from about 8 mm to about 20 mm, or larger or smaller values. Exemplary Relationships Between Collimator and Pixelated Detector Configurations: The following discussion describes some embodiments as non-limiting examples of collimator configurations in relation to pixelated detectors, using the above collimator designs or using other collimators, such as machined slabs. In a first embodiment, the collimator septa are aligned with the pixel pitch of the detector, i.e., the septa are positioned at the borders of each pixel. In such a configuration, the collimator cells are aligned with the detector pixels with one pixel per collimator cell. In another embodiment, the septal pitch in one direction (e.g., the X direction as defined in connection withFIGS.17and18A-18C) or the other direction (or both) is greater than the pixel pitch, for example, in an integer ratio of 1:2, or 1:3, or 1:4, or 1:5, or greater. In such an arrangement, if the septal pitch matches the pixel pitch in one direction, and is two times the pixel pitch in the other direction, there will be two pixels within each collimator cell. If the septal pitch is twice the pixel pitch in both directions, there will be four pixels in each collimator cell. This configuration may allow the generation of multiple different views within the same collimator cell. In other embodiments, the septa are not aligned with some or all pixel boundaries. In a further embodiment, all the septa in one or both of the X-Z and Y-Z planes (also as previously defined) may be oriented so that they are not parallel to each other. This configuration forms a collimation structure in which multiple pixels have different views, passing through the same cell. In one example, a multiple pinhole structure is provided. In one example, multiple apertures are provided. In another example, the multiple apertures are arranged to form a coded aperture collimation structure, for example, of a type known in the art. It should be noted that coded aperture techniques are known to those skilled in the art for use in gamma ray imaging, and will not be described here in the interest of brevity. In another embodiment, the septa pitch is smaller than the pixel pitch, for example, according to an integer ratio of 1:2, or 1:3, or 1:4, or 1:5, or greater. This may be achieved by positioning one or more additional septa at the middle of a pixel to provide multiple collimator cells within each pixel.
With this configuration it is possible to obtain a particular viewing angle (collection angle) with shorter collimator septa. For example, if for a certain desired collection angle one would use a pixel pitch of 2.5 mm and a single collimator cell per pixel with a septa length of about 20 mm, similar performance can be obtained by providing two collimator cells per pixel with a septa length of 10 mm. Shorter septa may be advantageous since they may permit having a smaller detector head with better maneuverability. In another embodiment, the collimator cell pitch is different in the X and Y directions, and also different than the pixel pitch. For example, the collimator septa may be pitched at 2 mm in one direction and at 3 mm in the orthogonal direction. In general, the septa spacing may be larger or smaller than the pixel spacing and also different in the X and Y directions/pitch. For example, N collimator septa may be evenly spread over K pixels in the X direction, and M collimator septa may be evenly spread over L pixels in the Y direction. In a further exemplary embodiment of the invention, the length of the collimator septa is different in the X and Y directions. For example, in the case of a pixel pitch of about 2.5 mm, the X-Z plane septa can be about 18 mm long and the Y-Z plane septa can be about 24 mm long. With this exemplary configuration it is possible for a pixelated detector having square pixels to provide different view angles and collection angles in each direction and to allow sensitivity and resolution of reconstruction to be optimized where the camera scanning is very asymmetric in its nature. For example: a camera scan and reconstruction may form many X-Z "slices" along the Y axis, which is parallel to the patient body main axis, and the Y resolution of the detector is very influential on the reconstruction resolution in the Y axis. This is different from the X-Z resolution, as the depth dimension is obtained by different views and rotations within the X-Z plane, where the X resolution of the detector is just one factor and the distance, rotations, translations and reconstruction algorithm determine the resulting resolution. In these cases it may well be that an improved result can be obtained with a collection angle which is different in the X axis than in the Y axis. This is optionally achieved with this unique approach of different septa lengths in the X-Z and Y-Z planes. Example FIGS.23A-23Cillustrate qualitatively the result of a simulation study performed on an exemplary collimated detector as described herein formed of 0.2 mm tungsten septa. InFIG.23A, the horizontal axis represents ±θ, the angle between an approach path2302from an emission event2303and the centerline2304of a collimator cell2306(seeFIG.23B). Two curves are shown: curve2308for a typical SPECT isotope, and curve2310for a typical PET isotope. As may be seen, SPECT performance is very directional, but for small values of θ, PET performance is comparable. However, for PET isotopes, off-axis detection is reduced, but not by so much as to prevent use of collimated detectors according to some embodiments hereof for selectable or simultaneous SPECT and PET imaging. FIG.23Cprovides an understanding of the saw-tooth shape of the off-axis detection probability for PET imaging. Here, it may be seen that the path2310for an emission event at some off-axis position2308must pass through two septa2312and2314to impinge on the center of a detector pixel2316, and suffers additional attenuation.
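The saw-tooth shape ofFIG.23Ccan be reproduced with elementary geometry: the number of septa an oblique ray crosses increases by one each time tan θ passes a multiple of pitch/height, and each crossing multiplies the detection probability by a Beer-Lambert factor. A minimal sketch using the septa dimensions from point 6 above and an assumed attenuation coefficient for tungsten at 511 KeV:

```python
import math

PITCH_CM = 0.123     # 1.23 mm septa pitch (point 6 above)
HEIGHT_CM = 1.45     # 14.5 mm septa height in the Z-direction
T_SEPTUM_CM = 0.02   # 0.2 mm tungsten septa
MU_511_PER_CM = 2.6  # assumed attenuation of tungsten at 511 KeV

def septa_crossed(theta_deg):
    # Lateral offset accumulated over the full septa height,
    # expressed in whole septa pitches.
    offset = HEIGHT_CM * math.tan(math.radians(theta_deg))
    return int(offset // PITCH_CM)

def detection_prob(theta_deg):
    k = septa_crossed(theta_deg)
    # The path through each septum lengthens as 1/cos(theta).
    path = T_SEPTUM_CM / math.cos(math.radians(theta_deg))
    return math.exp(-MU_511_PER_CM * k * path)

for theta in range(0, 31, 5):
    print(f"theta={theta:2d} deg: crosses {septa_crossed(theta)} septa, "
          f"P(detect)~{detection_prob(theta):.2f}")
```

The integer jumps in the septa count produce the step-like (saw-tooth) drops in the PET curve, while between steps the probability varies only slowly.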
Moreover, having the reconstruction algorithm compensate for the off-axis attenuation (e.g., with angle-dependent sensitivity weighting, e.g., calculated and/or measured during calibration) can potentially improve PET performance significantly. As used herein the term "about" refers to ±10%. The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". The term "consisting of" means "including and limited to". The term "consisting essentially of" means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure. As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof. Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range. Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween. It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements. Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
It is the intent of the applicant(s) that all publications, patents and patent applications referred to in this specification are to be incorporated in their entirety by reference into the specification, as if each individual publication, patent or patent application was specifically and individually noted when referenced that it is to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting. In addition, any priority document(s) of this application is/are hereby incorporated herein by reference in its/their entirety.
DETAILED DESCRIPTION FIG.3illustrates an abdominal aorta15that has an abdominal aortic aneurysm (AAA)16. An AAA16is a vascular dilatation on the abdominal aorta15. The aorta15branches into femoral arteries17(e.g., arteria iliaca communis). The aortic aneurysm16is treated by inserting a stent graft (e.g., a composite vascular stent), as illustrated inFIG.4. Guide wires18and catheters19, by which stent grafts20are inserted, are inserted into the aorta15through the femoral arteries17by way of both groins. In the case of complex stent grafts20that also encompass the femoral arteries17, a final stent may be composed of "part-stents." For example, an iliacal stent22, as a part-stent for the other femoral artery17, is to be "flange-mounted" onto an aortic stent21as a main stent, which projects through the AAA into one of the femoral arteries17, through a so-called window. Based onFIGS.5and6, the principles of a 2D/3D and a 2D/2D overlay are explained in more detail. In order to provide the physician with additional information as assistance when inserting AAA stents, a previously recorded reference image is overlaid anatomically correctly over a current fluoroscopy image generated by a C-arm system2to4. The reference image may be a 3D data set or volume data set of the aorta15with the abdominal aortic aneurysm16according toFIG.4(e.g., a presegmented preoperative computed tomography or rotational angiography using a C-arm angiography system). FIG.5shows the overlay of a current fluoroscopy image with the pre-interventionally generated volume data set23, which, for example, may be present as a 3D grating model, as depicted by way of example in the cube. The 3D grating model is mapped by 3D projection24into the fluoroscopy image as 2D segmentation25, as symbolized by the dotted lines26. A 2D/3D overlay image27is produced as a reference image. In contrast, inFIG.6, there is no volume data set23with a 3D grating model, but only a 2D projection image28(e.g., an angiography). The abdominal aortic aneurysm16in a section29of the 2D projection image28is segmented. Using 2D projection30, this 2D segmentation31is projected into the current fluoroscopy image (even if only from precisely this view), and a 2D/2D overlay image32is obtained as a reference image. Although a one-off administration of contrast agent is to be provided for this 2D overlay using, for example, a digital subtraction angiography (DSA), the advantage compared to the "normal" roadmap is that certain changes in the C-arm system2to4, such as zoom, source image distance (SID), and/or small movements of the patient positioning table with the tabletop5, may be tracked. In the case ofFIGS.5and6, only the outline of the 2D projection is ever illustrated, not the complete model. The method acts for getting from the 2D projection image28to the 2D/2D overlay image32are as follows. The aorta15with the abdominal aortic aneurysm16is segmented in the 2D projection image28, and the outlines of the segmented aorta are superimposed as 2D segmentation31into the native fluoroscopy image of the 2D projection image28, of the angiography. Vascular Deformation During the Intervention If now, starting from the circumstances according toFIG.3, a rigid or inflexible medical instrument19′ that cannot be deformed by the vascular wall is inserted, for example, via a femoral artery17, the vascular wall may in some cases deform, to a greater or lesser extent, into the shape shown as femoral artery17′.
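The 2D/3D overlay ofFIG.5reduces, computationally, to mapping points of the registered 3D model through a 3×4 projection matrix into the fluoroscopy image plane. The following minimal sketch assumes an idealized pinhole-style projection with a source-image distance and pixel size that are illustrative stand-ins, not calibrated system values:

```python
import numpy as np

def projection_matrix(sid_mm=1200.0, px_mm=0.3):
    """Idealized 3x4 perspective projection: x-ray source at the
    origin, detector plane at z = sid_mm, square pixels of px_mm.
    Both values are assumed, illustrative numbers."""
    f = sid_mm / px_mm  # focal length in pixel units
    return np.array([[f, 0.0, 0.0, 0.0],
                     [0.0, f, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])

def project(P, pts3d):
    """Project Nx3 model points (e.g., a segmented vessel outline)
    into 2D detector coordinates for overlay on fluoroscopy."""
    homo = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    uvw = homo @ P.T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Three points of a 3D vessel centerline, ~900 mm from the source.
centerline = np.array([[0.0, 0.0, 900.0],
                       [5.0, 20.0, 910.0],
                       [8.0, 45.0, 925.0]])
print(project(projection_matrix(), centerline))
```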
If, starting from a reference image33, as schematically illustrated inFIG.8, with 2D segmentation31in front of a spinal column34, in which the circumstances according toFIG.3are sketched, this vascular deformation caused by the inflexible medical instrument19′ is not corrected in the corresponding presegmented fluoroscopy image35, an overlay error36is produced, as is schematically illustrated inFIG.9. The overlay error36is based on an inaccuracy or an "incongruence" during the overlay that may lead to uncertainties during the following intervention, in which the overlay serves as a navigation aid. This overlay error36manifests in a virtual deviation of the mapping of the femoral artery17from the mapping of the rigid medical instrument19′ in the deformed femoral artery17′. Presegmentation of the Reference Volume A presegmentation37of the reference volume with the aneurysm by automatic or user-assisted 2D or 3D image processing may be provided. In this case, for example, the surface38or the outline of the vessels and a course of the vessels may be determined in the form of the center line39of the vessels. This may happen both with the 2D reference image and with the 3D reference image. Medical instruments such as, for example, catheters or guide wires may be identified and tracked in 2D images. In this case, a partially flexible 2D/3D or 3D/3D registration may be provided (e.g., of 2D and 3D angiographies). FIG.11illustrates an initial situation for generating a projection matrix for a fluoroscopy image, in which a fluoroscopy image40of the patient6, who is lying on the tabletop5of the patient positioning couch, with an inserted medical instrument19′ is generated from a particular angulation of the C-arm2. Generation of a Virtual Projection Matrix PM2 The C-arm projection according toFIG.11, which maps the object (e.g., the patient6) as a fluoroscopy image40, may be described using a projection matrix PM1. By rotating by a particular predetermined angle α, a projection matrix PM2(e.g., a virtual projection matrix), based on which the object may be projected (e.g., virtually projected) from a different angulation, may be calculated. In the fluoroscopy image40, the projected medical instrument19′ may be segmented and represented as a polygon line41, as shown inFIG.12. Virtual Projection42 Based onFIG.13, it is shown how, with the help of the virtual projection matrix PM2, the center line of the vessels in which the instrument19′ to be reconstructed is located is projected as a two-dimensional point set of a virtual center line43. This produces a virtual projection42. The center lines43are interpolated (e.g., quadratically) and produce a second, virtual projection of the instrument as a virtual polygon line44. Reconstruction of the Medical Instrument The medical instrument may be reconstructed in 3D using the two polygon lines (e.g., the polygon line41that corresponds to the actual projection of the instrument and the virtual polygon line44that corresponds to the estimated, virtual projection of the instrument) with the projection matrices PM1and PM2, as is demonstrated inFIG.14. Principle of the Proposed Correction in the Image Plane This proposed correction is explained in more detail in the image plane based onFIGS.15and16. A reference image45that shows the status before the insertion of the medical instrument19(′) is overlaid with the actual position and location of the inserted medical instrument19′, as identified by automatic image processing or a position sensor.
This indicates the current course of the vessel (e.g., the deformed femoral artery17′). The reference image45(e.g., the center line39of the segmentation25) is then distorted according to the displacements46, so that the current and the assumed course of the deformed femoral artery17′ are again congruent, as is represented in the distorted reference image47. FIGS.15and16illustrate, for example, situations that show the principle for medical instruments19′ inserted by different distances. The overlay, distortion and displacement46′ in the reference image48are adjusted differently depending on the position and penetration depth of the medical instrument19′, as the distorted reference image49, in which the whole visible femoral artery17′ is deformed, shows. As a result, however, the courses of the femoral artery17′ and of the medical instrument19′ are again congruent. Based onFIG.17, the method sequence of one or more of the present embodiments is explained in more detail. According to act S1, a volume data set23of the target region or of the examination object15and17is captured and stored. The capture may be achieved, for example, by a computed tomography system or C-arm computed tomography system (e.g., in accordance with the DynaCT method). In act S2, the volume data set23is registered to the C-arm2. A determination of the center line39according to act S3is achieved by extracting information about an assumed course of the examination object15,17in the volume data set23inside the target region. From this volume data set23, according to act S4, at least one 2D projection image28of a medical instrument18,19,19′ inserted in the target region is generated from a suitable C-arm angulation. According to act S5, a 2D/3D merger of the at least one 2D projection image28and of the registered volume data set23is provided in order to generate a 2D overlay image27,32. In act S6, the course of the medical instrument18,19,19′ is identified, for example, by detecting the medical instrument18,19,19′ inserted in the target region in a first 2D projection (e.g., the fluoroscopy image40) using a first projection matrix PM1(e.g., in the 2D overlay image). According to act S7, a virtual 2D projection42is generated by a virtual second projection matrix PM2by rotating the first 2D projection by a particular predetermined angle α, and an approximation of the instrument18,19,19′ is performed in the virtual 2D projection42. In act S8, a 3D reconstruction of the instrument18,19,19′, in which the 3D position of the medical instrument is determined from the 2D identification under the projection PM1(e.g., the polygon line41) and from the approximation from the virtual projection PM2(e.g., the virtual polygon line44), is provided by triangulation from the two projections40and42. In act S9, the reconstructed instrument18,19,19′ is overlaid with the reference images45and48. At least a part of the reference images45and48is subjected to distortions46and46′ such that the current and assumed course of the vessels are made to be congruent. In this case, at least a part of the vessel17′ that corresponds to the course of the vessel is made to coincide with the respective part of the inserted instrument19′, for which the position information is available. The overlays and distortions46,46′ of the reference images45and48are adjusted depending on the position and penetration depth of the instrument19′, so that the distorted X-ray images47and49are obtained. FIG.18illustrates act S7in more detail.
In subact S7a, the virtual second projection matrix PM2is first generated. According to subact S7b, a virtual projection42of the instrument19′ is generated by the virtual projection matrix PM2. In subact S7c, an approximation of the inserted instrument19′ is performed in the virtual projection42. This approximation may serve as the basis for the 3D reconstruction of the instrument19′. FIG.19shows a flow chart of one embodiment of the complete method sequence according toFIGS.17and18, based on which the individual method acts and the sequence thereof may be better followed. A capture50of a 3D volume data set23of the target region or of the examination object15and17is achieved, for example, by a CT angiography performed before an intervention or a C-arm computed tomography recorded during the intervention (e.g., in accordance with the DynaCT method). In act51, the 3D volume data set23is registered to the C-arm2. From this registered 3D volume data set23, information about an assumed course of the examination object15,17in the volume data set23inside the target region is determined by extraction52of the center line39of the vessels. From the 3D volume data set23, 2D projection images28of a medical instrument18,19,19′ inserted in the target region are generated53under a suitable C-arm angulation. A 2D/3D merger54of the 2D projection images28and of the registered volume data set23is provided in order to generate 2D overlay images27,32. From this data, a first projection matrix PM1is derived55. The course of the medical instrument is extracted by detecting56the medical instrument18,19,19′ inserted in the target region in a first 2D projection with a first projection matrix PM1. By rotation about a particular predetermined angle α, a second projection matrix PM2(e.g., a virtual projection matrix) is generated57. Using the virtual projection matrix PM2, a virtual projection42of the instrument19′ is generated58. In this virtual projection42, the inserted medical instrument18,19,19′ is detected by approximation59. Following the detection of the instrument18,19,19′, a 3D reconstruction60of the instrument18,19,19′ takes place, whereby at least a part of the reference images45and48is subjected to distortions46and46′ such that the current and the assumed course of the vessels are made to be congruent. In this case, a part of the vessel17′ corresponding to the course of the vessel is made to coincide with the relevant part of the inserted instrument19′, for which the position information is available. The overlays and distortions46of the reference images45and48are adjusted depending on the position and penetration depth of the instrument19′, so that the distorted X-ray images47and49are obtained. This 3D reconstruction60may, for example, be displayed61on a display of the monitor bracket9. The principle of correction based on the repair of an aortic aneurysm is described in summary below in an exemplary embodiment. Basic preconditions for the method of one or more of the present embodiments include a 3D volume (e.g., a previously performed CT angiography or a C-arm CT such as Siemens DynaCT®) that is registered to the C-arm (or to the relevant fluoroscopy images) being recorded during the intervention. Information (e.g., about a semi-automatic or automatic 3D segmentation, depending on the data set used) about the course of the vessels (e.g., the center lines of the vessels and/or the course of vascular lumina and/or other corresponding information (seeFIG.10)) is provided.
A facility to identify and track inserted medical instruments (e.g., the instrument for inserting stents) is provided. This may happen, for example, via corresponding identification or tracking of the medical instruments in the 2D fluoroscopy images. One or more of the present embodiments relate to the 3D reconstruction of a medical instrument inserted into the vessel under the aforementioned preconditions from just one X-ray projection. The method includes X-ray projection of the instrument or device. During the intervention, fluoroscopy of the patient6, for example, with an inserted medical instrument19′ takes place from a suitable C-arm angulation, in which a deviation between the overlay of the reference image and the actually projected instrument is established, as has already been explained based onFIG.9. The medical instrument is identified in a first 2D X-ray projection with a first projection matrix. The result of this detection of the medical instrument19′ in the first 2D X-ray projection with a first projection matrix PM1is a two-dimensional polygon line41that corresponds to the position of the medical instrument19′ in this projection (e.g., the fluoroscopy image40). A "virtual" second 2D X-ray projection is generated with the help of a virtual projection matrix PM2. Regarding the generation of the second virtual projection matrix PM2, a projection matrix is basically composed of intrinsic parameters (e.g., pixel size, etc.) and extrinsic parameters (e.g., translation and rotation of the projected 3D object). For the virtual second projection matrix PM2, any intrinsic parameters may be assumed or, for simplicity's sake, the parameters that also apply for the first projection matrix PM1. The extrinsic parameters are selected such that the rotation differs from that of the first projection matrix PM1by a sufficiently large angle (e.g., by a rotation of 90° about the patient axis). The registered object is projected virtually by the second projection matrix PM2from a different side. A virtual second projection42is generated. From the registered reference volume, the center lines of the vessels that contain the instrument19′ to be reconstructed are forward-projected using the virtual second projection matrix PM2. In the case of an aortic aneurysm16, this would, for example, be the center line39of the aorta15and of the femoral artery (e.g., the deformed femoral artery17′) to be corrected. The result is a two-dimensional point set of the virtual center line43, which corresponds to the projections of the center lines. The medical instrument is approximated in the virtual second projection. From the projected points of the virtual center line43, the position of the medical instrument19′ is estimated from the virtual projection42of the virtual second projection matrix PM2. Assuming that the inserted instrument19′ is located basically in the corresponding vessels17, a smoothing interpolation of the center line projection is assumed as the position of the instrument. Depending on the assumed inflexibility of the medical instrument19′, this may be a linear, quadratic or more flexible spline interpolation. The result is a two-dimensional polygon line that corresponds to the estimated position of the medical instrument19′. The medical instrument is 3D reconstructed.
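As a complement to the description in the following paragraph, a minimal sketch of such a two-view (DLT-style) triangulation is given below: each corresponding point pair on the polygon line41and the virtual polygon line44, together with PM1and PM2, yields a small linear system whose least-squares solution is the 3D point. The projection matrices here are illustrative stand-ins, not calibrated C-arm matrices:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen at pixel uv1
    under projection P1 and at pixel uv2 under projection P2."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null-space vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two illustrative projection matrices, the second rotated 90 degrees
# about the patient (y) axis relative to the first.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
t = np.array([[0.0], [0.0], [500.0]])
P2 = np.hstack([R, t])

# Synthesize the two observations of a known 3D point, then recover it.
X_true = np.array([30.0, -10.0, 400.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))  # ~[30., -10., 400.]
```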
The 3D position of the medical instrument is determined from the 2D identification under the projection PM1(e.g., the polygon line41) and from the approximation from the virtual projection PM2(e.g., the virtual polygon line44) by triangulation from the two projections40and42. The assumptions for the correction are then the same as described, for example, in DE 10 2010 012 621 A1, and are listed here once again for the sake of completeness. The position identified in the fluoroscopy image40indicates the current course of the vessel17, since the medical instrument19′ is located inside the vessel17. The reference image45or the center line of the segmentation of the reference image45is correspondingly distorted by displacements46and46′, such that the current and the assumed course of the vessel are again congruent. The corresponding part of the vessel17′ is made to coincide with the respective part of the identified medical instrument19′ (FIG.15). The overlay of the reference image45,48is adjusted differently depending on the position and penetration depth of the instrument19′ (seeFIGS.15and16). The remainder of the course of the vessel (i.e., the part into which no medical instrument has yet been inserted) is extrapolated, for example, on the assumption of "smoothness conditions," since vessels do not generally bend sharply or the like. In this case, regions far removed from the identified instrument19′ (e.g., the renal arteries) are, for example, not deformed at all if an instrument19′ is inserted into the femoral arteries17,17′, more specifically such that a smooth course of the vessels without discontinuities or breaks is maintained. Other embodiments may be provided optionally or alternatively. Alternatively to the above-described precondition 2, the information about the course of the vessels (e.g., 2D or 3D) may also be defined manually by the user (e.g., drawn in) and may also be given by a mathematical description (e.g., a high-degree polynomial or another suitable function). The adjustment of the overlay may take place, for example, by adjusting the functional parameters depending on the position of the identified instrument. Alternatively to the above-described precondition 3, the position of the inserted instrument may also be defined manually by the user (e.g., drawn in). Not just one but several instruments may be identified or tracked. Thus, for example, other stationary instruments (e.g., guide wires inserted into the renal arteries) may be identified and tracked in order to ensure that the overlays are congruent at several points. Generally, the method may be extended to all procedures that profit from the overlay, where appropriate, of presegmented reference images (e.g., including when replacing aortic valves, interventions in coronary vessels, etc.). Due to the correction of overlaid reference images (e.g., for the stents in an aorta), displacements that arise from the insertion of instruments are corrected essentially automatically. For example, using the 3D correction of one or more of the present embodiments, the 3D position of the inserted instrument may be determined from just one X-ray projection. It is to be understood that the elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present invention.
Thus, whereas the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims can, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent, and that such new combinations are to be understood as forming a part of the present specification. While the present invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description.
DETAILED DESCRIPTION As detailed above, typical C-arm x-ray systems have an energy-integrating x-ray detector in a flat-panel detector (FPD) geometry. While these energy-integrating x-ray detectors can be particularly well-suited for some imaging tasks, such as during fluoroscopy, for digital subtraction angiography (DSA) sequences, and for three-dimensional (3D) cone beam computed tomography (CBCT) acquisitions, these energy-integrating FPDs can be ill-suited for other imaging tasks. For example, energy-integrating x-ray detectors are particularly ill-suited for procedures that require superior low-contrast detectability, high spatial resolution (e.g., at least due to the relatively larger pixel element sizes), or quantitative material (e.g., iodine) information. Another type of x-ray detector is a photon-counting detector, which is typically implemented with a conventional CT scanner having a bore that houses an x-ray source and detector assembly (e.g., that both rotate around a single axis of rotation). Photon-counting detectors are different from energy-integrating x-ray detectors in that they can spatially discriminate individual x-ray photons (emitted from the x-ray source), generating signals that are proportional to the energy of the x-ray photon. In other words, individual sensors (e.g., pixel elements) of the photon-counting detector can determine individual x-ray photons and their corresponding energies. Conversely, for energy-integrating x-ray detectors, a given x-ray photon that is directed at a given individual sensor (e.g., a pixel element) of the energy-integrating x-ray detector is sensed as a peak in time, with the given x-ray photon possibly also being sensed (partially) by adjacent sensors (e.g., from the given x-ray photon being absorbed by the scintillator and re-emitted in all directions as light sensed by the sensors). Thus, due to the ability of individual x-ray photon discrimination for the photon-counting detectors, the size of individual sensors can be reduced, which can greatly improve image resolution to effectively discern small structures of a subject when utilizing x-ray photon-counting detectors. Although some conventional CT scanners have adopted photon-counting detectors, photon-counting detectors have not been widely adopted in interventional radiology suites. For example, while photon-counting detectors have better spatial resolution than energy-integrating x-ray detectors, the energy-integrating x-ray detectors are generally better for a greater number of different imaging tasks than the photon-counting detectors (e.g., at least due to the greater sensitivity of the energy-integrating x-ray detectors). So, many interventional radiology suites, being able to have only a single x-ray system (e.g., due to cost constraints), prefer to have the energy-integrating x-ray detector system. As another example, some imaging tasks require a cone beam CT (e.g., for a 3D image acquisition). This would then require replacing the energy-integrating x-ray detector with a photon-counting detector of a similar spatial footprint, which would be far more costly. Thus, at least due to costs, and the decrease in quality (or inability) to complete particular imaging tasks, interventional x-ray systems have not adopted the photon-counting x-ray detectors. Recognizing these drawbacks, and in an effort to bring spectral imaging to C-arm systems, U.S. application Ser. No.
16/890,960 provides systems and methods to provide energy-resolving photon counting detectors (PCDs) in the C-arm gantry environment in a cost-effective and flexible manner. The PCD provides adequate coverage along both the axial (x-y) and z-directions. While facilitating retrofitting to existing systems and being comparatively cost-effective relative to having two different systems, the additional PCD detector does add cost, as well as some technical issues, such as scatter-induced quantification inaccuracies. As will be described herein, the present disclosure provides systems and methods for a multi-detector system. In one non-limiting example, a PCD design having a limited footprint that may be tailored specifically for particular clinical applications, such as minimally invasive image-guided interventions (IGI), can be used. The PCD design may be formed by two or more PCD modules to make a multi-detector system. Additionally or alternatively, the PCD may be integrated with an FPD having a different footprint. Systems and methods are provided for integrating and producing any of a variety of images and other clinically-relevant reports from the multi-detector system. In the non-limiting example ofFIG.1, a CT x-ray imaging system100is shown. The illustrated non-limiting example is a "C-arm" that includes a gantry102having a C-arm to which an x-ray source assembly104is coupled on one end and an x-ray detector array assembly106is coupled at its other end. However, the systems and methods provided herein may likewise be used with traditional diagnostic CT systems that have closed gantries or bores. Regardless of the gantry geometry, the gantry102enables the x-ray source assembly104and detector array assembly106to be oriented in different positions and angles around a subject108, such as a medical patient or an object undergoing examination that is positioned on a table110. When the subject108is a medical patient, this configuration enables a physician access to the subject108. The x-ray source assembly104includes at least one x-ray source that projects an x-ray beam, which may be a fan-beam or cone-beam of x-rays, towards the x-ray detector array assembly106on the opposite side of the gantry102. The x-ray detector array assembly106includes at least one x-ray detector, which may include a number of x-ray detector elements. Examples of x-ray detectors that may be included in the x-ray detector array assembly106include flat panel detectors, such as so-called "small flat panel" detectors, in which the detector array panel may be around centimeters in size. Such a detector panel allows the coverage of a field-of-view of approximately twelve centimeters. Together, the x-ray detector elements in the one or more x-ray detectors housed in the x-ray detector array assembly106sense the projected x-rays that pass through a subject108. Each x-ray detector element produces an electrical signal that may represent the intensity of an impinging x-ray beam and, thus, the attenuation of the x-ray beam as it passes through the subject108. In some configurations, each x-ray detector element is capable of counting the number of x-ray photons that impinge upon the detector. During a scan to acquire x-ray projection data, the gantry102and the components mounted thereon rotate about an isocenter of the C-arm x-ray imaging system100. The gantry102includes a support base112. A support arm114is rotatably fastened to the support base112for rotation about a horizontal pivot axis116.
The pivot axis116is aligned with the centerline of the table110, and the support arm114extends radially outward from the pivot axis116to support a C-arm drive assembly118on its outer end. The C-arm gantry102is slidably fastened to the drive assembly118and is coupled to a drive motor (not shown) that slides the C-arm gantry102to revolve it about a C-axis, as indicated by arrows120. The pivot axis116and C-axis are orthogonal and intersect each other at the isocenter of the C-arm x-ray imaging system100, which is indicated by the black circle and is located above the table110. The x-ray source assembly104and x-ray detector array assembly106extend radially inward to the pivot axis116such that the center ray of this x-ray beam passes through the system isocenter. The center ray of the x-ray beam can thus be rotated about the system isocenter around either the pivot axis116, the C-axis, or both during the acquisition of x-ray attenuation data from a subject108placed on the table110. During a scan, the x-ray source and detector array are rotated about the system isocenter to acquire x-ray attenuation projection data from different angles. By way of example, the detector array is able to acquire thirty projections, or views, per second. The C-arm x-ray imaging system100also includes an operator workstation122, which typically includes a display124, one or more input devices126, such as a keyboard and mouse, and a computer processor128. The computer processor128may include a commercially available programmable machine running a commercially available operating system. The operator workstation122provides the operator interface that enables scanning control parameters to be entered into the C-arm x-ray imaging system100. In general, the operator workstation122is in communication with a data store server130and an image reconstruction system132. By way of example, the operator workstation122, data store server130, and image reconstruction system132may be connected via a communication system134, which may include any suitable network connection, whether wired, wireless, or a combination of both. As an example, the communication system134may include both proprietary or dedicated networks, as well as open networks, such as the internet. The operator workstation122is also in communication with a control system136that controls operation of the C-arm x-ray imaging system100. The control system136generally includes a C-axis controller138, a pivot axis controller140, an x-ray controller142, a data acquisition system ("DAS")144, and a table controller146. The x-ray controller142provides power and timing signals to the x-ray source assembly104, and the table controller146is operable to move the table110to different positions and orientations within the C-arm x-ray imaging system100. The rotation of the gantry102to which the x-ray source assembly104and the x-ray detector array assembly106are coupled is controlled by the C-axis controller138and the pivot axis controller140, which respectively control the rotation of the gantry102about the C-axis and the pivot axis116. In response to motion commands from the operator workstation122, the C-axis controller138and the pivot axis controller140provide power to motors in the C-arm x-ray imaging system100that produce the rotations about the C-axis and the pivot axis116, respectively.
For example, a program executed by the operator workstation122generates motion commands to the C-axis controller138and pivot axis controller140to move the gantry102, and thereby the x-ray source assembly104and x-ray detector array assembly106, in a prescribed scan path. The DAS144samples data from the one or more x-ray detectors in the x-ray detector array assembly106and converts the data to digital signals for subsequent processing. For instance, digitized x-ray data is communicated from the DAS144to the data store server130. The image reconstruction system132then retrieves the x-ray data from the data store server130and reconstructs an image therefrom. The image reconstruction system132may include a commercially available computer processor, or may be a highly parallel computer architecture, such as a system that includes multiple-core processors and massively parallel, high-density computing devices. Optionally, image reconstruction can also be performed on the processor128in the operator workstation122. Reconstructed images can then be communicated back to the data store server130for storage or to the operator workstation122to be displayed to the operator or clinician. The C-arm x-ray imaging system100may also include one or more networked workstations148. By way of example, a networked workstation148may include a display150, one or more input devices152, such as a keyboard and mouse, and a processor154. The networked workstation148may be located within the same facility as the operator workstation122, or in a different facility, such as a different healthcare institution or clinic. The networked workstation148, whether within the same facility or in a different facility as the operator workstation122, may gain remote access to the data store server130, the image reconstruction system132, or both via the communication system134. Accordingly, multiple networked workstations148may have access to the data store server130, the image reconstruction system132, or both. In this manner, x-ray data, reconstructed images, or other data may be exchanged between the data store server130, the image reconstruction system132, and the networked workstations148, such that the data or images may be remotely processed by the networked workstation148. This data may be exchanged in any suitable format, such as in accordance with the transmission control protocol ("TCP"), the Internet protocol ("IP"), or other known or suitable protocols. FIG.2shows a schematic illustration of an example of a multi-detector system200of the detector assembly106. The multi-detector system200forms part of the x-ray detection system106. It can include a dedicated processing system206that may be in communication with the data-acquisition system144, or the processing functionality of the processing system206can be integrated into the data-acquisition system144, such as by providing computer code to achieve the functionality described herein. The multi-detector system200includes an energy-integrating x-ray detector202that can sense x-rays emitted from the x-ray source assembly104in an energy-integrated manner, such as in the form of an FPD. The multi-detector system200can also include a photon-counting detector assembly or system204configured to sense x-rays emitted from the x-ray source assembly104and determine individual x-ray photons and their corresponding energies, described above as a PCD system.
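To make the distinction between the two detector types of the multi-detector system200concrete, the following minimal sketch models the same stream of photon events arriving at one pixel: the energy-integrating channel reports only the summed deposited energy, whereas the photon-counting channel bins each event by energy against configurable thresholds (the threshold values are illustrative assumptions, not values from the present disclosure):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Energies (keV) of photons hitting one pixel during one frame.
events_kev = rng.uniform(20.0, 120.0, size=50)

# Energy-integrating pixel: one number, the total deposited energy.
integrated_signal = events_kev.sum()

# Photon-counting pixel: per-event energy discrimination into bins
# defined by threshold levels (assumed illustrative thresholds).
thresholds_kev = [20.0, 50.0, 80.0, 120.0]
bin_counts, _ = np.histogram(events_kev, bins=thresholds_kev)

print(f"energy-integrating output: {integrated_signal:.1f} keV total")
print(f"photon-counting output:    {bin_counts.tolist()} counts per bin")
```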
In this way, the multi-detector system includes both an energy-integrating detector202and the photon-counting detector204that, together, are configured to receive the x-rays emitted from the x-ray source simultaneously. In some non-limiting examples, the detectors202,204are integrated and coupled to a gantry of a CT system102, such as the end of the C-arm106, or a traditional diagnostic CT system. In one non-limiting example, the multi-detector system200may be formed as illustrated inFIGS.3A-3B. In particular, one non-limiting example in accordance with the present disclosure combines a PCD module300with an FPD module302. The FPD module302extends as a panel, for example, in an x-y plane304and a z-direction306. In this way, it generally preserves the detector array, field of view (FOV), and functionality of traditional FPDs. The PCD module300is arranged over or integrated with the FPD module302to define a detecting area of the multi-detector system200. Both the FPD module302and the PCD module300can acquire x-rays simultaneously. This configuration advantageously reduces the overall system cost compared to a system that completely forgoes an FPD in favor of a large-area PCD. That is, the PCD module300is designed to have a detector array that is constrained to a predetermined geometry that covers less physical area than the FPD module302. The PCD module300and FPD module302may have different shapes from each other. In one non-limiting example illustrated inFIG.3A, the FPD module302is a rectangle and the PCD module300is formed from a first submodule308and a second submodule310. In the illustrated, non-limiting example, the first submodule308forms a rectangle intersecting with the second submodule310, which is formed as an elongated strip. In this way, the illustrated PCD module300may form a “dagger” shape, or any of a variety of other shapes. In this dagger shape, the PCD module300may be formed of two separate detector arrays, where a first is rectangular-shaped and a second is strip-shaped or “I” shaped. Alternatively, these two submodules308,310may be integrated to form a single array of detectors forming the dagger shape or another shape. As illustrated by hidden lines311, the “I” shape may be formed by one detector array sandwiched between two rectangular detector arrays arranged on either side of the “I” shape to form the rectangular shape. Alternatively, the “dagger” shape (or other shape) may be formed by one functional detector array, such as illustrated inFIG.3B, where the hidden lines311are removed. Though the specific geometries of the PCD module300and the FPD module302may be selected based on imaging preferences or clinical applications, this “dagger” shape can be advantageous because the second submodule310forming the strip provides data for full axial FOV for spectral and ultrahigh-resolution PCD-CT imaging at a given longitudinal location in the z-direction306. The first submodule308forming the rectangle provides data for volume-of-interest (VOI) 3D and region-of-interest (ROI) 2D spectral and ultra-high-resolution imaging. Locations of the VOI and ROI can be selected by the treating physicians based on the full FOV CBCT or fluoroscopic images. Other geometries or numbers of submodules308,310are also possible. For example, instead of a rectangle, other shapes may be used, including squares, circles, ovals, or any of a variety of polygons or other shapes.
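To make the "dagger" layout concrete, the following minimal sketch (illustrative only; the grid size, pixel pitch, and submodule dimensions are assumptions, not values from the disclosure) builds boolean masks for a strip submodule crossing a rectangular submodule inset in a larger FPD panel:

```python
import numpy as np

# Hypothetical pixel grid: a coarse FPD panel and a finer PCD inset are
# simplified to one common grid so the footprints can be compared.
ny, nx = 300, 400                 # e.g., a 30 cm x 40 cm panel at 1 mm pixels

fpd_mask = np.ones((ny, nx), dtype=bool)     # the full-area FPD module

# Strip submodule: a thin full-width band at a fixed longitudinal (z) location.
strip_mask = np.zeros((ny, nx), dtype=bool)
strip_mask[ny // 2 - 3 : ny // 2 + 3, :] = True          # ~6 mm tall strip

# Rectangular submodule: a small VOI patch intersecting the strip.
rect_mask = np.zeros((ny, nx), dtype=bool)
rect_mask[ny // 2 - 50 : ny // 2 + 50, nx // 2 - 25 : nx // 2 + 25] = True

pcd_mask = strip_mask | rect_mask            # the combined "dagger" shape

print("PCD covers", pcd_mask.sum(), "of", fpd_mask.sum(), "panel pixels")
```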
Furthermore, instead of an elongated strip, a variety of dispersed modules may be arranged transversely to the first geometry or across the FPD module302. Regardless of the shapes or manner of integration utilized, the PCD module300may be integrated with the scintillator-based energy-integrating FPD module302to form a single overall multi-detector or hybrid FPD-PCD detector. The PCD module300and FPD module302may be integrated in any of a variety of configurations. For example, the PCD module300may be inset within the FPD module302, such that the FPD module302surrounds the sensing elements of the PCD module300, to create a flush surface akin to a standard FPD detector panel. In this way, the PCD module300and the FPD module302, together, form a continuous detector surface. That is, a single continuous surface may extend along the x-y plane304and the z-direction306. In this way, no additional bulk or larger overall profile is created by the multi-detector system200, as compared to a traditional, single-detector FPD detector panel. Alternatively, the PCD module300may be mounted over the FPD module302. Irrespective of particular geometries or configurations, when the full FOV of the FPD module302is required, the data provided by the PCD module300can be processed to form a seamless whole image together with the data provided by the FPD module302. The PCD module300and FPD module302can share an electronics system, as will be described with respect toFIG.3B. For example, the PCD module300and FPD module302can utilize a shared electronics board. Alternatively, the PCD module300can be mounted in front of the existing FPD, and a motorized device can be used to translate the PCD module300out of the FOV for the C-arm system to return to conventional FPD-based imaging modes. In this case, during an image-guided intervention (IGI) process, when a clinical scenario requires spectral or high-resolution 3D or 2D imaging, the PCD module300can be automatically translated into the FOV. The output data of the PCD module300can be used to create any of a variety of images. The data output of the PCD module300can be conceptualized as a series of data outputs corresponding to a series of energy bins. That is, as one non-limiting example, the output of the PCD module300can include raw counts associated with each of a plurality of energy bins. Moreover, the data from the first submodule308can be processed separately from the data from the second submodule310, or the two can be processed together. In this regard, the data from each submodule308,310of the PCD module300can be represented as a series of energy bins, from energy bin “1”312,320, to energy bin “2”314,322, to energy bin “3”316,324, through energy bin “n”318,326. The data from the second submodule310of the PCD module300can be used to reconstruct an axial FOV high-resolution image and/or a spectral image328. The output from the first submodule308of the PCD module300can be used to reconstruct 3D volumes of interest (VOI) with high-resolution and/or spectral PCD cone beam CT images. Additionally, the output from the first submodule308of the PCD module300can be used to reconstruct 2D high-resolution and/or spectral images332. Furthermore, the data from the first and second submodules308,310of the PCD module300can be combined with the data from the FPD module302. With the combined data, full-FOV 2D x-ray images and/or full-FOV 3D cone beam CT data334can be reconstructed. Referring toFIG.3B, data from the multi-detector system200can be selectively combined.
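As a minimal illustration of the bin-wise data structure just described (the event counts, grid size, and threshold values below are hypothetical), photon events can be histogrammed into per-pixel energy-bin count images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical photon events on a small submodule: (row, col, energy in keV).
rows = rng.integers(0, 8, size=10_000)
cols = rng.integers(0, 8, size=10_000)
energies = rng.uniform(20.0, 120.0, size=10_000)

# Energy thresholds delimiting the bins (n thresholds -> n-1 bins).
thresholds = np.array([20.0, 45.0, 70.0, 95.0, 120.0])

# counts[k] is the raw-count image for energy bin k+1.
counts = np.zeros((len(thresholds) - 1, 8, 8), dtype=np.int64)
bin_idx = np.digitize(energies, thresholds) - 1
valid = (bin_idx >= 0) & (bin_idx < counts.shape[0])
np.add.at(counts, (bin_idx[valid], rows[valid], cols[valid]), 1)

for k in range(counts.shape[0]):
    print(f"energy bin {k + 1}: {counts[k].sum()} counts")
```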
In one example, data from the PCD module300can be combined with the output data of the scintillator-based energy-integrating FPD module302to form, for example, a single full-FOV whole image. Likewise, the output data of the scintillator-based energy-integrating FPD module302can be used independently336or can be combined with additional data, as desired by the operator or dictated by the clinical application, as will be described. Data associated with each energy bin320-326of the PCD module300can be weighted by the respective energy of the bin338. That is, the data of the different bins can be weighted and summed or otherwise combined together. The weighting factors of each energy bin can be calculated, experimentally calibrated, empirically (heuristically) determined, or assigned based on theory, unlike the data from the FPD module302, which is not binned. To compensate for mismatched spatial resolution between the PCD module300and the FPD module302, the weighted image assembled from the data from the PCD module300can be filtered340until, for example, the spatial resolution and image textures match those of the FPD module302or another user-selected criterion, to create a synthesized FPD image342. Parameter(s) of the filter340can be determined theoretically, experimentally, or empirically (heuristically). Through this process, the data from the PCD module300can be combined with the data from the FPD module302to form a seamless whole image334, where any physical gaps, if there are any, can be compensated via digitally interpolating or stitching the gaps344using images336of the FPD module302and the PCD module300. Thus, full-FOV 2D x-ray images and/or full-FOV 3D cone beam CT images can be produced despite the fact that the multi-detector system200covers the full FOV using two modules300,302that are of different types/resolutions. Additionally or alternatively, dual imaging subtraction can be performed to create a mask image without the need for a separate mask image scan. EXPERIMENTS In one non-limiting example of a system created using the geometry illustrated inFIG.3, a 51×0.6 cm2submodule310forming a strip was combined with a 5×10 cm2submodule308forming a rectangle that, together, formed the PCD module300. The PCD module300was mounted on a C-arm gantry over the FPD module302to acquire preliminary experimental results as a proof-of-concept for the dagger PCD design and to demonstrate the potential benefits of 2D and 3D PCD imaging in IGIs. The prototype formed a multi-detector system (FPD and PCD) constructed based on a Siemens Artis Zee interventional x-ray system C-arm gantry. The original C-arm system has a 40 cm×30 cm CsI:Tl FPD with a 14-bit analog-to-digital converter (ADC) and 154 μm pixels. When operated under the CBCT imaging mode, pixels of the FPD were binned (e.g., 4×4) to meet the frame rate requirement. The two PCD submodules were attached to the gantry separately using customized mounting devices. Both PCDs were manufactured by DirectConversion AB, Sweden. The strip-shaped submodule was an XC-Hydra FX50 with a 0.75 mm layer of cadmium telluride (CdTe) as the x-ray sensor and a maximal readout frame rate of 150 fps. The rectangular-shaped submodule was a Thor FX10 with 2 mm of CdTe and a maximal frame rate of 1000 fps. Both PCDs had two adjustable energy thresholds, 100% pixel fill factor, and 100 μm pixels. Unlike in MDCT, the x-ray tube in the interventional system was operated under the pulsed x-ray mode.
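A minimal numerical sketch of this bin-weighting, resolution-matching, and stitching chain follows (the weights, filter width, image sizes, and inset location are all assumptions made for illustration, not values from the disclosure):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Hypothetical inputs: per-bin PCD images over the PCD footprint, and an
# FPD image over the full field of view with a gap where the PCD sits.
n_bins, h, w = 4, 64, 64
pcd_bins = rng.poisson(100.0, size=(n_bins, h, w)).astype(float)
fpd_full = rng.poisson(400.0, size=(256, 256)).astype(float)

# 1) Energy weighting: combine bin images with per-bin weights (here simply
#    the assumed mean energy of each bin; in practice these are calibrated).
weights = np.array([30.0, 55.0, 80.0, 105.0])
pcd_weighted = np.tensordot(weights, pcd_bins, axes=1)  # shape (h, w)

# 2) Resolution matching: low-pass filter the sharper PCD image until its
#    texture approximates the coarser FPD image (sigma would be calibrated).
pcd_matched = gaussian_filter(pcd_weighted, sigma=1.5)

# 3) Intensity matching and stitching: scale the synthesized patch to the
#    surrounding FPD level, then insert it at the FPD image's gap.
r0, c0 = 96, 96                                  # hypothetical inset location
gap = np.s_[r0 : r0 + h, c0 : c0 + w]
border_mean = fpd_full[r0 - 2 : r0, c0 : c0 + w].mean()
pcd_scaled = pcd_matched * (border_mean / pcd_matched.mean())
seamless = fpd_full.copy()
seamless[gap] = pcd_scaled

print("stitched image:", seamless.shape, "patch mean ~", round(pcd_scaled.mean(), 1))
```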
Therefore, a synchronization between each PCD readout and each x-ray pulse was needed. This was achieved by feeding the “X-ray On” signal from the high voltage generator of the Siemens system to the trigger input of each PCD. It is well known that C-arm gantries wobble during rotation, and the C-arm with the mounted PCD module was no exception. Based on experimental data, the addition of the PCD to the C-arm gantry did not introduce any additional mechanical deformation. All observed geometric distortion came from the mechanical deformation of the original C-arm gantry. To correct for the wobbling-induced artifacts in the PCD-CT images, two customized geometric calibration phantoms were used. The first one was for the geometric calibration of the rectangular submodule. It was similar to the so-called helix phantom commonly used for the geometric calibration of FPD-based CBCT, except much smaller, with a diameter of only 3 cm and a length of 5 cm to fit in the limited axial FOV of the rectangular PCD submodule footprint. It contained 41 steel bearing balls (BBs) arranged along a helical trajectory with an angular increment of 30° and a z-pitch of 1.27 mm. The second geometric calibration phantom was used for the strip-shaped submodule. Due to the narrow z-coverage of the strip-shaped PCD submodule, helix phantoms were not applicable because no more than one BB could be seen by the submodule. Therefore, 11 BBs in a second phantom were arranged in the same axial plane. The coplanar design ensured all 11 BBs would show up on each projection image captured by the strip-shaped submodule. For each PCD submodule and calibration phantom, a PCD-CT scan was performed and the projection matrices were estimated for each angle. During image reconstruction, the projection matrices were applied in the pixel-driven backprojection step (a minimal sketch of this step is given below). Phantom and in vivo animal experiments were performed to evaluate the 2D and 3D imaging performance of the two PCD submodules. The first image object was a 16 cm acrylic phantom that contains six inserts. Four inserts contained iodine with concentrations ranging from 10 to 20 mg/ml. The remaining two inserts contained 100 mg/ml and 200 mg/ml calcium (Ca). In 125 kV FPD-CBCT images of this phantom, the 100 mg/ml Ca insert and the 10 mg/ml iodine insert demonstrated the same CT number of 322±20 HU. To address this “HU-degeneracy” problem, the strip-shaped submodule was used to acquire full axial FOV dual-energy PCD-CT images with the two energy thresholds of the PCD set to 15 and 63 keV. The recorded PCD images used 4×4 pixel binning. After the geometric correction, a PCD nonuniformity correction method was applied to both the low-energy (LE) and high-energy (HE) bin images, and then an image-domain material decomposition was performed to generate iodine basis images, virtual non-contrast images, and effective Z images using the HU ratio between the LE and HE images to differentiate between the iodine and Ca inserts. The nonuniformity correction method is described in M. Feng, X. Ji, R. Zhang, K. Treb, A. M. Dingle, and K. Li, “An experimental method to correct low-frequency concentric artifacts in photon counting CT,” Phys. Med. Biol., Vol. 66, p. 175011, 2021, which is incorporated herein by reference in its entirety. To demonstrate the spatial resolution benefits of the PCD, the strip-shaped submodule was used to scan an anthropomorphic head phantom that contains iodinated cerebral vessel models.
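For illustration of the per-angle projection matrices mentioned above (the matrix entries and geometry are hypothetical, not calibration results from the experiments), a 3×4 projection matrix maps a homogeneous voxel position to a detector pixel, which is exactly the mapping sampled in a pixel-driven backprojection:

```python
import numpy as np

def project(P: np.ndarray, xyz: np.ndarray) -> np.ndarray:
    """Map world points (N, 3) to detector coordinates (N, 2) with a 3x4 matrix."""
    homog = np.hstack([xyz, np.ones((xyz.shape[0], 1))])   # (N, 4)
    uvw = homog @ P.T                                      # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                        # perspective divide

# Hypothetical calibrated matrix for one view angle: ideal geometry with
# source-to-detector distance sdd and detector center (u0, v0), in pixels.
sdd, u0, v0 = 1200.0, 128.0, 4.0
P = np.array([[sdd, 0.0, u0, 0.0],
              [0.0, sdd, v0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# Pixel-driven backprojection samples each view at a voxel's (u, v); with a
# per-angle calibrated P, gantry wobble is absorbed into the matrices.
voxels = np.array([[0.0, 0.0, 1000.0], [5.0, -3.0, 1000.0]])
print(project(P, voxels))
```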
The PCD was operated under an ultra-high resolution (UHD) mode, in which no binning was applied to the native 100 μm pixels, and a high-resolution reconstruction kernel was used to generate UHD images. The UHD-mode acquisition was also applied to a Catphan phantom and an adult farm pig (53 kg) in vivo. To demonstrate the capability and benefits of VOI PCD-CT imaging using the rectangular-shaped submodule, a 3.5 mm stent with a kinked section was scanned by both UHD PCD-CT and FPD-CBCT. All acquisitions were performed at 125 kV, with a 7 s rotation, with 494 projection views that cover an angular span of 200°, and 0.15 μGy per frame, and were reconstructed with a conventional filtered backprojection (FBP) algorithm with the Parker short-scan weighting (recalled below). Except for the pig study and stent images, all FPD-CBCT acquisitions used a narrow (2.5 cm) collimation along the z-direction. FIG.4is a set of correlated phantom PCD-CT images of the 16 cm phantom acquired using the strip-shaped submodule operated under the dual-energy mode. With the detector nonuniformity correction method developed in Feng et al. directly referenced above, high-quality and ring-artifact-free PCD-CT images were generated for the LE and HE bins, which were used to generate material basis and other quantitative images that can differentiate inserts with the same CT number in the FPD-CBCT image.FIG.5provides a correlated series of images of an anthropomorphic head phantom and the Catphan600 phantom. More particularly,FIG.5compares FPD-CBCT images with PCD-CT images acquired using the strip-shaped submodule operated under UHD mode. As can be seen inFIG.5, for the head phantom results, distal cerebral vessels were completely or partially missed on FPD-CBCT images, but were clearly visualized on C-arm PCD-CT images. When all distal and smaller artery branches (0.5 mm) are considered, the CNR was 6.9 [95% CI: 5.8, 8.0] in PCD-CT and 2.9 [95% CI: 2.1, 3.7] in FPD-CBCT. The improved small vessel visualization is due to the intrinsically superior spatial resolution of the PCD. As further shown inFIG.5by the Catphan images, the UHD PCD-CT was able to resolve the finest line pair pattern (21 lp/cm), compared with the 12 lp/cm limiting spatial resolution of FPD-CBCT. The in vivo pig images shown inFIG.6demonstrated a similar spatial resolution benefit of PCD-CT. With the proposed geometric calibration and detector nonuniformity corrections, no distortions or ring artifacts can be observed in the PCD-CT images.FIG.7shows PCD-CT VOI images acquired using the rectangular-shaped submodule operated under UHD mode. Both FPD-CBCT and PCD-CT images were acquired with matched beam collimation and matched radiation dose. The images were reconstructed with a matched isotropic voxel size of 0.07 mm. Even when the reconstruction kernel is matched between the PCD-CT and FPD-CBCT, the UHD PCD-CT shows the stent much more clearly and with better resolution. When the high-resolution capabilities of the PCD-CT and FPD-CBCT are pushed to their limits with the sharper kernels, the FPD-CBCT again fails to resolve the stent as clearly as the PCD-CT. In summary, the multi-detector FPD-PCD system described herein can be used to upgrade existing C-arm interventional x-ray systems or create new systems. In either case, the systems and methods provided herein provide spectral and ultra-high resolution capabilities, which have been experimentally demonstrated using prototypes.
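For reference, the Parker short-scan weighting used in the FBP reconstructions above has the standard textbook form (recalled here, not reproduced from the patent), with source angle $\beta \in [0, \pi + 2\gamma_m]$, fan angle $\gamma$, and half fan angle $\gamma_m$:

$$
w(\beta,\gamma)=
\begin{cases}
\sin^2\!\left(\dfrac{\pi}{4}\,\dfrac{\beta}{\gamma_m-\gamma}\right), & 0 \le \beta < 2(\gamma_m-\gamma),\\[1ex]
1, & 2(\gamma_m-\gamma) \le \beta < \pi-2\gamma,\\[1ex]
\sin^2\!\left(\dfrac{\pi}{4}\,\dfrac{\pi+2\gamma_m-\beta}{\gamma_m+\gamma}\right), & \pi-2\gamma \le \beta \le \pi+2\gamma_m,
\end{cases}
$$

so that rays measured twice over the roughly 200° arc are weighted such that each complementary ray pair sums to unity before backprojection.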
The results confirmed multiple advantages of PCD-based IGIs. For example, spectral and quantitative imaging is available to help resolve ambiguous findings during procedures. As another example, ultra-high spatial resolution can be used to help resolve small perforating blood vessels and interventional devices. The particular geometry used in the experiments described herein, which includes a strip-shaped submodule and a rectangular-shaped submodule combining to form the PCD, demonstrates a mutually complementary design, particularly when mounted on or combined with an FPD. The system provides superior flexibility such that the system can operate to provide traditional FPD images, or can provide improved resolution, multi-spectral capabilities, or other functionality, each of which can be chosen by physicians based on the specific clinical needs. That is, the systems and methods provide, for example, 1) spectral imaging capability; 2) much superior soft-tissue contrast detectability; and 3) much higher spatial resolution, compared to traditional FPD systems. Furthermore, the system does not include complex mechanical structures or moving parts. Rather, it can be selectively controlled by the operator and the processing system, for example using electronic switching and/or data processing. Although some of the discussion above is framed in particular around systems, those of skill in the art will recognize therein an inherent disclosure of corresponding methods of use (or operation) of the disclosed systems, and of methods of installing the disclosed systems. Correspondingly, some non-limiting examples of the disclosure can include methods of using, making, and installing such systems. Although the invention has been described and illustrated in the foregoing illustrative non-limiting examples, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is limited only by the claims that follow. Features of the disclosed non-limiting examples can be combined and rearranged in various ways. Furthermore, the non-limiting examples of the disclosure provided herein are not limited in application to the details of construction and the arrangement of components set forth in the foregoing description or illustrated in the drawings. The invention is capable of other non-limiting examples and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Unless specified or limited otherwise, the terms “mounted,” “connected,” “supported,” and “coupled” and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings.
The use of “right,” “left,” “front,” “back,” “upper,” “lower,” “above,” “below,” “top,” or “bottom” and variations thereof herein is for the purpose of description and should not be regarded as limiting. Unless otherwise specified or limited, phrases similar to “at least one of A, B, and C,” “one or more of A, B, and C,” etc., are meant to indicate A, or B, or C, or any combination of A, B, and/or C, including combinations with multiple or single instances of A, B, and/or C. In some non-limiting examples, aspects of the present disclosure, including computerized implementations of methods, can be implemented as a system, method, apparatus, or article of manufacture using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a processor device, a computer (e.g., a processor device operatively coupled to a memory), or another electronically operated controller to implement aspects detailed herein. Accordingly, for example, non-limiting examples of the invention can be implemented as a set of instructions, tangibly embodied on a non-transitory computer-readable media, such that a processor device can implement the instructions based upon reading the instructions from the computer-readable media. Some non-limiting examples of the invention can include (or utilize) a device such as an automation device, a special purpose or general purpose computer including various computer hardware, software, firmware, and so on, consistent with the discussion below. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier (e.g., non-transitory signals), or media (e.g., non-transitory media). For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, and so on), optical disks (e.g., compact disk (CD), digital versatile disk (DVD), and so on), smart cards, and flash memory devices (e.g., card, stick, and so on). Additionally, it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Those skilled in the art will recognize many modifications may be made to these configurations without departing from the scope or spirit of the claimed subject matter. Certain operations of methods according to the invention, or of systems executing those methods, may be represented schematically in the FIGS. or otherwise discussed herein. Unless otherwise specified or limited, representation in the FIGS. of particular operations in particular spatial order may not necessarily require those operations to be executed in a particular sequence corresponding to the particular spatial order. Correspondingly, certain operations represented in the FIGS., or otherwise disclosed herein, can be executed in different orders than are expressly illustrated or described, as appropriate for particular non-limiting examples of the invention.
Further, in some non-limiting examples, certain operations can be executed in parallel, including by dedicated parallel processing devices, or separate computing devices configured to interoperate as part of a large system. As used herein in the context of computer implementation, unless otherwise specified or limited, the terms “component,” “system,” “module,” etc. are intended to encompass part or all of computer-related systems that include hardware, software, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a processor device, a process being executed (or executable) by a processor device, an object, an executable, a thread of execution, a computer program, or a computer. By way of illustration, both an application running on a computer and the computer can be a component. One or more components (or system, module, and so on) may reside within a process or thread of execution, may be localized on one computer, may be distributed between two or more computers or other processor devices, or may be included within another component (or system, module, and so on). As used herein, the terms “controller,” “processor,” and “computer” include any device capable of executing a computer program, or any device that includes logic gates configured to execute the described functionality. For example, this may include a processor, a microcontroller, a field-programmable gate array, a programmable logic controller, etc. As another example, these terms may include one or more processors and memories and/or one or more programmable hardware elements, such as any of various types of processors, CPUs, microcontrollers, digital signal processors, or other devices capable of executing software instructions. 
| 39,888 |
11857356 | DETAILED DESCRIPTION OF THE EXAMPLE EMBODIMENTS The method according to an embodiment of the invention for capturing medical images of the human body or body of an animal comprises: (1) performing a data-acquisition scan, and (2) signaling the start and/or the end of a manual contrast administration relative in time to the data-acquisition scan. The apparatus according to an embodiment of the invention for capturing medical images of the human body or body of an animal comprises an acquisition unit, which is designed to perform a data-acquisition scan, and a signaling unit, which is coupled to the acquisition unit and is designed to output, relative in time to the data-acquisition scan, a signal for starting and/or stopping a manual contrast administration. An apparatus according to an embodiment of the invention is for controlling the acquisition of medical images, which apparatus comprises a unit that is designed to control, in particular to start and/or to end, a data-acquisition scan, and a signaling unit, which is coupled to said unit and is designed to output, relative in time to the data-acquisition scan, a signal for starting and/or stopping a manual contrast administration. An idea of at least one embodiment of the invention is to signal the start and/or the end of a manual contrast administration (contrast agent administered without a controlled and/or automatic injector, e.g. administered by a standard commercial syringe) relative in time to a scan, which in particular is to be performed subsequently. The signaling unit may be in particular a visual, haptic and/or acoustic signaling unit, which is connected to the acquisition unit (for example a CT machine) and gives instructions on the manual (non-automated) administration of contrast agent, e.g. using a syringe. The instruction may thus be a light signal and/or acoustic signal, for example. This may instruct, for instance, continuous pressing of the syringe plunger and/or stopping the injection. In particular, different signals can be provided for this purpose. It is also possible that the start of a signal signals the start of the contrast administration, and the end of the signal signals the end of the contrast administration. Thus the contrast agent is then meant to be administered in particular precisely during the output of the e.g. visual or acoustic signal. The apparatus according to at least one embodiment of the invention and the method according to at least one embodiment of the invention provide considerable assistance during manual injection of contrast agent. This has a positive effect on the expected image quality, because it is possible to make better use of the contrast-enhancing effect. At the same time, it increases the safety of the patients because the amount of contrast agent is kept lower than for an injection without assistance. In environments that lack the financial means for purchasing/operating a controlled injector, for example, there is no need to compromise on patient safety. In an example embodiment of the method, the signaling of the start of the manual contrast administration is timed with respect to a manual or automatic start of the acquisition process. The signal, which instructs a start of the manual administration of contrast agent, can be triggered, for example, after a predetermined or preset time delay after switch-on of the acquisition apparatus or an activation and/or a start of the acquisition procedure.
In principle, any point in time can be selected for signaling the start of the contrast administration. What is important is that this point in time is known so that it is possible to calculate further values therefrom, for instance the time until a bolus arrives at a region of interest, the start of the data scan or the duration of the contrast administration. In another example embodiment of the invention, triggering the data-acquisition scan is timed with respect to the signaled start of the manual contrast administration. A best possible start time for the data-acquisition scan can be chosen using the known time for the start of the contrast administration (assuming that this starts at the time of the start signal). For instance, a delay to the scan start with respect to the injection start can be set, if applicable according to further parameters such as, for example, patient size, patient weight and/or spatial relationship between injection location and ROI. Thus the data acquisition starts at a predetermined time delay after the signal for starting the contrast administration. It is also possible that triggering the scan comprises a bolus triggering, i.e. initiating the process of bolus triggering (or start of a monitoring scan) is timed with respect to the signaled start of the manual contrast administration. For example, the monitoring scan can be performed simultaneous to, or at a predetermined delay with respect to, signaling the start of the contrast administration. In another example embodiment, the point in time of signaling the end of the manual contrast administration is calculated. For example, a defined injection duration can be provided. The end of the contrast-agent injection is then calculated on the basis of the signaled start time for the contrast-agent injection and the defined injection duration. In another example embodiment, the signaling of the end of the manual contrast administration is timed with respect to a characteristic time interval ΔT, wherein the characteristic time interval ΔT is a time difference between the arrival of contrast agent at a predefined position in the body (ROI) and the point in time of signaling the start of the manual contrast administration. The characteristic time ΔT is hence the time that elapses for a patient between start of the injection (assuming that the injection is started at the time of the signaling) and arrival of the contrast agent at the desired position in the body. The signaling of the end of the contrast administration preferably takes place at the latest at a time that lies in advance of the end of the scan by the magnitude of the time interval ΔT. In an example embodiment, the time interval ΔT is estimated. An empirical value determined from analyzing a patient collective is preferably used for the estimate. This analysis preferably additionally takes into account additional patient parameters such as size and weight, for instance. In an alternative embodiment, the time interval ΔT is calculated. In particular, it is preferred that the time interval ΔT is determined on the basis of a bolus triggering. In this case, the time interval ΔT in particular equals the difference between a time at which a defined threshold value (attenuation value, gray level, enhancement) is attained and the point in time of signaling the start of the contrast administration. Thus the time interval ΔT is preferably calculated on the basis of a monitoring scan at the predefined position in the body.
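In symbols, using the notation of the timing diagram described below (TI,on for the signaled injection start and TT for the time at which the threshold value is attained), this bolus-triggered determination reads

$$ \Delta T = T_T - T_{I,\mathrm{on}} $$

and, per the example embodiment described below, the end of the signal then follows as $T_{I,\mathrm{off}} = T_E - \Delta T$, where $T_E$ denotes the end of the scan.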
The arrival of a bolus in the region of interest (ROI) is detected by the monitoring scan, and therefore the time delay with respect to the start of the contrast-agent injection can be determined. In another example embodiment of the method according to the invention, the calculation of the time interval ΔT takes into account the point in time of signaling the start of the manual contrast administration. More preferably, the time interval ΔT is calculated as the difference between attaining a defined threshold value (trigger) and the point in time of signaling the start of the manual contrast administration. Alternatively or additionally, the point in time of signaling the end of the manual contrast administration can be calculated on the basis of a flow model. The point in time of signaling the end of the manual contrast administration, or the time interval ΔT, can be determined or calculated in particular using a hydrodynamic or pharmacokinetic flow model. This could then also take into account, for instance, the length of time for the already injected amount of contrast agent. In addition, the point in time of signaling the start of the contrast administration and/or the trigger signal (attaining a defined threshold value) is preferably also used. In another example embodiment of the invention, an advance signal is produced before signaling the start and/or end of the manual contrast administration. The advance signal is used in particular for eliminating, or more precisely taking into account, response times. For instance, the advance signal can be triggered a predefined time before the actual signal. In an example embodiment, the advance signal comprises a countdown. For example, a countdown can be performed before the actual signaling (before the signal for starting and/or stopping the contrast administration), for instance a countdown using color coding (light) or frequency coding of an audio tone (acoustic signal), in order to hit the start point and/or end point of the manual injection more closely. In terms of the apparatus, in an example embodiment of the invention, the apparatus comprises a processing unit, which is designed to calculate a start time and/or an end time for the manual contrast administration. In an example embodiment of the invention, the processing unit is designed to calculate the end time for the manual contrast administration on the basis of a characteristic time interval ΔT, wherein the characteristic time interval ΔT is a time difference between the arrival of contrast agent (a contrast-agent bolus) at a predefined position in the body and the point in time of signaling the start of the manual contrast administration. FIG.1shows an example of the sequence of a diagnostic imaging process using contrast administration. The bottom region shows the timing diagram of a CT scan (CT). What is known as a monitoring scan42is performed first, which is used to trigger the actual examination scan44. In this scan, images are captured in a time sequence. The scan44is triggered when a predefined threshold value46(signal level, gray level, attenuation value or CT value) is exceeded in a defined region of interest (ROI, e.g. a large vessel). The scan44in this case starts at a predefined delay after the predefined threshold value46is attained. The scan44is started at the time TSand ends at the time TE. The central region of the diagram shows the signal level S (gray level, attenuation value or CT value) in the timing diagram. 
The administration of contrast agent (KM) causes the signal level S to rise and to attain a predefined threshold value at the point46. The monitoring scan42ends and the examination scan44is triggered when the threshold value46is attained. The examination scan44then starts with a delay after being triggered. The top region of the diagram shows schematically the contrast agent KM, or more precisely the injection of the contrast agent. The injection of a contrast-agent bolus40starts at the time TI,on. The contrast agent is preferably injected at a constant rate in this process. The injection ends at the time TI,off. The method according to an example embodiment of the invention and the apparatus according to an example embodiment of the invention are used to signal to a user the time TI,onand/or the time TI,off. Hence the user knows when he is meant to start and/or end the injection of a contrast-agent bolus. The method according to an example embodiment of the invention in particular contains the following steps: 1. (Manually) activating the signal for instructing the start of the manual injection. The start of the injection TI,onis signaled at a certain delay after the, possibly manual, activation. In principle, any point in time can be selected for the time TI,on, which preferably lies at a certain time interval after the switch-on and/or activation of the signal and/or of the apparatus. Signaling the start time of the injection is used in particular for activating the monitoring scan42and/or the scan44on the basis of this time. 2. Starting a bolus triggering or a monitoring scan, i.e. a periodic sequence of scans at a predefined, sensible position in the body, and measurement of the signal levels (CT values) inside an ROI. Initiation of the start time of the bolus triggering is timed with respect to the time TI,on(for example simultaneously or at a predetermined delay). 3. On a defined threshold value46being attained (trigger): a) triggering the scan44, preferably at a predefined delay, and b) calculating a characteristic time ΔT, which elapses for this patient between start of the injection and arrival of the contrast agent at the desired position in the body. The time interval between injection start (TI,on) and attainment of the threshold value46(TT) can be set as a conservative estimate for ΔT. 4. Calculating a time at which the injection can be stopped (TI,off), e.g. as TI,off=TE−ΔT, where TEdenotes the end of the scan (=end of the radiation). Alternatively, the defined stop of the injection could also be calculated as TI,off=TS−ΔT, where TSdenotes the start of the scan (=start of the radiation). 5. Deactivating the signal at TI,off, but at the latest at TE−ΔTmin, where ΔTminis a sensible minimum time length for arrival of the contrast agent, which time length has been predefined e.g. from the analysis of a large patient collective. 6. Stopping the scan at TE. The switch-off time TI,offof the signal for the contrast-agent injection can advantageously also be calculated in step 4 using hydrodynamic or pharmacokinetic flow models. These could then also take into account, for instance, the length of time for the already injected amount of contrast agent, and the trigger signal. This applies likewise to defining an optimum parameter ΔT (see step 3). The method can also be used without bolus triggering, with the parameter ΔT being estimated in this case, for example.
In the simplest case, it could be set to equal ΔTmaxor ΔTmin, which is the maximum or minimum expected arrival time length obtained from the analysis of a large patient collective. Again in this case, patient size and patient weight could be included in order to obtain a better estimate. The signal for manual injection can be made, for example, visually (e.g. using a lamp) and/or acoustically (signal tone). An enhanced indication for eliminating response times is also possible. After calculating TI,off, a countdown can be performed e.g. from the current instant in time, which countdown, for example, can be color-coded (light) or coded in terms of the frequency of the audio tone (acoustic signal). This makes it easier for the user to hit the start point and/or end point of the manual injection more closely. As an alternative to an injection stop based on the parameter ΔT, a predefined injection duration (e.g. ten seconds) could also be set in order to assist the manual injection. This could be done, for example, by making the signal last as long as the desired injection duration. FIG.2shows an example embodiment of an apparatus10according to the invention for implementing the method according to the invention. The apparatus10comprises an acquisition unit20, which in this case is an X-ray machine, in particular a C-arm machine. The acquisition unit20comprises an X-ray source22and an X-ray detector23, which are attached to the ends of a C-arm21. The C-arm21can be tilted about a patient couch25. The X-ray detector23is preferably a digital X-ray detector, which can produce digital X-ray images of a patient1lying on the patient couch25. The C-arm21is mounted so that it can move on a stand24. The acquisition unit20is designed to capture two-dimensional projected images (fluoroscopy images) at short time intervals (preferably at least one image every two seconds). The acquisition unit20is controlled by a control unit32. The control unit32is coupled to a signaling unit36, which is designed to output at least one signal for signaling the start of a manual injection of contrast agent and/or the end of such an injection. The signaling can be acoustic and/or visual, for example. The essential factor is that the injection is not controlled automatically. A processing unit34, which can be part of the control unit32, is provided for calculating the time TI,onand/or the time TI,off(or for calculating the time interval ΔT). The processing unit34is designed to determine or to calculate the start time and/or the end time of the manual contrast administration. The signaling unit36is designed to output, according to the start time and/or end time determined by the processing unit34, a user-perceptible signal for starting and/or stopping the manual contrast administration. Control unit32and signaling unit36can be part of a control apparatus30, which can also be referred to as an apparatus30for instructing a manual contrast administration. The apparatus30forms a separate aspect of the invention. The apparatus30is designed to signal the start and/or the end of a manual contrast administration and to control in synchronization therewith the acquisition unit20(start and end of a monitoring scan42and/or of an (examination) scan44).
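A minimal sketch of the timing logic from steps 3 to 5 above (the function and variable names are illustrative, and the numbers in the example are made up):

```python
def injection_stop_time(t_inj_on: float, t_trigger: float,
                        t_scan_end: float, dt_min: float) -> float:
    """Compute the time at which to signal the injection stop, TI,off.

    ΔT is conservatively estimated as the time from the signaled injection
    start (TI,on) to attainment of the trigger threshold; the signal is
    deactivated at TE - ΔT, but at the latest at TE - ΔTmin.
    """
    delta_t = t_trigger - t_inj_on          # step 3b: characteristic time ΔT
    t_inj_off = t_scan_end - delta_t        # step 4: TI,off = TE - ΔT
    return min(t_inj_off, t_scan_end - dt_min)  # step 5: latest at TE - ΔTmin

# Example (all times in seconds from activation of the start signal):
t_on, t_trig, t_end = 0.0, 18.0, 40.0
print(injection_stop_time(t_on, t_trig, t_end, dt_min=8.0))  # -> 22.0
```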
An example embodiment of the invention can also be described in particular as follows: A control apparatus for a medical machine (apparatus for capturing medical images), in particular a computed tomography machine, comprising a signaling unit, which is designed to output a signal that signals a start and/or an end of the administration of a contrast agent, and a unit that is designed to control the start and/or the end of an operating state of the medical machine at a predefined time related in time to a signal that is output by the signaling unit. A medical machine, in particular a computed tomography machine, comprising the control apparatus described above. The signal in particular may be a haptically, acoustically and/or visually perceptible signal. The point(s) in time for outputting the signal that signals the start and/or the end of the administration of a contrast agent can be determined as described above. Predetermined values for these point(s) in time can preferably be stored in the control apparatus. The point(s) in time for the start or the end of the operating state of the medical machine can be determined as described above. Predetermined values for these point(s) in time can preferably be stored in the control apparatus. Electronic storage, for instance, can be used for storing predetermined time points. The operating state of the medical machine can be e.g. an operating state during which a monitoring scan or an examination scan is performed. Different user-selectable points in time or combinations of points in time for outputting the signal that signals the start and/or the end of the administration of a contrast agent, and for the start or the end of the operating state of the medical machine, can be stored in the control apparatus for different examination protocols (e.g. depending on the region of interest (ROI), e.g. thorax, heart, predefined blood vessels), different contrast agents or different patient characteristics (e.g. male/female, weight, size, clinical condition, pulse, blood pressure, medication). Alternatively, predetermined points in time can be adjusted according to different patient characteristics so that the points in time can be used to adapt defined intervals (e.g. the duration of an operating state) to a patient under examination. An embodiment of the invention also relates to a method for determining a point in time for outputting a signal that signals a start and/or an end of the administration of a contrast agent, and for determining a point in time for the start or the end of an operating state of a medical machine at a predefined point in time related in time to the signal. An embodiment of the invention also relates to a method for operating a medical machine, wherein a signaling unit signals a start and/or an end of the administration of a contrast agent, and an operating state of the medical machine is timed to start and/or end with respect to a signal that is output by the signaling unit. The times for the start and/or for the end of an interval can be indicated in particular by the start or end of a signal. In this respect, the term “signal” includes the start, the duration and the end of the signal, each taken independently. Although the invention has been illustrated and described in greater detail using an example embodiment, the invention is not limited by the disclosed examples, and a person skilled in the art can derive therefrom other variants that are still covered by the scope of protection of the invention.
FIG.3shows an example embodiment of a method performed by the apparatus10. In operation102, the method includes calculating a start time for manual contrast administration. In operation104, the method includes calculating a time at which the injection can be stopped. In operation106, the method includes performing an examination scan. In operation108, the method includes outputting a signal for starting manual contrast administration. In operation110, the method includes outputting a signal for ending manual contrast administration. | 21,300 |
11857357 | DETAILED DESCRIPTION The following description is presented to enable any person skilled in the art to make and use the present disclosure and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown but is to be accorded the widest scope consistent with the claims. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise,” “comprises,” and/or “comprising,” “include,” “includes,” and/or “including” when used in this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Generally, the word “module,” “unit,” or “block,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions. A module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or other storage devices. In some embodiments, a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on computing devices may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an erasable programmable read-only memory (EPROM). It will be further appreciated that hardware modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or can be included in programmable units, such as programmable gate arrays or processors. The modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks but may be represented in hardware or firmware. In general, the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage. The description may be applicable to a system, an engine, or a portion thereof.
It will be understood that the term “system,” “engine,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections or assembly of different levels in ascending order. However, the terms may be displaced by another expression if they achieve the same purpose. It will be understood that when a unit, engine, module, or block is referred to as being “on,” “connected to,” or “coupled to,” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. These and other features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale. The flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments in the present disclosure. It is to be expressly understood that the operations of the flowchart need not be implemented in order. Conversely, the operations may be implemented in an inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts. One or more operations may be removed from the flowcharts. According to one aspect of the present disclosure, an imaging system may be provided. The imaging system may include a detector and a collimator. The detector may be configured to detect photons. The collimator may have at least two sets of pinholes. The at least two sets of pinholes may include a first set of first pinholes and a second set of second pinholes. Each second pinhole of the second set of second pinholes may be equipped with a filter configured to filter the photons. According to another aspect of the present disclosure, a method for generating an image may be provided. A first projection data set associated with a first portion of photons, each having a first energy, may be obtained. A second projection data set associated with a second portion of photons, each having a second energy, may be obtained. An image may be generated based on the first projection data set and the second projection data set. In some embodiments, the first projection data set and the second projection data set may be obtained using the imaging system provided in the present disclosure. Accordingly, compared to using an imaging system with a traditional collimator that only has pinholes without filters, by using the collimator that has the first set of first pinholes without filters and the second set of pinholes with filters, more photons may be allowed to pass through the collimator and be detected, thereby improving the sensitivity of the imaging system. The two sets of pinholes may be configured to perform spectral filtrations on the photons.
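As an illustration of how projection data acquired through unfiltered and filtered pinholes might be decomposed for a tracer emitting at two energies (the transmission factors and counts below are invented for the example; this is a generic two-channel unmixing, not the patent's specific algorithm):

```python
import numpy as np

# Hypothetical per-pixel counts through the two pinhole sets.
open_counts = np.array([1200.0, 950.0, 1100.0])      # first set: no filter
filt_counts = np.array([690.0, 545.0, 640.0])        # second set: filtered

# Assumed filter transmission for photons at the tracer's two energies.
t_low, t_high = 0.3, 0.8

# Model per pixel:
#   open = s_low + s_high
#   filt = t_low * s_low + t_high * s_high
# Solve the 2x2 linear system for the two energy components.
A = np.array([[1.0, 1.0],
              [t_low, t_high]])
s_low, s_high = np.linalg.solve(A, np.vstack([open_counts, filt_counts]))

print("low-energy component: ", s_low)
print("high-energy component:", s_high)
```

Because the filters attenuate photons of different energies at different ratios, the two measurements are linearly independent and the system is solvable, which is consistent with the multiplexed data being "decomposed" as described below.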
Besides, by using the collimator that has two sets of pinholes and using a radioactive tracer having at least two characteristic peaks, the detector may acquire multiplexing projection data with different spectral sensitivity, and the multiplexing projection data may be encoded with spectral filtrations and be decomposed, and thus, a higher contrast to noise ratio of the imaging system may be achieved, and the sensitivity may be further improved. Because the second pinholes are equipped with filters that can filter photons of different energies at different ratios, an image without multiplexing artifacts may be generated based on the projection data (i.e., the first projection data set and second projection data set) obtained by the imaging system, thereby improving the sensitivity and the accuracy of the imaging system. Moreover, due to the use of the two sets of pinholes, the angular sampling of the imaging system may be improved, which may be beneficial to improve spatial resolution and reduce aliasing artifacts of the imaging system. FIG.1is a schematic diagram illustrating an exemplary imaging system according to some embodiments of the present disclosure. In some embodiments, the imaging system100may be a single-modality system or a multi-modality system. Exemplary single-modality systems may include a single-photon emission computed tomography (SPECT) system, a positron emission tomography (PET) system, etc. Exemplary multi-modality systems may include a SPECT-CT system, a SPECT-PET system, a SPECT-magnetic resonance (SPECT-MR) system, etc. In some embodiments, the imaging system100may include modules and/or components for performing imaging and/or related analysis. Merely by way of example, as illustrated inFIG.1, the imaging system100may include an imaging device110, a processing device120, a storage device130, one or more terminal devices140, and a network150. The components in the imaging system100may be connected in one or more of various ways. Merely by way of example, the imaging device110may be connected to the processing device120through the network150. As another example, the imaging device110may be connected to the processing device120directly as illustrated inFIG.1. As a further example, the terminal device140may be connected to another component of the imaging system100(e.g., the processing device120) via the network150. As still a further example, the terminal device140may be connected to the processing device120directly as illustrated by the dotted arrow inFIG.1. As still a further example, the storage device130may be connected to another component of the imaging system100(e.g., the processing device120) directly as illustrated inFIG.1, or through the network150. The imaging device110may be configured to acquire imaging data relating to at least one part of an object. For example, the imaging device110may scan an object or a portion thereof that is located within its detection region and generate projection data relating to the object or the portion thereof. The imaging data relating to at least one part of an object may include an image (e.g., an image slice), projection data, or a combination thereof. In some embodiments, the imaging data may be two-dimensional (2D) imaging data, three-dimensional (3D) imaging data, four-dimensional (4D) imaging data, or the like, or any combination thereof. The object may be biological or non-biological. For example, the object may include a patient, an animal, a man-made object (e.g., a phantom), etc. 
As another example, the object may include a specific portion, organ, and/or tissue of the patient. For example, the object may include the head, the neck, the thorax, the heart, the stomach, a blood vessel, soft tissue, a tumor, nodules, or the like, or any combination thereof. In some embodiments, the imaging device110may include a single-modality imaging device. For example, the imaging device110may include a single-photon emission computed tomography (SPECT) device, a positron emission tomography (PET) device, etc. In some embodiments, the imaging device110may include a multi-modality imaging device. Exemplary multi-modality imaging devices may include a SPECT-CT device, a SPECT-PET device, a SPECT-MR device, etc. In the following description, a SPECT device is taken as an example of the imaging device110, which is not intended to limit the scope of the present disclosure.

The SPECT device may include a gantry, a collimator, a detector, an electronics module, and/or other components not shown. The gantry may support one or more parts of the SPECT device, for example, the collimator, the detector, the electronics module, and/or other components. The collimator may collimate photons (e.g., γ photons) emitted from an object being examined. In some embodiments, the collimator may be a multi-pinhole collimator having at least two sets of pinholes. The at least two sets of pinholes may include a first set of first pinholes and a second set of second pinholes. In some embodiments, one or more second pinholes (e.g., each second pinhole) of the second set of second pinholes may be equipped with a filter configured to filter the photons. The detector may be configured to detect the photons collimated by the collimator and/or generate electrical signals. The electronics module may collect and/or process electrical signals (e.g., scintillation pulses) generated by the detector. The electronics module may convert an analog signal (e.g., an electrical signal generated by the detector) relating to a photon detected by the detector to a digital signal to generate projection data. In some embodiments, the electronics module may be part of the detector. More descriptions regarding the imaging device may be found elsewhere in the present disclosure (e.g.,FIG.4and the descriptions thereof).

The processing device120may process data and/or information obtained from the imaging device110, the terminal device140, and/or the storage device130. For example, the processing device120may obtain projection data acquired by the imaging device110. The processing device120may generate an image based on the projection data. As another example, the processing device120may determine a system matrix of the imaging device110. The processing device120may generate the image further based on the system matrix. In some embodiments, the processing device120may be a computer, a user console, a single server or a server group, etc. The server group may be centralized or distributed. In some embodiments, the processing device120may be local or remote. For example, the processing device120may access information and/or data stored in the imaging device110, the terminal device140, and/or the storage device130via the network150. As another example, the processing device120may be directly connected to the imaging device110, the terminal device140, and/or the storage device130to access stored information and/or data. In some embodiments, the processing device120may be implemented on a cloud platform.
Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.

The storage device130may store data, instructions, and/or any other information. In some embodiments, the storage device130may store data obtained from the terminal device140and/or the processing device120. The data may include imaging data acquired by the processing device120, algorithms and/or models for processing the imaging data, etc. For example, the storage device130may store imaging data (e.g., SPECT images, SPECT projection data, etc.) acquired by the imaging device110. As another example, the storage device130may store one or more algorithms (e.g., a maximum likelihood expectation maximization (MLEM) algorithm) for processing the imaging data, etc. In some embodiments, the storage device130may store data and/or instructions that the processing device120may execute or use to perform exemplary methods/systems described in the present disclosure. In some embodiments, the storage device130may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage devices may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage devices may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memories may include a random access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc. In some embodiments, the storage device130may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.

In some embodiments, the storage device130may be connected to the network150to communicate with one or more other components in the imaging system100(e.g., the processing device120, the terminal device140, etc.). One or more components in the imaging system100may access the data or instructions stored in the storage device130via the network150. In some embodiments, the storage device130may be directly connected to or communicate with one or more other components in the imaging system100(e.g., the processing device120, the terminal device140, etc.). In some embodiments, the storage device130may be part of the processing device120.

The terminal device140may include a mobile device140-1, a tablet computer140-2, a laptop computer140-3, or the like, or any combination thereof. In some embodiments, the mobile device140-1may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
In some embodiments, the smart home device may include a smart lighting device, a control device of an intelligent electrical apparatus, a smart monitoring device, a smart television, a smart video camera, an interphone, or the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, a footgear, eyeglasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the mobile device may include a mobile phone, a personal digital assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, a laptop, a tablet computer, a desktop, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include a Google Glass™, an Oculus Rift™, a Hololens™, a Gear VR™, etc. In some embodiments, the terminal device140may be part of the processing device120.

The network150may include any suitable network that can facilitate the exchange of information and/or data for the imaging system100. In some embodiments, one or more components of the imaging device110(e.g., a SPECT device, a SPECT-CT device, etc.), the terminal device140, the processing device120, the storage device130, etc., may communicate information and/or data with one or more other components of the imaging system100via the network150. For example, the processing device120may obtain data from the imaging device110via the network150. As another example, the processing device120may obtain user instructions from the terminal device140via the network150. The network150may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network ("VPN"), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof. Merely by way of example, the network150may include a cable network, a wireline network, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network150may include one or more network access points. For example, the network150may include wired and/or wireless network access points such as base stations and/or internet exchange points through which one or more components of the imaging system100may be connected to the network150to exchange data and/or information.

In some embodiments, a three-dimensional coordinate system160may be used in the imaging system100as illustrated inFIG.1. A first axis may be parallel to the lateral direction of a table (e.g., the x-axis direction as shown inFIG.1). A second axis may be parallel to the longitudinal direction of the table (e.g., the z-axis direction as shown inFIG.1).
A third axis may be parallel to a vertical direction of the table (e.g., the y-axis direction as shown inFIG.1). The origin of the three-dimensional coordinate system160may be any point in the space. In some embodiments, the origin of the three-dimensional coordinate system160may be determined by an operator. In some embodiments, the origin of the three-dimensional coordinate system160may be determined by the imaging system100.

It should be noted that the above description of the imaging system100is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. For example, the assembly and/or function of the imaging system100may be varied or changed according to specific implementation scenarios.

FIG.2is a schematic diagram illustrating hardware and/or software components of an exemplary computing device200on which the processing device120may be implemented according to some embodiments of the present disclosure. As illustrated inFIG.2, the computing device200may include a processor210, a storage220, an input/output (I/O)230, and a communication port240.

The processor210may execute computer instructions (program code) and perform functions of the processing device120in accordance with techniques described herein. The computer instructions may include, for example, routines, programs, objects, components, signals, data structures, procedures, modules, and functions, which perform particular functions described herein. For example, the processor210may process data obtained from the imaging device110, the terminal device140, the storage device130, and/or any other component of the imaging system100. Specifically, the processor210may process one or more measured data sets obtained from the imaging device110. For example, the processor210may generate an image based on the data set(s). In some embodiments, the generated image may be stored in the storage device130, the storage220, etc. In some embodiments, the generated image may be displayed on a display device by the I/O230. In some embodiments, the processor210may perform instructions obtained from the terminal device140. In some embodiments, the processor210may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof.

Merely for illustration, only one processor is described in the computing device200. However, it should be noted that the computing device200in the present disclosure may also include multiple processors. Thus, operations and/or method steps that are performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors.
For example, if in the present disclosure the processor of the computing device200executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two or more different processors jointly or separately in the computing device200(e.g., a first processor executes operation A and a second processor executes operation B, or the first and second processors jointly execute operations A and B).

The storage220may store data/information obtained from the imaging device110, the terminal device140, the storage device130, or any other component of the imaging system100. In some embodiments, the storage220may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. For example, the mass storage device may include a magnetic disk, an optical disk, a solid-state drive, etc. The removable storage device may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. The volatile read-and-write memory may include a random access memory (RAM). The RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc. The ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc. In some embodiments, the storage220may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure. For example, the storage220may store a program for the processing device120for generating a SPECT image based on a first projection data set associated with a first portion of photons, each of which has a first energy, and a second projection data set associated with a second portion of photons, each of which has a second energy.

The I/O230may input or output signals, data, and/or information. In some embodiments, the I/O230may enable user interaction with the processing device120. In some embodiments, the I/O230may include an input device and an output device. Exemplary input devices may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Exemplary output devices may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Exemplary display devices may include a liquid crystal display (LCD), a light-emitting diode (LED)-based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT), or the like, or a combination thereof.

The communication port240may be connected with a network (e.g., the network150) to facilitate data communications. The communication port240may establish connections between the processing device120and the imaging device110, the terminal device140, or the storage device130. The connection may be a wired connection, a wireless connection, or a combination of both that enables data transmission and reception. The wired connection may include an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof. The wireless connection may include a Bluetooth network, a Wi-Fi network, a WiMax network, a WLAN, a ZigBee network, a mobile network (e.g., 3G, 4G, 5G, etc.), or the like, or any combination thereof.
In some embodiments, the communication port240may be a standardized communication port, such as RS232, RS485, etc. In some embodiments, the communication port240may be a specially designed communication port. For example, the communication port240may be designed in accordance with the digital imaging and communications in medicine (DICOM) protocol. FIG.3is a schematic diagram illustrating hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure. As illustrated inFIG.3, the mobile device300may include a communication platform310, a display320, a graphics processing unit (GPU)330, a central processing unit (CPU)340, an I/O350, a memory360, and a storage390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device300. In some embodiments, a mobile operating system370(e.g., iOS, Android, Windows Phone, etc.) and one or more applications380may be loaded into the memory360from the storage390in order to be executed by the CPU340. The applications380may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from the processing device120. User interactions with the information stream may be achieved via the I/O350and provided to the processing device120and/or other components of the imaging system100via the network150. To implement various modules, units, and functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. The hardware elements, operating systems, and programming languages of such computers are conventional in nature, and it is presumed that those skilled in the art are adequately familiar therewith to adapt those technologies to generate an image as described herein. A computer with user interface elements may be used to implement a personal computer (PC) or another type of work station or terminal device, although a computer may also act as a server if appropriately programmed. It is believed that those skilled in the art are familiar with the structure, programming and general operation of such computer equipment and as a result, the drawings should be self-explanatory. FIG.4is a schematic diagram illustrating a cross-sectional view of a portion of an exemplary imaging device according to some embodiments of the present disclosure. In some embodiments, an imaging device400illustrated inFIG.4may be part of the imaging device110. As shown inFIG.4, the imaging device400may include a table410, a collimator420, and a detector430. The table410may be configured to support an object to be examined. In some embodiments, the object may include the neck, the heart, the abdomen, a lung, or the like, or any combination thereof. In some embodiments, the object may be injected with a radioactive tracer before being scanned by the imaging device400. For example, the object may be scanned by the imaging device400in a predetermined time period after the radioactive tracer is injected into the object. As another example, the object may be scanned by the imaging device400in a certain time period after the tracer distribution in the object reaches equilibrium or steady-state. 
In some embodiments, the radioactive tracer may include technetium-99 (Tc-99), fluorine-18 (F-18), indium-111 (In-111), iodine-131 (I-131), or the like, or any combination thereof. An energy spectrum of the radioactive tracer may have one or more characteristic peaks each of which corresponds to an energy. For brevity, the one or more characteristic peaks of the energy spectrum of the radioactive tracer may also be referred to as the one or more characteristic peaks of the radioactive tracer. As used herein, a characteristic peak corresponding to an energy refers to the main energy emission from the decay of the radioactive tracer injected in the object. In some embodiments, an energy range (also referred to as an energy window of the detector430) of the photons detected by the detector430may be associated with the energies corresponding to the characteristic peak(s) of the radioactive tracer. Specifically, the energy window may include energies within an energy threshold range around the energy of a characteristic peak. For example, if an energy corresponding to a characteristic peak is 150 keV, and the energy threshold range is 25 keV, then the energy window may be determined as [125 keV, 175 keV]. In some embodiments, the energy window may be determined based on one or more energy values. In some embodiments, the energy value may be set according to a default setting of the imaging device400or preset by a user or operator via the terminal device140. It should be noted that if a measured energy of a photon falls within an energy window corresponding to a specific energy, the energy of the photon is considered to be the specific energy, i.e., the photon has the specific energy.

In some embodiments, the photons emitted from the object (e.g., the object injected with the radioactive tracer having one or more characteristic peaks) may be measured as having one or more energies corresponding to the characteristic peak(s). For example, for technetium-99 (Tc-99) having a characteristic peak at 141 keV, photons emitted by Tc-99 (or the object injected with Tc-99) may have an energy of 141 keV. In some embodiments, if the measured energy of a photon is close to a specific energy (e.g., a difference between the measured energy and the specific energy is less than a threshold), the photon may be considered as having the specific energy. For instance, a photon with a measured energy of 137 keV may be regarded as a photon with an energy of 141 keV (or the photon with a measured energy of 137 keV may be regarded as having an energy of 141 keV). As another example, for indium-111 (In-111) having two characteristic peaks at 171 keV and 245 keV, respectively, two energy windows corresponding to the energy of 171 keV and the energy of 245 keV may be determined by an energy value (e.g., 208 keV). For instance, a first energy window corresponding to the energy of 171 keV may include energies less than the energy value (e.g., 208 keV), and a second energy window corresponding to the energy of 245 keV may include energies that exceed the energy value (e.g., 208 keV). It should be noted that the energy value of 208 keV may be assigned to either the first energy window or the second energy window. In this case, if a measured energy of a photon is 190 keV, the photon may be regarded as a photon with an energy of 171 keV (or the photon with a measured energy of 190 keV may be regarded as having an energy of 171 keV). If a measured energy of a photon is 210 keV, the photon may be regarded as a photon with an energy of 245 keV (or the photon with a measured energy of 210 keV may be regarded as having an energy of 245 keV).
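To make the window-assignment logic above concrete, the following is a minimal sketch (in Python, using the In-111 example values from this passage; the function names and structure are illustrative only and are not part of the disclosed system):

```python
# Minimal sketch of the energy-window assignment described above, using the
# In-111 example (peaks at 171 keV and 245 keV, split at 208 keV).

def make_windows(peaks, split_kev):
    """Map each characteristic-peak energy to a test for its energy window."""
    low_peak, high_peak = sorted(peaks)
    return {
        low_peak: lambda e: e < split_kev,    # first window: below the split
        high_peak: lambda e: e >= split_kev,  # second window: at/above it
    }

def assign_energy(measured_kev, windows):
    """Return the peak energy whose window contains the measured energy."""
    for peak, in_window in windows.items():
        if in_window(measured_kev):
            return peak
    return None

windows = make_windows(peaks=(171.0, 245.0), split_kev=208.0)
print(assign_energy(190.0, windows))  # 171.0 -> treated as a 171 keV photon
print(assign_energy(210.0, windows))  # 245.0 -> treated as a 245 keV photon
```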
In some embodiments, the characteristic peak(s) of the radioactive tracer may be designated or inputted by a user via the terminal device140. For example, for a radioactive tracer whose energy spectrum has no characteristic peak, the user may designate characteristic peak(s) corresponding to one or more energies of the radioactive tracer. In some embodiments, two or more radioactive tracers each of which has only one characteristic peak (also referred to as a single-energy isotope tracer or a single-peak isotope tracer) may be injected into the object at a certain ratio. For example, two single-energy isotope tracers having different characteristic peaks may be injected into the object at a certain ratio. The two single-energy isotope tracers at the certain ratio may be equivalent to a radioactive tracer having two characteristic peaks (also referred to as a dual-energy isotope tracer or a dual-peak isotope tracer).

The collimator420may be configured to collimate the photons emitted from the object. In some embodiments, the collimator420may be a multi-pinhole collimator having at least two sets of pinholes. Each set of pinholes may include one or more pinholes. For example, as shown inFIG.4, the collimator420may have a first set of first pinholes423and a second set of second pinholes425. In some embodiments, one or more second pinholes (e.g., each second pinhole) of the second set of second pinholes425may be equipped with filters (e.g., a filter422,424, or426). The filters may be configured to filter the photons. In other words, the filters may prevent a portion of the photons from passing through the second pinholes and allow the remaining portion of the photons to pass through the second pinholes. As a result, the count or number of photons that pass through the second pinholes and reach the detector is changed. In some embodiments, one or more first pinholes (e.g., each first pinhole) of the first set of first pinholes423may be equipped with first filters, and one or more second pinholes (e.g., each second pinhole) of the second set of second pinholes425may be equipped with second filters different from the first filters. For example, the second filters and the first filters may have different thicknesses. As another example, the material of the second filters may be different from the material of the first filters.

In some embodiments, the collimator420may be made of a heavy metal such as lead, tungsten, gold, etc. The thickness of the collimator420may relate to the energy of photons that the imaging system100is intended to detect. For example, the thickness of the collimator420may be large enough to prevent the majority of the photons from penetrating the collimator420, so that photons primarily pass through the pinholes on the collimator420. In some embodiments, each pinhole (i.e., each first pinhole or second pinhole) of the collimator420may have a size (or diameter), a shape, etc. In some embodiments, the sizes of the pinholes of the collimator420may be the same or different. For example, if a pinhole is relatively close to the object or a field of view (FOV)450of the imaging device400, the size of the pinhole may be relatively small. In some embodiments, the shapes of the pinholes of the collimator420may be the same or different. For example, the shapes of the pinholes may include a funnel shape, a "V" shape, a double conical shape, or the like, or any combination thereof.
In some embodiments, each set of pinholes may include two or more pinholes. The first set of first pinholes423may be patterned such that first projections (e.g., as indicated by the solid lines512inFIGS.5A and5B) of the FOV450of the imaging device400through the first set of first pinholes423onto the detector430have no overlapping region. As used herein, a projection of the FOV of the imaging device through a pinhole onto the detector corresponds to a region where the photons emitted from the object in the FOV fall on the detector after passing through the pinhole. Thus, the first projections having no overlapping region means that photons passing through different first pinholes fall in different regions on the detector. The second set of second pinholes425may be patterned such that second projections (e.g., as indicated by the dashed lines514inFIGS.5A and5B) of the FOV450of the imaging device400through the second set of second pinholes425onto the detector430have no overlapping region. Similarly, the second projections having no overlapping region means that photons passing through different second pinholes fall in different regions on the detector. In some embodiments, the first projections and/or the second projections may cover the entire detector430.

In some embodiments, at least one of the first projections may overlap with at least one of the second projections. In other words, a detector unit corresponding to an overlapping region between the at least one of the first projections and the at least one of the second projections may detect both photons passing through a first pinhole corresponding to the at least one of the first projections and photons passing through a second pinhole corresponding to the at least one of the second projections. In some embodiments, because at least one of the first projections overlaps with at least one of the second projections, the collimator having the at least two sets of pinholes may also be referred to as a spectral multiplexing collimator in the present disclosure. As illustrated inFIG.4, solid lines represent first auxiliary lines of first projections of the FOV450of the imaging device400through the first pinhole(s)423onto the detector430, and dashed lines represent second auxiliary lines of second projections of the FOV450through the second set of second pinholes425onto the detector430. An auxiliary line may correspond to a projection line of the FOV (or object). As used herein, a projection line refers to a line from a site (in the FOV) of a photon emitted from the object to a site where the photon falls on the detector. Region A inFIG.4illustrates an exemplary overlapping region between the first projections and the second projections. In some embodiments, at least one of the first projections may overlap with at least two of the second projections (e.g., as shown inFIGS.5A and5B).

In some embodiments, the second set of second pinholes may be interleaved between the first set of first pinholes. In other words, the second pinholes may be arranged in one or more areas between the first pinholes. For example, the first set of first pinholes and the second set of second pinholes may be arranged in a manner similar to that illustrated inFIG.6.
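The no-overlap and multiplexing conditions described above can be illustrated with a rough one-dimensional sketch (a simplified flat geometry with made-up distances and pinhole positions, not the configuration ofFIG.4orFIG.6): an FOV interval is projected through each pinhole onto a detector line, and footprint overlaps are checked.

```python
# Simplified 1-D pinhole-projection sketch (illustrative geometry only).
# The FOV is an interval on the object plane (y = 0), the collimator sits
# at y = h, and the detector at y = H; a ray from object point x through a
# pinhole at position p lands on the detector at p + (p - x) * (H - h) / h.

def footprint(pinhole, fov, h, H):
    """Detector interval covered by the FOV as seen through one pinhole."""
    m = (H - h) / h  # pinhole magnification factor
    a = pinhole + (pinhole - fov[0]) * m
    b = pinhole + (pinhole - fov[1]) * m
    return (min(a, b), max(a, b))

def overlaps(i1, i2):
    return i1[0] < i2[1] and i2[0] < i1[1]

fov = (-2.0, 2.0)                    # object interval, arbitrary units
first_pinholes = [-3.0, 0.0, 3.0]    # unfiltered set
second_pinholes = [-1.5, 1.5]        # filtered set, interleaved
f1 = [footprint(p, fov, h=5.0, H=10.0) for p in first_pinholes]
f2 = [footprint(p, fov, h=5.0, H=10.0) for p in second_pinholes]
# Footprints within one set do not overlap here, while footprints across the
# two sets do, which is the multiplexing that the filters later encode.
print(any(overlaps(a, b) for a in f1 for b in f2))  # True
```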
In some embodiments, the first set of first pinholes may include a first count of rows of first pinholes, and the second set of second pinholes may include a second count of rows of second pinholes (e.g., as shown inFIG.6). One or more rows (e.g., each row) of pinholes may be arranged in any direction, such as a transaxial direction (e.g., the x-axis direction inFIG.1) or an axial direction (e.g., the z-axis direction inFIG.1) of the imaging device400. In some embodiments, the pinholes in one or more rows (e.g., each row) of first pinholes may be equally spaced. In some embodiments, the pinholes in one or more rows (e.g., each row) of second pinholes may be equally spaced. In some embodiments, the spacing between pinholes in a row of first pinholes may be equal to the spacing between pinholes in a row of second pinholes. In some embodiments, at least one second pinhole of the second set of pinholes may be arranged at a center of a region encompassing four first pinholes adjacent to the at least one second pinhole. Alternatively or additionally, in some embodiments, at least one first pinhole of the first set of pinholes may be arranged at a center of a region encompassing four second pinholes adjacent to the at least one first pinhole. In some embodiments, the second count may be greater than, equal to, or less than the first count. For example, the first set of first pinholes may include 7 rows of first pinholes in the axial direction, and the second set of second pinholes may include 6 rows of second pinholes in the axial direction.

In some embodiments, the collimator420may be plate shaped or ring shaped. The detector430may also be plate shaped or ring shaped. For example, as shown inFIG.4, the collimator420may be configured as a collimator plate, and the detector430may also be configured as a detector plate, which is not intended to limit the scope of the present disclosure. As another example, the collimator420may be configured as a collimator plate, and the detector430may be ring shaped (e.g., a cylinder). As still another example, the collimator420may be ring shaped, and the detector430may be configured as a detector plate. As still a further example, the collimator420may be ring shaped, and the detector430may be ring shaped accordingly. The detector430may be concentric with the collimator420. More descriptions about the ring shaped collimator and/or ring shaped detector may be found elsewhere in the present disclosure (e.g.,FIG.5A,FIG.5B, andFIG.6and the descriptions thereof). In some embodiments, if the collimator420and/or the detector430are plate shaped, the collimator420and/or the detector430may be set on a rotatable gantry to rotate around the object when the object is scanned by the imaging device400. In some embodiments, the imaging device400may include two or more collimator plates to achieve photon detection from multiple sampling angles.

The filter may filter photons with different energies at different ratios. Specifically, for photons of different energies, the filter may block different fractions of the count or number of the photons. In some embodiments, after being filtered by the filter, a ratio of a count or number of photons with a first energy to a count or number of photons with a second energy may be changed. Because a second pinhole is equipped with a filter, the ratio of photons with different energies may be changed after the photons pass through the second pinhole. If a first pinhole is not equipped with a filter, the ratio of photons with different energies may not be changed after the photons pass through the first pinhole.
Therefore, a first ratio of photons with different energies passing through the first set of first pinholes and a second ratio of photons with different energies passing through the second set of second pinholes may be different. For example, the filter may filter (or shield) 20% of a total count of photons of a first energy (e.g., a relatively high energy), and the filter may filter 40% of a total count of photons of a second energy (e.g., a relatively low energy). According to the above-mentioned example, for a beam of photons in which the ratio of the count (or number) of photons with the two energies is 1:1, a first ratio of the count of photons (among the beam of photons) with the first energy to the count of photons (among the beam of photons) with the second energy may be substantially 1:1 after the beam of photons passes through the first set of first pinholes423. A second ratio of the count of photons (among the beam of photons) with the first energy to the count of photons (among the beam of photons) with the second energy may be 8:6 after the beam of photons passes through the second set of second pinholes425. It should be noted that if a first pinhole is equipped with a first filter and a second pinhole is equipped with a second filter different from the first filter, a change of the ratio of photons with different energies after the photons pass through the first pinhole may be different from a change of the ratio of photons with different energies after the photons pass through the second pinhole. As a result, a first ratio of photons with different energies passing through the first set of first pinholes and a second ratio of photons with different energies passing through the second set of second pinholes may be different.

According to the present disclosure, if the at least one of the first projections is overlapped with the at least one of the second projections (i.e., multiplexing projections are formed), the multiplexing projections may be encoded by equipping the second set of second pinholes with filters (or equipping the first set of first pinholes and the second set of second pinholes with different filters). That is, using the filters, the multiplexing projections on the detector may be formed by different ratios of photons with different energies, which may allow for the decomposition of the multiplexing projections. Specifically, for a multiplexing projection (e.g., a projection on the detector unit435in region A inFIG.4), the detector unit435may detect a cumulative count of photons with the first energy and a cumulative count of photons with the second energy. A first total count of photons passing through the first pinhole and a second total count of photons passing through the second pinhole may be determined based on the cumulative count of photons with the first energy, the cumulative count of photons with the second energy, and the difference between the first ratio and the second ratio. Further, an image may be generated based on a plurality of first total counts (also referred to as a first piece of data) and a plurality of second total counts (also referred to as a second piece of data). More descriptions for generating an image based on the first piece of data and the second piece of data may be found elsewhere in the present disclosure (e.g.,FIG.8and the descriptions thereof).
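The encoding in the example above can be checked with simple arithmetic; the following sketch uses the 20%/40% filtering figures quoted in this passage with made-up photon counts:

```python
# Numeric check of the spectral-encoding example above (counts are made up).
n_high = n_low = 1000          # 1:1 beam of high- and low-energy photons
f_high, f_low = 0.8, 0.6       # filter transmits 80% high, 60% low energy

# First pinholes carry no filter: the 1:1 ratio is preserved.
first = (n_high, n_low)                    # -> (1000, 1000), ratio 1:1
# Second pinholes are filtered: the ratio becomes 8:6.
second = (n_high * f_high, n_low * f_low)  # -> (800, 600), ratio 8:6
print(first, second)
```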
In some embodiments, the filter may be made of a heavy metal. For example, the filter may include a heavy metal sheet of a certain thickness. In some embodiments, the heavy metal sheet may include a tungsten sheet, a gold sheet, a copper sheet, a lead sheet, or the like, or any combination thereof. In some embodiments, the filter may have a thickness in a range from 0.01 mm to 1 mm, such as 0.1 mm, 0.2 mm, 0.35 mm, 0.5 mm, etc. It should be noted that filters with different thicknesses and/or materials may have different transmissivities for different energies, which may affect the imaging performance of the imaging device400. In some embodiments, the material and/or the thickness of the filter may be selected based on the characteristics of the radioactive tracer, system geometry parameters of the imaging device400, a target (or desired) sensitivity of the imaging device, or the like, or a combination thereof. Exemplary system geometry parameters may include parameters associated with a size of the collimator (e.g., a radius of a transaxial section of a cylindrical collimator, an area of a collimator plate), a size of the detector, sizes and positions of the pinholes on the collimator, a size of the FOV, a resolution of the detector, or the like, or any combination thereof. Specifically, the thickness of the filter may be negatively correlated with the sensitivity of the imaging device and positively correlated with the spatial resolution of the imaging device. Thus, the design of the filter thickness may need to balance the sensitivity of the imaging device against the spatial resolution of the imaging device. More descriptions regarding the trade-off between the sensitivity of the imaging device and the spatial resolution of the imaging device may be found elsewhere in the present disclosure (e.g.,FIG.8and the descriptions thereof).

In some embodiments, the filter may be disposed at any position of each second pinhole, as long as the photons can be filtered when passing through the second pinhole. For example, the filter (e.g., the filter422) may be disposed in a central region of the second pinhole. As another example, the filter (e.g., the filter424) may be disposed on a surface of the collimator420facing the table410, that is, the filter may cover the second pinhole. As a further example, the filter (e.g., the filter426) may be disposed on a surface of the collimator420facing the detector430. In some embodiments, the shape of the filter may match the corresponding pinhole. In some embodiments, an area of the filter may be greater than a size of the corresponding pinhole(s). In some embodiments, the filter may be embedded in a second pinhole (e.g., the filter422), cover a surface of the collimator close to the table corresponding to a second pinhole (e.g., the filter424), cover a surface of the collimator close to the detector corresponding to a second pinhole (e.g., the filter426), or be disposed at a distance (e.g., 0.5 mm, 1 mm, etc.) above a second pinhole. In some embodiments, one or more filters and the collimator may be configured as one piece. In some embodiments, two or more filters may be configured as a filter plate. In such cases, the filter plate may include one or more holes corresponding to one or more first pinholes.
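One common way to reason about the thickness/material dependence noted above is an exponential attenuation (Beer-Lambert) model; the sketch below uses placeholder attenuation coefficients rather than measured values for any particular filter material:

```python
import math

# Beer-Lambert attenuation sketch: transmissivity = exp(-mu * t), where mu is
# the linear attenuation coefficient (1/mm, energy- and material-dependent)
# and t is the filter thickness (mm). The mu values here are placeholders.

def transmissivity(mu_per_mm: float, thickness_mm: float) -> float:
    return math.exp(-mu_per_mm * thickness_mm)

# A thicker filter passes fewer photons at both energies, and lower-energy
# photons (larger mu) are attenuated more, which is what encodes the ratio.
for t in (0.1, 0.2, 0.35, 0.5):
    f_high = transmissivity(mu_per_mm=1.0, thickness_mm=t)  # higher energy
    f_low = transmissivity(mu_per_mm=2.5, thickness_mm=t)   # lower energy
    print(f"t={t} mm: f_high={f_high:.2f}, f_low={f_low:.2f}")
```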
The detector430may be configured to detect the photons collimated by the collimator420. The detector430may be used for multiplex detection of the photons with different energies each of which corresponds to one characteristic peak of the radioactive tracer. For example, the detector430may detect a cumulative count of photons with the first energy and a cumulative count of photons with the second energy. That is, the detector430may perform the detection of multiplexing channels of signals (e.g., the cumulative count of photons with the first energy and the cumulative count of photons with the second energy). A portion of the photons with the first energy may pass through the first set of first pinholes and be detected by the detector, and another portion of the photons with the first energy may pass through the second set of second pinholes and be detected by the detector. Similarly, a portion of the photons with the second energy may pass through the first set of first pinholes and be detected by the detector, and another portion of the photons with the second energy may pass through the second set of second pinholes and be detected by the detector.

In some embodiments, the detector430may include a plurality of detector units each of which includes multiple channels. In some embodiments, if first projections and second projections have an overlapping region on the detector, each detector unit in the overlapping region may correspond to a first pinhole and a second pinhole. In other words, the detector unit may detect photons from both the first pinhole and the second pinhole. A count and/or size of the detector units may be associated with a spatial resolution of the imaging device400. In some embodiments, a count of the multiplexing channels may be less than or equal to a count of the sets of pinholes. For example, if the collimator420includes three sets of pinholes (e.g., a first set of pinholes, a second set of pinholes equipped with first filters, and a third set of pinholes equipped with second filters), for a dual-energy isotope tracer, each detector unit in the detector430may have two channels.

In some embodiments, a channel may correspond to an energy window corresponding to one of the one or more characteristic peaks of the radioactive tracer (or the channel may correspond to a characteristic peak of the radioactive tracer). The detector unit may classify projections of the photons into energy bins based on the energy window. For example, the detector unit may detect an energy of each photon, identify which energy window the energy of the photon belongs to, and add 1 to the cumulative count of photons in the channel corresponding to the energy window. For example, photons emitted by the radioactive tracer In-111 may be associated with two energy windows, e.g., a first energy window of [100 keV, 208 keV) and a second energy window of [208 keV, 300 keV]. For a specific photon with an energy of 200 keV, the detector unit may identify that the energy of the specific photon is within the first energy window, and may add 1 to the cumulative count of photons in the channel corresponding to the first energy window. Merely by way of example, for a radioactive tracer having two characteristic peaks, as shown inFIG.4, the detector unit435may classify the detected photons (emitted by the radioactive tracer) based on two energy windows each of which corresponds to one of the two characteristic peaks. Each characteristic peak may correspond to an energy.
The detector unit435may detect photons from both a first pinhole (e.g., photons projected through a projection line corresponding to the auxiliary line442) and a second pinhole (e.g., photons projected through a projection line corresponding to the auxiliary line444), and generate first projection data associated with a first portion of photons each of which has a first energy, and second projection data associated with a second portion of photons each of which has a second energy. Then a first projection data set and a second projection data set measured by the plurality of detector units of the detector430may be acquired and used to generate an image. More descriptions regarding image generation based on the first projection data set and the second projection data set may be found elsewhere in the present disclosure (e.g.,FIG.8and the descriptions thereof).

It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. Apparently, for persons having ordinary skills in the art, multiple variations and modifications may be conducted under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, some other components/modules may be added into the imaging device400. For example, the imaging device400may further include a first cover plate configured to adjustably cover the second set of second pinholes. The imaging device400may be switched between a working mode using the second set of second pinholes and another working mode without using the second set of second pinholes, by using (e.g., moving) the first cover plate. In some embodiments, the first cover plate may be made of a material the same as or different from the material of the collimator. For example, the material of the first cover plate may include a heavy metal such as lead and tungsten. As another example, the thickness of the first cover plate may relate to the energy of photons that the imaging system100is intended to detect. In some embodiments, the filters operably coupled to the collimator420may be replaced according to different needs.

FIG.5Ais a schematic diagram illustrating a cross-sectional view of a portion of an exemplary imaging device along a transaxial direction according to some embodiments of the present disclosure.FIG.5Bis a schematic diagram illustrating a cross-sectional view of the imaging device inFIG.5Aalong an axial direction according to some embodiments of the present disclosure. The imaging device500may include a ring shaped collimator510and a ring shaped detector520. In some embodiments, the imaging device500may further include a table (not shown) configured to support an object, a gantry (not shown) configured to support the collimator510and the detector520, or the like. As illustrated inFIG.5A, the solid lines512may represent auxiliary lines corresponding to projection lines of a FOV through the first set of first pinholes, and the dashed lines514may represent auxiliary lines corresponding to projection lines of the FOV through the second set of second pinholes. As shown inFIG.5A, the cross-sections of the collimator510and the detector520along the transaxial direction (i.e., the x-axis direction inFIG.1) of the imaging device500are ring shaped. The detector520may be concentric with the collimator510.
The collimator510may include a first set of first pinholes (indicated by intersection points of the solid lines on the collimator510) and a second set of second pinholes (indicated by intersection points of the dashed lines on the collimator510). Each second pinhole may be equipped with a filter (not shown). The first set of first pinholes may be patterned such that first projections of a FOV of the imaging device500through the first set of first pinholes onto the detector520have no overlapping region. The first projections may cover the entire detector520(see the solid lines512inFIGS.5A and5B). The second set of second pinholes may be patterned such that second projections of the FOV of the imaging device500through the second set of second pinholes onto the detector520have no overlapping region. The second projections may cover the entire detector520(see the dashed lines514inFIGS.5A and5B). In such cases, by setting the first pinholes and the second pinholes to enable the first projections and the second projections to cover the entire detector520, the sensitivity of the imaging device500may be improved.

For example, the first set of first pinholes may include a first count of rows of first pinholes, and the second set of second pinholes may include a second count of rows of second pinholes. In some embodiments, each row of pinholes may be arranged on a plane perpendicular to the central axis of the collimator510. In other words, each row of pinholes may be arranged in the transaxial direction (i.e., the x-axis direction inFIG.1) of the imaging device500. In some embodiments, the pinholes in each row of first pinholes may be (substantially) equally spaced, and the pinholes in each row of second pinholes may be (substantially) equally spaced. In some embodiments, the spacing between pinholes in a row of first pinholes may be equal to the spacing between pinholes in a row of second pinholes. More descriptions about a ring shaped collimator may be found elsewhere in the present disclosure (e.g.,FIG.6and the descriptions thereof).

FIG.6is a schematic diagram illustrating an exemplary arrangement of pinholes of a ring shaped collimator according to some embodiments of the present disclosure.FIG.6is a front view of a ring shaped collimator (referred to as a collimator for brevity). As shown inFIG.6, the collimator600(represented by the grey portion640) may include a first set of first pinholes (indicated by trapezoids610) and a second set of second pinholes (each second pinhole is obscured by a black rectangle620inFIG.6). Each second pinhole may be equipped with a filter (indicated by the black rectangle) covering the second pinhole. The second set of second pinholes may be interleaved between the first set of first pinholes. For example, the first set of first pinholes may include 7 rows of first pinholes indicated by dashed lines R1, R3, R5, R7, R9, R11, and R13, and the second set of second pinholes may include 6 rows of second pinholes indicated by dashed lines R2, R4, R6, R8, R10, and R12. Each row of pinholes may be arranged on a plane perpendicular to a central axis (i.e., the z-axis direction) of the collimator600. The sizes of the pinholes in each row may be the same. It should be noted that because the filters only need to cover the second pinholes, the sizes of the black rectangles inFIG.6do not necessarily indicate the sizes of the second pinholes. The sizes of the graphics (i.e., the trapezoids) representing the first pinholes may reflect the sizes of the first pinholes.
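An interleaved layout of this kind can be sketched programmatically; the row counts, pitches, and half-pitch angular offset below merely mimic the alternating pattern ofFIG.6and are not the disclosed dimensions:

```python
# Sketch of an interleaved pinhole layout on a cylindrical collimator.
# Rows alternate between the first (unfiltered) and second (filtered) sets,
# and odd rows are offset by half a pitch; all dimensions are illustrative.

def interleaved_layout(n_rows=13, per_row=12, row_pitch_mm=20.0):
    first, second = [], []
    for r in range(n_rows):
        z = r * row_pitch_mm                      # axial position of the row
        offset = 0.5 if r % 2 else 0.0            # half-pitch angular offset
        angles = [(k + offset) * 360.0 / per_row for k in range(per_row)]
        target = first if r % 2 == 0 else second  # R1, R3, ... -> first set
        target.extend((z, a) for a in angles)
    return first, second

first_set, second_set = interleaved_layout()
print(len(first_set), len(second_set))  # 7 rows x 12 vs 6 rows x 12 pinholes
```

With the half-pitch offset, each second pinhole sits at the center of the region bounded by its four nearest first pinholes, matching the arrangement described in the text.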
When an FOV is located at the center of the ring shaped collimator600, the sizes of pinholes (first pinholes or second pinholes) that are relatively close to the FOV may be relatively small. For example, if an FOV is located at a position corresponding to row R7, sizes of first pinholes in row R7may be smaller than sizes of first pinholes in row R1or row R13. As another example, if an FOV is located at a position corresponding to row R7, sizes of second pinholes in row R6may be smaller than sizes of second pinholes in row R2or row R12. The pinholes in each row may be equally spaced. In some embodiments, a second row of first pinholes (e.g., row R3) may be offset by a certain angle with respect to a first row of first pinholes (e.g., row R1). In some embodiments, at least one second pinhole may be arranged at a center of a region encompassing four first pinholes. For example, a region630may encompass four first pinholes612,614,616, and618, and a second pinhole622may be located at the center of the region630. In some embodiments, at least one first pinhole may be arranged at a center of a region encompassing four second pinholes. Accordingly, by setting the first set of first pinholes and the second set of second pinholes as illustrated inFIG.6, a relatively large number of photons can pass through the pinholes, thereby improving the sensitivity of an imaging device using the collimator600.

FIG.7is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure. In some embodiments, the processing device120may be implemented on a computing device200(e.g., the processor210) illustrated inFIG.2or a CPU340as illustrated inFIG.3. As illustrated inFIG.7, the processing device120may include an obtaining module710and an image generation module720. Each of the modules described above may be a hardware circuit that is designed to perform certain actions, e.g., according to a set of instructions stored in one or more storage media, and/or any combination of the hardware circuit and the one or more storage media.

The obtaining module710may be configured to obtain data and/or information for image generation. For example, the obtaining module710may obtain a first projection data set associated with a first portion of photons each of which has a first energy. The obtaining module710may further obtain a second projection data set associated with a second portion of photons each of which has a second energy. The image generation module720may be configured to generate an image based on the first projection data set and the second projection data set. In some embodiments, the image generation module720may determine a first piece of data corresponding to a first set of first pinholes and a second piece of data corresponding to a second set of second pinholes based on the first projection data set, the second projection data set, and a first matrix associated with the filters. The image generation module720may reconstruct the image based on the first piece of data and the second piece of data.
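The two-module structure ofFIG.7can be sketched as a thin software interface; the class and method names below are hypothetical (as noted above, the actual modules may be hardware circuits), and the bodies are stand-ins for the steps described in the text:

```python
# Thin sketch of the FIG.7 module structure; all names are illustrative.

class ObtainingModule:
    def __init__(self, fetch):
        self.fetch = fetch  # callable returning the two projection data sets

    def obtain(self):
        return self.fetch()

class ImageGenerationModule:
    def __init__(self, decompose, reconstruct):
        self.decompose = decompose      # projections -> (l1, l2), see Eq. (1)
        self.reconstruct = reconstruct  # (l1, l2) -> image

    def generate(self, y_first, y_second):
        l1, l2 = self.decompose(y_first, y_second)
        return self.reconstruct(l1, l2)

# Wiring example with trivial placeholders:
pipeline = ImageGenerationModule(
    decompose=lambda yf, ys: (yf, ys),
    reconstruct=lambda l1, l2: [l1, l2],
)
print(pipeline.generate([1.0], [2.0]))  # [[1.0], [2.0]]
```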
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. Apparently, for persons having ordinary skills in the art, multiple variations and modifications may be conducted under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the obtaining module710may be divided into two units configured to obtain a first projection data set and a second projection data set, respectively. As another example, some other components/modules (e.g., a storage module) may be added into the processing device120.

FIG.8is a schematic flowchart illustrating an exemplary process for generating an image according to some embodiments of the present disclosure. In some embodiments, process800may be implemented as a set of instructions (e.g., an application) stored in the storage device130, the storage220, or the storage390. The processing device120, the processor210, and/or the CPU340may execute the set of instructions, and when executing the instructions, the processing device120, the processor210, and/or the CPU340may be configured to perform the process800. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process800may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process800illustrated inFIG.8and described below is not intended to be limiting.

In810, the processing device120(e.g., the obtaining module710) may obtain a first projection data set associated with a first type of photons each of which has a first energy, and a second projection data set associated with a second type of photons each of which has a second energy. The first type of photons may also be referred to as the first portion of photons. The second type of photons may also be referred to as the second portion of photons.

In some embodiments, the first projection data set and the second projection data set may be obtained from an imaging device (e.g., the imaging device110), the storage device130, or any other storage device. For example, the imaging device may transmit the acquired first projection data set and/or second projection data set to the storage device130, or any other storage device, for storage. The processing device120may obtain the first projection data set and/or the second projection data set from the storage device130, or any other storage device. As another example, the processing device120may obtain the first projection data set and the second projection data set from the imaging device directly.

In some embodiments, the first projection data set and the second projection data set may be acquired by the imaging device (e.g., a SPECT device) in a predetermined time period after a radioactive tracer having two characteristic peaks is injected into an object. For instance, the predetermined time period may be 1-2 minutes, 1.5-2 minutes, or 1-3 minutes, etc., after the injection of the radioactive tracer. As another example, the first projection data set and the second projection data set may be acquired by the imaging device in a certain time period after the tracer distribution in the object reaches equilibrium or steady-state. The first energy and the second energy may correspond to the two characteristic peaks of the radioactive tracer, respectively. In some embodiments, the radioactive tracer may include two single-energy isotope tracers (e.g., Tc-99 and F-18) or a dual-energy isotope tracer (i.e., a tracer having two characteristic peaks, for example, In-111).
More descriptions regarding the radioactive tracer may be found elsewhere in the present disclosure (e.g.,FIG.4and the descriptions thereof). During a process for acquiring the first projection data set and the second projection data set by the imaging device, photons (including the first portion of photons and the second portion of photons) may be collimated by a collimator of the imaging device. The collimator may be a multi-pinhole collimator having a first set of first pinholes and a second set of second pinholes. Each second pinhole of the second set of second pinholes may be equipped with a filter. More descriptions regarding the collimator may be found elsewhere in the present disclosure (e.g.,FIG.4,FIG.5A, andFIG.5Band the descriptions thereof). The first projection data set may include a plurality of first sub-sets of data each of which is measured by one of a plurality of detector units of a detector of the imaging device. The second projection data set may include a plurality of second sub-sets of data each of which is measured by one of the plurality of detector units. Each first sub-set of data may correspond to one second sub-set of data. As used herein, one first sub-set of data corresponding to one second sub-set of data indicates that both the first sub-set of data and the second sub-set of data are measured by a same detector unit. For one first sub-set of data and a corresponding second sub-set of data, the first sub-set of data measured by a detector unit may be associated with a total count of photons with the first energy passing through a corresponding first pinhole and a corresponding second pinhole. Similarly, the second sub-set of data measured by the detector unit may be associated with a total count of photons with the second energy passing through the corresponding first pinhole and the corresponding second pinhole. A first ratio of a count of photons with the first energy passing through the first pinhole to a count of photons with the second energy passing through the first pinhole may be different from a second ratio of a count of photons with the first energy passing through the second pinhole to a count of photons with the second energy passing through the second pinhole. Because the filters on the second pinholes attenuate photons of the two energies differently, this difference between the first ratio and the second ratio makes the contributions of the two pinhole sets separable, and the processing device120may generate an image based on the difference between the first ratio and the second ratio. In820, the processing device120(e.g., the image generation module720) may generate an image based on the first projection data set and the second projection data set. In some embodiments, the processing device120may generate the image using a reconstruction algorithm. Exemplary reconstruction algorithms may include a maximum likelihood expectation maximization (MLEM) algorithm, an algebraic reconstruction technique (ART), a simultaneous algebraic reconstruction technique (SART), or the like, or any combination thereof. In some embodiments, the processing device120may determine a first piece of data corresponding to the first set of first pinholes and a second piece of data corresponding to the second set of second pinholes based on the first projection data set, the second projection data set, and a first matrix associated with the filters. The first piece of data may be associated with a first count of photons, among the first portion of photons and the second portion of photons, that pass through the first set of first pinholes. 
The second piece of data may be associated with a second count of photons, among the first portion of the photons and the second portion of the photons, that pass through the second set of second pinholes. In some embodiments, the first matrix may be associated with transmissivities of the photons having the first energy and the photons having the second energy passing through the filters. In some embodiments, the first matrix may be further associated with yield abundances of the radioactive tracer at the first energy and the second energy. The processing device120may reconstruct the image based on the first piece of data and the second piece of data. For example, the processing device120may reconstruct the image based on the first piece of data and the second piece of data using the MLEM algorithm. According to some embodiments of the present disclosure, for a radioactive tracer having two characteristic peaks, the processing device120may determine the first piece of data and the second piece of data according to Equation (1):

$$\begin{bmatrix} y_L \\ y_H \end{bmatrix} = \begin{bmatrix} a_L f_L & a_L \\ a_H f_H & a_H \end{bmatrix} \begin{bmatrix} l_1 \\ l_2 \end{bmatrix}, \quad (1)$$

where $y_H$ denotes the first projection data set; $y_L$ denotes the second projection data set; $a_H$ and $a_L$ denote yield abundances of the radioactive tracer at the first energy (e.g., a relatively high energy) and the second energy (e.g., a relatively low energy), respectively; $f_H$ and $f_L$ denote transmissivities of the photons having the first energy and the photons having the second energy passing through the filters; $l_1$ denotes the first piece of data associated with the first set of first pinholes; and $l_2$ denotes the second piece of data associated with the second set of second pinholes. In some embodiments, $\begin{bmatrix} a_L f_L & a_L \\ a_H f_H & a_H \end{bmatrix}$ may denote the first matrix associated with the filters. Alternatively, in some embodiments, the processing device120may determine the image at least based on a second matrix including a first sub-matrix associated with the first set of first pinholes and a second sub-matrix associated with the second set of second pinholes. In some embodiments, the second matrix may also be referred to as a system matrix associated with the acquisition of the first projection data set and the second projection data set. The system matrix may describe or correspond to the physical geometry of one or more components (e.g., the collimator, the detector, the first set of first pinholes, and the second set of second pinholes) of the imaging device with respect to the FOV. Exemplary parameters associated with the physical geometry of the one or more components may include a size of the collimator (e.g., a radius of a transaxial section of a cylindrical collimator, an area of a collimator plate), a position of the collimator (or a relative position with respect to the detector), positions of pinholes (including the first pinholes and the second pinholes) on the collimator, a size of the detector, a position of the detector (or a relative position with respect to the collimator), a size of each detector unit of the detector, a detection efficiency of the detector, a size of the FOV, or the like, or any combination thereof. The first sub-matrix associated with the first set of first pinholes may refer to a first system matrix of an imaging system without the second set of second pinholes. Similarly, the second sub-matrix associated with the second set of second pinholes may refer to a second system matrix of an imaging system without the first set of first pinholes. 
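For illustration, consider a minimal numerical sketch of the per-detector-unit unmixing in Equation (1); all abundances, transmissivities, and counts below are assumed placeholder values, not values from the disclosure. The 2x2 first matrix is invertible whenever $f_L \neq f_H$, i.e., whenever the filter attenuates the two energies differently, which is exactly the ratio difference noted above.

```python
import numpy as np

# Hedged sketch of Equation (1) for a single detector unit.
# All numeric values are illustrative assumptions.
def unmix_counts(y_lo, y_hi, a_lo, a_hi, f_lo, f_hi):
    """Solve [y_lo, y_hi]^T = A @ [l1, l2]^T for the two pinhole-set counts."""
    A = np.array([[a_lo * f_lo, a_lo],
                  [a_hi * f_hi, a_hi]])
    # det(A) = a_lo * a_hi * (f_lo - f_hi), so the system is solvable
    # only when the filter transmissivities differ at the two energies.
    return np.linalg.solve(A, np.array([y_lo, y_hi], dtype=float))

l1, l2 = unmix_counts(y_lo=120.0, y_hi=90.0,
                      a_lo=0.90, a_hi=0.95, f_lo=0.40, f_hi=0.70)
```

In practice the same solve would be repeated for every detector unit to obtain the full first and second pieces of data.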
Merely by way of example, the processing device120may determine the image based on Equation (2) as follows:

$$\begin{bmatrix} y_L \\ y_H \end{bmatrix} = \begin{bmatrix} a_L f_L & a_L \\ a_H f_H & a_H \end{bmatrix} \begin{bmatrix} G_1 \\ G_2 \end{bmatrix} x, \quad (2)$$

where $G_1$ denotes the first sub-matrix associated with the first set of first pinholes; $G_2$ denotes the second sub-matrix associated with the second set of second pinholes; and $x$ denotes the image. In some embodiments, the first sub-matrix and/or the second sub-matrix may be determined based on a calibration technique for calibrating the imaging device. For example, for determining the first sub-matrix, a plurality of first point spread functions (PSFs) of points at different positions in the FOV of the imaging device may be determined when the second set of second pinholes are covered using a second cover plate. In some embodiments, the first PSFs may be obtained, e.g., by performing a simulation for the imaging device without the second set of second pinholes. The second cover plate may be configured to prohibit photons from passing through the second pinholes. In some embodiments, the second cover plate may be the same as or different from the first cover plate described inFIG.4. The first sub-matrix may be determined based on the plurality of first PSFs according to, e.g., a convolution algorithm. Similarly, for determining the second sub-matrix, a plurality of second point spread functions (PSFs) of points at different positions in the FOV of the imaging device may be determined when the first set of first pinholes are covered using a cover plate (e.g., the second cover plate). The second sub-matrix may be determined based on the plurality of second PSFs. In some embodiments, the first sub-matrix and/or the second sub-matrix may be determined based on a simulation technique. For example, for determining the first sub-matrix, the processing device120may obtain a first simulation image generated assuming that the collimator only has the first set of first pinholes (or that the second set of second pinholes are covered using the second cover plate). The first simulation image may include a plurality of pixels each of which corresponds to a detector unit. The processing device120may determine simulation projection data corresponding to each of the plurality of pixels. The processing device120may determine the first sub-matrix based on the simulation projection data. Similarly, for determining the second sub-matrix, the processing device120may obtain a second simulation image generated assuming that the collimator only has the second set of second pinholes (or that the first set of first pinholes are covered using the second cover plate). The second simulation image may include a plurality of pixels each of which corresponds to a detector unit. The processing device120may determine simulation projection data corresponding to each of the plurality of pixels. The processing device120may determine the second sub-matrix based on the simulation projection data. In some embodiments, in order to meet different requirements (e.g., relatively high sensitivity but low resolution), the imaging device used to acquire the first projection data set and the second projection data set may be optimized to obtain an updated first projection data set and an updated second projection data set. An updated image that satisfies the requirements may be generated based on the updated first projection data set and the updated second projection data set. Generally, on one hand, the sensitivity of the imaging device may be associated with a thickness of the filter. 
For example, when considering a filter made of a specific material, the sensitivity of the imaging device may decrease with an increased thickness of the filter. The sensitivity of the imaging device may be evaluated based on Equations (3)-(5) as follows:

$$S_{inc}(t) = \frac{f_L a_L G_2 x + f_H a_H G_2 x}{a_L G_1 x + a_H G_1 x}, \quad (3)$$

$$f_L = \exp(-\mu_L t), \quad (4)$$

$$f_H = \exp(-\mu_H t), \quad (5)$$

where $S_{inc}(t)$ denotes the sensitivity of the imaging device; $\mu_H$ and $\mu_L$ denote linear attenuation coefficients of the specific material at the first energy and the second energy, respectively; and $t$ denotes the thickness of the filter. On the other hand, the spatial resolution of the imaging device may also be associated with the thickness of the filter. For example, when considering a filter made of the specific material, the spatial resolution of the imaging device may improve with an increased thickness of the filter. The spatial resolution of the imaging device may be evaluated based on a condition number of the first matrix associated with the filters. The condition number of the first matrix may be described in Equation (6) as follows:

$$C(t) = \mathrm{Cond}\left(\begin{bmatrix} a_L f_L(t) & a_L \\ a_H f_H(t) & a_H \end{bmatrix}\right), \quad (6)$$

where $C(t)$ denotes the condition number of the first matrix. The trade-off between the sensitivity and the spatial resolution of the imaging device may be balanced through different filtration designs (e.g., different thicknesses or different materials of the filter). It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, during the process for generating the image, the projection data including the first projection data set and the second projection data set may be corrected based on an anatomical image acquired by another imaging device (e.g., a CT device). As another example, one or more other optional operations (e.g., a storing operation) may be added elsewhere in the process800. In the storing operation, the processing device120may store information and/or data (e.g., the first projection data set, the second projection data set, the first matrix, the second matrix (or the system matrix), etc.) associated with the imaging system100in a storage device (e.g., the storage device130) disclosed elsewhere in the present disclosure. FIG.9Aillustrates a cross-sectional view of a phantom according to some embodiments of the present disclosure. A phantom900may include six groups of rods of different diameters including 0.5 mm, 0.6 mm, 0.8 mm, 1 mm, 1.2 mm, and 1.5 mm. The six groups of rods may be set to have different resolutions. Each rod may be simulated to have an activity equivalent to the radiation of a radioactive tracer (e.g., In-111). The activity ratio of each rod to background (e.g., a region910) may be 4:1. FIGS.9B and9Cillustrate images of the phantom900inFIG.9Aaccording to some embodiments of the present disclosure. Image900binFIG.9Bis a cross-sectional view along an axial direction of the phantom900, and image900cinFIG.9Cis a cross-sectional view along the direction of CC′ of the rods with sizes 0.8 mm to 1.2 mm. Images900band900cmay be acquired by scanning the phantom900for 300 seconds using a SPECT device having a spectral multiplexing collimator described in the present disclosure. 
The spectral multiplexing collimator may have a first set of first pinholes and a second set of second pinholes. Each second pinhole may be equipped with a tungsten filter with a thickness of 0.1 mm. FIGS.9D and9Eillustrate images of the phantom900inFIG.9Aaccording to some embodiments of the present disclosure. Image900dinFIG.9Dis a cross-sectional view along an axial direction of the phantom900, and image900einFIG.9Eis a cross-sectional view along the direction of DD′ of the rods with sizes 0.8 mm to 1.2 mm. Images900dand900emay be acquired by scanning the phantom900for 300 seconds using a SPECT device having a single-set collimator. Compared with the spectral multiplexing collimator used in acquiring images900band900c, the single-set collimator may only have the first set of first pinholes. According to the comparison between images900band900d, and the comparison between images900cand900e, the images acquired using the spectral multiplexing collimator may have improved image resolution (e.g., see rods with size 0.8 mm in images900band900c), superior contrast recovery, lower noise, and lower aliasing artifacts. As a result, using the spectral multiplexing collimator, a higher contrast to noise ratio may be achieved, or the same contrast to noise ratio may be achieved with a shorter acquisition time. The improved angular sampling may be beneficial for improving spatial resolution and reducing aliasing artifacts. FIGS.10A to10Dare graphs illustrating curves of contrast recovery ratio to noise at different resolution levels according to some embodiments of the present disclosure. As used herein, a contrast recovery ratio (CR) refers to a ratio of a reconstructed contrast to a true contrast.FIG.10Ashows a graph of curves of contrast recovery ratio to noise at a resolution of 1.5.FIG.10Bshows a graph of curves of contrast recovery ratio to noise at a resolution of 1.2.FIG.10Cshows a graph of curves of contrast recovery ratio to noise at a resolution of 1.FIG.10Dshows a graph of curves of contrast recovery ratio to noise at a resolution of 0.8. InFIGS.10A to10D, curves associated with using a single-set collimator (also referred to as Single for brevity) may be obtained based on different acquisition times (or different scan times) by using a same single-set collimator having only a first set of first pinholes. Curves associated with using a spectral multiplexing collimator (also referred to as Multi for brevity) may be obtained based on an acquisition time of 300 seconds by using a spectral multiplexing collimator equipped with filters with different thicknesses. The spectral multiplexing collimator may include the first set of first pinholes and an additional set of second pinholes. A filter may be disposed on each second pinhole. The filter may be a tungsten filter. According toFIGS.10A to10D, at the same resolution and noise levels, the longer the acquisition time is, the greater the CR achieved by using the single-set collimator. At the same resolution and noise levels, the smaller the thickness of the filter is, the greater the CR achieved by using the spectral multiplexing collimator. At the same resolution and contrast levels, a 300-second scan using the spectral multiplexing collimator with 0.1 mm thick filters may achieve improved or equivalent noise reduction compared with a 500-second scan using the single-set collimator. 
At the same resolution and noise levels, a 300-second scan using the spectral multiplexing collimator with a 0.1 mm thick tungsten filter may achieve a greater or equivalent CR compared with a 500-second scan using the single-set collimator. This allows for at least 40% time-saving in acquisition protocol design. Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure and are within the spirit and scope of the exemplary embodiments of this disclosure. Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure. Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in an implementation combining software and hardware that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon. A non-transitory computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electromagnetic, optical, or the like, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB. 
NET, Python, or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran, Perl, COBOL, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), or in a cloud computing environment, or offered as a service such as a Software as a Service (SaaS). Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device. Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof to streamline the disclosure and aid in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment. In some embodiments, the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate,” or “substantially” may indicate ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. 
Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail. In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described. | 94,784 |
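As a closing illustration of Equations (2)-(6), the following sketch builds the combined forward model from two hypothetical sub-matrices, reconstructs with a plain MLEM loop (one of the reconstruction algorithms named above), and sweeps the filter thickness to expose the sensitivity/conditioning trade-off. The sub-matrices, abundances, and attenuation coefficients are random or made-up placeholders, not calibrated quantities from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 16, 32
G1 = rng.uniform(0.0, 1.0, (n_det, n_pix))   # placeholder first sub-matrix
G2 = rng.uniform(0.0, 1.0, (n_det, n_pix))   # placeholder second sub-matrix
a_lo, a_hi = 0.90, 0.95                      # assumed yield abundances
mu_lo, mu_hi = 3.5, 1.2                      # assumed attenuation coefficients (1/mm)

def system_matrix(t):
    """Combined forward model of Equation (2) for filter thickness t (mm)."""
    f_lo, f_hi = np.exp(-mu_lo * t), np.exp(-mu_hi * t)   # Equations (4)-(5)
    return np.vstack([a_lo * (f_lo * G1 + G2),            # low-energy rows
                      a_hi * (f_hi * G1 + G2)])           # high-energy rows

def mlem(y, H, n_iter=200, eps=1e-12):
    """Textbook MLEM update for y ~= H @ x with nonnegative x."""
    x = np.ones(H.shape[1])
    sens = H.T @ np.ones(H.shape[0])
    for _ in range(n_iter):
        x *= (H.T @ (y / np.maximum(H @ x, eps))) / np.maximum(sens, eps)
    return x

x_true = rng.uniform(0.0, 1.0, n_pix)
for t in (0.05, 0.10, 0.20):                 # filter thicknesses in mm
    H = system_matrix(t)
    f_lo, f_hi = np.exp(-mu_lo * t), np.exp(-mu_hi * t)
    cond = np.linalg.cond(np.array([[a_lo * f_lo, a_lo],
                                    [a_hi * f_hi, a_hi]]))   # Equation (6)
    err = np.linalg.norm(mlem(H @ x_true, H) - x_true)
    print(f"t={t:.2f} mm  cond={cond:.1f}  reconstruction error={err:.3f}")
```

A thicker filter lowers the condition number (better separation of the two pinhole sets) while transmitting fewer photons, mirroring the sensitivity/resolution trade-off described above.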
11857358 | DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

All numeric values are herein assumed to be modified by the terms “about” or “approximately,” whether or not explicitly indicated, wherein the terms “about” and “approximately” generally refer to a range of numbers that one of skill in the art would consider equivalent to the recited value (i.e., having the same function or result). In some instances, the terms “about” and “approximately” may include numbers that are rounded to the nearest significant figure. The recitation of numerical ranges by endpoints includes all numbers within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5). As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. As used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. In describing the depicted embodiments of the disclosed inventions illustrated in the accompanying figures, specific terminology is employed for the sake of clarity and ease of description. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner. It is to be further understood that the various elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other wherever possible within the scope of this disclosure and the appended claims. Various embodiments of the disclosed inventions are described hereinafter with reference to the figures. It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the disclosed inventions, which is defined only by the appended claims and their equivalents. In addition, an illustrated embodiment of the disclosed inventions need not have all the aspects or advantages shown. For example, an aspect or an advantage described in conjunction with a particular embodiment of the disclosed inventions is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated. For the following defined terms and abbreviations, these definitions shall be applied throughout this patent specification and the accompanying claims, unless a different definition is given in the claims or elsewhere in this specification: An “acquired image” refers to an image generated while visualizing a patient's tissue. Acquired images can be generated by radiation from a radiation source impacting on a radiation detector disposed on opposite sides of a patient's tissue, as in a conventional mammogram. A “reconstructed image” refers to an image generated from data derived from a plurality of acquired images. A reconstructed image simulates an acquired image not included in the plurality of acquired images. A “synthesized image” refers to an artificial image generated from data derived from a plurality of acquired and/or reconstructed images. 
A synthesized image includes elements (e.g., objects and regions) from the acquired and/or reconstructed images, but does not necessarily correspond to an image that can be acquired during visualization. Synthesized images are constructed analysis tools. An “Mp” image is a conventional mammogram or contrast enhanced mammogram, which are two-dimensional (2D) projection images of a breast, and encompasses both a digital image as acquired by a flat panel detector or another imaging device, and the image after conventional processing to prepare it for display (e.g., to a health professional), storage (e.g., in the PACS system of a hospital), and/or other use. A “Tp” image is an image that is similarly two-dimensional (2D), but is acquired at a respective tomosynthesis angle between the breast and the origin of the imaging x rays (typically the focal spot of an x-ray tube), and encompasses the image as acquired, as well as the image data after being processed for display, storage, and/or other use. A “Tr” image is a type (or subset) of a reconstructed image that is reconstructed from tomosynthesis projection images Tp, for example, in the manner described in one or more of U.S. Pat. Nos. 7,577,282, 7,606,801, 7,760,924, and 8,571,289, the disclosures of which are fully incorporated by reference herein in their entirety, wherein a Tr image represents a slice of the breast as it would appear in a projection x ray image of that slice at any desired angle, not only at an angle used for acquiring Tp or Mp images. An “Ms” image is a type (or subset) of a synthesized image, in particular, a synthesized 2D projection image that simulates mammography images, such as craniocaudal (CC) or mediolateral oblique (MLO) images, and is constructed using tomosynthesis projection images Tp, tomosynthesis reconstructed images Tr, or a combination thereof. Ms images may be provided for display to a health professional or for storage in the PACS system of a hospital or another institution. Examples of methods that may be used to generate Ms images are described in the above-incorporated U.S. Pat. Nos. 7,760,924 and 8,571,289 and also U.S. application Ser. No. 15/120,911, published as U.S. Publication No. 2016/0367120 on Dec. 22, 2016 and entitled System and Method for Generating and Displaying Tomosynthesis Image Slabs, PCT Application No. PCT/US2018/024911, filed Mar. 28, 2018 and entitled System and Method for Hierarchical Multi-Level Feature Image Synthesis and Representation, PCT Application No. PCT/US2018/024912, filed Mar. 28, 2018, and entitled System and Method for Synthesizing Low-Dimensional Image Data From High-Dimensional Image Data Using an Object Grid Enhancement, and PCT Application No. PCT/US018/0249132, filed Mar. 28, 2018, and entitled System and Method for Targeted Object Enhancement to Generate Synthetic Breast Tissue Images, the contents of all of which are incorporated herein by reference as though set forth in full. It should be appreciated that Tp, Tr, Ms and Mp image data encompasses information, in whatever form, that is sufficient to describe the respective image for display, further processing, or storage. The respective Mp, Ms, Tp and Tr images, including those subjected to high density element suppression and enhancement, are typically provided in digital form prior to being displayed, with each image being defined by information that identifies the properties of each pixel in a two-dimensional array of pixels. 
The pixel values typically relate to respective measured, estimated, or computed responses to X-rays of corresponding volumes in the breast, i.e., voxels or columns of tissue. In a preferred embodiment, the geometry of the tomosynthesis images (Tr and Tp) and mammography images (Ms and Mp) are matched to a common coordinate system, as described in U.S. Pat. No. 7,702,142. Unless otherwise specified, such coordinate system matching is assumed to be implemented with respect to the embodiments described in the ensuing detailed description of this patent specification. The terms “generating an image” and “transmitting an image” respectively refer to generating and transmitting information that is sufficient to describe the image for display. The generated and transmitted information is typically digital information. The term “high density element” is defined as an element that, when imaged with breast tissue, partially or completely obscures imaged breast tissue or clinically important information of breast tissue such as malignant breast mass, tumors, etc. A high density element may be detected based on pre-determined criteria or filters involving one or more of contrast, brightness, radiopacity or other attribute. A high density element may be a foreign object or naturally occurring within breast tissue and may be partially or completely radiopaque. For example, one type of high density element is a metallic object such as a metallic biopsy marker inserted into breast tissue. Such markers are designed to be radiopaque such that they are clearly visible when using x-rays. Another example of a high density element is a calcification within the breast tissue. A high density element may also be a non-metallic or non-calcified element such as a shadow artifact generated by imaging a metallic marker, and which may not be considered to be radiopaque. Accordingly, a “high density element” is defined to include metallic objects such as a biopsy marker or a skin marker, radiopaque materials or objects, and shadows or shadow artifacts generated by imaging of same. The terms “differential” or “multi-flow” image processing are defined to refer to the input images being processed in different ways to generate different image results, and are defined to include one flow involving suppression of an imaged high density element and another flow involving enhancement of an imaged high density element. Different image processing flows can be executed in parallel and simultaneously, and images input to image processors of embodiments may be of different dimensional formats. In order to ensure that a synthesized 2D image displayed to a reviewer or end-user (e.g., an Ms image) includes the most clinically relevant information, it is necessary to detect and identify 3D objects, such as malignant breast mass, tumors, etc., within the breast tissue. Towards this end, in accordance with embodiments of the presently disclosed inventions, 3D objects may be identified using multiple target object recognition/synthesis modules, wherein each target recognition/synthesis module may be configured to identify and reconstruct a particular type of object. These multiple target synthesis modules may work together in combining information pertaining to respective objects during the reconstruction process of generating one or more synthesized 2D images, ensuring that each object is represented accurately, and preserving clinically significant information on the 2D synthesized images that are then displayed to the end-user. 
The synthesized 2D image that is displayed to an end-user should also be clear such that clinically relevant information and objects are not obscured by undesirable image elements or artifacts, which may include a high density element such as a biopsy marker and/or a shadow generated by imaging of same during breast imaging. Towards this end, in accordance with embodiments of the presently disclosed inventions, a multi-flow image processor is utilized to generate a 2D synthesized image by suppressing high density elements in one image processing method and enhancing high density elements in another image processing method, such that when different 2D synthesized images generated by different image processing flows are combined, high density elements such as shadows are reduced or eliminated, resulting in a composite 2D synthesized image that is clearer and that more accurately depicts breast tissue and breast tissue objects, while providing for more accurate and efficient radiologist review. Embodiments designed to generate a 2D synthesized image that maintains and enhances clinically interesting characteristics are described with reference toFIGS.1-8B, and embodiments that utilize a multi-flow image processing method for reducing high density elements such as shadows and generating a clearer 2D composite synthesized image are described with reference toFIGS.9-24. FIG.1illustrates the flow of data in an exemplary image generation and display system100, which incorporates each of synthesized image generation, object identification, and display technology. It should be understood that, whileFIG.1illustrates a particular embodiment of a flow diagram with certain processes taking place in a particular serial order or in parallel, the claims and various other embodiments described herein are not limited to the performance of the image processing steps in any particular order, unless so specified. More particularly, the image generation and display system100includes an image acquisition system101that acquires tomosynthesis image data for generating Tp images of a patient's breasts, optionally using the respective 3D and/or tomosynthesis acquisition methods of any of the currently available systems. If the acquisition system is a combined tomosynthesis/mammography system, Mp images may also be generated. Some dedicated tomosynthesis systems or combined tomosynthesis/mammography systems may be adapted to accept and store legacy mammogram images (indicated by a dashed line and legend “Mplegacy” inFIG.1) in a storage device102, which is preferably a DICOM-compliant Picture Archiving and Communication System (PACS) storage device. Following acquisition, the tomosynthesis projection images Tp may also be transmitted to the storage device102(as shown inFIG.1). The storage device102may further store a library of known 3D objects that may be used to identify significant 3D image patterns to the end-user. In other embodiments, a separate dedicated storage device (not shown) may be used to store the library of known 3D objects with which to identify 3D image patterns or objects. The Tp images are transmitted from either the acquisition system101, or from the storage device102, or both, to a computer system configured as a reconstruction engine103that reconstructs the Tp images into reconstructed image “slices” Tr, representing breast slices of selected thickness and at selected orientations, as disclosed in the above-incorporated patents and applications. 
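Before continuing with the components ofFIG.1, a toy sketch may help make the multi-flow idea concrete. The high density element detector, its threshold, and the maximum-intensity merging below are illustrative assumptions only; the disclosed processing flows are described with reference toFIGS.9-24.

```python
import numpy as np

def detect_high_density(slice_2d, threshold=0.8):
    """Toy detector: flag pixels whose normalized intensity exceeds a threshold."""
    return slice_2d > threshold

def multi_flow_synthesize(stack):
    """stack: (n_slices, H, W) array of slice intensities in [0, 1]."""
    mask = np.any([detect_high_density(s) for s in stack], axis=0)
    # Flow 1: suppress high density elements, keep surrounding tissue.
    suppressed = np.max(np.where(mask, 0.0, stack), axis=0)
    # Flow 2: enhance (isolate) the high density elements themselves.
    enhanced = np.max(np.where(mask, stack, 0.0), axis=0)
    # Composite: tissue from the suppressed flow, crisp markers from the
    # enhanced flow, so marker shadows do not obscure tissue.
    return np.where(mask, enhanced, suppressed)

composite = multi_flow_synthesize(np.random.default_rng(1).random((5, 64, 64)))
```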
Mode filters107are disposed between image acquisition and image display. The filters107may additionally include customized filters for each type of image (i.e., Tp, Mp, and Tr images) arranged to identify and highlight or enhance certain aspects of the respective image types. In this manner, each imaging mode can be tuned or configured in an optimal way for a specific purpose. For example, filters programmed for recognizing objects across various 2D image slices may be applied in order to detect image patterns that may belong to a particular high-dimensional object. The tuning or configuration may be automatic, based on the type of the image, or may be defined by manual input, for example through a user interface coupled to a display. In the illustrated embodiment ofFIG.1, the mode filters107are selected to highlight particular characteristics of the images that are best displayed in respective imaging modes, for example, geared towards identifying objects, highlighting masses or calcifications, identifying certain image patterns that may be constructed into a 3D object, or creating 2D synthesized images (described below). AlthoughFIG.1illustrates only one mode filter107, it should be appreciated that any number of mode filters may be utilized in order to identify structures of interest in the breast tissue. The imaging and display system100further includes a 2D image synthesizer104that operates substantially in parallel with the reconstruction engine103for generating 2D synthesized images using a combination of one or more input Tp (tomosynthesis projection), Mp (mammography projection), and/or Tr (tomosynthesis reconstruction) images. The 2D image synthesizer104consumes a set of input images, determines a set of most relevant features from each of the input images, and outputs one or more synthesized 2D images. The synthesized 2D image represents a consolidated synthesized image that condenses significant portions of various slices onto one image. This provides an end-user (e.g., medical personnel, radiologist, etc.) with the most clinically-relevant image data in an efficient manner, and reduces time spent on other images that may not have significant data. One type of relevant image data to highlight in the synthesized 2D images would be relevant objects found across one or more Mp, Tr and/or Tp images. Rather than simply assessing image patterns of interest in each of the 2D image slices, it may be helpful to determine whether any of the 2D image patterns of interest belong to a larger high-dimensional structure, and if so, to combine the identified 2D image patterns into a higher-dimensional structure. This approach has several advantages, but in particular, by identifying high-dimensional structures across various slices/depths of the breast tissue, the end-user may be better informed as to the presence of a potentially significant structure that may not be easily visible in various 2D slices of the breast. Further, instead of identifying similar image patterns in two 2D slices (that are perhaps adjacent to each other), and determining whether or not to highlight image data from one or both of the 2D slices, identifying both image patterns as belonging to the same high-dimensional structure may allow the system to make a more accurate assessment pertaining to the nature of the structure, and consequently provide significantly more valuable information to the end-user. 
Also, by identifying the high-dimensional structure, the structure can be more accurately depicted on the synthesized 2D image. Yet another advantage of identifying high-dimensional structures within the various captured 2D slices of the breast tissue relates to identifying a possible size/scope of the identified higher-dimensional structure. For example, once a structure has been identified, previously unremarkable image patterns that are somewhat proximate to the high-dimensional structure may now be identified as belonging to the same structure. This may provide the end-user with an indication that the high-dimensional structure is increasing in size/scope. To this end, the 2D image synthesizer104employs a plurality of target object recognition/enhancement modules (also referred to as target object synthesis modules) that are configured to identify and reconstruct different types of objects. Each target image recognition/synthesis module may be applied (or “run”) on a stack (e.g., a tomosynthesis image stack) of 2D image slices of a patient's breast tissue, and work to identify particular types of objects that may be in the breast tissue, and ensure that such object(s) are represented in a clinically-significant manner in the resulting 2D synthesized image presented to the end-user. For example, a first target image synthesis module may be configured to identify calcifications in the breast tissue. Another target image synthesis module may be configured to identify and reconstruct spiculated lesions in the breast tissue. Yet another target image synthesis module may be configured to identify and reconstruct spherical masses in the breast tissue. In one or more embodiments, the multiple target image synthesis modules process the image slice data and populate respective objects in a high-dimensional grid (e.g., 3D grid) comprising respective high-dimensional structures (e.g., 3D objects) present in the breast tissue. This high-dimensional grid may then be utilized to accurately depict the various structures in the 2D synthesized image. A high-dimensional object may refer to any object that comprises at least three or more dimensions, e.g., 3D or higher object, or a 3D or higher object and time dimension, etc. Examples of such objects or structures include, without limitation, calcifications, spiculated lesions, benign tumors, irregular masses, dense objects, etc. An image object may be defined as a certain type of image pattern that exists in the image data. The object may be a simple round object in a 3D space, and a corresponding flat round object in a 2D space. It can be an object with complex patterns and complex shapes, and it can be of any size or dimension. The concept of an object may extend past a locally bound geometrical object. Rather, the image object may refer to an abstract pattern or structure that can exist in any dimensional shape. It should be appreciated that the inventions disclosed herein are not limited to 3D objects and/or structures, and may include higher-dimensional structures. It should be appreciated that each of the target image synthesis modules is configured for identifying and reconstructing respective types of objects. These “objects” may refer to 2D shapes, 2D image patterns, 3D objects, or any other high-dimensional object, but in any event will all be referred to as “objects” or “3D objects” herein for simplicity, but this illustrative use should not be otherwise read as limiting the scope of the claims. 
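To make the notion of per-object-type modules concrete before turning to the illustrated embodiment, here is a minimal, hypothetical sketch of such a module interface; the detector callables and the consecutive-slice grouping rule are placeholders rather than the actual recognition algorithms of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, List
import numpy as np

@dataclass
class RecognizedObject:
    kind: str                                        # e.g. "calcification"
    slices: List[int] = field(default_factory=list)  # z-indices where seen

def run_module(stack: np.ndarray, kind: str,
               detect: Callable[[np.ndarray], bool]) -> List[RecognizedObject]:
    """Scan every slice of a Tr stack with a per-object-type detector and
    group hits on consecutive slices into a single 3D object."""
    objects: List[RecognizedObject] = []
    for z, sl in enumerate(stack):
        if not detect(sl):
            continue
        if objects and z == objects[-1].slices[-1] + 1:
            objects[-1].slices.append(z)             # extend the current object
        else:
            objects.append(RecognizedObject(kind, [z]))
    return objects

# Usage with a placeholder brightness-based detector:
stack = np.random.default_rng(2).random((10, 32, 32))
found = run_module(stack, "bright-spot", lambda sl: float(sl.max()) > 0.999)
```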
In the illustrated embodiment, the 2D synthesizer104comprises a plurality of target object recognition/enhancement modules (e.g.,110a,110b. . .110n), each configured for recognizing and enhancing a particular type of object. Each of the target object recognition/enhancement modules110may be run on a 2D image stack (e.g., Tr image stack), and is configured to identify the respective object (if any is/are present) therein. By identifying the assigned object in the 2D image stack, each target object recognition/enhancement module110works to ensure that the respective object is preserved and depicted accurately in the resulting 2D synthesized image presented to the end-user. In some embodiments, a hierarchical model may be utilized in determining which objects to emphasize or de-emphasize in the 2D synthesized image based on a weight or priority assigned to the target object recognition/enhancement module. In other embodiments, all objects may be treated equally, and different objects may be fused together if there is an overlap in the z direction, as will be discussed in further detail below. These reconstruction techniques allow for creation of 2D synthesized images that comprise clinically-significant information, while eliminating or reducing unnecessary or visually confusing information. The synthesized 2D images may be viewed at a display system105. The reconstruction engine103and 2D image synthesizer104are preferably connected to a display system105via a fast transmission link. The display system105may be part of a standard acquisition workstation (e.g., of acquisition system101), or of a standard (multi-display) review station (not shown) that is physically remote from the acquisition system101. In some embodiments, a display connected via a communication network may be used, for example, a display of a personal computer or of a so-called tablet, smart phone or other hand-held device. In any event, the display105of the system is preferably able to display respective Ms, Mp, Tr, and/or Tp images concurrently, e.g., in separate side-by-side monitors of a review workstation, although the invention may still be implemented with a single display monitor, by toggling between images. Thus, the imaging and display system100, which is described for purposes of illustration and not limitation, is capable of receiving and selectively displaying tomosynthesis projection images Tp, tomosynthesis reconstruction images Tr, synthesized mammogram images Ms, and/or mammogram (including contrast mammogram) images Mp, or any one or sub-combination of these image types. The system100employs software to convert (i.e., reconstruct) tomosynthesis images Tp into images Tr, software for synthesizing mammogram images Ms, software for decomposing 3D objects, and software for creating feature maps and object maps. An object of interest or feature in a source image may be considered a ‘most relevant’ feature for inclusion in a 2D synthesized image based upon the application of the object maps along with one or more algorithms and/or heuristics, wherein the algorithms assign numerical values, weights or thresholds, to pixels or regions of the respective source images based upon identified/detected objects and features of interest within the respective region or between features. The objects and features of interest may include, for example, spiculated lesions, calcifications, and the like. FIG.2illustrates the 2D image synthesizer104in further detail. 
As discussed above, various image slices218of a 3D tomosynthesis data set or “stack”202(e.g., filtered and/or unfiltered Tr and/or Tp images of a patient's breast tissue) are input into the 2D image synthesizer104, and then processed to determine portions of the images to highlight in a synthesized 2D image that will be displayed on the display105. The image slices218may be consecutively-captured cross-sections of a patient's breast tissue. Or, the image slices218may be cross-sectional images of the patient's breast tissue captured at known intervals. The 3D tomosynthesis stack202comprising the image slices218may be forwarded to the 2D image synthesizer104, which evaluates each of the source images in order to (1) identify various types of objects (Tr) for possible inclusion in one or more 2D synthesized images, and/or (2) identify respective pixel regions in the images that contain the identified objects. As shown in the illustrated embodiment, the 3D tomosynthesis stack202comprises a plurality of images218taken at various depths/cross-sections of the patient's breast tissue. Some of the images218in the 3D tomosynthesis stack202comprise 2D image patterns. Thus, the tomosynthesis stack202comprises a large number of input images containing various image patterns within the images of the stack. More particularly, as shown inFIG.2, three target object recognition/enhancement modules210a,210band210care configured to run on the 3D tomosynthesis stack202, wherein each of the target object recognition and enhancement modules210corresponds to a respective set of programs/rules and parameters that define a particular object, and how to identify that particular object amongst other objects that may exist in the breast tissue depicted by the 3D tomosynthesis stack202. For example, filtering/image recognition techniques and various algorithms/heuristics may be run on the 3D tomosynthesis stack202in order to identify the object assigned to the particular target object recognition/enhancement module210. It will be appreciated that there are many ways to recognize objects using a combination of image manipulation/filtration techniques. For the purposes of illustration, it will be assumed that each of the target object recognition/enhancement modules210identifies at least one respective object, but it should be appreciated that in many cases no objects will be identified. However, even healthy breast tissue may have one or more suspicious objects or structures, and the target object recognition/enhancement modules may inadvertently identify a breast background object. For example, all breast linear tissue and density tissue structures can be displayed as the breast background object. In other embodiments, “healthy” objects such as spherical shapes, oval shapes, etc., may simply be identified by one or more of the target object recognition/enhancement modules210. The identified 3D objects may then be displayed on the 2D synthesized image206; of course, out of all identified 2D objects, more clinically-significant objects may be prioritized/enhanced when displaying the respective objects on the 2D synthesized image, as will be discussed in further detail below. In the illustrated embodiment, a first target object recognition/enhancement module210ais configured to recognize circular and/or spherical shapes in the images218of the 3D tomosynthesis stack202(e.g., Tr, Tp, etc.). A second target object synthesis module210bis configured to recognize lobulated shapes. 
A third target object synthesis module210cis configured to recognize calcification patterns. In particular, each of the target object synthesis modules210a,210band210cis run on the Tr image stack202, wherein a set of features/objects are recognized by the respective target object synthesis modules. For example, target object recognition/enhancement module210amay recognize one or more circular shapes and store these as “recognized objects”220a. It will be appreciated that multiple image slices218of the 3D tomosynthesis stack202may contain circular shapes, and that these shapes may be associated with the same spherical object, or may belong to different spherical objects. In the illustrated embodiment, at least two distinct circular objects are recognized by the target object recognition/enhancement module210a. Similarly, target object recognition/enhancement module210bmay recognize one or more lobulated shapes and store these as recognized objects220b. In the illustrated embodiment, one lobulated object has been recognized in the 3D tomosynthesis stack202by the target object recognition/enhancement module210b. As can be seen, two different image slices218in the 3D tomosynthesis stack202depict portions of the lobulated object, but the respective portions are recognized as belonging to a single lobulated object by the recognition/enhancement module210b, and stored as a single recognized object220b. Finally, target object recognition/enhancement module210cmay recognize one or more calcification shapes and store these as recognized objects220c. In the illustrated embodiment, a (single) calcification cluster has been recognized by the target object recognition/enhancement module210cand stored as a recognized object220c. The recognized objects220a,220band220cmay be stored at storage facilities corresponding to the respective target object recognition/enhancement modules210a,210band210c, or alternatively at a separate (i.e., single) storage facility that may be accessed by each of the target object recognition/enhancement modules. Referring now toFIG.3, each of the target object recognition/enhancement modules210may be configured to identify and synthesize (e.g., to reduce to 2D) a respective 3D object to be displayed on the one or more 2D synthesized images. In other words, once the 3D objects are recognized by the respective target object recognition/enhancement module210a,210bor210c, the target object recognition/enhancement module thereafter converts the recognized 3D object into a 2D format so that the recognized object may be displayed on the 2D synthesized image. In the illustrated embodiment, the target object recognition/enhancement modules210a,210band210crecognize respective objects, and convert the recognized objects into respective 2D formats. As part of the conversion process, certain of the recognized objects may be enhanced to a greater or lesser degree for the displayed image, as will be discussed in further detail below. Assuming all three target object recognition/enhancement modules210a,210band210care considered equally important to the 2D image synthesizer104, the respective 2D formats of all recognized objects (e.g., two spherical objects, one lobular object, and one calcification mass) are depicted on the 2D synthesized image302. FIG.4illustrates how a single target object recognition/enhancement module210may be run on a 3D tomosynthesis stack to generate a portion of the 2D synthesized image. 
In the illustrated embodiment, image slices402of the 3D tomosynthesis stack are fed through a single target object recognition/enhancement module404, which is configured to recognize star shaped objects in the stack of images402. As a result, the single target object synthesis module reduces information pertaining to the recognized star shape gained from various depths of the image slices onto a single 2D synthesized image406. FIG.5illustrates an exemplary embodiment for having multiple target object recognition/enhancement modules work together to produce the 2D synthesized image. In the illustrated embodiment, image slices502(of a respective 3D tomosynthesis stack) are fed through a first target object recognition/enhancement module504aconfigured to recognize and reconstruct circular and/or spherical shapes, a second target object recognition/enhancement module504bconfigured to recognize and reconstruct star-like shapes, and a third target object recognition/enhancement module504cconfigured to recognize and reconstruct calcification structures. It should be appreciated that any number of target object recognition/enhancement modules may be programmed for any number of object types. Each of the target object recognition/enhancement modules504a,504band504ccorresponds to respective algorithms that are configured with various predetermined rules and attributes that enable these programs to successfully recognize respective objects, and reduce the recognized objects to a 2D format. By applying all three target object recognition/synthesis modules504a,504band504cto the image slices502, a 2D synthesized image506is generated. In particular, rather than simply displaying a single type of object, the 2D synthesized image506comprises all three object types that are recognized and synthesized by the three target object recognition/enhancement modules504a,504band504c, with each of the recognized objects being equally emphasized. While this may be desirable if all the object types are of equal significance, it may be helpful to enhance/emphasize different object types to varying degrees based on their weight/priority. This technique may be more effective in alerting the end-user to a potentially important object, while de-emphasizing objects of lesser importance. Referring now toFIG.6A, a hierarchical sequential approach to combine data from the multiple target object recognition/enhancement modules is illustrated. In particular, a sequential combination technique may be applied if the various object types have a clearly defined hierarchy associated with them. For example, one type of object (e.g., spiculated lesions) may be deemed to be more clinically significant than another type of object (e.g., a spherical mass in breast tissue). This type of object (and the corresponding target object module) may be assigned a particularly high weight/priority. In such a case, if two objects are competing for space on the 2D synthesized image, the object type associated with the higher priority may be emphasized/displayed on the 2D synthesized image, and the other object type may be de-emphasized, or not displayed at all. Similarly, in such an approach, each of the target object recognition/enhancement modules may be assigned respective weights based on respective significance.
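By way of illustration only, the following minimal Python sketch captures the priority bookkeeping that such a hierarchy implies; the object kinds, priority values, and the reduction of overlap to a simple z-range test are hypothetical assumptions, not taken from the figures.

    # Illustrative sketch only; the object kinds, priorities, and overlap
    # test below are hypothetical, not taken from the figures.
    from dataclasses import dataclass

    @dataclass
    class RecognizedObject:
        kind: str        # e.g., "spiculated_lesion", "spherical_mass"
        priority: int    # higher value = more clinically significant
        z_range: tuple   # (first_slice, last_slice) within the Tr stack

    def z_overlap(a, b):
        # True if the two recognized objects overlap in the z direction.
        return a.z_range[0] <= b.z_range[1] and b.z_range[0] <= a.z_range[1]

    def sequential_combine(objects):
        # Keep every object, but when two objects overlap in z, the
        # lower-priority object is overridden (omitted or de-emphasized).
        kept = []
        for obj in sorted(objects, key=lambda o: o.priority, reverse=True):
            if all(not z_overlap(obj, k) for k in kept):
                kept.append(obj)
        return kept

    objs = [RecognizedObject("spherical_mass", 1, (10, 20)),
            RecognizedObject("spiculated_lesion", 3, (15, 25))]
    print([o.kind for o in sequential_combine(objs)])  # ['spiculated_lesion']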
In the illustrated embodiment, the image slices602of the 3D tomosynthesis stack are sequentially fed through three different target object recognition/enhancement modules (604,606and608) to generate the 2D synthesized image610, wherein each of the target object synthesis modules is configured to recognize and reconstruct a particular type of object. The first target object recognition/enhancement module604(associated with a square-shaped object) is run first on the reconstruction image slices602, followed by the second target object recognition/enhancement module606(associated with a diamond-shaped object), and then followed by the third target object recognition/enhancement module608(associated with a circular-shaped object). It should be appreciated that since the target object recognition/enhancement modules are applied (or “run”) sequentially, the second target object recognition/enhancement module606may be considered as having a higher priority as compared with the first target object recognition/enhancement module604, and the third target object recognition/enhancement module608may be considered as having a higher priority as compared to the second target object recognition/enhancement module606. Thus, the third object type may override (or be emphasized over) the second object type, and the second object type may override (or be emphasized over) the first object type. FIG.6Billustrates this hierarchical approach to combining various object types sequentially. In particular, the 3D tomosynthesis image stack652includes objects656,658and660that can be recognized in various image slices. As illustrated, objects658and660somewhat overlap in the z direction, which means that they are likely to compete for representation in the 2D synthesized image654. When using the sequential approach ofFIG.6Ato combine data from the multiple target object recognition/enhancement modules604,606and608, the programmed hierarchy is preserved. Thus, since target object recognition/enhancement module608configured to recognize and reconstruct circular-shaped objects has higher priority as compared to target object recognition/enhancement module604configured to recognize and reconstruct square-shaped objects, in a case of overlap between the two objects (as is the case inFIG.6B), circular-shaped object658overrides square-shaped object660in the 2D synthesized image654. Of course, it should be appreciated that since diamond-shaped object656does not overlap in the z direction with the other two objects, diamond shaped object656is also displayed in the 2D synthesized image654. In other embodiments, instead of completely overriding the lower-priority object, the higher-priority object may be emphasized relative to the lower-priority object (rather than the lower-priority object being omitted from display). Another approach to running multiple target object synthesis modules on a set of image slices is illustrated inFIG.7A. As can be seen, rather than running the multiple target object recognition/enhancement modules sequentially with the last-run target object synthesis module having the highest priority, all the target object recognition/enhancement modules may be applied in parallel. In particular, one or more enhancement or fusion modules712may be utilized to ensure that the various objects are combined appropriately on the 2D synthesized image. This approach may not follow a hierarchical approach, and all of the objects may be given equal weight.
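As a rough illustration of this parallel alternative, the following minimal sketch (hypothetical recognizer functions and thresholds; a pixelwise maximum standing in for the enhancement or fusion modules712) runs each recognizer over the same stack and fuses the resulting per-object 2D maps so that no object type innately overrides another; the illustrated embodiment is described in detail next.

    # Illustrative sketch only: parallel recognition with pixelwise fusion.
    # The recognizer functions are toy stand-ins, not the actual modules.
    import numpy as np

    def recognize_circles(stack):
        return (stack > 0.8) * stack    # toy "recognition": bright regions

    def recognize_stars(stack):
        return (stack > 0.6) * stack

    def recognize_calcifications(stack):
        return (stack > 0.9) * stack

    def parallel_combine(stack, modules):
        # Run every module over the SAME stack, collapse each result to 2D
        # (max over slices), then fuse with a pixelwise maximum so that no
        # object type innately overrides another.
        maps_2d = [m(stack).max(axis=0) for m in modules]
        return np.maximum.reduce(maps_2d)

    stack = np.random.rand(60, 128, 128)        # ~60 reconstructed Tr slices
    fused = parallel_combine(stack, [recognize_circles,
                                     recognize_stars,
                                     recognize_calcifications])
    print(fused.shape)                          # (128, 128)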
The image slices702are fed through three different target object recognition/enhancement modules,704,706and708, in parallel. The first target object recognition/enhancement module704(associated with a square-shaped object), the second target object recognition/enhancement module706(associated with a diamond-shaped object), and the third target object recognition/enhancement module708(associated with a circular-shaped object) are all run in parallel on the image slices702. In some embodiments, an enhancement and fusion module712may be utilized to ensure that the different objects are fused together appropriately in case of overlap between multiple objects. The target object recognition/enhancement modules704,706and708, run in parallel, may generate the 2D synthesized image710. This approach to combining various object types in parallel is illustrated inFIG.7B. In particular, the tomosynthesis stack752depicts the same objects asFIG.6B(e.g., objects756,758and760) at various image slices. As illustrated, objects758and760somewhat overlap in the z direction, which means that they are likely to compete for representation and/or overlap in the 2D synthesized image754. Here, because the multiple target object recognition/enhancement modules are run in parallel, rather than one object type overriding another object type, as was the case inFIG.6B, both the square-shaped object760and the circular-shaped object758are fused together in the 2D synthesized image754. Thus, this approach does not assume an innate priority/hierarchy between objects and all objects may be fused together appropriately in the 2D synthesized image754. FIG.8Adepicts a flow diagram800that illustrates exemplary steps that may be performed in an image merge process carried out in accordance with the sequential combination approach outlined above in conjunction withFIGS.6A and6B. At step802, an image data set is acquired. The image data set may be acquired by a tomosynthesis acquisition system, a combination tomosynthesis/mammography system, or by retrieving pre-existing image data from a storage device, whether locally or remotely located relative to an image display device, e.g., through a communication network. At steps804and806, for a range of 2D images (e.g., Tr stack), a first target object recognition/enhancement module is run in order to recognize a first object associated with the first target object recognition/enhancement module. Any recognized objects may be stored in a storage module associated with the first target object recognition/enhancement module. At step808, a second target object recognition/enhancement module is run in order to recognize a second object associated with the second target object recognition/enhancement module. At step810, it may be determined whether the first recognized object and the second recognized object overlap each other in the z direction. If it is determined that the two objects overlap, only the second object may be displayed (or otherwise emphasized over the first object) on the 2D synthesized image at step812. If, on the other hand, it is determined that the two objects do not overlap, both objects are displayed on the 2D synthesized image at step814. FIG.8Bdepicts a flow diagram850that illustrates exemplary steps that may be performed in an image synthesis process carried out in accordance with the parallel combination approach outlined above in conjunction withFIGS.7A and7B. At step852, an image data set is acquired.
The image data set may be acquired by a tomosynthesis acquisition system, a combination tomosynthesis/mammography system, or by retrieving pre-existing image data from a storage device, whether locally or remotely located relative to an image display device. At steps854and856, for a range of 2D images (e.g., Tr stack), all the programmed target object recognition/enhancement modules are run to recognize respective objects in the Tr image stack. At step858, one or more enhancement modules may also be run to determine whether a fusion process needs to occur. At step860, it may be determined whether any recognized objects overlap in the z direction. If it is determined that any two (or more) objects overlap, the overlapping objects may be fused together, at step862. If, on the other hand, it is determined that no objects overlap, all the objects are displayed as is on the 2D synthesized image at step864. Having described how a 3D stack of image slices is generated and processed by a 2D synthesizer comprising target object recognition/enhancement modules in order to ensure that a synthesized 2D image displayed to a reviewer or end-user includes the most clinically relevant information, embodiments related to generating clearer, reduced shadow or shadow-free 2D synthesized images are described with reference toFIGS.9-24. Embodiments described with reference toFIGS.9-24eliminate or reduce high density elements such as image portions depicting metal objects and/or shadows generated by imaging of same within 2D acquired or projection images and/or sets or stacks of 3D slices reconstructed based on 2D projection images. With embodiments, high density elements such as shadows are eliminated or reduced, resulting in a clearer 2D synthesized image that more accurately depicts breast tissue being analyzed and allows for more accurate and efficient radiologist examination since clinically relevant information is not blocked or obscured by shadows within the 2D synthesized image. Referring toFIG.9, and referring again toFIGS.1-2, reconstructed images Tr form a 3D tomosynthesis stack902of image slices918. As a non-limiting example, a 3D tomosynthesis stack902may include about 30 to about 120 image slices Tr (e.g., ˜60 image slices Tr) derived from or constructed based on about 15 or more 2D projection images Tp acquired by an x-ray image acquisition component101such as an x-ray source and detector that collectively rotate around the patient or breast being analyzed.FIG.9depicts a 3D tomosynthesis stack902including image slices918, e.g., similar to the stack202illustrated inFIG.2.FIG.9further illustrates a high density element920in the breast tissue910and extending across multiple image slices918.FIG.9illustrates the high density element920extending across two slices918, but it will be understood that the high density element may extend to various depths. An example of a high density element920is a metallic biopsy marker or clip, which may be made of stainless steel or titanium or other radiopaque or dense material. Another example of a high density element920is an external skin marker. A high density element920may also be a biological or tissue component within the breast tissue910such as a calcification or other dense biological or tissue structure that obscures other clinically relevant information or objects of interest in the breast tissue910.
A high density element920is also defined to include image artifacts generated thereby, including shadows922generated by imaging or radiating a high density element920during breast imaging. Thus, a “high density element” may be a “foreign” or “external” object that is inserted into breast tissue910or attached to an outer breast surface910or be a naturally occurring material or component of breast tissue910having sufficient density to obscure other breast tissue or clinically relevant information of breast tissue910. For ease of explanation and not limitation, reference is made to a high density element920, and a specific example of a metallic biopsy marker and a shadow922generated by imaging the metallic biopsy marker920, but it will be understood that embodiments are not so limited. The high density element920is illustrated as extending across multiple image slices918. As generally illustrated inFIG.9, the high density element920is denser than breast tissue910such that when imaged, a shadow922is generated, and the shadow922(as well as the metallic biopsy marker920) obscures underlying and/or adjacent breast tissue910and clinically relevant information concerning same. In the example generally illustrated inFIG.9, the shadow922generated by imaging the metallic biopsy marker920is a “complete,” “circumferential,” or “global” shadow since the shadow922surrounds the metallic biopsy marker920. Shadows may be caused from various aspects of image acquisition. For example, the type of shadow922generally depicted inFIG.9may result from one or more of the limited angle of tomosynthesis acquisition and reconstruction, also known as a reconstruction artifact, and image processing and enhancement, also known as an enhancement artifact. The illustrative shadow922depicted inFIG.9overlaps or obscures924objects of interest or clinically relevant information of breast tissue910such as lesions and spiculations. The depth and dimensions of shadows922depicted in the 3D tomosynthesis stack902(or in 2D projection images Tp) resulting from imaging of the high density element920may vary based on one or more imaging and/or material attributes, including the angles of the x-ray source utilized and the number of projection images Tp acquired, the metallic biopsy marker920material, and the size, shape and orientation of the metallic biopsy marker920being imaged. Thus,FIG.9is provided for purposes of general illustration, not limitation, to illustrate that a high density element in the form of a metallic biopsy marker920and/or shadow922depicted in one or more images may obscure clinically relevant information. Moreover,FIG.9illustrates a single high density element920in a 3D tomosynthesis stack902, but there may be multiple high density elements920, each of which may generate their own shadow922, and which may be distinct and independent of each other or overlap with other shadows922. Thus, multiple markers920and shadows922can further complicate generation and review of synthesized images since they may obscure relevant information at multiple viewpoints.
Referring toFIG.10, embodiments of the disclosed inventions provide breast image acquisition and processing systems100sand multi-flow image processing methods1000that address complications with imaging high density elements920within breast images as discussed above with reference toFIG.9and provide for clearer and more accurate images that have reduced or are free of shadows for more accurate and efficient radiologist review.FIG.10illustrates a breast image generation and display system100s(“s” referring to a breast image generation and display system that “suppresses” high density elements) constructed according to one embodiment and configured to execute a multi-flow or differential image processing method1000for selective high density element suppression and high density enhancement in breast images. Details of various system100scomponents and interoperability thereof such as an acquisition system101, storage system102, reconstruction engine103, a 2D image synthesizer104and a display105are discussed above with reference toFIGS.1-8Band not repeated. Different images generated or processed thereby including acquired images, reconstructed images, synthesized images, Tp images (a 2D image acquired at respective tomosynthesis angles), Tr images (type (or subset) of a reconstructed image that is reconstructed from tomosynthesis projection images Tp) and Ms images (type (or subset) of a synthesized image, in particular, a synthesized 2D projection image that simulates mammography images) are also described above with reference toFIGS.1-8B. For ease of explanation, embodiments of the disclosed inventions are described with reference to 2D acquired images such as 2D projection images (e.g., Tp), reconstructed images (e.g., Tr) or a 3D stack902of image slices918and 2D synthesized images. In the illustrated embodiment, the breast image generation and display system100sincludes a multi-flow image processor1000that is in communication with the reconstruction engine103and display105. The image processor1000receives input images or digital image data1001of one or more types of images. The input image data1001(generally, input data1001) may be for images of different dimensional formats such as 2D projection images and/or a 3D tomosynthesis stack902of image slices918. The input data1001is processed according to a first image processing flow or method1010, and the same input data1001is processed with a second image processing flow or method1020different from the first processing flow or method1010. The resulting 2D synthesized image is based at least in part upon high density element suppression and based at least in part upon high density element enhancement, and an image fusion or merge element1030combines the 2D synthesized images generated by respective image processing flows or methods1010and1020to generate a new 2D composite image1032, which is communicated to display105. Thus, with the breast image generation and display system100s, the same input data1001is processed in different ways according to different image processing flows to generate different 2D synthesized images, which are merged to generate a single 2D synthesized composite image1032. In the illustrated embodiment, the multi-flow image processor1000processes the same input data1001in different ways, which may be done by parallel and simultaneous image processing flows. In one embodiment, the input data1001is data of 2D projection images (Tp). In another embodiment, the input data1001is data of 3D images of a stack902of image slices918.
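The overall data flow of the multi-flow image processor1000can be sketched compactly as follows; this is illustrative Python only, and every function body is a placeholder for processing that the later figures elaborate.

    # Illustrative sketch only; every function body is a placeholder for
    # the processing described with reference to FIGS. 11-19.
    import numpy as np

    def suppress_high_density(images):    # first flow 1010: suppress markers/shadows
        return images                     # placeholder

    def enhance_high_density(images):     # second flow 1020: emphasize markers
        return images                     # placeholder

    def synthesize_2d(images):            # 2D image synthesizers 1014/1024
        return images.max(axis=0)         # projection placeholder

    def fuse(a, b):                       # image fusion/merge element 1030
        return np.maximum(a, b)           # placeholder merge rule

    def multi_flow(input_data):
        # The SAME input is processed by two different flows, and the two
        # intermediate 2D synthesized images (1016, 1026) are then fused
        # into a single 2D composite synthesized image (1032).
        img_suppressed = synthesize_2d(suppress_high_density(input_data))  # 1016
        img_enhanced = synthesize_2d(enhance_high_density(input_data))     # 1026
        return fuse(img_suppressed, img_enhanced)                          # 1032

    composite = multi_flow(np.random.rand(60, 128, 128))   # ~60 Tr slices
    print(composite.shape)                                 # (128, 128)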
Different image processing methods executed based on the type of input data received are described in further detail below. The input data1001received by the image processor is first processed in different ways, beginning with one or more image detectors1011,1021. Two image detectors1011,1021are illustrated as the beginning of respective first and second image processing flows1010,1020. Image detector1011identifies and differentiates high density elements920and other elements such as breast tissue/background910. Image detector1021identifies high density elements920. Image detectors1011,1021may operate to distinguish a high density element920from breast tissue910or other image portions based on pre-determined filters or criteria involving, for example, one or more of image contrast, brightness, and radiopacity attributes. For example, high density element920may be associated with high contrast and brightness attributes compared to breast tissue or background910and thus be identified as a high density element. Detection criteria may involve a group of pixels or adjacent pixels having common characteristics, e.g., contrast or brightness within a certain range such that the group is identified as being a high density element. Image detectors may also distinguish a high density element920from breast tissue based on shape, orientation and/or location data. For example, the image processor1000may be provided with specifications of known metallic biopsy markers. This data may be used in conjunction with image or pixel data such that when image portions having similar properties also form a shape similar to a known shape of a biopsy marker, those pixels are identified as depicting a high density element920. As another example, another factor that can be utilized to differentiate a high density element920is that skin markers are typically attached to an outer surface of the breast rather than being inserted into breast tissue. Thus, pixels having similar properties and being located at an outer surface indicative of an external skin marker are identified as a high density element920. Location data can also be a factor, e.g., if a certain marker is inserted into a particular breast tissue region. Accordingly, it will be understood that image portions corresponding to high density elements and image portions corresponding to breast tissue or background910can be differentiated or detected in various ways using various filters, criteria and/or more sophisticated algorithms such as feature-based machine learning algorithms or deep convolutional neural network algorithms. Image detector1011is in communication with a high density element suppression module1012, and image detector1021is in communication with a high density element enhancement module1022such that respective detection results are provided to respective suppression and enhancement modules1012,1022. Respective outputs of respective high density element suppression and enhancement modules1012,1022are provided as inputs to respective 2D image synthesizers1014,1024. According to one embodiment, 2D image synthesizer1014used in the first image processing flow1010and that executes on high density element suppressed image portions operates in the same manner as 2D image synthesizer104that executes object enhancement and recognition modules110a-nas discussed above with reference toFIGS.1-8B, except that 2D image synthesizer1014receives high density suppressed image data.
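By way of illustration, the brightness-threshold and adjacent-pixel-grouping criteria described above might be sketched as follows; the threshold values, minimum group size, and use of scipy are assumptions for illustration, not the actual detectors1011,1021.

    # Illustrative sketch only; thresholds, group size, and library choice
    # are assumptions for illustration, not the actual detectors 1011/1021.
    import numpy as np
    from scipy import ndimage

    def detect_high_density(slice_2d, brightness_thresh=0.95, min_pixels=4):
        # Per-pixel criterion: high density elements are far brighter than
        # breast tissue/background.
        candidate = slice_2d > brightness_thresh
        # Group adjacent pixels sharing the criterion into connected regions.
        labels, n = ndimage.label(candidate)
        mask = np.zeros_like(candidate)
        for i in range(1, n + 1):
            group = labels == i
            if group.sum() >= min_pixels:   # ignore isolated noisy pixels
                mask |= group               # keep marker-sized groups
        return mask

    tr_slice = np.random.rand(128, 128)
    tr_slice[40:44, 60:64] = 1.0            # toy "metallic biopsy marker"
    print(int(detect_high_density(tr_slice).sum()))   # roughly 16 pixels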
The 2D image synthesizer of the first image processing flow1010is thus referred to as 2D image synthesizer104supp (“supp” referring to high density element “suppressed”). Thus, 2D image synthesizer1014is configured to process high density element suppressed data while providing for breast tissue object enhancement and recognition via modules110a-n. In contrast, 2D image synthesizer1024does not involve high density element suppression or high density element suppressed data, and instead processes high density element enhanced image data while not enhancing breast tissue. In this manner, the focus of 2D image synthesizer1024is high density element920enhanced image data rather than breast tissue910enhancement such that 2D image synthesizer1024may also be referred to as 2D image synthesizer104enh (“enh” referring to high density element “enhanced”). For this purpose, the 2D image synthesizer1024may not include object enhancement and recognition modules110a-nor these object enhancement and recognition modules110a-nmay be deactivated. Thus, 2D image synthesizer1024is configured to process high density element enhanced data while breast tissue is not enhanced. The 2D image synthesizer1014/104supp outputs a 2D synthesized image1016that embodies high density element suppression and breast tissue enhancement data, and 2D image synthesizer1024/104enh outputs a different 2D synthesized image1026that embodies high density element enhancement data. These different 2D synthesized images1016,1026are provided as inputs to an image fusion or merging element1030, which combines or merges the 2D synthesized images1016,1026to generate a 2D composite synthesized image1032that incorporates elements of both of the 2D synthesized images1016,1026. Multi-flow image processing methods involving different types of input data1001and intermediate image and associated processing involving different dimensional formats and image or slice configurations are described in further detail with reference toFIGS.11-24. Referring toFIG.11, in a multi-flow image processing method1100executed by breast image acquisition and processing system100saccording to one embodiment, at1102, digital input data1001of one or more breast tissue images is fed as an input to a multi-flow or differential image processor1000of image generation and display system100ssuch as a tomosynthesis system. At1104, portions of images that depict breast tissue910and portions of images that depict high density elements920are identified or detected. A first image processing flow1010is executed on input data to generate a first 2D synthesized image1016. Referring toFIG.12, the first image processing flow or method1010, or metal suppression flow, includes enhancing image portions depicting breast tissue910at1202, whereas image portions depicting a high density element920such as a metallic biopsy marker920and/or shadow922are suppressed, replaced or eliminated at1204. At1206, a first set or stack of 3D image slices (e.g., Tr slices) embodying enhanced breast tissue and suppressed high density element image portions is constructed, and at1208, a first 2D synthesized image1016is generated based at least in part upon the first stack of 3D image slices. Referring toFIG.13, the second image processing method1020is different from the first image processing method1010and is executed on the same input data1001. The second image processing method1020generates a different, second 2D synthesized image1026.
At1302, image portions depicting high density elements920are emphasized (without breast tissue enhancement, or by deemphasizing breast tissue), and at1304, a second set of 3D image slices based at least in part upon enhanced image portions depicting high density elements is generated. At1306, the second 2D synthesized image1026is generated based at least in part upon the second set of 3D image slices. Referring again toFIG.11, at1110, the first and second 2D synthesized images1016,1026are combined or merged to generate the composite synthesized image1032, and at1112, the composite synthesized image1032is presented through display105of image generation and display system100sto a radiologist or end user. Referring toFIG.14and with further reference toFIG.15, one embodiment of a method1400for processing breast images using the system100sconfiguration shown inFIG.10and as described with reference toFIGS.11-13is described. In method1400, the multi-flow image processing method1000is executed on an input of a reconstructed 3D stack of image slices in which breast tissue910and high density elements920are both visible. Thus, in this embodiment, a stack of 3D image slices1506rather than 2D projection images1502is provided as an input1501to the image processor1500such that the multi-flow image processing method1500is not executed on 2D projection images1502. At1402, image acquisition component101(e.g., x-ray device of digital tomosynthesis system) is activated, and at1404, a plurality of 2-D images1502of the patient's breast is acquired. For example, in a tomosynthesis system, approximately 15 2D projection images Tp1502may be acquired at respective angles between the breast and the x-ray source-detector. It will be understood that 15 2D projection images are provided as an example of how many projection images may be acquired, and other numbers, greater than and less than 15, may also be utilized. At1406, if needed, the acquired or projection images1502are stored by the acquisition component101to a data store102for subsequent retrieval, which may be from a data store102that is remote relative to the image processor1000and via a communication network. At1408, 2D projection image reconstruction1504is executed to generate a 3D stack1508of image slices Tr1506(e.g., ˜60 image slices in the illustrative example). At1410, the first detector1511of the first image processing flow1510identifies portions of input 3D image slices1506depicting breast tissue910and portions of image slices1506depicting high density elements920(e.g., metallic object or calcification, or shadow) generated by imaging a high density element920in or on the breast. A second detector1521identifies a high density element920. For these purposes, the image processor1500may utilize one or more criteria or filters as described above to identify and differentiate breast tissue or background910and high density element image portions920in the 3D stack1506. Continuing with reference toFIGS.14-15, at1412, the first image processing flow1510involves high density element suppression1512of the input stack1508, the result of which is generation of a first processed 3D stack1513in which a high density element920is suppressed or eliminated. FIG.16illustrates in further detail one manner in which high density element suppression1512may be executed on the input 3D stack1508and also how an optional mask may be generated for subsequent use in generating a 2D synthesized composite image1032.
In the illustrated embodiment, the first image processing flow1510on the input1501of 3D image slices1506involves detection of portions of image slices1506depicting a high density element920such as a metallic biopsy marker at1602, segmentation or pixel identification of the detected high density element portions920at1604, and at1606, a segmentation mask may be generated based on the segmentation results. The mask may be subsequently utilized when generating a 2D synthesized composite image1032. At1608, segmented portions are suppressed or eliminated from image slices of the 3D stack for high density element suppression. This may be done by interpolation or replacing segmented portions with other sampled portions of image slice background. High density element suppression results in the elimination of high density element920image portions from the 3D stack1508of image slices such that the high density element920would not be visually perceptible to a radiologist, or visually perceptible to a lesser degree. Thus, the end result1610of the suppression process is a processed 3D stack1610of reconstruction image slices, or metal suppressed breast tissue slices, in which breast tissue image portions910are maintained while image portions of high density elements920are suppressed or eliminated, and a separate “high density mask” is also generated. FIG.17illustrates in further detail one manner in which high density element enhancement1522within the input 3D stack1508may be executed and also how an optional segmentation or pixel mask may be generated for subsequent use in generating a 2D synthesized composite image1032. In the illustrated embodiment, the second image processing flow1520on the input 3D image slices1501involves detecting portions of image slices depicting a high density element920such as a metallic biopsy marker at1702, segmentation of the detected high density element portions at1704, and at1706, a segmentation mask may be generated and may be subsequently utilized when generating a 2D synthesized composite image1032. Metal segmentation1704information may be recorded as a metal segmentation mask1706, and masks from different slices can be combined into a single 2D metal mask, which is a side output from the metal synthesizer module1524. As an example, in the case of using a binary mask, within this 2D metal mask image, the high density element regions are marked with 1 and the background or breast tissue regions are marked with 0. Different mask configurations or designs can also be utilized for these purposes by utilizing other or multiple labels rather than only binary “0” and “1” labels. At1708, segmented portions are isolated and emphasized or enhanced in image slices of the 3D stack. High density element enhancement1708may be executed using, for example, maximum intensity projection or “MIP.” The end result1710generated by the metal enhancement module is a stack1523of 3D reconstruction image slices in which breast tissue image portions910are not processed or not enhanced, and high density elements920are enhanced or emphasized. Referring again toFIGS.10and14-15, at1414, the multi-flow image processor1000executes the first 2D image synthesizer1514that receives the processed or metal suppressed stack1513of 3D image slices as an input.
The first 2D image synthesizer1514generates a 2D synthesized image1515based on the suppressed high density image portions of the metal suppressed 3D stack while enhancing or emphasizing breast tissue image portions by use of the target object recognition/enhancement modules (e.g.,110a,110b. . .110n), each configured for recognizing and enhancing a particular type of object. The first 2D image synthesizer1514may operate in the same manner as 2D image synthesizer104discussed above, except that the 2D image synthesizer receives image data resulting from high density element suppression1512. As discussed above with reference toFIGS.1-8B, target object recognition/enhancement modules110a-nare configured to identify the respective object (if any is/are present) therein such that the resulting 2D synthesized images include clinically-significant information. With continuing reference toFIGS.14-15and with further reference toFIG.18, the multi-flow image processor1000executes the second 2D image synthesizer1524/1801that receives the processed or metal enhanced stack1523of 3D image slices as an input to generate a 2D synthesized image1525/1802based on the enhanced high density image portions of the metal enhanced 3D stack while other background or breast tissue portions are maintained or not enhanced, or even deemphasized, e.g., by reducing brightness thereof. For this purpose, the second 2D image synthesizer1524does not include, or deactivates, target object recognition/enhancement modules (e.g.,110a,110b. . .110n) such that these breast tissue analyses and enhancements are not performed and are not necessary in view of a high density element920structure. For example, a metallic biopsy marker may have a less complex geometric shape (e.g., a cylinder), and is typically less complex than breast tissue. For example, rather than employing more complicated target object recognition/enhancement110a-n, the second image processing flow in which high density elements920are enhanced can deploy simple image processing algorithms such as mean-intensity projection or max-intensity projection as the base method to combine the 3D stack of metal object slices into a single metal object 2D synthetic image1802, which may be stored to a buffer. The result generated by the second 2D synthesizer1524/1801is generally illustrated by the high density object appearing as a “dot” in the 2D synthesized image1525/1802inFIG.18. FIG.18also illustrates the 2D synthetic image1802including various artifacts1810resulting from imperfections in metal detection and segmentation processes. Morphological operations1803(e.g., pixel dilation and/or erosion) can be executed on the 2D synthetic image1802to clean these artifacts1810by smoothing the high density object boundary to make the boundary in the resulting 2D image1525/1804more accurate and more visually appealing. Referring again toFIGS.14-15, having generated a first 2D synthesized image1515based at least in part upon the first stack of 3D image slices and a second 2D synthesized image1525based at least in part upon the second stack of 3D image slices, at1420, these intermediate first and second 2D synthesized images1515,1525generated by respective first and second image processing flows1510,1520are merged or combined1530to generate a 2D final or composite synthesized image1532, which is presented to a radiologist or end user via display105at1422.
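To make the two intermediate results concrete, the following sketch mirrors the suppression and enhancement steps described above under simplifying assumptions: a precomputed binary metal mask, mean-of-background replacement standing in for interpolation, and a plain maximum intensity projection for the metal-only image.

    # Illustrative sketch only; real suppression and enhancement are more
    # involved than the mean-background replacement and plain MIP used here.
    import numpy as np

    def suppress(stack, mask):
        # Replace segmented high density pixels in each slice with sampled
        # background (here simply the mean of the unmasked pixels).
        out = stack.copy()
        for z in range(stack.shape[0]):
            out[z][mask[z]] = stack[z][~mask[z]].mean()
        return out                          # marker/shadow no longer visible

    def enhance(stack, mask):
        # Isolate the segmented high density pixels and collapse the stack
        # with maximum intensity projection (MIP).
        return np.where(mask, stack, 0.0).max(axis=0)

    stack = np.random.rand(60, 64, 64) * 0.5
    mask = np.zeros(stack.shape, dtype=bool)
    stack[30:32, 20:24, 20:24] = 1.0        # toy metallic biopsy marker
    mask[30:32, 20:24, 20:24] = True
    suppressed_stack = suppress(stack, mask)   # input to the first synthesizer
    metal_image_2d = enhance(stack, mask)      # metal-only 2D synthetic image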
According to one embodiment, image combination1530may involve selecting the best signals of 2D synthetic image data from each synthetic image buffer and ensuring that the transition between the breast tissue910and the high density element920is seamless. The 2D composite synthesized image1532is visually free of shadow artifacts922such that unwanted shadow artifacts do not obscure clinically important information while also including enhanced breast tissue or background and sharp delineations between breast tissue910and high density elements920. Referring toFIG.19, according to one embodiment of combining1030the first and second 2D synthesized images1515,1525, the 2D metal mask1900generated by segmentation as discussed above may be utilized for modulated combination1902or maximum intensity projection or “MIP” combination of the intermediate first and second 2D synthesized images1515,1525to generate the 2D composite synthesized image1532. This embodiment essentially extracts the signals or image portions from each 2D synthetic image1515,1525buffer for seamless transition between breast tissue910and high density elements920such that the resulting 2D composite image1032is visually sharp, free of high density shadow elements while providing for optimal breast tissue background. FIGS.20A-Billustrate an example of how multi-flow image processing embodiments can be executed to generate a 2D synthesized composite image1032that is visually sharp and clear with reduced or eliminated shadow922artifacts.FIG.20Billustrates a 2D synthesized image1032that is constructed according to multi-flow image processing of embodiments that eliminates obscuring shadow artifacts922compared toFIG.20A, which includes various shadow artifacts922around the metallic biopsy marker920. The final result of a 2D synthesis composite image1032generated according to embodiments is sharp and free of shadows922while breast tissue or background910is also enhanced. Certain embodiments described above with reference toFIGS.10-20Binvolve the multi-flow image processor1500receiving reconstructed or generated images or a 3D stack1508of image slices1506(e.g., ˜60 reconstructed slices) as an input1001such that multi-flow image processing is executed on the same 3D stack1508. The 3D stack1508is generated based on acquired 2D projection images1502, which are not provided as an input1001to the image processor1000in these embodiments. Thus, the multi-flow image processing is not executed on the 2D projection images1502in these embodiments. In other words, the multi-flow image processing is executed directly on the 3D stack1508of image slices, but not the 2D projection images1502upon which the 3D stack1508of image slices is based, and the multi-flow image processing is executed after reconstruction1504. Other embodiments may involve the image processor1000receiving inputs of different image types and dimensional formats. For example, in other embodiments, the multi-flow image processor receives an input of 2D projection images such that the multi-flow image processing is executed directly on the 2D projection images rather than the 3D stack of image slices that is eventually generated after reconstruction. Different 3D stacks of image slices are provided as respective inputs to respective 2D image synthesizers after suppression and enhancement processing has been executed on 2D projection images.
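The modulated combination described above with reference toFIG.19can be sketched as a mask-weighted blend; this is a simplifying assumption, as the actual combination may select the best signals from each buffer in other ways.

    # Illustrative sketch only: a mask-modulated merge of the two images.
    import numpy as np

    def modulated_combine(img_suppressed, img_enhanced, metal_mask_2d):
        # 2D metal mask convention from above: 1 marks high density regions,
        # 0 marks breast tissue/background. A softened (blurred) mask would
        # further smooth the transition between the two buffers.
        w = metal_mask_2d.astype(float)
        return w * img_enhanced + (1.0 - w) * img_suppressed

    img_suppressed = np.random.rand(64, 64) * 0.5   # tissue image, metal removed
    img_enhanced = np.zeros((64, 64))
    img_enhanced[20:24, 20:24] = 1.0                # enhanced marker "dot"
    metal_mask = img_enhanced > 0                   # binary 2D metal mask
    composite = modulated_combine(img_suppressed, img_enhanced, metal_mask)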
To summarize, in certain embodiments, high density element suppression and enhancement occur after reconstruction1504of a 3D stack1508of image slices1506, whereas in other embodiments, high density element suppression and enhancement occur before reconstruction of a 3D stack of image slices. Alternative embodiments of multi-flow image processing involving execution of image processing embodiments using 2D projection images as an input to the image processor are described with reference toFIGS.21-24. System components and their interoperability described above are not repeated. Referring toFIGS.21-22, in an image processing method2100according to another embodiment, at2102, image acquisition component101(e.g., x-ray device) of the image generation and display system100sis activated, and at2104, a plurality of 2-D images of the patient's breast2201(e.g., ˜15 projection images at respective angles between the breast and the x-ray source-detector) is acquired. At2106, 2D projection images2201are stored to a data store102, and at2108, digital image data of the 2-D projection images2201is received from the data store102and provided as an input to the multi-flow image processor2200of image generation and display system100s. At2110, a first detection module2211identifies portions of individual 2D projection images2201depicting breast tissue910and portions of individual 2D projection images2201depicting high density elements920(e.g., metallic biopsy marker or shadow) generated by imaging the metallic biopsy marker, and a second detection module2221identifies portions of individual 2D projection images2201depicting high density elements920(e.g., metallic object or shadow) generated by imaging a high density object in or on the breast. At2112, the first image processing method or flow2210including high density element suppression2212is executed on the input 2D projection images2201to generate processed/high density element suppressed 2D projection images2213, and at2114, the second image processing method or flow2220including high density element enhancement2222is executed on the input 2D projection images2201to generate processed/high density element enhanced 2D projection images2223. In certain embodiments, all of the input 2D projection images2201are suppressed in some way, whereas in other embodiments, only certain input 2D projection images2201are subjected to high density suppression2212, e.g., only those determined to include at least a portion of a high density element920. Thus, in certain embodiments, high density suppression2212and high density enhancement2222are both executed before any image reconstruction into a 3D stack of image slices. Further, in one embodiment, each input 2D projection image2201is processed such that the number of processed 2D projection images2213,2223is the same as the number of input 2D projection images2201, but it will be understood that embodiments are not so limited. For example, the number of input 2D projection images2201that are subjected to high density element suppression2212and enhancement2222may be less than the number of input 2D projection images2201if only those input 2D projection images2201that are determined to include a high density element920are processed. Thus, for example, image acquisition may result in 15 input 2D projection images2201, only eight of which contain at least a portion of a high density element920, in which case only those eight input 2D projection images2201are processed for high density element suppression2212and enhancement2222.
The remaining seven input 2D projection images2201may be rejoined with the eight that were processed to form a set of 15 projection images prior to reconstruction and generation of a 3D stack. Accordingly, high density element suppression2212and enhancement2222may be executed before any 3D image reconstruction, on all of the 2D projection images2201of the input set, or on selected 2D projection images2201of the input set, e.g., those determined to contain high density elements by detector2211, since a metallic object920or shadow922generated thereby may not be present in certain images depending on the high density element size, location, and orientation, and its position relative to a radiation source and detector used for imaging. Moreover, the number of processed 2D projection images2213,2223following suppression2212and enhancement2222may be the same as the number of input 2D projection images2201even if only some of the input 2D projection images2201are processed since unprocessed input 2D projection images2201may be added to the processed set. Continuing with reference toFIGS.21-22, having generated the processed set of 2D projection images2213,2223, at2116, a first stack2214of 3D image slices (e.g., ˜60 image slices) is generated based at least in part upon the first set of processed 2D projection images2213(e.g., ˜15 images) involving high density element suppression2212, and at2118, a second stack2224of 3D image slices is generated based at least in part upon the second set of processed 2D projection images2223involving high density element enhancement2222. Having constructed the first and second stacks of 3D images2214,2224, these stacks are then processed at2120,2122by respective 2D image synthesizers2215,2225to generate respective first and second 2D synthesized images2216,2226based at least in part upon respective first and second stacks2214,2224. At2124, morphological operations may be executed on the second 2D synthesized image2226as necessary to dilate or erode image edges of enhanced image portions depicting high density elements, and at2126, the first and second 2D synthesized images2216,2226are merged or combined2230to generate a 2D composite image2232, which is presented to the radiologist or end user via a display105. FIGS.23-24further illustrate how respective suppression2212and enhancement2222processing are executed, and are similar to the processing described with reference toFIGS.15-17above except that the detection, segmentation and suppression (FIG.16) and enhancement (FIG.17) are based on inputs of individual input 2D projection images1502rather than on an input 3D stack of image slices, the resulting masks2306,2406generated by segmentation are masks for individual images as shown inFIGS.23-24rather than for a stack of 3D image slices as shown inFIGS.15-17, and the result or output of the suppression and enhancement processing is a suppressed or enhanced processed 2D projection image as shown inFIG.22rather than an output of a high density element suppressed 3D stack. Having described exemplary embodiments, it can be appreciated that the examples described above and depicted in the accompanying figures are only illustrative, and that other embodiments and examples also are encompassed within the scope of the appended claims. For example, while the flow diagrams provided in the accompanying figures are illustrative of exemplary steps, the overall image merge process may be achieved in a variety of manners using other data merge methods known in the art.
The system block diagrams are similarly representative only, illustrating functional delineations that are not to be viewed as limiting requirements of the disclosed inventions. It will also be apparent to those skilled in the art that various changes and modifications may be made to the depicted and/or described embodiments (e.g., the dimensions of various parts), without departing from the scope of the disclosed inventions, which is to be defined only by the following claims and their equivalents. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. | 79,945 |
11857359 | DETAILED DESCRIPTION The following discussion describes in detail one embodiment of the invention (and several variations of that embodiment). However, this discussion should not be construed as limiting the invention to those particular embodiments. Practitioners skilled in the art will recognize numerous other embodiments as well. For definition of the complete scope of the invention, the reader is directed to the appended claims. Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware and hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer. A novel ultrafast 3D digital imaging system with multiple pulsed X-ray sources that deflect the tube electron beam using an electrical field is shown inFIG.1. It comprises a primary motor3engaged with a primary motor stage4, multiple X-ray tubes6, each in an X-ray source tube housing5, and a pair of deflection electrical plates7on each side of each tube6. The X-ray source tube housings5are mounted on a supporting frame structure2, all of which move together on the primary motor stage4. By applying a voltage difference at the electrical deflection plates7, an electrical field will be created between the electrical deflection plates7as shown inFIG.5. The strength of the electrical field varies with the voltage applied. The deflection of the electron beam of the X-ray tube can be achieved by using a deflection magnetic field as shown inFIG.4. A primary motor3mechanically engages with a primary motor stage4to control the speed of the primary motor stage4. X-ray sources move in arcs at the same speed as the primary motor stage4, with the primary motor3being on one side of the primary motor stage4. A supporting frame structure2provides housing for the primary motor stage4and X-ray sources. A flat panel detector1receives X-ray imaging data. A pair of deflection plates7or a magnetic coil8yoke produces an electrical field or a magnetic field at the X-ray tube electron beam9. The multiple pulsed X-ray sources or X-ray tubes6are mounted on the primary motor stage4to form an array of sources. The multiple X-ray sources move simultaneously relative to an object on a pre-defined arc track at a constant speed as a group. Electron beam9inside each individual X-ray tube can be deflected by a magnetic or electrical field to move the focal spot a small distance. When the focal spot of an X-ray tube beam has a speed that is equal to the group speed but with opposite moving direction, the X-ray tube6and X-ray flat panel detector1are activated through an external exposure control unit so that the source tube equivalently stays momentarily standstill.
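The triggering condition can be summarized numerically: during the exposure, the deflection must sweep the focal spot with a velocity equal in magnitude and opposite in direction to the stage velocity, so that the focal spot is momentarily stationary in the room frame. A minimal sketch with hypothetical values:

    # Illustrative sketch only; all values are hypothetical, not device specs.
    stage_speed = 30.0                  # mm/s, primary motor stage group speed
    pulse_duration = 5e-3               # s, X-ray pulse width at trigger

    # The focal spot must move at -stage_speed (relative to the moving tube)
    # so that its net speed in the room frame is zero during the pulse.
    focal_spot_speed = -stage_speed     # mm/s, equal magnitude, opposite sign
    net_speed = stage_speed + focal_spot_speed       # 0.0 => standstill
    deflection_travel = abs(focal_spot_speed) * pulse_duration
    print(net_speed, deflection_travel)              # 0.0 mm/s, 0.15 mm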
With the multiple sources or X-ray tubes6working in parallel, the system only moves a fraction of the distance that a single tube system has to move. As a result, the 3D scan can cover a much wider sweep angle in a much shorter time, and image analysis can also be done in real-time. To power the structure in motion, the primary motor3engages primary motor stage4by gears. Primary motor3can move primary motor stage4along a rigid rail at a predetermined constant speed. By applying a voltage to a pair of deflection electrical plates7at X-ray tube6, the electrons can be deflected before they reach the X-ray tube target11. By fine-tuning the voltage, the electron focal spot can move along the direction of primary motion stage4. When the focal spot speed is equal to that of the primary motion stage4and has an opposite direction, then X-ray tube6and X-ray flat panel detector1are triggered. At this trigger moment, X-ray tube6and X-ray detector1actually have a relative standstill position. The primary motion stage4with the X-ray source(s) is moved on an arc rail with a predetermined shape, and the X-ray source(s) is moved on said primary motion stage4at a constant speed by said primary motor3. Multiple X-ray sources are mounted on said primary motion stage4in the form of an array of sources. The multiple X-ray sources move simultaneously around an object on a pre-defined track at a constant speed as a group. The focal spot of the X-ray source can also move rapidly around its static position over a small distance. When an X-ray tube focal spot on an individual X-ray source has a speed equal to the group speed but an opposite moving direction, the respective X-ray source is triggered through an external exposure control unit. This arrangement allows the X-ray source to stay relatively standstill during the X-ray pulse trigger exposure duration. Multiple X-ray sources result in a much-reduced source travel distance for individual X-ray sources. A flat panel detector1is placed on a supporting frame structure to receive X-ray imaging data. A pair of deflection plates7or a magnetic coil8yoke is positioned to produce an electrical field or a magnetic field at an X-ray tube electron beam9. Multiple X-ray tubes6in an array and the detector will be mechanically moved in a predetermined arc track by a primary motor stage4. A set of multiple X-ray tubes can be connected to a primary motor stage4via a rack and pinion type mechanical structure or fixed on a plurality of bases with a fixed distance between each other. The X-ray tube focal spot is deflected in one direction and the opposite direction by an electric field or magnetic field. While moving on the arc track, individual X-ray tube focal spots would move rapidly around their static positions under a deflecting electric field or magnetic field. X-rays from one of the sources can be randomly activated through the control unit, such that 3D radiography image data acquisition and image analysis can be made in real-time while the scan goes.
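As a rough worked example of the reduced travel distance (all numbers hypothetical): if the total sweep angle is divided evenly among N sources moving as a group, each source need only cover 1/N of the arc, and the scan time shrinks accordingly.

    # Illustrative sketch only; angles and speed are hypothetical.
    total_sweep_deg = 30.0      # overall 3D scan sweep angle
    n_sources = 5               # X-ray tubes mounted in the array
    group_speed_deg = 10.0      # deg/s, arc speed of the primary motor stage

    per_source_sweep = total_sweep_deg / n_sources        # 6.0 deg per tube
    scan_time = per_source_sweep / group_speed_deg        # 0.6 s
    single_tube_time = total_sweep_deg / group_speed_deg  # 3.0 s for one tube
    print(per_source_sweep, scan_time, single_tube_time)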
A preferred method to trigger multiple pulsed X-ray sources in motion includes positioning a primary motor stage4to a predetermined initial location; sweeping the primary motor stage at a predetermined constant speed by said primary motor3; deflecting X-ray tube electron beam9with a predetermined sequence by applying a voltage to the deflection plates or by applying current to the magnetic coils; electrically activating an X-ray source and a flat panel detector1when an X-ray tube focal spot moves in an opposite direction to that of the primary motor stage4and at a selected speed of the primary motor stage4; and acquiring image data from an X-ray flat panel detector1. X-ray source tube housing5is pivotally mounted on an axis parallel to the X-ray source mounting plate and coupled to a rotation driving mechanism that causes the X-ray source tube housing to rotate around an axis parallel to the X-ray source mounting plate. The angle of rotation is designated by an angle “β” in this application. The amount of angle of rotation can be set by the user based on specific requirements. In an exemplary embodiment, the angle of rotation is about 12.5 degrees. A single rotation driver couples the X-ray source tube housing5to a geared motor for rotating X-ray source tube housing5. The rotation driving mechanism comprises two pairs of pulleys, wherein each pair of pulleys is mounted on each end of X-ray source tube housing5and is coupled by drive gears. The pulleys drive the gears and rotate X-ray source tube housing5when the rotation driver is activated by software. A preferred speed range of the X-ray tube housing5is from about 20 mm/s to about 50 mm/s. A pair of deflection electrical plates7is disposed along an arc between the X-ray source and an X-ray flat panel detector1. The deflection electrical plates7are adjusted to a position where the X-ray source and the flat panel detector1are not in line. When the arcuate shape is pre-defined, and the X-ray source is mechanically moved in a circular motion around its focal point in accordance with a speed control unit that controls the speed of the primary motor3in conjunction with the X-ray exposure control unit that controls the time duration of the X-ray output from the X-ray source through a trigger signal generated from a trigger source, the X-ray source will trace out a curve in 3D space on the detector. At the same time, the X-ray source will also trace out a corresponding curve on the detector with some degree of rotation in 3D space. Image data can be reconstructed with knowledge of the target object structure at each location that the X-ray source moves through. Knowledge of the target object geometry can be calculated with pre-measured landmarks and image processing tools to yield accurate geometric modeling of the interior body structures within the patient. One embodiment includes a two-dimensional camera or a set of 3D cameras to detect patient movement during the procedure and compensate for the patient movement in real-time. A supporting frame structure2provides housing for the X-ray source moving mechanism. An arc rail, which can be part of a single axis motion stage, is provided to move along a circular track in one direction. An electronic controller (not shown) allows the speed of the arc rail to be precisely controlled. A plurality of pulsed X-ray sources is mounted on the moving mechanism in an array around the periphery of the arc rail. For the X-ray sources, any suitable type of X-ray tube may be used.
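The trigger timing itself can be illustrated with a toy simulation (hypothetical deflection waveform and stage speed, not device parameters): the exposure fires at the instant the focal spot velocity in the tube frame cancels the stage velocity.

    # Illustrative sketch only: a toy deflection waveform and stage speed.
    import numpy as np

    stage_speed = 30.0                       # mm/s, constant sweep speed
    t = np.linspace(0.0, 0.02, 2001)         # s, one deflection cycle
    focal_pos = 0.3 * np.sin(2 * np.pi * 50 * t)   # mm, focal spot in tube frame
    focal_vel = np.gradient(focal_pos, t)    # mm/s, focal spot velocity

    # Fire the pulse where the focal spot moves opposite to the stage at the
    # same speed, i.e., where it is momentarily standstill in the room frame.
    trigger_idx = np.argmin(np.abs(focal_vel + stage_speed))
    print(t[trigger_idx], focal_vel[trigger_idx])    # trigger time, ~-30 mm/s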
The arc rail and its related structure can be moved smoothly on the supporting frame structure2at high speed and with minimal friction. Each X-ray source is triggered when it comes into position relative to a patient during a sweep. Each X-ray source must be positioned so that the focal spot of the X-ray source does not irradiate any portion of the patient until the X-ray source is triggered. In one embodiment, the system uses multiple pulsed X-ray sources in motion to perform ultrafast, highly efficient 3D radiography. In the system, multiple pulsed X-ray sources are mounted on a structure in motion to form an array of sources. The multiple X-ray sources move simultaneously around an object on a pre-defined track at a constant speed as a group. The X-ray tube focal spot at each X-ray source can also move rapidly around its static position over a small distance by a deflection electrical field or deflection magnetic field. When the X-ray tube focal spot of an individual X-ray source has a speed equal to the group speed but an opposite moving direction, the respective X-ray source is triggered through an external exposure control unit. This arrangement allows the X-ray source to remain relatively stationary during the X-ray pulse trigger exposure duration. Multiple X-ray sources result in a much-reduced travel distance for each individual X-ray source. The X-ray receptor is an X-ray flat panel detector1. As a result, 3D radiography image projection data can be acquired over a much broader sweep in a much shorter time, and image analysis can also be done in real-time while the scan goes. More details of the ultrafast 3D digital imaging system with multiple pulsed X-ray sources that deflect the tube electron beam9using a deflection magnetic field are shown inFIG.2, which shows one of the multiple X-ray sources, in which a pair of magnetic deflection coils8is placed at an X-ray tube6inside an X-ray source tube housing5. By applying current to the magnetic deflection coils8, a magnetic field is created between the pair of magnetic deflection coils8. The strength of the magnetic field varies with the current flowing through the magnetic coil. During operation, primary motor3engages primary stage4by gears to provide motion for the X-ray sources in the housings5. Primary motor3can move primary stage4along the rigid rail at a predetermined constant speed. By applying current to the pair of magnetic deflection coils8at X-ray tube6, the X-ray tube electron beam9can be deflected by force from the magnetic field before the electrons reach the X-ray tube target. By fine-tuning the current, the electron focal spot can be moved along the direction of primary motor stage4. When the X-ray tube focal spot speed is equal to the speed of primary motion stage4but in the opposite direction, X-ray tube6and X-ray detector1are triggered. At this trigger moment, X-ray tube6and X-ray detector1are effectively at a standstill relative to each other. The X-ray tube6is the heart of the X-ray machine. The X-ray tube6has high voltage terminals connected to an external high voltage power supply through electrical wires. The X-ray tube6produces a current flow along an electron gun column in a vacuum container inside the X-ray tube6. A pair of magnetic deflection coils8is used to adjust the beam of the X-ray tube6. The X-ray tube6or source could approximate a point source; a smaller focal spot size is desirable because it yields better image resolution.
A spectrally filtered X-ray tube is desirable to produce an X-ray beam of the desired energy range. A tube-mount assembly provides an electrical and mechanical connection between the X-ray tube6and the primary motor stage4. The tube-mount assembly has two, three, or more levels of metal to shield against electrical interference. Front and back covers could provide shielding against ambient radiation and airborne particles, respectively. The X-ray source tube housing5with an X-ray tube6mounted inside is moveable on the primary motor stage4. The X-ray source tube housing5is mounted on the primary motor stage4, which moves freely on an arc rail at a predetermined constant speed; a primary motor3controls the speed of the primary motor stage4; and the multiple X-ray sources (one of which is housed in the X-ray source tube housing5) are all moved simultaneously at the same speed as the primary motor stage4. An X-ray flat panel detector1receives X-rays and sends imaging data. It is mounted around the rotation center of the primary motor stage4to receive the X-ray beam transmitted through a portion of the object under test placed at the rotation center. An array of five X-ray sources is mounted on the X-ray source tube housing5at equal angles to each other; they move together with the primary motor stage4at a constant speed. A collimator positioned between the X-ray source tube housing5and the flat panel X-ray detector1along the axis of motion of the primary motor stage4can limit the horizontal component of the X-ray beam passing through. A supporting frame structure2provides housing for the primary motor stage4and an electrical field deflection device such as a pair of deflection plates7. Primary motor3provides driving motion to move the primary motor stage4on a predetermined track. A plurality of X-ray sources is mounted on the primary motor stage4for emitting X-rays sequentially. The X-ray sources are arranged in an array configuration, each X-ray source moving simultaneously with the others on the primary motor stage along the same path, at a constant speed, as a group. A flat panel detector1is usually mounted on the supporting frame structure2to receive X-rays and send imaging data. A pair of electrical deflection plates7or a magnetic deflection coil8yoke is located in front of the X-ray tube target11to control the position of the X-ray source focal spot. The technical features of an X-ray imaging system using multiple pulsed X-ray sources in motion to perform ultrafast, highly efficient 3D radiography are as follows. The first technical feature is that the primary motor stage4is moved on a predetermined track. Each X-ray source is moved with the primary motor stage4on the predetermined track, the X-ray sources moving simultaneously with the others on the primary motor stage4along the same path, at a constant speed, as a group. Primary motor stage4may be moved freely on an arc rail with a predetermined shape. A primary motor3that engages with the primary motor stage4controls the speed of the primary motor stage4. In certain implementations, X-ray sources may be mounted on the primary motor stage4and move simultaneously around an object on a pre-defined track at a constant speed as a group. The X-ray tube focal spot at each X-ray source can also move rapidly around its static position over a small distance by a deflection electrical field or deflection magnetic field.
When the X-ray tube focal spot of an individual X-ray source has a speed equal to the group speed but an opposite moving direction, the respective X-ray source is triggered through an external exposure control unit. This arrangement allows the X-ray source to remain relatively stationary during the X-ray pulse trigger exposure duration. Multiple X-ray sources result in a much-reduced travel distance for each individual X-ray source. The X-ray receptor is an X-ray flat panel detector1. As a result, 3D radiography image projection data can be acquired over a much broader sweep in a much shorter time, and image analysis can also be done in real-time while the scan goes. Primary motion stage4is mounted on the fixed base structure of a frame and is placed so that it can move freely on an arc rail with a predetermined shape. A primary motor3drives the primary motor stage4. The primary motor speed controller controls the speed of the primary motor stage based on the desired movement time and input from the computer system (or programmed timing) during a scan. A power supply is connected to the primary motor to provide electricity for primary motor operation. The primary motor stage4is a drive element that moves in a sweeping motion along the rail of the base structure in one direction at a constant speed, controlled by the primary motor3. The center of the primary motor stage4is a cylinder supporting an X-ray tube and a high voltage generator. The X-ray flat panel detector1belongs to an X-ray imaging system that provides fast 3D radiography using multiple pulsed X-ray sources in motion, the system comprising: a primary motor stage4moving freely on an arc rail with a predetermined shape; a primary motor3that engages with said primary motor stage4and controls a speed of the primary motor stage4; a plurality of X-ray sources each moved on the primary motor stage4; a supporting frame structure2that provides housing for the primary motor stage4; a flat panel detector1to receive X-ray imaging data; and a pair of deflection plates7to produce an electrical field, or a pair of magnetic coil8yokes to produce a magnetic field, at the X-ray tube electron beam9. A method of fast 3D radiography using multiple pulsed X-ray sources in motion includes positioning a primary motor stage4at a predetermined initial location; sweeping the primary motor stage at a predetermined constant speed by said primary motor3; deflecting the X-ray tube electron beam9in a predetermined sequence by applying a voltage to the deflection plates or a current to the magnetic coil8; electrically activating an X-ray source and a flat panel detector when the X-ray tube focal spot moves in the direction opposite to that of the primary motor stage4and at a selected speed matching the primary motor stage4; and acquiring image data from the flat panel detector. In one embodiment, an X-ray imaging system using multiple pulsed X-ray sources in motion to perform ultrafast, highly efficient 3D radiography is presented. FIG.3illustrates an exemplary complete X-ray exposure position. In this example, there are five X-ray tubes6in X-ray source tube housing5, and the five X-ray tubes6perform 25 total X-ray exposures at different angular positions. Each of the five X-ray tubes6only needs to travel one-fifth of the total covered angle. Therefore, with multiple X-ray tubes6working in parallel, a large amount of projection data can be acquired in a fraction of the time required by a single X-ray source. An X-ray flat panel detector1serves as the X-ray receiver.
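The 25-exposure example reduces to simple arithmetic: with N tubes spaced evenly across the sweep, each tube covers 1/N of the total angle and fires at M positions within its own sub-arc. A small sketch follows; the 60-degree total sweep is chosen purely for illustration and is not specified in this disclosure.

```python
def exposure_angles(total_sweep_deg: float, n_tubes: int, shots_per_tube: int):
    """Angular positions at which each tube fires, assuming the tubes are evenly
    spaced and each sweeps only its own 1/n_tubes share of the total arc."""
    sub_arc = total_sweep_deg / n_tubes
    step = sub_arc / shots_per_tube
    return [
        [tube * sub_arc + shot * step for shot in range(shots_per_tube)]
        for tube in range(n_tubes)
    ]

# Five tubes, five shots each: 25 projections in total, with each tube
# mechanically traveling only one-fifth of the (hypothetical) 60-degree sweep.
angles = exposure_angles(60.0, n_tubes=5, shots_per_tube=5)
assert sum(len(a) for a in angles) == 25
assert angles[0][-1] - angles[0][0] < 60.0 / 5
```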
Electronic signals always travel faster than mechanical motion, so the limiting bottleneck is the motor stage motion itself. The next bottleneck is the detector readout, because the detector needs time to read out many megapixels of data and then transfer them to a computer. Given the widely available superfast computers, image analysis can be done in real-time alongside image acquisition. Judgment on the images taken will have an impact on the X-ray tube6position for the next shot. There is no need to wait until the whole image acquisition is finished to perform image reconstruction.FIG.3illustrates that a five-X-ray-source system takes 25 sets of projection data by traveling only one-fifth of the total distance. X-ray tubes6are a group of multiple pulsed X-ray sources forming an array of sources, or multiple groups of pulsed X-ray sources mounted on a structure in motion forming an array of sources. The multiple X-ray sources move simultaneously relative to an object on a pre-defined arc track at a constant speed as a group. The focal spot at each X-ray source can also move rapidly around its static position over a small distance. When the X-ray tube focal spot of a respective X-ray source has a speed equal to the group speed but an opposite moving direction, the X-ray source and an X-ray detector are activated through an external exposure control unit. This arrangement allows the X-ray source to remain relatively stationary during the X-ray source activation and X-ray detector exposure. The X-ray receptor is an X-ray flat panel detector. The first advantage is that the system overall is several times faster. Each X-ray source only needs to mechanically travel a small fraction of the whole distance along an arc trajectory. This greatly reduces the amount of data acquisition time needed for a patient at the X-ray diagnosis machine. The second advantage is that image analysis can also be done in real-time as the scan goes. Judgment on the images taken will impact the X-ray source focal spot position for the next shot. X-ray source tube housing5contains a primary X-ray tube6powered by a high voltage generator. In this patent, only one primary X-ray tube source is described, but it should be understood that more than one source may be used at the same time to acquire different parts of a 3D image data set. Primary X-ray tube housing5is mounted on a moveable structure (not shown) with a motor control system to enable movement in any direction on an arc rail that forms a part of a circular arc track or helical motion track around the patient. The high voltage generator outputs a high voltage electrical current that flows through power cables to the input connectors on the backside of the primary X-ray tube. In addition, the X-ray source can have a separate voltage controller that can adjust the output voltage both for the high voltage primary X-ray tube and for the individual X-ray tube focal spot moving voltage that controls the focal spot position around the static position. Multiple X-ray tubes6are arranged in an array. The X-ray tubes6are mounted on a structure moved relative to the object by a primary motor on an arc rail in the exemplary embodiment. A sequence of moving the structure of the X-ray tubes at a constant speed and a slight angle is pre-defined to generate an array of the X-ray tubes in motion simultaneously around the object.
At each time point, the direction of the movement of the X-ray tube, the distance between adjacent X-ray tubes, and the time delay between adjacent X-ray tubes can be determined to form an arc trajectory for all the X-ray tubes. At a certain timing point, each X-ray tube's focal spot (electron beam) is moved around its static position by a predetermined electrical field or magnetic field from a deflection plate. A high voltage supply can produce a deflection electrical field at a pair of deflection plates that deflects the focal spot of the X-ray tube from its static position by a predetermined distance to a new location that forms a predetermined geometric shape, as illustrated inFIG.5. X-ray flat panel detector1is used as the detector of the X-ray imaging system. X-ray flat panel detector1comprises a plurality of individual detector panels arranged in two dimensions to form a square or rectangular shape and can be sensitive to X-rays. The X-ray flat panel detector1is an ultra-high speed, high efficiency, active pixel sensor flat panel detector1with a fast readout capability. X-ray flat panel detector1can provide images at frame rates higher than 25 fps. In the X-ray flat panel detector1, each detector panel can be individually addressed for readout by an address unit through a panel driver. The X-ray tube6is located inside the X-ray machine, with the X-ray tube focal spot moved by a deflection electrical field or deflection magnetic field from the X-ray controller. The X-ray controller triggers X-ray tube6activation by passing a trigger signal to the X-ray power supply. A pair of magnetic deflection coils8is positioned near the X-ray tube inFIG.4. The X-ray sources have a focal spot that can be moved by the magnetic field generated by the coils. FIG.4illustrates that an electron beam in an X-ray tube can be deflected by magnetic coils8when current flows through the coils. X-ray tube6can be fixed on a supporting frame or have an electrically actuated mechanism to enable X-ray focal spot positioning. In the latter case, the X-ray tube can move with a primary motor stage4, with the primary motor engaged to rotate a rotor shaft, thus controlling the speed of the primary motor stage4. On the other hand, a deflection plate is mounted on a platform at an equal distance from the X-ray sources. The plate is part of a voltage/current drive system that can generate an electrical field or magnetic field. The plate is driven by a control board that receives commands from the exposure control unit via a digital interface. The voltage from the exposure control unit passes through a converter to energize the electrical coils of the magnetic coil, which generates a magnetic field surrounding the X-ray tube. An exposure control unit controls a magnetic coil through a magnetic coil driver. After starting a scan sequence, the control board triggers individual pulsed X-ray sources through a signal cable with digital signals generated by the exposure control unit. Upon receiving a trigger command, the respective pulsed X-ray sources begin a ramp-up operation to increase power output. In an embodiment, the X-ray source in the X-ray tube is stationary at the beginning of the operation, but when it is time to expose the X-ray receptor (flat panel detector), its focal spot moves in the opposite direction from the primary motor stage4at a selected speed while being fired.
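The fast-readout requirement cited above (frame rates higher than 25 fps) is bounded by simple arithmetic: the sustainable frame rate cannot exceed what the readout link can carry. A back-of-the-envelope sketch follows; the panel resolution, bit depth, and link rate are assumptions for illustration only.

```python
def max_frame_rate(width_px: int, height_px: int, bits_per_px: int,
                   link_gbps: float) -> float:
    """Upper bound on detector frame rate imposed by the readout link."""
    bits_per_frame = width_px * height_px * bits_per_px
    return link_gbps * 1e9 / bits_per_frame

# A hypothetical 2000x2000 panel at 16 bits/pixel over a 2 Gb/s link gives
# 64 Mb/frame, i.e. roughly 31 fps -- consistent with a readout-limited
# panel specified at frame rates higher than 25 fps.
print(round(max_frame_rate(2000, 2000, 16, 2.0), 1))
```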
In another embodiment, the X-ray source can be turned on randomly by triggering either from outside the system or from inside the system using a random firing scheme. The results of each analysis, and the accumulated analyses, determine the next X-ray source and exposure condition. 3D X-ray radiography images are reconstructed based on each image with the angled geometry of the X-ray exposure source. Much broader applications include 3D mammography or tomosynthesis, chest 3D radiography for COVID or fast 3D IDT, and fast 3D X-ray security inspection. X-ray tube electron beam9and flat-panel detector1are positioned in a parallel arrangement with each other. The X-ray tube focal spot moves around its static position over a small distance on the X-ray tube anode due to the magnetic field produced by the magnetic coil yoke or the electrical field produced by the deflector plates, depending on the design. At each instance of the X-ray tube focal spot's movement around its origin, the X-ray beam projected onto a region of interest is determined by the X-ray tube current intensity, which changes rapidly due to switching within a certain duration. The X-ray imaging system also includes a controlling unit with a random firing switch module (FFW) and an exposure control unit (ECU) for operating the X-ray sources. The random firing switch module is connected to all X-ray sources. It randomly fires one of the X-ray sources at a time with an externally generated trigger signal. Thus, the activation of each X-ray source and image acquisition occur simultaneously. When one of the X-ray sources is activated, an associated electronic unit that is connected to a flat panel detector (units in total) controls the electronic trigger signal applied to the flat panel detector so that the acquisition of X-ray imaging data begins simultaneously with the activation of the X-ray source. FIG.5illustrates an exemplary deflection of an electron beam in an X-ray tube by an electric plate pair through a voltage difference. X-ray tube6has a focal spot with a finite size at an initial position that changes after an electrical field or magnetic field has deflected the X-ray tube focal spot to a subsequent position. The focal spots of the multiple X-ray sources are moved at the same speed as the primary motor stage but in the direction opposite to that of the primary motor stage on the predetermined track. They are also positioned, relative to the first X-ray source and second X-ray source, at a selected angle to each other. In this example, four pulsed X-ray sources move simultaneously relative to the object at a constant speed as a group and fire sequentially from one to another as they sweep across the object. A corresponding detector is mounted on the opposite side of the primary motor stage4, parallel to the movement direction of the pulsed X-ray sources. Electrical deflection plate7is located between the X-ray tube cathode and target. Electrical voltage pulses are applied to the electrical deflection plate7to control the movement of the focal spot relative to the X-ray source along a tracking axis, as shown in the schematic diagram. Other electrical voltage pulses are applied to the electrical deflection plate7to control the movement of the focal spot along a direction perpendicular to the track axis, which results in moving the focal spot back and forth along the track axis in synchrony with the other movements on the track axis. The magnetic coil is mounted between the X-ray tube cathode and target and is also used to control the movement of the focal spot.
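The random firing scheme amounts to picking the next source at random (or from accumulated analysis) and gating the detector on the same trigger edge so acquisition starts with the pulse. A sketch follows; `tubes`, `detector`, and `analyzer` are hypothetical placeholders, not interfaces defined in this disclosure.

```python
import random

def random_firing_step(tubes, detector, analyzer, history):
    """One step of the random firing scheme: fire a randomly chosen source,
    acquire in the same instant, and let the accumulated analysis pick the
    exposure condition for the next shot. All interfaces are placeholders."""
    i = random.randrange(len(tubes))    # externally generated random trigger
    detector.arm()                      # gate the detector on the same trigger
    tubes[i].fire_pulse()
    frame = detector.read()
    history.append(analyzer.evaluate(frame))
    # Accumulated results steer the next exposure condition (source, kV, duration).
    return analyzer.next_exposure(history)
```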
By applying different combinations of electrical or magnetic fields simultaneously to the deflection device, the X-ray tube electron beam9is deflected along the moving direction of primary motor stage4. X-ray tube electron beam9has its focal spot moved around the X-ray tube stationary axis by an external deflection electrical field (plate) or an external deflection magnetic field (coil). The focal spot moves constantly as the primary motor stage4sweeps in a circular path at a predetermined sweep speed to scan an object. This method is performed in parallel with multiple X-ray sources. X-ray tube cathode10produces the electron beam9, and the X-ray is emitted after electron beam9hits X-ray tube target11; the X-rays moving toward the object are referred to as primary X-rays. The deflection coils may be located between the X-ray tube cathode10and the target11to deflect the electron beam9toward the tube target11by passing an electrical current through the deflection coils. A high voltage generator (the structure that provides high voltage pulses) connected to the cathode produces an electric pulse and sends it to the deflection coils to deflect the electron beam9before it hits the target. An X-ray tube6can include an electron gun with a heated cathode10or other electron-emitting material that generates an electron beam9at one end; X-ray tube target11is mounted on the other end of the X-ray tube6. An electrical insulator and a conductive material are attached to the housing, with the conductive material in electrical contact with the target. An electrical insulator can be made of many different materials; examples include Teflon and/or other dielectric materials such as glass or mica. The housing can be made of many different materials; examples include stainless steel, aluminum, plastic, ceramic, combinations thereof, or any other materials that will not interfere with the transmission of X-rays. The electrical insulators and materials can have the same or different compositions and can be configured in a single layer or multiple layers between the target and the housing. The primary motor stage4is mounted on a supporting frame structure2that provides housing for the primary motor stage4. An X-ray flat panel detector1that receives X-ray flux is positioned to generate X-ray image projection data from the plurality of X-ray sources. The X-ray sources are arranged on a primary motor stage4that moves freely on an arc rail with a predetermined shape. An exposure control unit controls the electrical field applied to each X-ray source to deflect the X-ray tube electron beam9. The X-ray sources move simultaneously relative to an object on a pre-defined track at a constant speed as a group. Each X-ray source focal spot can also move rapidly around its static position over a small distance by a deflection electrical field or deflection magnetic field. When the X-ray tube focal spot of an individual X-ray source has a speed equal to the group speed but an opposite moving direction, the respective X-ray source is triggered through an external exposure control unit. Multiple pulsed X-ray sources result in a much-reduced travel distance for each individual X-ray source. A primary motor stage4is positioned at a pre-defined initial location and sweeps on an arc track at a constant speed driven by said primary motor. One or more X-ray sources are each moved on the primary motor stage4. The pre-defined initial location can be set to any of various initial locations depending on how one wishes to position a subject on the X-ray imaging machine for X-ray scanning.
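For the electrostatic case, the focal-spot shift follows the standard parallel-plate deflection relation for an electron accelerated through a potential V_a and passing between plates of length L and gap d, then drifting a distance D to the target: y ≈ V_d·L·D / (2·d·V_a). This is a first-order, non-relativistic, small-angle estimate; the dimensions and voltages below are illustrative assumptions, not values from this disclosure.

```python
def focal_spot_shift_mm(v_plate: float, v_accel: float,
                        plate_len_mm: float, gap_mm: float,
                        drift_mm: float) -> float:
    """Transverse focal-spot shift from parallel-plate electrostatic deflection:
    y = V_d * L * D / (2 * d * V_a), valid for small deflection angles and
    neglecting relativistic corrections to the electron motion."""
    return v_plate * plate_len_mm * drift_mm / (2.0 * gap_mm * v_accel)

# Illustrative: 2 kV on 20 mm plates with a 10 mm gap, a 100 mm drift to the
# target, and a 100 kV accelerating voltage -> a 2 mm focal-spot shift.
print(focal_spot_shift_mm(2e3, 100e3, 20.0, 10.0, 100.0))
```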
Various exemplary locations are a chest X-ray scan (ventral/dorsal), a chest CT scan, etc. Various modifications and alterations of the invention will become apparent to those skilled in the art without departing from the spirit and scope of the invention, which is defined by the accompanying claims. It should be noted that steps recited in any method claims below do not necessarily need to be performed in the order they are recited. Those of ordinary skill in the art will recognize variations in performing the steps from the order in which they are recited. In addition, the lack of mention or discussion of a feature, step, or component provides the basis for claims where the absent feature or component is excluded by way of a proviso or similar claim language. While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only and not of limitation. The various diagrams may depict an example architectural or other configuration for the invention, which is done to aid in understanding the features and functionality that may be included in the invention. The invention is not restricted to the illustrated example architectures or configurations, but the desired features may be implemented using various alternative architectures and configurations. Indeed, it will be apparent to one of skill in the art how alternative functional, logical, or physical partitioning and configurations may be implemented to realize the desired features of the present invention. Also, many different constituent module names other than those depicted herein may be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions, and method claims, the order in which the steps are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise. Although the invention is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects, and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead may be applied, alone or in various combinations, to one or more of the other embodiments of the invention, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open-ended as opposed to limiting. As examples of the foregoing, the term "including" should be read as meaning "including, without limitation" or the like; the term "example" is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms "a" or "an" should be read as meaning "at least one," "one or more," or the like; and adjectives such as "conventional," "traditional," "normal," "standard," "known," and terms of similar meaning should not be construed as limiting the item described to a given period or to an item available as of a given time.
Instead, they should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Hence, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future. A group of items linked with the conjunction "and" should not be read as requiring that every one of those items be present in the grouping, but rather should be read as "and/or" unless expressly stated otherwise. Similarly, a group of items linked with the conjunction "or" should not be read as requiring mutual exclusivity among that group, but rather should also be read as "and/or" unless expressly stated otherwise. Furthermore, although items, elements, or components of the invention may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated. The presence of broadening words and phrases such as "one or more," "at least," "but not limited to," or other such phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term "module" does not imply that the components or functionality described or claimed as part of the module are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, may be combined in a single package or separately maintained and may further be distributed across multiple locations. Additionally, the various embodiments set forth herein are described using exemplary block diagrams, flow charts, and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives may be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration. The previous description of the disclosed embodiments enables anyone skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art. The generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is accorded the most comprehensive scope consistent with the principles and novel features disclosed herein. While several alternative embodiments of the present invention have been shown, it is to be understood that certain changes can be made, as would be known to one skilled in the art, without departing from the underlying scope of the invention as discussed and set forth above and below. Furthermore, the embodiments described above are only intended to illustrate the principles of the present invention. They are not intended to limit the scope of the invention to the disclosed elements. | 40,918 |
11857360 | DESCRIPTION OF EMBODIMENTS Overview of Biological Sound Measurement Device of Embodiment First, an overview of an embodiment of a biological sound measurement device according to the present invention will be described. The biological sound measurement device according to the embodiment is configured to measure, as an example of a biological sound, a pulmonary sound from a subject such as a person and notify a measurer of an occurrence of wheezing when wheezing is determined to be included in the measured sound. In this way, it is possible to support the determination of the necessity of medication for the person to be measured, the determination of whether or not to take the person to the hospital, and the like. The biological sound measurement device according to the embodiment includes a sound measurement unit including a contact surface configured to be brought into contact with the body surface of the subject such as a person, and a gripping portion supporting this sound measurement unit and configured to be gripped by a measurer. The gripping portion is provided with a display unit configured to display an analysis result of the biological sound measured by the sound measurement unit on a surface facing the body surface side in a state in which the sound measurement unit is in contact with the body surface of the subject. With such a configuration, in a state in which the contact surface of the sound measurement unit has been brought into contact with the body surface, the display unit is in a state of not being visible to the measurer, making it possible for the measurer to focus on the measurement task and thus the measurement accuracy can be increased. Further, the operation from completion of the measurement task to confirmation of the analysis result can be performed smoothly, making it possible to reduce the burden on the measurer. A specific configuration example of the biological sound measurement device according to the embodiment will be described below. Embodiment FIG.1is a side view schematically illustrating an outline configuration of a biological sound measurement device1, which is an embodiment of the biological sound measurement device according to the present invention.FIG.2is a schematic view of the biological sound measurement device1illustrated inFIG.1, viewed from the measurer side in a direction B.FIG.3is a schematic view of the biological sound measurement device1illustrated inFIG.2, viewed from the subject side.FIG.4is a cross-sectional schematic view of a sound measurement unit3of the biological sound measurement device1illustrated inFIG.1. As illustrated inFIG.1toFIG.3, the biological sound measurement device1includes a gripping portion10having a columnar shape extending in a direction A and constituted by a case of a resin, a metal, or the like. A head portion11is provided on one end side of this gripping portion10. The gripping portion10is a portion gripped by the measurer. An integrated control unit (not illustrated) configured to integrally control the entire biological sound measurement device1, a battery (not illustrated) configured to supply a voltage required for operation, a display unit21illustrated inFIG.3, and the like are provided inside the gripping portion10. The integrated control unit includes various processors, random access memory (RAM), read only memory (ROM), and the like, and performs a control and the like of each hardware of the biological sound measurement device1in accordance with a program. 
As illustrated inFIG.1andFIG.4, the head portion11is provided with the sound measurement unit3that protrudes toward one side (the lower side inFIG.1andFIG.4) in a direction intersecting the longitudinal direction A of the gripping portion10. A contact surface30configured to be brought into contact with the body surface S of the subject is provided on a tip end of this sound measurement unit3. The contact surface30is constituted by a pressure-receiving region3a(refer toFIG.3) having a circular shape, for example, and an extended region3b(refer toFIG.3) having an annular shape, for example. The pressure-receiving region3ais a flat surface required for receiving pressure from the body surface S, and the extended region3bis a flat surface formed around the pressure-receiving region3aand provided to increase a contact area with the body surface S. In the example ofFIG.1andFIG.4, the pressure-receiving region3aprotrudes slightly further toward the body surface S side than the extended region3b, but may be formed on the same plane as the extended region3b. The direction B illustrated inFIG.1is a direction perpendicular to the contact surface30and intersects the longitudinal direction A of the gripping portion10. As illustrated inFIG.2, in a state of viewing in the direction B perpendicular to the contact surface30, a recessed portion12for placement of an index finger F, for example, of a hand Ha of the measurer is formed on a surface10aof the gripping portion10, on the opposite side to the sound measurement unit3side, on a portion overlapping the sound measurement unit3. As illustrated inFIG.1andFIG.2, the biological sound measurement device1is used, in a state in which the index finger F of the hand Ha of the measurer is placed in the recessed portion12of the gripping portion10and the gripping portion10is gripped by the hand Ha, with the contact surface30including the pressure-receiving region3aof the sound measurement unit3being pressed against the body surface S by the index finger F. As illustrated inFIG.4, the sound measurement unit3includes the sound detector33, such as a micro-electro-mechanical systems (MEMS) type microphone or a capacitive microphone; a housing32having a bottomed tubular shape, forming an accommodation space32baccommodating the sound detector33, and including an opening32a; a cover34closing the opening32afrom outside the accommodation space32band forming the pressure-receiving region3athat receives pressure from the body surface S; and a case31supported by the gripping portion10and accommodating the housing32and the cover34in a state in which the cover34is exposed. The housing32is made of a material having higher acoustic impedance than that of air and high rigidity, such as resin or metal. The housing32is preferably made of a material that reflects sound in a detection frequency band of the sound detector33in a sealed state of the housing32so that sound is not transmitted from the outside into the accommodation space32b. The cover34is a member having a bottomed tubular shape, and a shape of a hollow portion thereof substantially matches an outer wall shape of the housing32. The cover34is made of a material having flexibility, an acoustic impedance close to that of the human body, air, or water, and favorable biocompatibility. Examples of the material of the cover34include silicone and an elastomer. The case31is made of resin, for example.
The case31is formed with an opening31aat an end portion on the opposite side to the gripping portion10side, and a portion of the cover34protrudes from and is exposed through this opening31a. A front surface of the cover34exposed from this case31forms the pressure-receiving region3adescribed above. When the pressure-receiving region3ais brought into a state of close contact with the body surface S, vibration of the body surface S generated by the pulmonary sound of the living body vibrates the cover34. When the cover34vibrates, an internal pressure of the accommodation space32bfluctuates due to this vibration and, by this internal pressure fluctuation, an electrical signal corresponding to the pulmonary sound is detected by the sound detector33. An outer surface of the portion of the case31protruding from the gripping portion10is constituted by the extended region3bdescribed above, which is formed of a flat surface having an annular shape, and a tapered surface3cthat connects an outer peripheral edge of the extended region3band the gripping portion10. The tapered surface3cis a surface having an outer diameter that continuously increases from the gripping portion10side toward the extended region3bside. As illustrated inFIG.2, the sound measurement unit3and the gripping portion10partially overlap. InFIG.2, a non-overlapping portion31bof the sound measurement unit3positioned outside the gripping portion10includes the contact surface30described above. A width of the non-overlapping portion31bin a direction parallel to the contact surface30is greatest at a first position, which is the position of the contact surface30in the direction B (defined as the position of the extended region3b). Further, at a position closer to the gripping portion10than the first position in the direction B, the width of the non-overlapping portion31bin the direction parallel to the contact surface30is less than the width at the first position. In other words, a cross-sectional area of a cross section of the non-overlapping portion31bparallel to the contact surface30(the area of the region surrounded by an outer edge of the non-overlapping portion31b) is greatest at the first position and, at a position closer to the gripping portion10than the first position, is less than the cross-sectional area at the first position. As illustrated inFIG.3, the display unit21and an operation unit20are provided on a surface10bof a front surface of the gripping portion10on the body surface S side in a state in which the contact surface30has been brought into contact with the body surface S. The display unit21includes light emitting units21aand21bincluding light emitting elements such as light emitting diodes (LEDs). On the surface10bof the gripping portion10, the characters "Wheezing" are printed adjacent to an upper side of the light emitting unit21a, and the characters "No wheezing" are printed adjacent to an upper side of the light emitting unit21b. The integrated control unit described above included in the gripping portion10notifies of a detection result of wheezing (an analysis result of the biological sound) via the display unit21. Specifically, in a case in which the integrated control unit analyzes the pulmonary sound detected by the sound detector33and, as a result, determines that wheezing is included in the pulmonary sound, the integrated control unit turns off the light emitting unit21band causes the light emitting unit21ato emit light, thereby notifying the measurer that wheezing was detected.
Further, in a case in which it is determined that wheezing is not included in the pulmonary sound, the integrated control unit turns off the light emitting unit21aand causes the light emitting unit21bto emit light, thereby notifying the measurer that wheezing was not detected. Note that only the light emitting unit21amay be used as the display unit21, and the integrated control unit may notify of the presence or absence of wheezing by changing the light emission color of the light emitting unit21ain accordance with the measurement result. As illustrated inFIG.3, in the biological sound measurement device1, in a state of viewing in a direction perpendicular to the longitudinal direction A of the gripping portion10(direction from the front to the back of the paper inFIG.3), the display unit21is provided adjacent to a region of the surface10bof the gripping portion10that overlaps the recessed portion12, and the operation unit20is provided at a position on the surface10bon the other end side of the gripping portion10from the display unit21. The operation unit20is an interface configured to perform various operations such as turning on a power source of the device, turning off the power source of the device, and initiating measurement of the biological sound. The operation unit20need only be configured to at least turn on and off the power source of the device. The operation unit20is constituted by a button or a switch capable of inputting an instruction by being pressed, or a sensor capable of inputting an instruction by being touched. Effects of Biological Sound Measurement Device1 As described above, according to the biological sound measurement device1, the display unit21configured to display the analysis result of the biological sound is provided on the surface10bof the gripping portion10on the body surface S side. That is, the display unit21is not visible to the measurer while the contact surface30is being brought into contact with the body surface S and the biological sound is being measured. Thus, the measurer can focus on measurement of the biological sound. Therefore, an event such as a change in the contact state between the contact surface30and the body surface S during measurement can be prevented, and the measurement accuracy of the biological sound can be ensured. Further, according to the biological sound measurement device1, the analysis result of the biological sound is displayed on the display unit21provided on the surface10bof the gripping portion10on the body surface S side. Therefore, the measurer can check the display unit21without changing a gripping posture of the gripping portion10. As a result, it is possible to smoothly perform tasks from initiation of the biological sound measurement task to confirmation of the analysis result and reduce the burden of the measurer. Further, according to the biological sound measurement device1, the display unit21is provided adjacent to the region of the surface10bof the gripping portion10overlapping the recessed portion12, in other words, in the vicinity of the sound measurement unit3. Thus, even in a state in which the measurer is gripping the gripping portion10with the hand Ha, the display unit21is less likely to be hidden by the hand Ha. Accordingly, the analysis result can be confirmed more smoothly. 
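The notification logic described above is a simple mapping from the analysis result to the two light emitting units21aand21b. The sketch below illustrates that control flow; since this disclosure does not specify the wheeze-detection algorithm itself, the band-energy criterion used here (tonal energy roughly between 100 and 1000 Hz, a common wheeze band in the auscultation literature) is purely an illustrative stand-in, and the LED handles are hypothetical.

```python
import numpy as np

def wheezing_detected(pcm: np.ndarray, fs: int, threshold: float = 0.5) -> bool:
    """Illustrative wheeze test: fraction of spectral energy in a 100-1000 Hz
    band. The device's actual analysis is not specified in the disclosure."""
    spectrum = np.abs(np.fft.rfft(pcm * np.hanning(len(pcm)))) ** 2
    freqs = np.fft.rfftfreq(len(pcm), d=1.0 / fs)
    band = spectrum[(freqs >= 100) & (freqs <= 1000)].sum()
    return band / spectrum.sum() > threshold

def update_display(pcm: np.ndarray, fs: int, led_21a, led_21b) -> None:
    # Light exactly one unit: 21a for "Wheezing", 21b for "No wheezing",
    # mirroring the two branches described in the text above.
    if wheezing_detected(pcm, fs):
        led_21b.off(); led_21a.on()
    else:
        led_21a.off(); led_21b.on()
```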
Further, according to the biological sound measurement device1, the display unit21includes the one or plurality of light emitting elements and is configured to display the analysis result of the biological sound by changing a light emission position or a light emission color of the one or plurality of light emitting elements. Thus, it is possible to reduce the size of the gripping portion10, reduce cost, and save energy. Further, for example, assuming that the biological sound measurement device1is utilized while an infant or the like is sleeping, the analysis result is displayed by the faint light of the one or plurality of light emitting elements, making it possible to prevent the sleep of the infant from being disturbed. Further, according to the biological sound measurement device1, the operation unit20is provided on the surface10bof the gripping portion10on the body surface S side. Thus, while the contact surface30is being brought into contact with the body surface S and the biological sound is being measured, a finger of the measurer is less likely to touch the operation unit20, and erroneous operation during measurement can be prevented. Further, after measurement is completed, the measurer can check the display unit21, subsequently operate the operation unit20with a thumb, for example, and turn off the power source as is. In this way, the tasks from confirmation of the measurement result to turning the power source off can be performed smoothly and convenience can be improved. Further, according to the biological sound measurement device1, in a state in which the contact surface30of the sound measurement unit3is in contact with the body surface S, the outer edge of the non-overlapping portion31bof the sound measurement unit3, which does not overlap the gripping portion10, coincides with the outer edge of the contact surface30and is visible. Therefore, the contact state between the contact surface30and the body surface S can be easily confirmed. As a result, a favorable contact state can be easily achieved, making it possible to improve the measurement accuracy of the biological sound. Further, according to the biological sound measurement device1, the side surface of the case31of the sound measurement unit3, excluding the contact surface30, is the tapered surface3cthat decreases in diameter (width) from the contact surface30toward the gripping portion10. This makes it possible to secure space for avoiding interference with clothing, a bone, and the like between the tapered surface3cand the gripping portion10while increasing the area of the contact surface30to enable stable contact with the body surface S. As a result, preparatory work prior to the start of measurement of the biological sound can be performed smoothly. In particular, in a device configured to detect wheezing from a pulmonary sound, the subject is presumably an infant or the like. An infant presumably moves frequently and thus, with this work being performed smoothly, the burden on the measurer can be alleviated. Further, according to the biological sound measurement device1, the longitudinal direction (direction A) of the gripping portion10and the contact surface30intersect. Thus, in a state in which the contact surface30is in contact with the body surface S, the gripping portion10is not parallel to the body surface S.
In such a configuration, the outer edge of the non-overlapping portion31bis visible as the outer edge of the contact surface30as is and, regardless of the orientation of the gripping portion10, the contact state between the contact surface30and the body surface S can be intuitively determined. As a result, it is possible to improve the measurement accuracy of the biological sound while alleviating the burden on the measurer. Modified Example of Biological Sound Measurement Device1 FIG.5is a diagram illustrating a configuration of a biological sound measurement device1A that is a modified example of the biological sound measurement device1ofFIG.1, and corresponds toFIG.2. The biological sound measurement device1A has the same configuration as that of the biological sound measurement device1except that three light emitting units40are added to the non-overlapping portion31b. The light emitting units40are configured to emit light by light emitting elements such as LEDs, and are embedded in the case31, for example, in a partially exposed state. The light emitting units40are controlled by the integrated control unit. For example, the integrated control unit determines a state of close contact between the contact surface30and the body surface S and, in a case in which it is determined that the state of close contact is not suitable for measurement of the biological sound, causes light to be emitted from the light emitting units40. Alternatively, to notify the measurer that measurement is in progress while the biological sound is being measured, the integrated control unit performs control that causes the three light emitting units40to sequentially emit light in a predetermined pattern. As the measurement process of the biological sound progresses, the integrated control unit may perform control that increases the number of light emitting units40that emit light. Thus, because the light emitting units40are located in the non-overlapping portion31bso as to be visible to the measurer even in a state in which the contact surface30has been brought into contact with the body surface S, even if the biological sound measurement device1A is largely hidden by the hand Ha as illustrated inFIG.5, the measurer can be given various notifications other than the analysis result. In the biological sound measurement device1A, the display unit21provided on the surface10bdisplays the analysis result and the light emitting units40provide notifications to the measurer during measurement of the biological sound, thereby allowing the measurer to smoothly confirm the measurement result while focusing on the measurement task to improve measurement accuracy. Note that the number of the light emitting units40included in the biological sound measurement device1A is not limited to three, and may be one, two, or four or more. Other Modified Examples The display unit21may be any unit as long as it is capable of notifying the measurer of the analysis result of the biological sound, and may display the analysis result as an image by, for example, an organic electro-luminescence (EL) panel or a liquid crystal display panel. The positions of the display unit21and the operation unit20may be reversed. Further, it is sufficient to provide at least the display unit21on the surface10b, and the operation unit20may be provided on the surface10a, for example.
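The progress indication described in the modified example above (lighting more of the three units40as measurement advances) is a simple proportional mapping. A sketch, with hypothetical LED handles:

```python
def show_progress(leds, fraction_done: float) -> None:
    """Light the first k of the light emitting units40 in proportion to scan
    progress (0.0..1.0). `leds` is a hypothetical list of three LED handles."""
    k = round(max(0.0, min(1.0, fraction_done)) * len(leds))
    for i, led in enumerate(leds):
        led.on() if i < k else led.off()
```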
In the biological sound measurement devices1and1A, the longitudinal direction (direction A) of the gripping portion10and the contact surface30may be configured to be parallel. Further, the side surface of the case31may be a surface parallel to the direction B, for example, rather than the tapered surface3c. Further, the sound measurement unit3may be configured to be completely concealed by the gripping portion10(configured without the non-overlapping portion31b) in a state of viewing from the direction B. While various embodiments have been described with reference to the drawings, needless to say, the present invention is not limited to such examples. It will be apparent to those skilled in the art that various changes and modifications can be made within the scope of the claims, and it is understood that these naturally belong within the technical scope of the present invention. Further, each of the components of the above-described embodiments may be combined as desired within a range that does not depart from the spirit of the present invention. Note that the present application is based on a Japanese Patent Application filed Jan. 11, 2019 (JP 2019-3484), the contents of which are incorporated herein by reference. REFERENCE SIGNS LIST
1,1A Biological sound measurement device
3 Sound measurement unit
10 Gripping portion
10a,10b Surface
11 Head portion
12 Recessed portion
3a Pressure-receiving region
3b Extended region
3c Tapered surface
30 Contact surface
31 Case
31a,32a Opening
31b Non-overlapping portion
32 Housing
32b Accommodation space
33 Sound detector
34 Cover
20 Operation unit
21 Display unit
21a,21b,40 Light emitting unit
S Body surface
Ha Hand
F Index finger | 22,222 |
11857361 | DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure relates. For example, while aspects of the present disclosure are described in terms of intraluminal ultrasound imaging, it is understood that they are not intended to be limited to this application. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one embodiment may be combined with the features, components, and/or steps described with respect to other embodiments of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately. FIG.1is a diagrammatic schematic view of an intraluminal ultrasound imaging system100, according to aspects of the present disclosure. For example, the system100can be an intravascular ultrasound (IVUS) imaging system. The intraluminal ultrasound imaging system100includes an intraluminal ultrasound imaging device102, a patient interface module (PIM)104, a processing system106, and a display108. The intraluminal ultrasound imaging device102can be an IVUS imaging device, such as a catheter, guide wire, or guide catheter. At a high level, the IVUS device102emits ultrasonic energy from a transducer or acoustic element array124included in the ultrasound imaging or scanner assembly110mounted near a distal end of the catheter or flexible elongate member121. The flexible elongate member121can be referenced as a longitudinal body in some instances. The flexible elongate member121can include a proximal portion and a distal portion opposite the proximal portion. The array124can be positioned around a longitudinal axis LA of the imaging assembly110and/or the flexible elongate member121. The ultrasonic energy is reflected by tissue structures in the medium, such as a body lumen120, surrounding the scanner assembly110, and the ultrasound echo signals are received by the transducer array124. The PIM104transfers the received echo signals to the processing system106where the ultrasound image (including B-mode and/or flow data) is reconstructed and displayed on the display108. The display108can be referenced as a monitor in some instances. The processing system106can include a processor and a memory. The processing system106can be referenced as a console, computer, and/or computing system in some instances. The processing system106can be operable to facilitate the features of the IVUS imaging system100described herein. For example, the processor can execute computer readable instructions stored on the non-transitory tangible computer readable medium. The scanner assembly110may include one or more controllers125, such as control logic integrated circuits (ICs), in communication with the array124. For example, the controllers125can be application specific integrated circuits (ASICs). The controllers125can be in communication with the array124via conductors, such as conductive traces in or on a substrate.
The controllers125are configured to control operations of the array124associated with emitting and/or receiving ultrasound energy to obtain imaging data associated with the body lumen120. The scanner assembly110can include any suitable number of controllers125, including one, two, three, four, five, six, seven, eight, nine, or more controllers. In some embodiments, the controllers125(FIG.1) can be mounted on the imaging assembly110longitudinally proximal to the transducer array124. In some other embodiments, the one or more control logic ICs can be disposed between the rolled-around transducer array124and the tubular member126. The controllers125can be referenced as control logic circuitry, chips, or integrated circuits (ICs) in some instances. Aspects of an intraluminal imaging device, including various techniques of transforming the transducer array124from a flat configuration to a cylindrical or rolled-around configuration, are disclosed in one or more of U.S. Pat. Nos. 6,776,763, 7,226,417, U.S. Provisional App. No. 62/596,154, filed Dec. 8, 2017, U.S. Provisional App. No. 62/596,141, filed Dec. 8, 2017, U.S. Provisional App. No. 62/596,300, filed Dec. 8, 2017, and U.S. Provisional App. No. 62/596,205, filed Dec. 8, 2017, each of which is hereby incorporated by reference in its entirety. In some embodiments, the acoustic elements of the array124and/or the controllers125can be positioned in an annular configuration, such as a circular configuration or a polygon configuration, around the longitudinal axis LA. It will be understood that the longitudinal axis LA of the support member126may also be referred to as the longitudinal axis of the scanner assembly110, the flexible elongate member115, the device102, and/or the support member126ofFIG.2. For example, a cross-sectional profile of the imaging assembly110at the transducer elements of the array124and/or the controllers125can be a circle or a polygon. Any suitable annular polygon shape can be implemented, such as one based on the number of controllers/transducers, the flexibility of the controllers/transducers, etc., including a pentagon, hexagon, heptagon, octagon, nonagon, decagon, etc. The PIM104facilitates communication of signals between the processing system106, such as an IVUS console, and the scanner assembly110included in the IVUS device102. This communication includes the steps of: (1) providing commands to one or more control logic integrated circuits included in the scanner assembly110to select the particular transducer array element(s) to be used for transmit and receive, (2) providing the transmit trigger signals to the one or more control logic integrated circuits included in the scanner assembly110to activate the transmitter circuitry to generate an electrical pulse to excite the selected transducer array element(s), and/or (3) accepting amplified echo signals received from the selected transducer array element(s) via amplifiers included on the one or more control logic integrated circuits of the scanner assembly110. In some embodiments, the PIM104performs preliminary processing of the echo data prior to relaying the data to the processing system106. In examples of such embodiments, the PIM104performs amplification, filtering, and/or aggregating of the data. In an embodiment, the PIM104also supplies high- and low-voltage DC power to support operation of the device102including circuitry within the scanner assembly110.
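The three numbered PIM functions above describe a command/trigger/receive cycle. A sketch of that cycle follows; the method names are hypothetical placeholders, as the actual PIM protocol is not specified in this disclosure.

```python
def acquire_line(pim, element_ids):
    """One transmit/receive cycle through the PIM104: select elements, fire,
    and collect the amplified echoes. All method names are placeholders."""
    pim.send_command(select_tx_rx=element_ids)   # (1) select transducer element(s)
    pim.send_transmit_trigger()                  # (2) excite the selected element(s)
    echoes = pim.read_echoes()                   # (3) accept amplified echo signals
    return pim.preprocess(echoes)                # optional amplify/filter/aggregate
```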
The processing system106receives the echo data from the scanner assembly110by way of the PIM104and processes the data to reconstruct an image of the tissue structures in the medium surrounding the ultrasound imaging assembly110. The processing system106outputs image data such that an image of the body lumen120, such as a cross-sectional image of a vessel, is displayed on the display108.

Generally, the system100and/or the device102can be used in any suitable lumen of a patient body. In that regard, the system100can be an intraluminal ultrasound imaging system, and the device102can be an intraluminal ultrasound imaging device. In some instances, the device102can be an intra-cardiac echocardiography (ICE) imaging catheter or a trans-esophageal echocardiography (TEE) probe. The system100and/or the device102can be referenced as an interventional device, a therapeutic device, a diagnostic device, etc. The device102can be sized and shaped, structurally arranged, and/or otherwise configured to be positioned within the body lumen120. Body lumen120may represent fluid filled or surrounded structures, both natural and man-made. The body lumen120may be within a body of a patient. The body lumen120may be a blood vessel, such as an artery or a vein of a patient's vascular system, including cardiac vasculature, peripheral vasculature, neural vasculature, renal vasculature, and/or any other suitable lumen inside the body. For example, the device102may be used to examine any number of anatomical locations and tissue types, including without limitation, organs including the liver, heart, kidneys, gall bladder, pancreas, lungs; ducts; intestines; nervous system structures including the brain, dural sac, spinal cord and peripheral nerves; the urinary tract; as well as valves within the blood, chambers or other parts of the heart, and/or other systems of the body. In addition to natural structures, the device102may be used to examine man-made structures such as, but without limitation, heart valves, stents, shunts, filters and other devices.

In some embodiments, the IVUS device includes some features similar to traditional solid-state IVUS catheters, such as the EagleEye® catheter available from Koninklijke Philips N.V. and those disclosed in U.S. Pat. No. 7,846,101 hereby incorporated by reference in its entirety. For example, the IVUS device102includes the scanner assembly110near a distal end of the flexible elongate member121and a cable112extending along the flexible elongate member121. The cable112can include a plurality of communication lines, including one, two, three, four, five, six, seven, or more communication lines134(as shown for example inFIG.2). Any suitable communication lines134can be implemented, such as conductors, fiber optics, etc. It is understood that any suitable gauge wire, for example, can be used for the communication lines134. In an embodiment, the cable112can include a four-conductor transmission line arrangement with, e.g., 41 AWG gauge wires. The cable112can be referenced as a transmission-line bundle in some instances. In an embodiment, the cable112can include a seven-conductor transmission line arrangement utilizing, e.g., 44 AWG gauge wires. In some embodiments, 43 AWG gauge wires can be used. The cable112terminates in a PIM connector114at a proximal end of the device102. The PIM connector114electrically couples the cable112to the PIM104and physically couples the IVUS device102to the PIM104.
In an embodiment, the IVUS device102further includes a guide wire exit port116. Accordingly, in some instances, the IVUS device102is a rapid-exchange catheter. The guide wire exit port116allows a guide wire118to be inserted towards the distal end in order to direct the device102through the body lumen120. In other instances, the IVUS device102can be an over-the-wire catheter including a guide wire lumen extending along an entire length of the flexible elongate member121. The flexible elongate member121can be made of polymeric lengths of tubing in some instances, including one or more lumens for the cable112and/or the guide wire118.

FIG.2is a diagrammatic perspective view of the intraluminal imaging device102, including the ultrasound scanner assembly110. In some embodiments, the ultrasound scanner assembly110can be disposed at a distal portion of the flexible elongate member121of the device102. The flexible elongate member121is sized and shaped, structurally arranged, and/or otherwise configured to be positioned within a body lumen of a patient. The imaging or scanner assembly110obtains ultrasound imaging data associated with the body lumen while the device102is positioned within the body lumen. As shown inFIG.2, the scanner assembly110may include the transducer array124positioned around the longitudinal axis LA of the device102. The transducer array124can be referenced as an array of acoustic elements in some instances. In some instances, the scanner assembly110can have a diameter between about 0.8 mm and about 1.6 mm, such as 1.2 mm. The array124is disposed in a rolled or cylindrical configuration around a tubular member126. The tubular member126can also be referred to as a support member, a unibody, or a ferrule. In some implementations, the tubular member126can include a lumen128. The lumen128can be sized and shaped to receive a guide wire, such as the guide wire118shown inFIG.1. The device102can be configured to be moved along or ride on the guide wire118to a desired location within the physiology of the patient. In those implementations, the lumen128can be referred to as a guide wire lumen128.

In some embodiments, the scanner assembly110may also include a backing material130between the transducer array124and the tubular member126. In that regard, the tubular member126can include stands that radially space the transducer array124from the body of the support member126. The backing material130can be disposed within the radial space between the tubular member126and the array124. The backing material130serves as an acoustic damper to minimize or eliminate propagation of ultrasound energy in undesired directions (e.g., radially towards the center). Thus, the ultrasound energy from the array124is directed radially towards the body lumen120in which the flexible elongate member121is positioned.

As shown in the enlarged view of a region of the transducer array124, the transducer array124can include a plurality of rows of acoustic/transducer elements140fabricated on a semiconductor substrate132. The semiconductor substrate132is divided into a plurality of islands141spaced apart from one another and/or separated by trenches144. The trenches144isolate the islands141, which allows the islands to be oriented at different angles, such as when the array124is positioned around the longitudinal axis LA of the device102. The imaging assembly110can include any suitable number of islands141, such as 4, 8, 16, 32, 64, 128, and/or other values both larger and smaller.
A plurality of acoustic elements140can be formed on each island141. In some instances, a single row of acoustic elements140can be formed on each island, as shown inFIG.2. In other instances, two rows of acoustic elements140can be formed on each island and arranged in a staggered manner, as shown inFIG.9. In some embodiments, the acoustic elements140can be positioned side-by-side one another on an individual island141. In some embodiments, the substrate132may be formed of a semiconductor material. Each of the ultrasound transducer elements140in the transducer array124can be a micromachined ultrasound transducer, such as a capacitive micromachined ultrasound transducer (CMUT) or a piezoelectric micromachined ultrasound transducer (PMUT). While each of the ultrasound transducer elements140is illustrated as being circular in shape, it should be understood that each of the ultrasound transducer elements140can have any shape.

The divided islands141of the semiconductor substrate132are coupled to a common flexible interconnect142. The flexible interconnect142can extend around the acoustic elements140as well as across and/or over the trenches144. The flexible interconnect142can include holes aligned with a diaphragm or movable membrane143of the acoustic elements140. In that regard, the interconnect142does not completely cover the islands141. The interconnect142can cover portions of the islands141that do not include the diaphragm or movable membrane143of the acoustic elements140. In some embodiments, the interconnect142completely covers the islands141, including the diaphragm or movable membrane143of the acoustic elements140, such as when the flexible interconnect142also comprises an acoustic matching layer. As described herein, an acoustically-transparent window can be disposed over the acoustic elements140.

The flexible interconnect142can be made of polymer material, such as polyimide (for example, KAPTON™ (trademark of DuPont)), and can be considered a flexible substrate. Other suitable polymer materials include polyester films, polyimide films, polyethylene naphthalate films, or polyetherimide films, other flexible printed semiconductor substrates, as well as products such as Upilex® (registered trademark of Ube Industries) and TEFLON® (registered trademark of E.I. du Pont).

As the transducer array124is first fabricated on the semiconductor substrate132, which is rigid, and then a flexible substrate (i.e., the flexible interconnect142) is positioned over the transducer array124, the transducer array124is fabricated using flexible-to-rigid (F2R) technology. The trenches144are positioned under the flexible interconnect142and form the fold lines when the transducer array124is rolled around the tubular member126. That is, the array124can be manufactured in a flat configuration and transitioned into a cylindrical or rolled configuration around the longitudinal axis of the flexible elongate member121during assembly of the device102. Exemplary aspects of manufacturing the ultrasound imaging assembly are described in U.S. Provisional App. No. 62/527,143, filed Jun. 30, 2017, and U.S. Provisional App. No. 62/679,134, filed Jun. 1, 2018, each of which is hereby incorporated by reference in its entirety. While flex-to-rigid (F2R) and/or the flexible interconnect142are mentioned, it is understood that the acoustically-transparent window described herein can be implemented in other intraluminal device architectures, including intraluminal devices without F2R and/or the flexible interconnect142.
A tip member123defines the distal end of the device102. The tip member123is the leading component of the device102as the device102is inserted into and moved within the body lumen120. The tip member123can be formed of a polymer material such that the device102atraumatically contacts anatomy. The tip member123can include a guide wire lumen in communication with the lumen128of the support member126such that the guide wire118extends through the support member126and the tip member123.

FIG.3is a cross-sectional side view illustrating a portion of the ultrasound imaging assembly110. In particular, an acoustically-transparent window300is positioned over the acoustic element140. The acoustically-transparent window300can be formed of multiple material layers formed on top of one another, including material layers310,320, and/or330. In that regard, the layers of the acoustically-transparent window300can be directly or indirectly coupled, secured, and/or otherwise affixed to one another. While only one acoustic element140is shown inFIG.3, it is understood that the acoustically-transparent window300can be positioned over all acoustic elements140of the imaging assembly110.

In general, the acoustically-transparent window300facilitates desired propagation of ultrasound energy from the acoustic element140to the body lumen120of the patient and reflected ultrasound echoes from the body lumen120to the acoustic element140. In that regard, the materials of the acoustically-transparent window300provide acoustic impedance values such that the acoustic path for ultrasound energy to and from the acoustic element140is free from sharp impedance transitions, which can cause undesirable reflection/refraction. For example, the acoustically-transparent window300can provide an acoustic impedance match with the acoustic impedance of blood and/or blood vessel tissue such that ultrasound energy propagates in a desired manner across the transition between the imaging assembly110and the blood. CMUT advantageously provides a high bandwidth for ultrasound energy. The materials of the acoustically-transparent window300can be advantageously arranged to minimize or eliminate impedance transitions, which reduce bandwidth and prevent the full advantages of CMUT from being realized. In embodiments in which the acoustically-transparent window300includes a non-impedance matched layer, that layer is structurally configured to be very thin, such that the layer does not interfere with the desired manner of ultrasound propagation through the window. An additional factor improving the acoustic transparency of a material is a low acoustic wave absorption coefficient in the desired range of acoustic wave frequencies.

As described with respect toFIG.2, the polyimide interconnect142can extend across trenches144to couple individual islands141. In that regard, the interconnect142is a mechanical coupling between islands141. A dimension149, such as a height or a thickness, of the interconnect142can be between approximately 1 μm and approximately 50 μm, between approximately 1 μm and approximately 20 μm, or between approximately 3 μm and approximately 10 μm, including values such as approximately 3 μm, approximately 5 μm, approximately 7 μm, approximately 10 μm, and/or other values both larger and smaller. The acoustically-transparent window300can also extend across the trenches144such that the acoustically-transparent window300is positioned over all acoustic elements140.
In the orientation of the imaging assembly110shown inFIG.3, layer330is the radially outermost layer and is exposed to anatomy within the body lumen120. Layers320,310, as well as the interconnect142and the acoustic element140, are disposed radially inwardly of the layer330. It is understood that additional components, such as the substrate132, the acoustic backing material130, and/or the support member126(FIG.2), are disposed farther radially inwardly of the acoustic element140.

The acoustic element140can be a CMUT element in some instances, including the membrane143, a substrate145, and a vacuum gap147between the membrane143and the substrate145. The membrane143can be formed of silicon nitride in some embodiments. The substrate145can include a bottom electrode, and the membrane143can include a top electrode. The basic principle of the CMUT element140involves a parallel-plate capacitor formed by the top and bottom electrodes143,145. The substrate/bottom electrode145is fixed, and the membrane/top electrode143is flexible. The membrane143is configured to deflect, as indicated by the arrows inFIG.3, during operation of the acoustic element140to obtain ultrasound data. In receiving mode, an ultrasonic wave (e.g., ultrasonic echoes reflected from the body lumen120) causes vibration of the membrane143and a change of capacitance, which can be detected. In transmitting mode, an alternating voltage is applied between the membrane143and the substrate145. The resulting electrostatic forces cause vibration of the membrane143, sending out ultrasound energy to the body lumen120at the frequency of modulation.

In various embodiments, the acoustic element140can be configured to emit ultrasound energy with a center frequency between approximately 1 MHz and approximately 70 MHz, between approximately 5 MHz and approximately 60 MHz, or between approximately 20 MHz and approximately 40 MHz, including values such as approximately 5 MHz, approximately 10 MHz, approximately 20 MHz, approximately 30 MHz, approximately 40 MHz, and/or other suitable values both larger and smaller. According to aspects of the present disclosure, the acoustically-transparent window300is especially suitable for use with high frequency ultrasound, such as between 20 MHz and 40 MHz, or higher frequencies, for CMUT elements. High frequency ultrasound can be beneficial for B-mode imaging of tissue structures, such as a blood vessel wall, as well as fluid within the body lumen120, such as blood within the blood vessel.

The layer310of the acoustically-transparent window300is positioned over and directly in contact with the acoustic element140. In that regard, a bottom surface of the layer310is directly in contact with a top surface of the membrane143. The layer310can be referenced as the innermost layer of the acoustically-transparent window300in that it is the layer radially closest to the acoustic element140. The layer310can advantageously be formed of a flexible and/or elastic material that is deformable upon movement of the membrane143. In that regard, the innermost layer (e.g., the layer310) can be the softest of all of the layers of the acoustically-transparent window300. In some embodiments, the layer310can have a durometer hardness in the Shore 00 range, including values such as less than 5 Shore 00. In some embodiments, the layer310can have a durometer hardness between approximately 40 Shore A and 80 Shore A, including values such as approximately 60 Shore A, and/or other suitable values both larger and smaller.
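To make the parallel-plate capacitor principle described above concrete, the following is a minimal sketch estimating how a small membrane deflection changes the capacitance of a single CMUT cell. The cell dimensions are hypothetical round numbers chosen for illustration only; they are not taken from this disclosure.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]

# Hypothetical CMUT cell dimensions (illustrative only, not from this disclosure)
radius = 20e-6     # membrane radius [m]
gap = 0.25e-6      # vacuum-gap height at rest [m]

area = math.pi * radius ** 2

def capacitance(gap_m: float) -> float:
    """Ideal parallel-plate capacitance for a given gap (fringing fields ignored)."""
    return EPS0 * area / gap_m

c_rest = capacitance(gap)
c_deflected = capacitance(gap - 50e-9)  # membrane pushed 50 nm toward the substrate

# In receive mode, this capacitance change is what the readout electronics detect.
print(f"C at rest:       {c_rest * 1e15:.1f} fF")
print(f"C deflected:     {c_deflected * 1e15:.1f} fF")
print(f"Relative change: {(c_deflected / c_rest - 1) * 100:.0f}%")
```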
The layer310can be formed of polybutadiene rubber (PBR) in an exemplary embodiment. The PBR material exhibits one of the lowest acoustic wave absorptions in the wide range of center frequencies used for intraluminal ultrasound imaging devices. In other embodiments, the layer310can include silicone, polyurethane, gel, liquid, combinations thereof, and/or other suitable materials. The layer310advantageously allows deflection of the membrane143during transmit and receive, and remains in contact with the membrane143over the range of the motion of the membrane143. For example, the layer310can be referenced as a conformable layer.

The material forming the layer310can also advantageously assist with the mechanical dispersion of force. For example, the material forming the layer310is selected to avoid mechanical coupling and/or contact between the CMUT membrane143and any stiff layer in the stack forming the acoustically-transparent window300. When there is a very soft layer310in between, the CMUT membrane143does not feel the stiffness of, e.g., the top layer330. Thus, movement of the membrane143during transmit/receive is unaffected. In contrast, if there is a soft but incompressible material surrounded on all sides by a stiff material, then movement of the membrane143requires movement of the stiff top layer330, and therefore the membrane143is hindered. As described further below, the arrangement of the layers in the acoustically-transparent window300also advantageously avoids direct coupling and/or contact between the stiff top layer330and the flexible interconnect142, which would lock in the soft layer310and hinder the membrane143.

The layer320of the acoustically-transparent window300is positioned over and directly in contact with the layer310. In that regard, a bottom surface of the layer320is directly in contact with a top surface of the layer310. The layer320can be harder than the layer310but softer than the layer330. The layer320can have a durometer hardness between approximately 5 Shore A and 80 Shore A, including values such as approximately 5 Shore A, 40 Shore A, 60 Shore A, 80 Shore A, and/or other suitable values both larger and smaller. In some instances, the layer320can be formed of a material having a greater hardness when the layer320is thinner. The layer320can be referenced as an adhesive layer or a curing layer in some instances. In the acoustically-transparent window300, the layer320can be a thin bond line between, e.g., the layer310and the layer330. In that regard, the layer320can be configured to couple the layer330to other layers of the acoustically-transparent window300, such as the layer310in the embodiment illustrated inFIG.3. In other embodiments, the acoustically-transparent window300includes one or more additional layers between the layers310and320. The one or more additional layers, together with the layers310,320, and330, can advantageously provide a desired acoustic impedance transition from the acoustic element140to the body lumen120. The layer320can be formed of polyurethane (PU) in an exemplary embodiment. The acoustic impedance of PU can be Z=1.3 to 1.9 MRayl. In other embodiments, the layer320can include a silicone or other soft conformable material.
In some instances, PU can be advantageous because it can be implemented as a relatively thin line with good impedance values for acoustic matching, whereas other materials, such as silicone, may require filling to increase impedance for acoustic matching, which correspondingly increases a dimension322, such as height or thickness, of the layer320. PU can be used for applications in which minimizing the diameter of the intravascular device102is a priority. In some embodiments, the acoustically-transparent window300does not include the layer310. Instead, the layer320is positioned over and in direct contact with the acoustic element140. In such instances, the layer320is the innermost layer of the acoustically-transparent window300. As similarly described above, the layer320can advantageously be a flexible and/or elastic material that is deformable upon movement of the membrane143. When the layer310is omitted in the acoustically-transparent window300, a softer PU can be used. A harder PU can be used when the layer310is positioned between the layer320and the acoustic element140.

The layers310and/or320can advantageously provide a low attenuation and matched impedance for ultrasound energy. A dimension312, such as a height or a thickness, of the layer310can be between approximately 10 μm and approximately 20 μm, or between approximately 12 μm and approximately 18 μm, including values such as approximately 10 μm, approximately 13 μm, approximately 15 μm, approximately 17 μm, and/or other values both larger and smaller. The dimension322, such as a height or a thickness, of the layer320can be between approximately 1 μm and approximately 10 μm, or between approximately 1 μm and approximately 5 μm, including values such as approximately 1 μm, approximately 3 μm, approximately 5 μm, and/or other values both larger and smaller. Together, the total height or thickness of the layer310and the layer320can be greater than approximately 15 μm, such as approximately 20 μm.

The layer330of the acoustically-transparent window300is positioned over and directly in contact with the layer320. In that regard, a bottom surface of the layer330is directly in contact with a top surface of the layer320. The layer330can be referenced as the outermost layer of the acoustically-transparent window300in that it is the layer radially farthest from the acoustic element140. The outermost layer330can be opposite the innermost layer310in the embodiment of the acoustically-transparent window300illustrated inFIG.3. The layer330is exposed to anatomy within the body lumen120(e.g., blood, blood vessel tissue, stenosis). A dimension332, such as a height or a thickness, of the layer330can be between approximately 1 μm and approximately 10 μm, or between approximately 3 μm and approximately 8 μm, including values such as approximately 1 μm, approximately 3 μm, approximately 5 μm, approximately 7 μm, and/or other values both larger and smaller. The layer330can advantageously be formed of a relatively more rigid material. In that regard, the outermost layer (e.g., layer330) can be the hardest of all of the layers of the acoustically-transparent window300. The layer330can have a durometer hardness between approximately 1 Shore D and 100 Shore D, or between approximately 80 Shore D and 100 Shore D, including values such as approximately 85 Shore D, and/or other suitable values both larger and smaller. The arrangement of the layers310,320, and330provides mechanical, electrical, and/or chemical protection for the acoustic element140.
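For convenience, the three-layer stack and the approximate mid-range dimensions and hardness values quoted above can be collected into a simple data structure. This is only a summary of figures already stated in this section, not an additional specification.

```python
# Summary of the nominal window stack described above (innermost to outermost).
# Thicknesses are representative mid-range values in micrometers.
window_stack = [
    {"ref": 310, "material": "PBR", "thickness_um": 15.0, "hardness": "approx. 60 Shore A (or Shore 00 range)"},
    {"ref": 320, "material": "PU",  "thickness_um": 3.0,  "hardness": "approx. 5-80 Shore A"},
    {"ref": 330, "material": "PET", "thickness_um": 5.0,  "hardness": "approx. 85 Shore D"},
]

total_um = sum(layer["thickness_um"] for layer in window_stack)
print(f"Nominal total window thickness: {total_um:.0f} um")  # roughly 23 um
```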
Mechanical protection can be needed, e.g., while the intraluminal imaging device102comes into contact with anatomy. When the acoustically-transparent window300encounters rigid or sharp anatomy, such as when the intraluminal imaging device102crosses a calcified stenosis, the soft bottom layer310deforms but the tough top layer330stays intact. The top layer330also has a high breakdown voltage for electrical protection. Additionally, the layer330also provides a good permeation barrier for water. The layer330can be formed of polyethylene terephthalate (PET) in an exemplary embodiment. In other embodiments, the layer330can include a polymer material, such as polymethylpentene (PMP) or TPX®, available from Mitsui Chemicals. In some instances, PET can be used when a thinner layer330is desired, such as when minimizing the diameter of the intravascular device102. A thinner PET layer330advantageously allows the propagation of ultrasound energy in the desired manner, e.g., without reflections, even though PET is not impedance matched. PMP or TPX® can be used in applications where a thicker layer330is beneficial. A PMP layer330is impedance matched and can therefore be thicker without causing undesirable ultrasound reflections.

In some embodiments, the layer330can be obtained, prior to assembly of the intraluminal imaging device102, in the form of tubing, such as shrink wrap tubing. In other embodiments, the layer330, prior to assembly of the intraluminal imaging device102, can be in the form of a planar sheet of material (e.g., a foil) that can be wrapped into an annular configuration around the longitudinal axis LA, around the array124of acoustic elements140. In embodiments in which a PET heat shrink tube is used as the tough upper layer330, the dimension332, such as height or thickness, is selected to be small compared to the wavelength of sound, preferably less than 1/10 λ. The acoustic wavelength in PET is close to 80 μm at 30 MHz. While PET is not impedance matched in that it has a relatively higher acoustic impedance, PET heat shrink tubings with a dimension332, such as wall thickness, of approximately 5 μm (e.g., available from Vention Medical) minimize acoustic reflections at impedance transitions. The thinner the not-impedance-matched layers are, the better the acoustic performance at higher frequencies. In other applications, an impedance-matched material, such as TPX®, is used as the outer protective layer330. In that case, no acoustic reflections (reverb) will occur even when the layer330is relatively thicker.

The arrangement of the layers310,320,330advantageously prevents direct contact between the layer330and the interconnect142. Direct contact between the relatively harder layer330and the interconnect142would eliminate the movement/conformance of the layer310with the membrane143during ultrasound transmit and receive. In that regard, the dimension312of the layer310is selected such that the layer310is positioned over and directly in contact with the interconnect142, as well as the membrane143. In that regard, a bottom surface of the layer310is directly in contact with a top surface of the interconnect142. Accordingly, the layer310and/or the layer320is positioned between the layer330and the interconnect142. In embodiments of the imaging assembly110with configurations other than F2R, such as those without the flexible interconnect142, the entirety of the bottom surface of the layer310can be in contact with the top surface of the membrane143.
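A quick arithmetic check of the "small compared to the wavelength" criterion noted above, assuming a longitudinal sound speed in PET of about 2560 m/s (consistent with Table 1 later in this description):

```python
c_pet = 2560.0   # assumed speed of sound in PET [m/s]
f = 30e6         # operating frequency [Hz]

wavelength = c_pet / f            # ~85 um, close to the ~80 um cited above
max_wall = wavelength / 10        # "preferably less than 1/10 lambda"

print(f"Acoustic wavelength in PET at 30 MHz: {wavelength * 1e6:.0f} um")
print(f"Preferred maximum PET wall thickness: {max_wall * 1e6:.1f} um")
print(f"5 um heat-shrink wall acceptable? {5e-6 < max_wall}")
```

The quoted 5 μm heat-shrink wall sits comfortably below the roughly 8.5 μm bound, consistent with the statement that such tubing minimizes reflections at impedance transitions.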
The present disclosure provides numerous advantages compared to acoustically-transparent windows for ultrasound applications known in the art. In that regard, the arrangement of the acoustically-transparent window300described herein can be utilized for a wide range of center frequencies, such as 5 MHz to 40 MHz, whereas existing windows were described in the context of 1 MHz to 20 MHz. As described herein, layers310,320,330provide a suitable acoustic pathway even for high frequency ultrasound energy (e.g., 20 MHz to 40 MHz). Existing acoustic windows in the art are also much thicker (e.g., >30 μm), as well as designed for re-usable devices, such as external ultrasound probes. Aspects of the present disclosure advantageously provide a thin acoustically-transparent window300suitable for intraluminal applications, such as IVUS imaging. In that regard, the acoustically-transparent window300is also arranged for implementation in a flex-to-rigid framework, such as being positioned over the polyimide interconnect142, which allows for the array124to be transitioned from a planar configuration to an annular configuration around the longitudinal axis LA of the device102. Known acoustic windows for external ultrasound probes do not address the need to avoid contact between the tough outer layer330and the interconnect142.

In some embodiments, the adhesive layer320is the hardest layer of the acoustically-transparent window300. Such embodiments can permit propagation of ultrasound energy through the acoustically-transparent window300in the intended manner by making the adhesive layer320very thin, even though the impedance of the layer320is likely undesirably high with a very hard material. In a two-layer design of the acoustically-transparent window300, the outer layer will be harder than the inner layer. In some instances, a hydrophilic coating is positioned over the acoustically-transparent window300. The hardness of the hydrophilic coating can be greater than or less than the hardness of the layers310,320, and/or330of the acoustically-transparent window300, in various embodiments.

According to one advantageous set of embodiments, a thickness of the acoustically-transparent window may vary across its extent. Thickness means, for example, the dimension normal to a surface of the window, of the acoustic elements, or of the flexible elongate member. It means, for example, a dimension in the direction extending between the innermost layer and the outermost layer. It means, for example, a height of the acoustically-transparent window. This set of embodiments will now be explained in more detail.

As described above, embodiments of the present invention are based on providing an outermost layer for the acoustically-transparent window which is the hardest of the layers, and an innermost layer which is the least hard of the layers. The benefit of a mechanically hard and strong outermost layer is that it provides mechanical protection for the softer lower layers, for example for the softer innermost layer, whose softness advantageously enables free unimpeded movement of the acoustic elements while still providing coupling between the window and the elements. For example, the hard outermost layer provides scratch protection for the lower layers, for example in the case of the flexible elongate member passing over a calcified stenosis.
However, some example hard materials which may advantageously be used for the outermost layer can exhibit a somewhat poor acoustic match to water or blood, which can lead to a significant ringing effect in the obtained acoustic signal. This reduces the bandwidth and the axial resolution. This in turn can significantly degrade the obtainable image quality. By way of one example, the example material PET, which is one advantageous suitable material for forming the outermost layer, exhibits a poor acoustic match with blood and water, leading to problems such as those described.

To explain the problem further, by way of one illustrative example, the disadvantageous ringing effect for one example set of acoustic window layer materials will now be described. This represents only one example, and the problem may also similarly arise for other sets of layer materials. By way of this example, one advantageous set of materials for the acoustically-transparent window, starting from the innermost layer and working outward, is PBR-PU-PET (Polybutadiene-Polyurethane-Polyethylene terephthalate). The properties of these materials compared with water are shown in Table 1 below:

TABLE 1
         Speed [m/s]   Density [kg/m3]   Impedance [MRayl]
PBR      1576          927               1.46
PU       1567          1050              1.65
PET      2560          1390              3.56
Water    1500          1000              1.50

The hard PET foil layer has an acoustic impedance of Z=3.6 MRayl, compared to water, which has an acoustic impedance of Z=1.5 MRayl. Thus, a strong reflection, of both outgoing and incoming ultrasound waves, of 41% results at the PET layer (a quick check of this figure is sketched after this discussion). This is illustrated schematically inFIG.4, which shows the PBR innermost layer310, the PU adhesive layer320, and the PET outermost layer330of the acoustically-transparent window300, coupled atop an example CMUT element membrane143, mounted to a silicon substrate145. The reflection of outgoing acoustic signals is illustrated.

This strong reflection is visible as a tail in the echo signal, or a ‘hump’ in the obtained ultrasound spectrum. This effect is illustrated in the graph ofFIG.5. The graph represents the achieved ultrasound signal spectrum for a CMUT ultrasound transducer having a central (mean) frequency output of 40 MHz. Curve501represents the spectrum (calculated by the finite element method) for the CMUT transducer in the absence of a window. This curve represents the ideal, desired spectrum, since it shows the signal in the absence of any reflection or interference effects of a window. The equivalent time domain signal has a short tail. Curve503represents the FEM calculated ultrasound spectrum for the case in which the three-layer PBR-PU-PET window, including the PET layer, is present. A ‘hump’ can be observed in the obtained spectrum at around 35 MHz. This is caused by a resonance effect due to the strong reflection mentioned above. This effect is highly undesirable as it degrades the image quality.

To ameliorate this problem, according to advantageous embodiments of the present invention, the acoustically-transparent window is provided having a thickness which varies across its extent (i.e., across its major cross-sectional area). By way of example, the acoustically-transparent window may be provided having a thickness which varies such that the window defines a wedge shape. The thickness variation causes the ringing in the tail of the spectrum to average. As a result, the spectrum curve is smoothed, and the ‘hump’ is substantially eliminated. This is illustrated by curve502inFIG.5.
Curve502shows the obtained ultrasound spectrum using a wedge-shaped acoustically-transparent window. As can be seen, the shape substantially eliminates the interference/reflection effects and the spectrum approaches the shape of the desired (ideal) no-window curve501.

Further experimental results are shown inFIGS.6A and6B.FIG.6Ashows by way of illustration the obtained ultrasound signal as a function of time [μs] for a CMUT transducer covered by each of the following: a PBR-only material layer601, a PBR-PET material layer stack602, and a complete acoustically-transparent window having a wedge shape603.FIG.6Bshows the obtained ultrasound spectrum as a function of frequency [MHz], for each of a PBR-only material layer606, a PBR-PET material layer stack607, and a complete acoustically-transparent PBR-PU-PET window having a wedge shape608. It can be seen that the spectrum608for the wedge-shaped window resembles the (ideal) PBR-only spectrum606. The obtained bandwidth is >50%. When PET is added to the material stack, without wedge shape formation, the obtained spectrum607exhibits the undesirable hump, and the bandwidth is reduced to around 25%.

Different options are possible for the particular configuration of the variable thickness acoustically-transparent window. According to a preferred set of examples, the thickness of the acoustically-transparent window varies smoothly across the window, such that an uppermost surface of the window (i.e., of the outermost layer) inclines or declines smoothly at one or more rates or incline angles across the window. The thickness may vary linearly, so that the upper surface of the window slopes linearly at one or more slope angles.

FIG.7schematically depicts a side cross-sectional view through one example arrangement which follows this configuration. The acoustically-transparent window300is shown extending over an array of ultrasound acoustic elements143. In this example, the plurality of acoustic elements143is arranged as an array of elements, comprising a plurality of rows (lines) of elements. One line of acoustic elements143is schematically drawn inFIG.7. In this example, the acoustically-transparent window is arranged such that the thickness of the acoustically-transparent window varies along a direction of the lines (rows) of acoustic elements, i.e., the uppermost surface of the window (i.e., of the outermost layer) slopes in a direction parallel to the lines or rows of acoustic elements143.

In alternative examples, the thickness may vary non-linearly, so that the upper surface curves smoothly up or down at one or more rates. Where the acoustic elements comprise an arrangement of one or more lines of elements, the sloping of an uppermost surface of the window may extend along the direction of said one or more lines of elements. As mentioned above, where the plurality of acoustic elements comprises an array of elements comprising one or more rows, the thickness of the window may vary along a direction of said one or more rows. In the particular example ofFIG.7, the sloping of the acoustically-transparent window is such that the acoustically-transparent window300follows a wedge shape. According to one or more examples, the thickness of the acoustically-transparent window may vary along the direction of the longitudinal axis of the flexible elongate member. This may coincide with a direction of the lines or rows of acoustic elements in some examples, so that the thickness varies (e.g., slopes or inclines) along the direction of both.
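Returning to the 41% reflection figure quoted with Table 1 above: it follows from the standard normal-incidence pressure reflection coefficient R = (Z2 − Z1)/(Z2 + Z1). A minimal check using the tabulated impedances:

```python
def reflection_coefficient(z1: float, z2: float) -> float:
    """Normal-incidence pressure reflection coefficient between two media."""
    return (z2 - z1) / (z2 + z1)

Z_WATER = 1.50  # acoustic impedance of water [MRayl], per Table 1
Z_PET = 3.56    # acoustic impedance of PET [MRayl], per Table 1

r = reflection_coefficient(Z_WATER, Z_PET)
print(f"Reflection at a water/PET interface: {abs(r):.2f} ({abs(r) * 100:.0f}%)")  # ~41%
```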
According to one or more embodiments, the thickness of the acoustically-transparent window may oscillate smoothly between a lower and upper thickness level, such that an uppermost surface of the outermost layer varies up and down across the layer between an upper and lower surface level. An example is schematically depicted inFIG.8. In this example, the thickness of the acoustically-transparent window300oscillates linearly, such that an uppermost surface of the window (i.e., of the outermost layer) slopes linearly up and down between an upper and lower surface level. This results, in the example ofFIG.8, in a window comprising two sloped sections801,802, one section801which inclines toward a central peak thickness point, and one802which declines away from the peak thickness point (or equivalently, both inclining or declining respectively toward or away from this central peak).

FIG.9schematically depicts a further example. In this example, the thickness of the acoustically-transparent window300oscillates linearly a number of times across the extent of the window between a minimum and maximum thickness level. This shows as a linear sloping of an uppermost surface of the window (i.e., of the outermost layer) up and down between a minimum and a maximum surface level. This results in a total of six sloped sections901-906, which respectively slope upwards and downwards towards and away from peak thickness points of the window300.

For the variable thickness window to be effective in suppressing the resonance or ringing effect (and the consequent hump in the ultrasound signal spectrum), the distance z at which a target observed point of the anatomy is scanned (relative to the imaging assembly of the intraluminal device) should be sufficiently large for the acoustic ‘averaging’ effect of the window to take effect. Where the point of observation is very close to the imaging assembly, only a small part, δ, of the array of acoustic elements contributes. This is schematically depicted inFIGS.10A and10B, which show observation of an observed point1001with two different respective variable thickness windows300, where the observed point is a distance, z, from the plurality of acoustic elements143. Hence, to enable observation of points at even close distances relative to the imaging assembly, the variable thickness acoustic window should be steep enough in its thickness gradient, such that there is sufficient thickness variation along δ to be effective.

A preferred distance for viewing objects is the near-field distance, z0. The near-field distance is the distance at which a plane wave is first formed. Obtained images are sharpest for objects observed at the near-field distance. However, objects can also be viewed at distances other than the near-field distance. The approximate relation between the near-field distance, z0, and δ is:

z0 = δ² / (4λ)

where λ is the wavelength of the sound waves. For example, for c=1500 m/s, f=40 MHz, and δ=600 microns, the resulting near-field distance is z0=2.4 mm, where c is the speed of sound in the medium separating the imaging assembly and the observed point1001. For this example configuration, a plane wave is created, and the full array of acoustic elements contributes to the signal, so long as the object is viewed at or close to the near-field distance of 2.4 mm. Hence the entire length of the acoustic window also contributes to suppressing the resonance.
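The 2.4 mm figure follows directly from the relation above; a minimal sketch reproducing it:

```python
c = 1500.0       # speed of sound in the medium [m/s]
f = 40e6         # center frequency [Hz]
delta = 600e-6   # contributing aperture length, delta [m]

wavelength = c / f                     # 37.5 um
z0 = delta ** 2 / (4 * wavelength)     # near-field distance

print(f"Near-field distance z0: {z0 * 1e3:.1f} mm")  # 2.4 mm
```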
A distance of 2.4 mm is within the range of typical observation distances for the device during typical uses. By way of example, the acoustic window thickness may vary across its full length or extent between a starting thickness of approximately 10 microns and a final thickness of approximately 30 microns. The thickness may increase linearly from 10 to 30 microns across its full extent, for example. This provides a sufficient thickness gradient for observation of objects at observation distances z>z0, where z is the observation distance and z0is the near-field distance, as discussed above.

This, however, represents only one example thickness gradient, and others may alternatively be used. For example, simulations have been performed to test different thickness gradients in terms of their performance in eliminating the ringing effect (i.e., the tail in the echo signal) discussed above. Based on these simulations, it has been found that windows in which the thickness increases linearly by a total of approximately 20-30 microns across the full length of (at least the active part, δ, of) the window element can be expected to perform best. For example, the window may advantageously vary (e.g., linearly increase) in thickness from an initial thickness of between 7-15 microns to a final thickness of between 20-30 microns. In preferred examples, the starting thickness of the wedge window (i.e., the thickness at the window's thinnest point, at one end of the window) should ideally be as small as possible to minimize acoustic attenuation. It has been found that a starting thickness of approximately 10 microns (for example between 7 and 15 microns) is optimal in terms of balancing acoustic performance and structural stability.

FIG.11is a flow diagram of a method400of manufacturing at least portions of the intraluminal ultrasound imaging device102, according to an embodiment of the present disclosure. As illustrated, the method400includes a number of enumerated steps, but embodiments of the method400may include additional steps before, after, and in between the enumerated steps. In some embodiments, one or more of the enumerated steps may be omitted, performed in a different order, or performed concurrently. The steps of the method400can be carried out by a manufacturer of the device102to yield devices including the features described inFIGS.1-3. The method400will be described with reference toFIGS.12-17, which are diagrammatic views of various components of the device102during various steps of manufacturing. For example,FIGS.12-17illustrate assembly steps for various components of the ultrasound imaging assembly110, such as the acoustically-transparent window300disposed over the acoustic element array124.

At step410, the method400includes obtaining an acoustic element array (FIG.11). A diagrammatic top view of an acoustic element array124is illustrated inFIG.12, according to an embodiment of the present disclosure. The acoustic element array124can include a plurality of acoustic elements140. In an exemplary embodiment, the acoustic elements140can be CMUT elements. The imaging assembly110of the device102shown inFIG.12also includes the tip member123positioned distally of the array124, the controllers125positioned proximally of the array124, and the communication lines134(e.g., conductors providing electrical communication) extending proximally from the controllers125.
At step420, the method400includes distributing material forming a flexible layer of the acoustically-transparent window over and directly contacting the acoustic element array (FIG.11). For example, PBR can be used to form the innermost layer310(FIG.3). In that regard, dissolved PBR can be dispensed onto the device102. The device102can be rotated (e.g., around the longitudinal axis LA) after the PBR is dispensed, or the PBR can be dispensed while the device102is rotating. As a result, the PBR is spread evenly around the circumference of the array124and/or other components of the device102. In some embodiments, heptane is added to the PBR to control viscosity of the dissolved material.

At step430, the method400includes drying or curing the flexible layer of the acoustic window. In embodiments in which heptane is added to the PBR, the step430can include evaporating the heptane. Step430can include applying heat or air to the material forming the flexible layer. As a result of step430, the material distributed in step420forms into the layer310. As a result of drying/curing the layer310separately from forming the other layers, the dimension312, such as height or thickness, of the layer310(FIG.3) can be controlled independently of the steps involved in forming the other layers. For example, the dimension312of the layer310does not depend on the shrinking behavior of the tubing forming the layer330, as described with respect to step460.

FIG.13illustrates the imaging assembly110of the device102after step430.FIG.13is a diagrammatic top view after the flexible layer of the acoustically-transparent window has been distributed over the acoustic element array and cured, according to an embodiment of the present disclosure. In that regard, the layer310completely covers the array124. In the illustrated embodiment ofFIG.13, the layer310also covers at least a portion of the tip member123, the controllers125, and/or the communication lines134. In that regard, the layer310can form part of the flexible elongate member121. In embodiments in which additional layers form part of the acoustically-transparent window300, the method400includes forming the additional layers over the flexible layer310.

At step440, the method400includes dispensing an adhesive on the flexible layer (FIG.11). The adhesive, such as PU, can form the layer320between the layers310and330(FIG.3). For example, one or more droplets of the adhesive can be dispensed directly onto the dried/cured layer310. In embodiments of the acoustically-transparent window300that omit the layer310, the adhesive can be disposed directly onto the array124.FIG.14illustrates a volume710of the adhesive dispensed over the array124.FIG.14is a diagrammatic top view of the heat shrink tubing arranged to be positioned over the acoustic element array (step450), including the adhesive dispensed on the flexible layer of the acoustically-transparent window, according to an embodiment of the present disclosure. In some embodiments, the adhesive is sprayed onto the layer310and/or the array124.

At step450, the method400includes positioning tubing over the adhesive and the acoustic element array (FIG.11). The tubing can form the outermost layer330(FIG.3) in some instances. In that regard, the tubing can be a heat shrinkable tubing formed of PET in some embodiments. As shown inFIG.14, the tubing720can include a lumen722. The array124and/or the tubing720is moved such that the array124, with the volume710of the adhesive disposed thereon, is positioned within the lumen722of the tubing720.
In some embodiments, the step450also includes fixing the ends of the heat shrink tubing720. In that regard, during the heat shrink process, the wall thickness increases if the tube720can freely shrink (reduced diameter, reduced length, increased wall thickness). The increase in wall thickness can be minimized or eliminated by fixing the heat shrink tube720so it cannot shrink in the length or longitudinal direction. Another option for step450is to mount the tube720around the array124by stretching the tube720in the length direction so that it shrinks in diameter prior to heating. In some instances, stretching the tube720prior to heating can even lead to a thinner wall after applying heat. For example, the dimension332of the layer330can be smaller. A volume-conservation sketch of this behavior is given after this discussion. In embodiments in which tubing is not used, the step450includes positioning a planar piece of material (e.g., a foil) around the array124and the volume710of the adhesive. For example, the planar piece of material can be wrapped into an annular configuration around the array124and the volume710of the adhesive.

At step460, the method400includes applying heat and/or air to shrink the tubing (FIG.11). For example, as a result of applying hot air, the tubing720shrinks around the array124, forming the rigid layer330(FIG.3). The shrinking tubing720spreads out the adhesive laterally, between the layer310and the surface of the lumen722. The volume710of the adhesive is thus distributed around the entire circumference of the layer310, rather than remaining in the droplet form shown inFIG.14. In embodiments in which the tube720is stretched (step450), the step460can include a combination of applying heat and mechanical stretching. Excess tubing720can be cut and removed if needed. The cut ends of the tubing720can be additionally heat shrunk. In some instances, additional adhesive is applied around the ends of the tubing720. In embodiments which omit the layer310, the adhesive is spread out laterally, between the array124and the surface of the lumen722, as a result of the shrinking tubing720. Adhesive is thus distributed around the entire circumference of the array124.

At step470, the method400includes removing excess adhesive (FIG.11). The shrinking tubing720can expel excess adhesive from the space between the layer310and the surface of the lumen722, out of the ends of the tubing720. The excess adhesive can be removed, such as by wiping away, prior to curing the adhesive (step480).

At step480, the method400includes curing the adhesive (FIG.11). In some embodiments, the adhesive is a two-component curing system, such as a two-component PU. Step480can include application of heat and/or air. In some embodiments, the adhesive cures without vapor emission. Solvent evaporation or gases formed by, for instance, moist curing can be advantageously avoided. If something evaporates in/from the glue layer320, gas/air bubbles will be formed under the layer330and trapped in the acoustically-transparent window300, thereby degrading acoustic performance.

As a result of the steps of the method400, the layer330is formed around the array124, as shown inFIG.15. In that regard,FIG.15is a diagrammatic top view after shrinking the tubing and curing the adhesive to form the acoustically-transparent window300, according to an embodiment of the present disclosure. The adhesive layer320and flexible layer310are positioned between the array124and the layer330(FIG.3). In some instances, the layer330can also cover the controllers125.
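The wall-thickness behavior described in steps450and460can be illustrated with a simple constant-volume model of a thin-walled tube: if the polymer volume is conserved during shrinking, reducing the diameter (and, when unconstrained, also the length) must increase the wall thickness. The tube dimensions below are hypothetical, chosen only to illustrate the trend; this is a rough sketch, not a process specification.

```python
def wall_after_shrink(r0, t0, l0, r1, l1):
    """Thin-wall approximation: conserve wall volume 2*pi*r*t*L to find the new thickness."""
    return (r0 * t0 * l0) / (r1 * l1)

# Hypothetical starting tube: 0.8 mm radius, 4 um wall, 20 mm long
r0, t0, l0 = 0.8e-3, 4e-6, 20e-3

# Free shrink: diameter and length both reduce -> thickest wall
t_free = wall_after_shrink(r0, t0, l0, r1=0.6e-3, l1=19e-3)

# Ends fixed: only the diameter reduces -> smaller thickness increase
t_fixed = wall_after_shrink(r0, t0, l0, r1=0.6e-3, l1=20e-3)

# Pre-stretched in length before heating -> can end up thinner still
t_stretched = wall_after_shrink(r0, t0, l0, r1=0.6e-3, l1=22e-3)

for label, t in [("free shrink", t_free), ("length fixed", t_fixed), ("pre-stretched", t_stretched)]:
    print(f"{label:>13}: wall = {t * 1e6:.2f} um")
```

With these illustrative numbers, the length-fixed and pre-stretched cases end up with thinner walls than free shrinking, matching the qualitative trend stated above.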
In some embodiments, the acoustic window300has a thickness which varies across its extent. In these embodiments, the method of fabricating or assembling the intraluminal ultrasound imaging device may vary from that outlined above. In particular, one or more of steps420to480outlined above may vary. One example approach for fabricating the device with a varying thickness acoustic window is as follows. This approach is applicable in particular for an elongate member which has a round cross-section.

An imaging assembly comprising a plurality of acoustic elements is provided, coupled to a distal portion of an elongate member. The shape of the variable thickness acoustically-transparent window300may be controlled in the application process of the lowermost layer, e.g., the PBR layer. This layer is applied to the assembled elongate member and imaging assembly arrangement. The elongate member is rotated while the material of the lowermost layer is continuously deposited. The procedure will be described for an example material of PBR. However, this is for illustration only and the same approach can be used for other materials.

The amount of PBR applied is controlled by, for instance, measuring the diameter of a droplet on a syringe tip with a camera, or pumping the fluid to a tip of a needle for a predefined time. Then the droplet makes contact with the elongate member that is rotating. As a result of the rotation, a PBR ring or donut is formed around the imaging assembly. The height and width of the deposited PBR region are controlled by the surface tension (influenced by the imaging assembly surface material and surface condition, e.g., whether or not the surface has been plasma cleaned) and by the viscosity of the dissolved PBR solution. The thickness (radial direction) of the layer is varied along the length direction of the elongate member, to thereby realize the thickness variation (e.g., wedge shape) of the window element along the length direction. For example, the thickness may be varied linearly along the length direction to achieve a window having a thickness which increases linearly along its length direction.

The ring of PBR material is then dried. While drying, the ring shrinks by the same fraction at all points around its circumference. In this way a very well controlled PBR layer with thickness variation (along the length direction of the elongate member) may be deposited. When approximately half of the ring (along the length direction) has been deposited on the elongate member and imaging assembly arrangement, effectively a wedge is formed above the acoustic elements of the imaging assembly. This is illustrated inFIG.16(left). The half-ring of deposited PBR1020is shown. Material outside the area of interest can be removed.FIG.16(right) shows the deposited PBR1020after drying.

Following this step, the deposited PBR may be covered with a PET heat shrink (tubing), applied atop a PU adhesive layer. These materials represent only one example set of materials which may be used for forming the outermost layer and adhesive layer, respectively, and other materials may alternatively be used, as outlined above for example. The heat shrink (and thin PU adhesive layer) follow the shape of the deposited PBR. Different shapes may be formed by varying the shape of the deposited PBR innermost layer along the length direction (the elongate length of the elongate member). For example, a linearly sloping shape may be provided to the acoustic window. One ring may be deposited, or multiple rings may be deposited.
Concave shapes are also possible. By way of example, the example shapes shown inFIGS.8and9above may be formed by depositing multiple rings. The window ofFIG.8for example may be formed by depositing two adjacent rings, with appropriately sloping thickness in the length direction, and the arrangement ofFIG.9formed by depositing six adjacent rings with appropriately sloping thickness in the length direction. Alternatively, the example shapes shown inFIG.8or9may be formed by depositing an initial homogeneous-height layer of PBR, and then, subsequent to drying of the layer, ablating the deposited layer in appropriate areas in such a way as to shape it to define the particular relief pattern required for these shapes. The PET layer may then be deposited atop this shaped PBR layer, which layer shrinks to follow the shaped topography of the PBR layer, as described above.

FIG.17is a diagrammatic top view of CMUT elements visible through the acoustically-transparent window, according to an embodiment of the present disclosure.FIG.17illustrates a portion of the array124after the steps of the method400. The CMUT elements140are formed on islands141of the substrate132that are separated by the trench144. The membrane143is visible through the optically and acoustically-transparent window300. A void-free acoustically-transparent window300is advantageously realized according to aspects of the present disclosure.

Persons skilled in the art will recognize that the apparatus, systems, and methods described above can be modified in various ways. Accordingly, persons of ordinary skill in the art will appreciate that the embodiments encompassed by the present disclosure are not limited to the particular embodiments described above. In that regard, although illustrative embodiments have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the foregoing without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure. | 64,759
11857362 | DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure relates. For example, while the focusing system is described in terms of cardiovascular imaging, it is understood that it is not intended to be limited to this application. The system is equally well suited to any application requiring imaging within a confined cavity. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one embodiment may be combined with the features, components, and/or steps described with respect to other embodiments of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately. FIG.1is a diagrammatic schematic view of an intravascular ultrasound (IVUS) imaging system100, according to aspects of the present disclosure. The IVUS imaging system100may include a solid-state IVUS device102such as a catheter, guide wire, or guide catheter, a patient interface module (PIM)104, an IVUS processing system or console106, and a monitor108. At a high level, the IVUS device102emits ultrasonic energy from a transducer array124included in scanner assembly110mounted near a distal end of the catheter device. The ultrasonic energy is reflected by tissue structures in the medium, such as a vessel120, surrounding the scanner assembly110, and the ultrasound echo signals are received by the transducer array124. The PIM104transfers the received echo signals to the console or computer106where the ultrasound image (including the flow information) is reconstructed and displayed on the monitor108. The console or computer106can include a processor and a memory. The computer or computing device106can be operable to facilitate the features of the IVUS imaging system100described herein. For example, the processor can execute computer readable instructions stored on the non-transitory tangible computer readable medium. The PIM104facilitates communication of signals between the IVUS console106and the scanner assembly110included in the IVUS device102. This communication includes the steps of: (1) providing commands to integrated circuit controller chip(s)206A,206B, illustrated inFIG.2, included in the scanner assembly110to select the particular transducer array element(s) to be used for transmit and receive, (2) providing the transmit trigger signals to the integrated circuit controller chip(s)206A,206B included in the scanner assembly110to activate the transmitter circuitry to generate an electrical pulse to excite the selected transducer array element(s), and/or (3) accepting amplified echo signals received from the selected transducer array element(s) via amplifiers included on the integrated circuit controller chip(s)126of the scanner assembly110. In some embodiments, the PIM104performs preliminary processing of the echo data prior to relaying the data to the console106. 
In examples of such embodiments, the PIM104performs amplification, filtering, and/or aggregating of the data. In an embodiment, the PIM104also supplies high- and low-voltage DC power to support operation of the device102including circuitry within the scanner assembly110. The IVUS console106receives the echo data from the scanner assembly110by way of the PIM104and processes the data to reconstruct an image of the tissue structures in the medium surrounding the scanner assembly110. The console106outputs image data such that an image of the vessel120, such as a cross-sectional image of the vessel120, is displayed on the monitor108. Vessel120may represent fluid-filled or fluid-surrounded structures, both natural and man-made. The vessel120may be within a body of a patient. The vessel120may be a blood vessel, such as an artery or a vein of a patient's vascular system, including cardiac vasculature, peripheral vasculature, neural vasculature, renal vasculature, and/or any other suitable lumen inside the body. For example, the device102may be used to examine any number of anatomical locations and tissue types, including without limitation, organs including the liver, heart, kidneys, gall bladder, pancreas, lungs; ducts; intestines; nervous system structures including the brain, dural sac, spinal cord and peripheral nerves; the urinary tract; as well as valves within the blood vessels, chambers or other parts of the heart, and/or other systems of the body. In addition to natural structures, the device102may be used to examine man-made structures such as, but without limitation, heart valves, stents, shunts, filters and other devices. In some embodiments, the IVUS device includes some features similar to traditional solid-state IVUS catheters, such as the EagleEye® catheter available from Volcano Corporation and those disclosed in U.S. Pat. No. 7,846,101, hereby incorporated by reference in its entirety. For example, the IVUS device102includes the scanner assembly110near a distal end of the device102and a transmission line bundle112extending along the longitudinal body of the device102. The transmission line bundle or cable112can include a plurality of conductors, including one, two, three, four, five, six, seven, or more conductors218(FIG.2). It is understood that any suitable gauge wire can be used for the conductors218. In an embodiment, the cable112can include a four-conductor transmission line arrangement with, e.g., 41 AWG gauge wires. In an embodiment, the cable112can include a seven-conductor transmission line arrangement utilizing, e.g., 44 AWG gauge wires. In some embodiments, 43 AWG gauge wires can be used. The transmission line bundle112terminates in a PIM connector114at a proximal end of the device102. The PIM connector114electrically couples the transmission line bundle112to the PIM104and physically couples the IVUS device102to the PIM104. In an embodiment, the IVUS device102further includes a guide wire exit port116. Accordingly, in some instances the IVUS device is a rapid-exchange catheter. The guide wire exit port116allows a guide wire118to be inserted towards the distal end in order to direct the device102through the vessel120. FIG.2is a top view of a portion of an ultrasound scanner assembly110according to an embodiment of the present disclosure. The assembly110includes a transducer array124formed in a transducer region204and transducer control logic dies206(including dies206A and206B) formed in a control region208, with a transition region210disposed therebetween.
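For a sense of scale, conductor diameter follows directly from the gauge number via the standard AWG formula d = 0.127 mm × 92^((36 − n)/39). A minimal sketch applying it to the gauges cited above (the conversion is the standard AWG definition; only the gauge choices come from the text):

```python
# Standard AWG-to-diameter conversion: d(mm) = 0.127 * 92 ** ((36 - n) / 39).
# Applied to the 41, 43, and 44 AWG gauges cited for the transmission line
# bundle; the conversion itself is the standard AWG definition.

def awg_diameter_mm(awg: int) -> float:
    return 0.127 * 92 ** ((36 - awg) / 39)

for awg in (41, 43, 44):
    print(f"{awg} AWG -> {awg_diameter_mm(awg) * 1000:.1f} um diameter")
```

At roughly 50-70 µm per conductor, even a seven-conductor bundle occupies very little of the catheter cross-section.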
The transducer control logic dies206and the transducers212are mounted on a flex circuit214that is shown in a flat configuration inFIG.2.FIG.3illustrates a rolled configuration of the flex circuit214. The transducer array202is a non-limiting example of a medical sensor element and/or a medical sensor element array. The transducer control logic dies206are a non-limiting example of a control circuit. The transducer region204is disposed adjacent a distal portion221of the flex circuit214. The control region208is disposed adjacent the proximal portion222of the flex circuit214. The transition region210is disposed between the control region208and the transducer region204. Dimensions of the transducer region204, the control region208, and the transition region210(e.g., lengths225,227,229) can vary in different embodiments. In some embodiments, the lengths225,227,229can be substantially similar or a length227of the transition region210can be greater than lengths225,229of the transducer region and controller region, respectively. While the imaging assembly110is described as including a flex circuit, it is understood that the transducers and/or controllers may be arranged to form the imaging assembly110in other configurations, including those omitting a flex circuit. The transducer array124may include any number and type of ultrasound transducers212, although for clarity only a limited number of ultrasound transducers are illustrated inFIG.2. In an embodiment, the transducer array124includes 64 individual ultrasound transducers212. In a further embodiment, the transducer array124includes 32 ultrasound transducers212. Other numbers are both contemplated and provided for. With respect to the types of transducers, in an embodiment, the ultrasound transducers124are piezoelectric micromachined ultrasound transducers (PMUTs) fabricated on a microelectromechanical system (MEMS) substrate using a polymer piezoelectric material, for example as disclosed in U.S. Pat. No. 6,641,540, which is hereby incorporated by reference in its entirety. In alternate embodiments, the transducer array includes lead zirconate titanate (PZT) transducers such as bulk PZT transducers, capacitive micromachined ultrasound transducers (cMUTs), single crystal piezoelectric materials, other suitable ultrasound transmitters and receivers, and/or combinations thereof. The scanner assembly110may include various transducer control logic, which in the illustrated embodiment is divided into discrete control logic dies206. In various examples, the control logic of the scanner assembly110performs: decoding control signals sent by the PIM104across the cable112, driving one or more transducers212to emit an ultrasonic signal, selecting one or more transducers212to receive a reflected echo of the ultrasonic signal, amplifying a signal representing the received echo, and/or transmitting the signal to the PIM across the cable112. In the illustrated embodiment, a scanner assembly110having 64 ultrasound transducers212divides the control logic across nine control logic dies206, of which five are shown inFIG.2. Designs incorporating other numbers of control logic dies206including 8, 9, 16, 17 and more are utilized in other embodiments. In general, the control logic dies206are characterized by the number of transducers they are capable of driving, and exemplary control logic dies206drive 4, 8, and/or 16 transducers. The control logic dies are not necessarily homogeneous.
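The arithmetic behind the die count can be sketched in code. The following is a minimal illustration, assuming a simple greedy assignment (the 4-, 8-, and 16-channel capacities come from the text; the assignment strategy itself is hypothetical), of how a 64-element array might be apportioned across dies:

```python
# Minimal sketch: apportion transducers across control logic dies of differing
# drive capacities. Capacities of 4, 8, and 16 channels come from the text;
# the greedy assignment itself is a hypothetical illustration.

def assign_transducers(n_transducers, die_capacities):
    """Greedily assign transducer indices to dies; returns die -> index list."""
    assignment, start = {}, 0
    for die, capacity in enumerate(die_capacities):
        assignment[die] = list(range(start, min(start + capacity, n_transducers)))
        start += capacity
    if start < n_transducers:
        raise ValueError("insufficient total drive capacity")
    return assignment

# Eight 8-channel dies cover a 64-element array (8 x 8 = 64).
plan = assign_transducers(64, [8] * 8)
for die, elements in plan.items():
    print(f"die {die}: elements {elements[0]}-{elements[-1]}")
```

With eight 8-channel dies covering all 64 elements, a ninth die is free to carry no transducer channels at all, which matches the master/slave split described next.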
In some embodiments, a single controller is designated a master control logic die206A and contains the communication interface for the cable112. Accordingly, the master control circuit may include control logic that decodes control signals received over the cable112, transmits control responses over the cable112, amplifies echo signals, and/or transmits the echo signals over the cable112. The remaining controllers are slave controllers206B. The slave controllers206B may include control logic that drives a transducer212to emit an ultrasonic signal and selects a transducer212to receive an echo. In the depicted embodiment, the master controller206A does not directly control any transducers212. In other embodiments, the master controller206A drives the same number of transducers212as the slave controllers206B or drives a reduced set of transducers212as compared to the slave controllers206B. In an exemplary embodiment, a single master controller206A and eight slave controllers206B are provided with eight transducers assigned to each slave controller206B. The flex circuit214, on which the transducer control logic dies206and the transducers212are mounted, provides structural support and interconnects for electrical coupling. The flex circuit214may be constructed to include a film layer of a flexible polyimide material such as KAPTON™ (trademark of DuPont). Other suitable materials include polyester films, polyimide films, polyethylene naphthalate films, or polyetherimide films, other flexible printed semiconductor substrates as well as products such as Upilex® (registered trademark of Ube Industries) and TEFLON® (registered trademark of E.I. du Pont). In the flat configuration illustrated inFIG.2, the flex circuit214has a generally rectangular shape. As shown and described herein, the flex circuit214is configured to be wrapped around a support member230(FIG.3) to form a cylindrical toroid in some instances. Therefore, the thickness of the film layer of the flex circuit214is generally related to the degree of curvature in the final assembled scanner assembly110. In some embodiments, the film layer is between 5 μm and 100 μm, with some particular embodiments being between 12.7 μm and 25.1 μm. To electrically interconnect the control logic dies206and the transducers212, in an embodiment, the flex circuit214further includes conductive traces216formed on the film layer that carry signals between the control logic dies206and the transducers212. In particular, the conductive traces216providing communication between the control logic dies206and the transducers212extend along the flex circuit214within the transition region210. In some instances, the conductive traces216can also facilitate electrical communication between the master controller206A and the slave controllers206B. The conductive traces216can also provide a set of conductive pads that contact the conductors218of the cable112when the conductors218of the cable112are mechanically and electrically coupled to the flex circuit214. Suitable materials for the conductive traces216include copper, gold, aluminum, silver, tantalum, nickel, and tin, and may be deposited on the flex circuit214by processes such as sputtering, plating, and etching. In an embodiment, the flex circuit214includes a chromium adhesion layer. The width and thickness of the conductive traces216are selected to provide proper conductivity and resilience when the flex circuit214is rolled.
In that regard, an exemplary range for the thickness of a conductive trace216and/or conductive pad is 10-50 μm. For example, in an embodiment, 20 μm conductive traces216are separated by 20 μm of space. The width of a conductive trace216on the flex circuit214may be further determined by the width of the conductor218to be coupled to the trace/pad. The flex circuit214can include a conductor interface220in some embodiments. The conductor interface220can be a location of the flex circuit214where the conductors218of the cable112are coupled to the flex circuit214. For example, the bare conductors of the cable112are electrically coupled to the flex circuit214at the conductor interface220. The conductor interface220can be a tab extending from the main body of flex circuit214. In that regard, the main body of the flex circuit214can refer collectively to the transducer region204, controller region208, and the transition region210. In the illustrated embodiment, the conductor interface220extends from the proximal portion222of the flex circuit214. In other embodiments, the conductor interface220is positioned at other parts of the flex circuit214, such as the distal portion221, or the flex circuit214omits the conductor interface220. A value of a dimension of the tab or conductor interface220, such as a width224, can be less than the value of a dimension of the main body of the flex circuit214, such as a width226. In some embodiments, the substrate forming the conductor interface220is made of the same material(s) and/or is similarly flexible as the flex circuit214. In other embodiments, the conductor interface220is made of different materials and/or is comparatively more rigid than the flex circuit214. For example, the conductor interface220can be made of a plastic, thermoplastic, polymer, hard polymer, etc., including polyoxymethylene (e.g., DELRIN®), polyether ether ketone (PEEK), nylon, and/or other suitable materials. As described in greater detail herein, the support member230, the flex circuit214, the conductor interface220and/or the conductor(s)218can be variously configured to facilitate efficient manufacturing and operation of the scanner assembly110. In some instances, the scanner assembly110is transitioned from a flat configuration (FIG.2) to a rolled or more cylindrical configuration (FIGS.3and4). For example, in some embodiments, techniques are utilized as disclosed in one or more of U.S. Pat. No. 6,776,763, titled “ULTRASONIC TRANSDUCER ARRAY AND METHOD OF MANUFACTURING THE SAME” and U.S. Pat. No. 7,226,417, titled “HIGH RESOLUTION INTRAVASCULAR ULTRASOUND TRANSDUCER ASSEMBLY HAVING A FLEXIBLE SUBSTRATE,” each of which is hereby incorporated by reference in its entirety. As shown inFIGS.3and4, the flex circuit214is positioned around the support member230in the rolled configuration.FIG.3is a diagrammatic side view with the flex circuit214in the rolled configuration around the support member230, according to aspects of the present disclosure.FIG.4is a diagrammatic cross-sectional side view of a distal portion of the intravascular device110, including the flex circuit214and the support member230, according to aspects of the present disclosure. The support member230can be referenced as a unibody in some instances. The support member230can be composed of a metallic material, such as stainless steel, or a non-metallic material, such as a plastic or polymer as described in U.S. Provisional Application No. 61/985,220, “Pre-Doped Solid Substrate for Intravascular Devices,” filed Apr.
28, 2014, the entirety of which is hereby incorporated by reference herein. The support member230can be a ferrule having a distal portion262and a proximal portion264. The support member230can define a lumen236extending longitudinally therethrough. The lumen236is in communication with the exit port116and is sized and shaped to receive the guide wire118(FIG.1). The support member230can be manufactured according to any suitable process. For example, the support member230can be machined, such as by removing material from a blank to shape the support member230, or molded, such as by an injection molding process. In some embodiments, the support member230may be integrally formed as a unitary structure, while in other embodiments the support member230may be formed of different components, such as a ferrule and stands242,244, that are fixedly coupled to one another. Stands242,244that extend vertically are provided at the distal and proximal portions262,264, respectively, of the support member230. The stands242,244elevate and support the distal and proximal portions of the flex circuit214. In that regard, portions of the flex circuit214, such as the transducer portion204, can be spaced from a central body portion of the support member230extending between the stands242,244. The stands242,244can have the same outer diameter or different outer diameters. For example, the distal stand242can have a larger or smaller outer diameter than the proximal stand244. To improve acoustic performance, any cavities between the flex circuit214and the surface of the support member230are filled with a backing material246. The liquid backing material246can be introduced between the flex circuit214and the support member230via passageways235in the stands242,244. In some embodiments, suction can be applied via the passageways235of one of the stands242,244, while the liquid backing material246is fed between the flex circuit214and the support member230via the passageways235of the other of the stands242,244. The backing material can be cured to allow it to solidify and set. In various embodiments, the support member230includes more than two stands242,244, only one of the stands242,244, or neither of the stands. In that regard, the support member230can have an increased diameter distal portion262and/or increased diameter proximal portion264that is sized and shaped to elevate and support the distal and/or proximal portions of the flex circuit214. The support member230can be substantially cylindrical in some embodiments. Other shapes of the support member230are also contemplated, including geometrical, non-geometrical, symmetrical, and non-symmetrical cross-sectional profiles. Different portions of the support member230can be variously shaped in other embodiments. For example, the proximal portion264can have a larger outer diameter than the outer diameters of the distal portion262or a central portion extending between the distal and proximal portions262,264. In some embodiments, an inner diameter of the support member230(e.g., the diameter of the lumen236) can correspondingly increase or decrease as the outer diameter changes. In other embodiments, the inner diameter of the support member230remains the same despite variations in the outer diameter. A proximal inner member256and a proximal outer member254are coupled to the proximal portion264of the support member230.
The proximal inner member256and/or the proximal outer member254can be flexible elongate members that extend from a proximal portion of the intravascular device102, such as the proximal connector114, to the imaging assembly110. For example, the proximal inner member256can be received within a proximal flange234. The proximal outer member254abuts and is in contact with the flex circuit214. A distal member252is coupled to the distal portion262of the support member230. The distal member252can be a flexible component that defines a distal-most portion of the intravascular device102. For example, the distal member252is positioned around the distal flange232. The distal member252can abut and be in contact with the flex circuit214and the stand242. The distal member252can be the distal-most component of the intravascular device102. One or more adhesives can be disposed between various components at the distal portion of the intravascular device102. For example, one or more of the flex circuit214, the support member230, the distal member252, the proximal inner member256, and/or the proximal outer member254can be coupled to one another via an adhesive. FIGS.5and6illustrate an embodiment of an intravascular device300, including an imaging assembly302.FIG.5is a side view illustration of the intravascular device300.FIG.6is a cross-sectional side view illustration of the intravascular device300. For clarity, the distal portion of the intravascular device300is shown on the left side ofFIGS.5and6, and more proximal portions are shown on the right side. The intravascular device300and the imaging assembly302can be similar to the intravascular device102and the imaging assembly110, respectively, in some aspects. The imaging assembly302is disposed at a distal portion of the intravascular device300. The imaging assembly302includes a flex circuit314having a transducer region304with a plurality of transducers212, a controller region308having a plurality of controllers, including the controller(s)206B, and a transition region310having a plurality of conductive traces facilitating electrical communication between the controllers206A,206B and the transducers212. The flex circuit314is positioned around the support member330having a distal flange332, a body portion333, and a proximal flange334. The support member330defines a longitudinal lumen336that is sized and shaped to receive the guide wire118. The flex circuit314is positioned in a rolled, cylindrical, and/or cylindrical toroid manner around the support member330. A distal member352extends distally from the support member330and is positioned around the distal flange332. The distal member352defines a lumen353sized and shaped to receive the guide wire118and in communication with the lumen336of support member330. The distal member352may be mechanically coupled to the flex circuit314and/or the support member330via adhesive370. One or more proximal members354,356extend proximally from the support member330. For example, an outer member354may be positioned around the proximal flange334, and the inner member356may be received within the proximal flange334. The inner member356may define a lumen358sized and shaped to receive the guide wire118and in communication with the lumen336. The one or more proximal members354,356may be mechanically coupled to the flex circuit314and/or the support member330via adhesive370. FIG.7is a flow diagram of a method400of assembling an intravascular imaging device, including an imaging assembly with a support member described herein.
It is understood that the steps of method400may be performed in a different order than shown inFIG.7, additional steps can be provided before, during, and after the steps, and/or some of the steps described can be replaced or eliminated in other embodiments. The steps of the method400can be carried out by a manufacturer of the intravascular imaging device. At step405, the method400includes obtaining a support member. The support member includes a body portion having a plurality of recesses longitudinally spaced from one another. The body portion can extend between proximal and distal stands that have a larger outer diameter than the body portion. The support member defines a longitudinal lumen extending therethrough. The body portion surrounds the lumen. Each of the plurality of longitudinally-spaced recesses extends from an outer surface of the body portion through to an inner surface of the lumen. At step410, the method400includes positioning a flex circuit around the support member. The flex circuit includes a first section having a plurality of transducers, a second section having a plurality of controllers, and a third section having a plurality of conductive traces facilitating communication between the plurality of the transducers and the plurality of controllers. The flex circuit can be wrapped in a cylindrical configuration around the support member. The flex circuit can be radially spaced from the body portion when positioned around the support member. For example, proximal and distal portions of the flex circuit are in contact with the proximal and distal stands, respectively. A central portion of the flex circuit, between the proximal and distal portions, is radially spaced from the body portion of support member. Adhesive or other coupling mechanism may be used to join the flex circuit and the support member. At step415, the method400may include positioning a mandrel within the lumen of the support member. The mandrel may stabilize the support member during assembly of the intravascular device. In some embodiments, the mandrel may be coated and/or otherwise covered with a lubricious material, such as TEFLON® (registered trademark of E.I. du Pont) and/or other suitable material. The method400may additionally include positioning a plug within the lumen defined by the support member. For example, the plug may be positioned at a proximal portion of the lumen when backing material is directed into the lumen from the distal portion, and the plug may be positioned at a distal portion of the lumen when backing material is directed into the lumen from the proximal portion, as described with respect to step420. At step420, the method400includes filling a space between the flex circuit and the support member with a backing material. In that regard, the space between the flex circuit and support member is created when the flex circuit is positioned around the support member. In particular, the central portion of flex circuit is radially spaced from the body portion of the support member because the proximal and distal portions of the flex circuit contact the larger diameter stands of the support member, when the flex circuit is wrapped or rolled around the support member. The backing material may be an acoustic backing material that facilitates operation of the transducers. The backing material may be liquid when introduced into the space between the flex circuit and the support member.
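The amount of backing material needed at step420is essentially the volume of the annular gap between the rolled flex circuit and the body portion. A minimal sketch of that estimate, using purely hypothetical dimensions (none are given in the text):

```python
import math

# Minimal sketch: estimate the annular gap volume to be filled with backing
# material between the rolled flex circuit and the support member body
# portion. All dimensions below are hypothetical; the text does not give them.

def annular_gap_volume_mm3(flex_inner_dia_mm, body_outer_dia_mm, length_mm):
    """Volume of the cylindrical annulus between flex circuit and body portion."""
    r_outer = flex_inner_dia_mm / 2.0  # flex circuit rests on the larger stands
    r_inner = body_outer_dia_mm / 2.0
    return math.pi * (r_outer**2 - r_inner**2) * length_mm

vol = annular_gap_volume_mm3(flex_inner_dia_mm=0.9,
                             body_outer_dia_mm=0.6,
                             length_mm=2.5)
print(f"approximate fill volume: {vol:.3f} mm^3 ({vol:.3f} microliters)")
```

Slightly overfilling through the lumen and later reaming out the excess, as in step435below, accounts for material that remains in the lumen and recesses.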
The lumen defined by the support member may be in fluid communication with the space between the flex circuit and the support member via the plurality of recesses of the support member. Accordingly, step420can include introducing the backing material into the lumen of the support member such that the backing material flows into the space between the flex circuit and the support member via the plurality of recesses. In some embodiments, the backing material may be introduced in substantially equal proportions along the longitudinal length of the support member lumen. The recesses of the body portion of the support member may be axially/longitudinally and/or circumferentially distributed to allow the backing material to evenly fill the longitudinal length of the space between the flex circuit and the support member. In some embodiments, backing material may be directed into the lumen through the lumen opening at the proximal portion or the distal portion such that backing material flows into the space between the support member and the flex circuit via the plurality of recesses. In some embodiments, a conduit may be inserted at least partially into the lumen and the backing material may be directed into the lumen and/or the space between the support member and the flex circuit. At step425, the method400can include evacuating air from the space between the flex circuit and the support member. This may advantageously prevent uneven filling/distribution of the backing material within the space because of air pockets. Air may be evacuated from the space by applying suction at one or more openings in the proximal stand and/or distal stand of the support member. Steps420and425may be performed simultaneously to efficiently fill the space between the flex circuit and the support member with the backing material. At step430, the method400includes removing the mandrel from the support member lumen after the backing material cures. Because the mandrel may be coated with a lubricious material, the mandrel may be quickly and easily removed from the lumen. At step435, the method400includes removing excess backing material from the support member lumen after the backing material cures. Because the liquid backing material was introduced into the space between the flex circuit and the support member through the lumen, the lumen may include excess backing material. Step435may include reaming the support member lumen to remove the excess backing material, which ensures that the internal diameter of the support member lumen is available to receive a guide wire. Removing the excess backing material may include sliding a component having a diameter equal to or slightly less than the diameter of the support member lumen through the lumen. The exertion of the component against the excess backing material within the lumen clears the lumen of the excess backing material. The component also removes the plug, which may be positioned at a proximal or distal portion of the lumen. The component used to remove the excess backing material may be formed of a material, such as polytetrafluoroethylene (PTFE) or TEFLON® (registered trademark of E.I. du Pont), and/or other suitable material that can slide through the lumen. The acoustic backing material cures over time. Light and/or heat may be applied in some instances to cure the backing material. At step440, the method400includes coupling a distal member to the flex circuit and/or the support member.
The support member may include a distal flange that is sized and shaped to facilitate coupling to the distal member. When joined, a distal portion of the flex circuit may extend over a proximal portion of the distal member such that the flex circuit and the distal member form a lap joint. Adhesive may be positioned between the distal member, the flex circuit, and/or the support member to affix the components. At step450, the method400includes electrically coupling one or more conductors to the flex circuit. For example, the flex circuit may include a conductor interface that extends at an oblique angle relative to a body of the flex circuit. The conductive traces of the conductor interface are in electrical communication with electronic components of the flex circuit, such as the controllers, transducers, and/or other conductive traces. Electrically coupling the one or more conductors establishes electrical communication between the conductors and the components of the flex circuit. For example, the conductors can be soldered to the conductor interface. The conductor interface can extend from the main body of the flex circuit such that the location on the conductor interface where the conductors are soldered is advantageously spaced from the main body of the flex circuit. For example, the conductor interface can be positioned, such as in a spiral and/or other suitable configuration, around a proximal flange of the support member. The outer diameter of the intravascular device can be advantageously minimized by connecting the conductor to the conductor interface of the flex circuit spaced from the controllers and/or transducers of the flex circuit. At step455, the method400includes coupling one or more proximal members to the flex circuit and/or the support member. For example, an inner member and/or an outer member can be coupled to the flex circuit and/or the support member. In some embodiments, the inner member and outer member can be coupled to the flex circuit and/or the support member at different steps of the method400. The support member may include a proximal flange that is sized and shaped to facilitate coupling to the proximal member(s). For example, the proximal flange may have a plurality of cavities that extend radially inwards from an outer surface of the proximal flange through the inner wall of the support member lumen. The inner proximal member may be positioned within the proximal flange. Step455can include applying adhesive to affix the inner proximal member and the support member. The adhesive may also adhere to the conductor interface that is positioned around the proximal flange. Light and/or heat may be delivered to the adhesive via the plurality of cavities in the proximal flange to allow curing of the adhesive. The outer proximal member may be positioned around the proximal flange. When joined, a proximal portion of the flex circuit may extend over a distal portion of the outer proximal member such that the flex circuit and the outer proximal member form a lap joint. Adhesive may be positioned between the one or more proximal members, the flex circuit, and/or the support member to affix the components. FIG.8is a perspective view illustration of an embodiment of the support member330. The support member330is described with reference also toFIG.10, which is a cross-sectional side view illustration of a distal portion of the intravascular device300, including the support member330. The support member330may be metallic or non-metallic in various embodiments.
For example, the support member330may be molded plastic or polymer. The support member330includes the body portion333extending between the distal stand342and the proximal stand344. The proximal and distal stands342,344have a larger outer diameter than the body portion333. The larger outer diameters of the proximal and distal stands342,344define a radial space337. As described herein, when the flex circuit314is positioned around the support member330, the space337can be filled with the acoustic backing material. In that regard, the body portion333includes multiple recesses or holes339that are spaced from one another. The recesses339may be longitudinally and/or circumferentially distributed on the body portion333. In that regard, the recesses339may be arranged in any suitable distribution or pattern along the body portion333. In the illustrated embodiments, the recesses339may form two spirals around the body portion333. It is understood that any suitable pattern of recesses339, including one, two, three, four, or more spirals, a geometric pattern, such as a checkerboard, or other regularly spaced pattern, irregular pattern, random pattern, and/or other suitable distribution may be utilized. Each of the recesses339extends radially from an outer surface371of the support member through an inner wall372of the lumen336. The recesses339establish fluid communication between the lumen336extending longitudinally through the support member and the space337. The space337may be filled with the acoustic backing material by introducing the backing material into the lumen such that the backing material flows in the space337through the recesses339. In that regard, the recesses339may be distributed and/or spaced from one another such that the backing material evenly fills the space337. Recesses339may have any suitable shape, including a circle (as shown), polygon, ellipse, etc. In the illustrated embodiment, the stand342includes an opening343. The opening343extends longitudinally between proximal and distal sides of the stand342. When the space337is filled with the backing material, suction may be applied at the opening343to evacuate any air in the space337. While only one opening343is shown, it is understood that more than one opening343may be provided on the stand342. In other embodiments, opening(s)343may be provided only on the proximal stand344and/or both the proximal and distal stands342,344. The support member330includes the distal flange332. In various embodiments, the inner diameter and/or outer diameter of the distal flange332may be larger than, smaller than, and/or equal to the inner diameter and/or outer diameter of the central portion333. In an exemplary embodiment, the inner and outer diameters of the distal flange332are substantially equal to the inner and outer diameters of the body portion333. The distal flange332may be sized and shaped to facilitate coupling with the distal member352. In that regard, the distal flange332may have a cross-sectional profile that is straight/linear, tapered, spiral groove-shaped, screw thread-shaped, buttress thread-shaped, and/or otherwise suitably shaped, including the shapes described in U.S. Provisional App. No. 62/315,395, filed Mar. 30, 2016, the entirety of which is hereby incorporated by reference herein. As shown inFIG.10, the distal flange332engages an inner surface355of a lumen353of the distal member352when the distal member352is positioned around the distal flange.
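Returning to the recess pattern on the body portion333, the two-spiral distribution described above can be expressed parametrically. A minimal sketch, with a hypothetical recess count, pitch, and body length (none specified in the text), of generating (axial position, angle) pairs for two interleaved spirals:

```python
import math

# Minimal sketch: generate (axial position, circumferential angle) pairs for
# recesses laid out as two interleaved spirals around a cylindrical body
# portion. Recess count, pitch, and body length are hypothetical.

def spiral_recesses(n_per_spiral, body_length_mm, turns, n_spirals=2):
    positions = []
    for s in range(n_spirals):
        phase = 2.0 * math.pi * s / n_spirals  # second spiral offset by 180 deg
        for i in range(n_per_spiral):
            frac = i / (n_per_spiral - 1)
            z_mm = frac * body_length_mm
            angle = (2.0 * math.pi * turns * frac + phase) % (2.0 * math.pi)
            positions.append((z_mm, math.degrees(angle)))
    return positions

for z, a in spiral_recesses(n_per_spiral=5, body_length_mm=2.0, turns=1.0):
    print(f"recess at z = {z:.2f} mm, angle = {a:6.1f} deg")
```

Staggering the two spirals by 180° spaces the recesses both axially and circumferentially, which is what lets the backing material fill the space337evenly.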
The spiral groove or buttress thread shape of the distal flange332in the illustrated embodiment advantageously enhances adhesion and/or grip by increasing the surface area of contact between the support member330and the distal member352. This advantageously results in higher pull strength values required to separate the support member330and the distal member352. In some embodiments, the adhesive370may be positioned between the flex circuit314, the support member330, and/or the distal member352to support the coupling. FIGS.9,10, and11illustrate an embodiment of the distal portion of the intravascular device300where the flex circuit314, the support member330, and/or the distal member352are mechanically coupled to one another.FIGS.9and11are perspective view illustrations of the distal portion of the intravascular device300, including the imaging assembly302.FIG.9shows a relatively earlier stage of the assembly process for the intravascular device300, whileFIG.11shows a relatively later stage.FIG.10is a cross-sectional side view illustration of the intravascular device300, including the imaging assembly302. As shown inFIGS.9and10, a distal portion321of the flex circuit314overlaps a proximal portion359of the distal member352to form a lap joint357. Conventional intravascular devices utilized butt joints encapsulated by a fillet that undesirably increases the outer diameter of the intravascular device. The lap joint357may be advantageously implemented with adhesive370to minimize the outer diameter, such as to achieve a 3F or smaller outer diameter for the intravascular device300. When the intravascular device300is assembled, the proximal portion359of the distal member352can be coated with the adhesive370, and the distal member352can be moved proximally and slid under the distal portion321of the flex circuit314. The adhesive370mechanically affixes one or more of the distal member352, the support member330, and/or the flex circuit314to one another. The distal member352can be slid proximally over and around the distal flange332such that the distal member352abuts the distal stand342. As illustrated inFIG.11, the adhesive370can also be applied around the joint357. The joint357advantageously creates and maintains a hermetic seal for the flex circuit314. In some embodiments, the lap joint357can be formed when the proximal portion359of the distal member352overlaps the distal portion321of the flex circuit314. Referring again toFIG.8, the support member330includes a proximal flange334. The proximal flange334may be sized and shaped to facilitate coupling to one or more proximal members354,356. In various embodiments, the inner and/or outer diameter of the proximal flange334may be larger than, smaller than, and/or equal to the inner and/or outer diameter of the central portion333. In an exemplary embodiment, the inner and outer diameters of the proximal flange334are larger than the inner and outer diameters of the central portion333. The proximal flange334includes a plurality of cavities341. As described herein, the cavities341can facilitate adhesion between the proximal members354,356, the flex circuit314, and/or the support member330with the adhesive370. FIGS.12and14-19illustrate various steps in assembly of the intravascular device300. In particular,FIGS.12and14-19show components of the intravascular device300at a proximal portion of the imaging assembly302.FIGS.12,14,15,18, and19are perspective view illustrations of the imaging assembly302.FIG.16is a cross-sectional side view illustration of the imaging assembly302.
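The pull-strength argument above is ultimately about bond area. A minimal sketch, with hypothetical dimensions, comparing the cylindrical contact area of a plain flange with a threaded one whose groove multiplies the wetted surface (the multiplier is an assumed illustration, not a measured value):

```python
import math

# Minimal sketch: adhesive contact area of a cylindrical lap joint, and the
# effect of a spiral-groove/thread profile that increases wetted surface area.
# All dimensions and the thread-area multiplier are hypothetical.

def lap_joint_area_mm2(diameter_mm, overlap_mm, thread_factor=1.0):
    """Contact area = cylinder side area x profile multiplier (1.0 = plain)."""
    return math.pi * diameter_mm * overlap_mm * thread_factor

plain = lap_joint_area_mm2(diameter_mm=0.8, overlap_mm=1.0)
threaded = lap_joint_area_mm2(diameter_mm=0.8, overlap_mm=1.0, thread_factor=1.6)
print(f"plain flange contact area:    {plain:.2f} mm^2")
print(f"threaded flange contact area: {threaded:.2f} mm^2")
```

Because adhesive bond strength scales roughly with wetted area, any profile that multiplies the contact surface raises the pull strength needed to separate the joined parts.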
The flex circuit314includes a conductor interface320. The conductor interface320extends proximally from a proximal portion322of the flex circuit314. One or more conductors218of the cable112(FIGS.1and2) are electrically coupled to the conductor interface320. For example, the conductors218can be soldered at a proximal portion323of the conductor interface320. By electrically coupling the conductors218at the proximal portion323of the conductor interface320, the conductors218can be soldered at a location spaced from the electronic components in the main body of the flex circuit314. This may advantageously minimize the outer diameter of the imaging assembly302and the intravascular device300because the thickness associated with soldering the conductors is moved away from the flex circuit314. The conductor interface320includes conductive traces that are in electrical communication with the flex circuit314. Electrically coupling the conductors218to the conductor interface320thus facilitates exchange of electrical signals between the conductors218, the controllers206A,206B, and/or the transducers212(FIGS.2and6). The conductor interface320extends at an oblique angle relative to a main body of the flex circuit314. The main body of the flex circuit314can collectively describe the transducer region304, the controller region308, and the transition region310. FIG.13is a diagrammatic schematic top view of a flex circuit414and a conductor interface420. The conductor interface420forms an oblique angle α with respect to the main body of the flex circuit414. The oblique angle α may be between approximately 0° and approximately 89° in some embodiments. The oblique angle may be between approximately 91° and approximately 179° in some embodiments. The proximal portion423includes conductive pads417where the conductors218are soldered. The conductive pads417are in electrical communication with the conductive traces215, which are, in turn, in electrical communication with the electronic components of the flex circuit414. As shown inFIGS.12and14, the conductor interface320can be positioned around the proximal flange334. For example, the conductor interface320can be wrapped in a spiral or helical configuration around the proximal flange334. The conductor interface320can be wound around the proximal flange334any suitable number of times, depending on the length of the conductor interface320. In other embodiments, the conductor interface320can extend proximally from the main body of the flex circuit in a different manner, such as a linear/straight configuration, a curved configuration, etc. The flex circuit314, the support member330, and/or the proximal members354,356are coupled with the adhesive370at the proximal joint379.FIG.14illustrates that the inner proximal member356can be inserted into and received within the proximal flange334. The conductor interface320may be wrapped in a spiral configuration around both the proximal flange334and the inner member356in some embodiments. In such embodiments, the conductors218can extend along the length of the intravascular device300within a lumen of the outer member354, between the inner member356and the outer member354. The adhesive370is applied onto and around the proximal flange334, as illustrated inFIGS.15-17. The adhesive370flows through the cavities341of the proximal flange334so that the adhesive370covers surfaces of the inner member356and the support member330. The cavities341are longitudinally and/or circumferentially distributed on the proximal flange334.
Each of the cavities341extends radially from an outer surface376of the proximal flange334through an inner wall377of the lumen336. As described above, in some embodiments, the inner diameter of the lumen336may be larger in the proximal flange than in the central portion333. The cavities341establish fluid communication between spaces of the intravascular device above/outside (e.g., between the outer member354and the proximal flange334) and below/inside (e.g., within the lumen336, between the proximal flange334and the inner member356) of the proximal flange334. In that regard, the cavities341may be distributed and/or spaced from one another such that the adhesive370evenly coats the support member330, the flex circuit314, and/or the proximal members354,356. The cavities341may have any suitable shape, including oblong (as shown), circle, polygon, ellipse, etc. The cavities341advantageously allow light to penetrate the proximal flange334and cure the adhesive370. FIG.16illustrates a stylized shape that the adhesive370assumes when the proximal portion of the imaging assembly302is coated with the adhesive370. The structural components of the intravascular device300are not visible inFIG.16. The pillars373of the adhesive370extend within the cavities341of the proximal flange334. The pillars373are formed between an outer circumference374and an inner circumference375. The inner circumference375establishes adhesive contact between the proximal flange334and the inner member356. The outer circumference374establishes adhesive contact between the proximal flange334and the outer member354. The pillars373extending through the cavities341establish continuity between the outer and inner circumferences374,375and strengthen the bond between the support member330and the proximal members354,356. As shown inFIGS.17-19, the outer member354can be moved distally over and around the proximal flange334and the conductor interface320until the outer member354abuts the proximal stand344. The proximal joint379can be a lap joint, which can advantageously seal the flex circuit314using the adhesive370. For example, a distal portion378can overlap a proximal portion322bin some embodiments. In such embodiments, the proximal portion322bmay be bent and secured to the proximal flange334, allowing the outer member354to slide over the proximal portion322b. In other embodiments, a proximal portion322aof the flex circuit314overlaps the outer member354. In such embodiments, the distal portion378of the outer member may be slid underneath the proximal portion322a. The distal portion378and/or the proximal portion322a,322bmay be coated with the adhesive370. As illustrated inFIG.19, the adhesive370can also be applied around the joint379. Various embodiments of an intravascular device and/or imaging assembly can include features described in U.S. Provisional App. No. 62/315,395, filed on Mar. 30, 2016, U.S. Provisional App. No. 62/315,406, filed on Mar. 30, 2016, U.S. Provisional App. No. 62/315,421, filed on Mar. 30, 2016, and U.S. Provisional App. No. 62/315,416, filed on Mar. 30, 2016, the entireties of which are hereby incorporated by reference herein. Persons skilled in the art will recognize that the apparatus, systems, and methods described above can be modified in various ways. Accordingly, persons of ordinary skill in the art will appreciate that the embodiments encompassed by the present disclosure are not limited to the particular exemplary embodiments described above.
In that regard, although illustrative embodiments have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the foregoing without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure. | 49,380 |
11857363 | DETAILED DESCRIPTION Systems and methods of medical ultrasound imaging are disclosed. The presently disclosed systems and methods of medical ultrasound imaging employ medical ultrasound imaging equipment that includes a housing in a tablet form factor, and a touch screen display disposed on a front panel of the housing. The touch screen display includes a multi-touch touch screen that can recognize and distinguish one or more single, multiple, and/or simultaneous touches on a surface of the touch screen display, thereby allowing the use of gestures, ranging from simple single point gestures to complex multipoint gestures, as user inputs to the medical ultrasound imaging equipment. Further details regarding tablet ultrasound systems and operations are described in U.S. application Ser. No. 10/997,062 filed on Nov. 11, 2004, Ser. No. 10/386,360 filed Mar. 11, 2003 and U.S. Pat. No. 6,969,352; the entire contents of these patents and applications are incorporated herein by reference. FIG.1depicts an illustrative embodiment of exemplary medical ultrasound imaging equipment100, in accordance with the present application. As shown inFIG.1, the medical ultrasound imaging equipment100includes a housing102, a touch screen display104, a computer having at least one processor and at least one memory implemented on a computer motherboard106, an ultrasound engine108, and a battery110. For example, the housing102can be implemented in a tablet form factor, or any other suitable form factor. The housing102has a front panel101and a rear panel103. The touch screen display104is disposed on the front panel101of the housing102, and includes a multi-touch LCD touch screen that can recognize and distinguish one or more single, multiple, and/or simultaneous touches on a surface105of the touch screen display104. The computer motherboard106, the ultrasound engine108, and the battery110are operatively disposed within the housing102. The medical ultrasound imaging equipment100further includes a Firewire connection112(see alsoFIG.2A) operatively connected between the computer motherboard106and the ultrasound engine108within the housing102, and a probe connector114having a probe attach/detach lever115(see alsoFIGS.2A and2B) to facilitate the connection of at least one ultrasound probe/transducer. The transducer probe housing can include circuit components including a transducer array, transmit and receive circuitry, as well as beamformer and beamformer control circuits in certain preferred embodiments. In addition, the medical ultrasound imaging equipment100has one or more I/O port connectors116(seeFIG.2A), which can include, but are not limited to, one or more USB connectors, one or more SD cards, one or more network ports, one or more mini display ports, and a DC power input. In an exemplary mode of operation, medical personnel (also referred to herein as the “user” or “users”) can employ simple single point gestures and/or more complex multipoint gestures as user inputs to the multi-touch LCD touch screen of the touch screen display104for controlling one or more operational modes and/or functions of the medical ultrasound imaging equipment100. Such a gesture is defined herein as a movement, a stroke, or a position of at least one finger, a stylus, and/or a palm on the surface105of the touch screen display104. For example, such single point/multipoint gestures can include static or dynamic gestures, continuous or segmented gestures, and/or any other suitable gestures.
A single point gesture is defined herein as a gesture that can be performed with a single touch contact point on the touch screen display104by a single finger, a stylus, or a palm. A multipoint gesture is defined herein as a gesture that can be performed with multiple touch contact points on the touch screen display104by multiple fingers, or any suitable combination of at least one finger, a stylus, and a palm. A static gesture is defined herein as a gesture that does not involve the movement of at least one finger, a stylus, or a palm on the surface105of the touch screen display104. A dynamic gesture is defined herein as a gesture that involves the movement of at least one finger, a stylus, or a palm, such as the movement caused by dragging one or more fingers across the surface105of the touch screen display104. A continuous gesture is defined herein as a gesture that can be performed in a single movement or stroke of at least one finger, a stylus, or a palm on the surface105of the touch screen display104. A segmented gesture is defined herein as a gesture that can be performed in multiple movements or strokes of at least one finger, a stylus, or a palm on the surface105of the touch screen display104. Such single point/multipoint gestures performed on the surface105of the touch screen display104can correspond to single or multipoint touch events, which are mapped to one or more predetermined operations that can be performed by the computer and/or the ultrasound engine108. Users can make such single point/multipoint gestures by various single finger, multi-finger, stylus, and/or palm motions on the surface105of the touch screen display104. The multi-touch LCD touch screen receives the single point/multipoint gestures as user inputs, and provides the user inputs to the processor, which executes program instructions stored in the memory to carry out the predetermined operations associated with the single point/multipoint gestures, at least at some times, in conjunction with the ultrasound engine108. As shown inFIG.3A, such single point/multipoint gestures on the surface105of the touch screen display104can include, but are not limited to, a tap gesture302, a pinch gesture304, a flick gesture306,314, a rotate gesture308,316, a double tap gesture310, a spread gesture312, a drag gesture318, a press gesture320, a press and drag gesture322, and/or a palm gesture324. For example, such single point/multipoint gestures can be stored in at least one gesture library in the memory implemented on the computer motherboard106. The computer program operative to control system operations can be stored on a computer readable medium and can optionally be implemented using a touch processor connected to an image processor and a control processor connected to the system beamformer. Thus, beamformer delays associated with both transmission and reception can be adjusted in response to both static and moving touch gestures. In accordance with the illustrative embodiment ofFIG.1, at least one flick gesture306or314may be employed by a user of the medical ultrasound imaging equipment100to control the depth of tissue penetration of ultrasound waves generated by the ultrasound probe/transducer. For example, a dynamic, continuous, flick gesture306or314in the “up” direction, or any other suitable direction, on the surface105of the touch screen display104can increase the penetration depth by one (1) centimeter, or any other suitable amount.
Further, a dynamic, continuous, flick gesture306or314in the “down” direction, or any other suitable direction, on the surface105of the touch screen display104can decrease the penetration depth by one (1) centimeter, or any other suitable amount. Moreover, a dynamic, continuous, drag gesture318in the “up” or “down” direction, or any other suitable direction, on the surface105of the touch screen display104can increase or decrease the penetration depth in multiple centimeters, or any other suitable amounts. Additional operational modes and/or functions controlled by specific single point/multipoint gestures on the surface105of the touch screen display104can include, but are not limited to, freeze/store operations, 2-dimensional mode operations, gain control, color control, split screen control, PW imaging control, cine/time-series image clip scrolling control, zoom and pan control, full screen display, Doppler and 2-dimensional beam steering control, and/or body marking control. At least some of the operational modes and/or functions of the medical ultrasound imaging equipment100can be controlled by one or more touch controls implemented on the touch screen display104. Further, users can provide one or more specific single point/multipoint gestures as user inputs for specifying at least one selected subset of the touch controls to be implemented, as required and/or desired, on the touch screen display104. Shown inFIG.3Bis a process sequence in which ultrasound beamforming and imaging operations340are controlled in response to touch gestures entered on a touchscreen. Various static and moving touch gestures have been programmed into the system such that the data processor is operable to control beamforming and image processing operations342within the tablet device. A user can select344a first display operation having a first plurality of touch gestures associated therewith. Using a static or moving gesture, the user can perform one of the plurality of gestures operable to control the imaging operation, and can specifically select one of a plurality of gestures that adjusts beamforming parameters346used to generate image data associated with the first display operation. The displayed image is updated and displayed348in response to the updated beamforming procedure. The user can further elect to perform a different gesture having a different velocity characteristic (direction or speed or both) to adjust350a second characteristic of the first ultrasound display operation. The displayed image is then updated352based on the second gesture, which can modify image processing parameters or beamforming parameters. Examples of this process are described in further detail herein where changes in velocity and direction of different gestures can be associated with distinct imaging parameters of a selected display operation. Ultrasound images of flow or tissue movement, whether color flow or spectral Doppler, are essentially obtained from measurements of movement. In ultrasound scanners, a series of pulses is transmitted to detect movement of blood. Echoes from stationary targets are the same from pulse to pulse. Echoes from moving scatterers exhibit slight differences in the time for the signal to be returned to the scanner. As can be seen fromFIGS.3C-3H, there has to be motion in the direction of the beam; if the flow is perpendicular to the beam, there is no relative motion from pulse to pulse, and no flow is detected.
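The angle dependence just described follows from the standard Doppler relation f_d = 2 f_0 v cos(θ) / c. A minimal sketch evaluating it at several beam-to-flow angles (the relation is standard; the 7 MHz transmit frequency and 0.5 m/s flow speed are illustrative assumptions):

```python
import math

# Standard Doppler shift relation: f_d = 2 * f0 * v * cos(theta) / c.
# Transmit frequency and flow speed below are illustrative assumptions.

C_TISSUE = 1540.0  # speed of sound in soft tissue, m/s

def doppler_shift_hz(f0_hz, flow_speed_m_s, angle_deg):
    return 2.0 * f0_hz * flow_speed_m_s * math.cos(math.radians(angle_deg)) / C_TISSUE

for angle in (0, 30, 60, 80, 90):
    fd = doppler_shift_hz(f0_hz=7e6, flow_speed_m_s=0.5, angle_deg=angle)
    print(f"beam-to-flow angle {angle:2d} deg -> Doppler shift {fd:7.1f} Hz")
```

At 90° the shift collapses to zero, which is exactly why steering the beam toward the flow direction, as in the figures discussed next, recovers the signal.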
These differences can be measured as a direct time difference or, more usually, in terms of a phase shift from which the 'Doppler frequency' is obtained. They are then processed to produce either a color flow display or a Doppler sonogram. InFIG.3C-3D, the flow direction is perpendicular to the beam direction, so no flow is measured by Pulse Wave spectral Doppler. InFIG.3G-3H, when the ultrasound beam is steered to an angle that is better aligned to the flow, a weak flow is shown in the color flow map, and in addition flow is measured by Pulse Wave Doppler. InFIG.3H, when the ultrasound beam is steered to an angle much better aligned to the flow direction in response to a moving gesture, the color flow map is stronger; in addition, when the correction angle of the PWD is placed in alignment with the flow, a strong flow is measured by the PWD. In this tablet ultrasound system, an ROI (region of interest) is also used to define the direction of the ultrasound transmit beam in response to a moving gesture. A liver image with a branch of renal flow in color flow mode is shown inFIG.3I. Since the ROI is straight down from the transducer, the flow direction is almost normal to the ultrasound beam, so very weak renal flow is detected. Here, the color flow mode is used to image a renal flow in the liver. As can be seen, the beam is almost normal to the flow and very weak flow is detected. A flick gesture with the finger outside of the ROI is used to steer the beam. As can be seen inFIG.3J, the ROI is steered by resetting beamforming parameters so that the beam direction is more aligned to the flow direction, and a much stronger flow within the ROI is detected. InFIG.3J, a flick gesture with the finger outside of the ROI is used to steer the ultrasound beam into a direction more aligned to the flow direction. Stronger flow within the ROI can be seen. A panning gesture with the finger inside the ROI will move the ROI box into a position that covers the entire renal region, i.e., panning allows a translation movement of the ROI box such that the box covers the entire target area. FIG.3Kdemonstrates a panning gesture. With the finger inside the ROI, the user can move the ROI box to any place within the image plane. In the above embodiment, it is easy to determine that a "flick" gesture with a finger outside the "ROI" box is intended for steering a beam, and that a "drag-and-move, panning" gesture with a finger inside the "ROI" is intended for moving the ROI box. However, in applications in which there is no ROI as a reference region, it is difficult to differentiate a "flick" from a "panning" gesture; in this case, the touch-screen program needs to track the initial velocity or acceleration of the finger to determine whether it is a "flick" gesture or a "drag-and-move" gesture. Thus, the touch engine that receives data from the touchscreen sensor device is programmed to discriminate between velocity thresholds that indicate different gestures. Thus, the time, speed and direction associated with different moving gestures can have preset thresholds. Two and three finger static and moving gestures can have separate thresholds to differentiate these control operations. Note that preset displayed icons or virtual buttons can have distinct static pressure or time duration thresholds. When operated in full screen mode, the touchscreen processor, which preferably operates on the system's central processing unit that performs other imaging operations such as scan conversion, switches off the static icons.
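The velocity-threshold discrimination just described can be illustrated with a short sketch. The Python fragment below is an assumption-laden illustration rather than the touch engine itself; the 800 pixels-per-second threshold and all names are hypothetical tuning values.

    FLICK_VELOCITY_PX_S = 800.0  # assumed tuning threshold, pixels per second

    def classify_motion(x0, y0, x1, y1, dt_s, roi=None):
        """roi is (left, top, right, bottom) in pixels, or None if no ROI shown."""
        if roi is not None:
            # with an ROI as reference: start inside -> pan the box, outside -> steer
            inside = roi[0] <= x0 <= roi[2] and roi[1] <= y0 <= roi[3]
            return "pan" if inside else "flick"
        # without an ROI, fall back to the initial-velocity threshold
        v = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / max(dt_s, 1e-6)
        return "flick" if v >= FLICK_VELOCITY_PX_S else "pan"

    print(classify_motion(10, 10, 60, 10, 0.04))   # fast start -> "flick"
    print(classify_motion(10, 10, 30, 10, 0.50))   # slow drag -> "pan"
    print(classify_motion(100, 100, 120, 110, 0.5, roi=(80, 80, 200, 200)))  # "pan"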
FIGS.4A-4Cdepict exemplary subsets402,404,406of touch controls that can be implemented by users of the medical ultrasound imaging equipment100on the touch screen display104. It is noted that any other suitable subset(s) of touch controls can be implemented, as required and/or desired, on the touch screen display104. As shown inFIG.4A, the subset402includes a touch control408for performing 2-dimensional (2D) mode operations, a touch control410for performing gain control operations, a touch control412for performing color control operations, and a touch control414for performing image/clip freeze/store operations. For example, a user can employ the press gesture320to actuate the touch control408, returning the medical ultrasound imaging equipment100to 2D mode. Further, the user can employ the press gesture320against one side of the touch control410to decrease a gain level, and employ the press gesture320against another side of the touch control410to increase the gain level. Moreover, the user can employ the drag gesture318on the touch control412to identify ranges of densities on a 2D image, using a predetermined color code. In addition, the user can employ the press gesture320to actuate the touch control414to freeze/store a still image or to acquire a cine image clip. As shown inFIG.4B, the subset404includes a touch control416for performing split screen control operations, a touch control418for performing PW imaging control operations, a touch control420for performing Doppler and 2-dimensional beam steering control operations, and a touch control422for performing annotation operations. For example, a user can employ the press gesture320against the touch control416, allowing the user to toggle between opposing sides of the split touch screen display104by alternately employing the tap gesture302on each side of the split screen. Further, the user can employ the press gesture320to actuate the touch control418and enter the PW mode, which allows (1) user control of the angle correction, (2) movement (e.g., “up” or “down”) of a baseline that can be displayed on the touch screen display104by employing the press and drag gesture322, and/or (3) an increase or a decrease of scale by employing the tap gesture302on a scale bar that can be displayed on the touch screen display104. Moreover, the user can employ the press gesture320against one side of the touch control420to perform 2D beam steering to the “left” or any other suitable direction in increments of five (5) or any other suitable increment, and employ the press gesture320against another side of the touch control420to perform 2D beam steering to the “right” or any other suitable direction in increments of five (5) or any other suitable increment. In addition, the user can employ the tap gesture302on the touch control422, allowing the user to enter annotation information via a pop-up keyboard that can be displayed on the touch screen display104. As shown inFIG.4C, the subset406includes a touch control424for performing dynamic range operations, a touch control426for performing Teravision™ software operations, a touch control428for performing map operations, and a touch control430for performing needle guide operations. For example, a user can employ the press gesture320and/or the press and drag gesture322against the touch control424to control or set the dynamic range. 
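A minimal sketch of how the press-gesture touch controls might map to parameter increments follows, assuming a 1 dB gain step and assuming that the "increments of five (5)" for beam steering are degrees; all names and values are hypothetical.

    GAIN_STEP_DB = 1.0   # assumed per-press gain step
    STEER_STEP = 5.0     # "increments of five (5)", assumed to be degrees

    class TouchControls:
        def __init__(self):
            self.gain_db = 0.0
            self.steer = 0.0

        def press_gain(self, side):
            # one side of touch control410 decreases gain, the other increases it
            self.gain_db += GAIN_STEP_DB if side == "increase" else -GAIN_STEP_DB

        def press_steer(self, side):
            # touch control420 steers the 2D beam left or right in 5-unit steps
            self.steer += STEER_STEP if side == "right" else -STEER_STEP

    c = TouchControls()
    c.press_gain("increase"); c.press_steer("left")
    print(c.gain_db, c.steer)  # 1.0 -5.0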
Further, the user can employ the tap gesture302on the touch control426to choose a desired level of the Teravision™ software to be executed from the memory by the processor on the computer motherboard106. Moreover, the user can employ the tap gesture302on the touch control428to perform a desired map operation. In addition, the user can employ the press gesture320against the touch control430to perform a desired needle guide operation. In accordance with the present application, various measurements and/or tracings of objects (such as organs, tissues, etc.) displayed as ultrasound images on the touch screen display104of the medical ultrasound imaging equipment100(seeFIG.1) can be performed, using single point/multipoint gestures on the surface105of the touch screen display104. The user can perform such measurements and/or tracings of objects directly on an original ultrasound image of the displayed object, on a magnified version of the ultrasound image of the displayed object, and/or on a magnified portion of the ultrasound image within a virtual window506(seeFIGS.5C and5D) on the touch screen display104. FIGS.5A and5Bdepict an original ultrasound image of an exemplary object, namely, a liver502with a cystic lesion504, displayed on the touch screen display104of the medical ultrasound imaging equipment100(seeFIG.1). It is noted that such an ultrasound image can be generated by the medical ultrasound imaging equipment100in response to penetration of the liver tissue by ultrasound waves generated by an ultrasound probe/transducer operatively connected to the equipment100. Measurements and/or tracings of the liver502with the cystic lesion504can be performed directly on the original ultrasound image displayed on the touch screen display104(seeFIGS.5A and5B), or on a magnified version of the ultrasound image. For example, the user can obtain such a magnified version of the ultrasound image using a spread gesture (see, e.g., the spread gesture312;FIG.3) by placing two (2) fingers on the surface105of the touch screen display104, and spreading them apart to magnify the original ultrasound image. Such measurements and/or tracings of the liver502and cystic lesion504can also be performed on a magnified portion of the ultrasound image within the virtual window506(seeFIGS.5C and5D) on the touch screen display104. For example, using his or her finger (see, e.g., a finger508;FIGS.5A-5D), the user can obtain the virtual window506by employing a press gesture (see, e.g., the press gesture320;FIG.3) against the surface105of the touch screen display104(seeFIG.5B) in the vicinity of a region of interest, such as the region corresponding to the cystic lesion504. In response to the press gesture, the virtual window506(seeFIGS.5C and5D) is displayed on the touch screen display104, possibly at least partially superimposed on the original ultrasound image, thereby providing the user with a view of a magnified portion of the liver502in the vicinity of the cystic lesion504. For example, the virtual window506ofFIG.5Ccan provide a view of a magnified portion of the ultrasound image of the cystic lesion504, which is covered by the finger508pressed against the surface105of the touch screen display104. 
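The virtual-window behavior can be illustrated with a short NumPy sketch that crops the image patch under the touch point and enlarges it. The patch size and the 2x zoom factor are assumptions for illustration only.

    import numpy as np

    def virtual_window(image, cx, cy, half=32, zoom=2):
        """Return a zoomed copy of the square patch centered on the touch point."""
        y0, y1 = max(cy - half, 0), min(cy + half, image.shape[0])
        x0, x1 = max(cx - half, 0), min(cx + half, image.shape[1])
        patch = image[y0:y1, x0:x1]
        # nearest-neighbor zoom: repeat rows and columns `zoom` times
        return np.repeat(np.repeat(patch, zoom, axis=0), zoom, axis=1)

    frame = np.random.rand(480, 640)                # stand-in ultrasound frame
    win = virtual_window(frame, cx=320, cy=240)     # 64x64 patch shown at 128x128
    print(win.shape)                                # (128, 128)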
To re-position the magnified cystic lesion504within the virtual window506, the user can employ a press and drag gesture (see, e.g., the press and drag gesture322;FIG.3) against the surface105of the touch screen display104(seeFIG.5D), thereby moving the image of the cystic lesion504to a desired position within the virtual window506. In one embodiment, the medical ultrasound imaging equipment100can be configured to allow the user to select a level of magnification within the virtual window506to be 2 times larger, 4 times larger, or any other suitable number of times larger than the original ultrasound image. The user can remove the virtual window506from the touch screen display104by lifting his or her finger (see, e.g., the finger508;FIGS.5A-5D) from the surface105of the touch screen display104. FIG.6Adepicts an ultrasound image of another exemplary object, namely, an apical four (4) chamber view of a heart602, displayed on the touch screen display104of the medical ultrasound imaging equipment100(seeFIG.1). It is noted that such an ultrasound image can be generated by the medical ultrasound imaging equipment100in response to penetration of the heart tissue by ultrasound waves generated by an ultrasound probe/transducer operatively connected to the equipment100. Measurements and/or tracings of the heart602can be performed directly on the original ultrasound image displayed on the touch screen display104(seeFIGS.6A-6E), or on a magnified version of the ultrasound image. For example, using his or her fingers (see, e.g., fingers610,612;FIGS.6B-6E), the user can perform a manual tracing of an endocardial border604(seeFIG.6B) of a left ventricle606(seeFIGS.6B-6E) of the heart602by employing one or more multi-finger gestures on the surface105of the touch screen display104. In one embodiment, using his or her fingers (see, e.g., the fingers610,612;FIGS.6B-6E), the user can obtain a cursor607(seeFIG.6B) by employing a double tap gesture (see, e.g., the double tap gesture310;FIG.3A) on the surface105of the touch screen display104, and can move the cursor607by employing a drag gesture (see, e.g., the drag gesture318;FIG.3A) using one finger, such as the finger610, thereby moving the cursor607to a desired location on the touch screen display104. The systems and methods described herein can be used for the quantitative measurement of heart wall motion and specifically for the measurement of ventricular dyssynchrony as described in detail in U.S. application Ser. No. 10/817,316 filed on Apr. 2, 2004, the entire contents of which is incorporated herein by reference. Once the cursor607is at the desired location on the touch screen display104, as determined by the location of the finger610, the user can fix the cursor607at that location by employing a tap gesture (see, e.g., the tap gesture302; seeFIG.3) using another finger, such as the finger612. To perform a manual tracing of the endocardial border604(seeFIG.6B), the user can employ a press and drag gesture (see, e.g., the press and drag gesture322;FIG.3) using the finger610, as illustrated inFIGS.6C and6D. Such a manual tracing of the endocardial border604can be highlighted on the touch screen display104in any suitable fashion, such as by a dashed line608(seeFIGS.6C-6E). The manual tracing of the endocardial border604can continue until the finger610arrives at any suitable location on the touch screen display104, or until the finger610returns to the location of the cursor607, as illustrated inFIG.6E. 
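A minimal sketch of the two-finger cursor-and-trace interaction described above follows, with hypothetical class and method names; it illustrates the flow of the gestures (double tap places a cursor, a one-finger drag moves it, a second-finger tap fixes it, and a press-and-drag then records the trace), not the equipment's actual software.

    class TraceSession:
        """Hypothetical gesture state machine for manual border tracing."""
        def __init__(self):
            self.cursor = None   # current cursor position, if any
            self.fixed = False   # True once the cursor has been fixed
            self.points = []     # accumulated trace points

        def on_double_tap(self, x, y):
            self.cursor, self.fixed = (x, y), False   # obtain a new cursor

        def on_drag(self, x, y):
            if self.cursor is not None and not self.fixed:
                self.cursor = (x, y)        # one-finger drag moves the cursor
            elif self.fixed:
                self.points.append((x, y))  # press-and-drag records the trace

        def on_second_finger_tap(self):
            if not self.fixed:
                self.fixed = True           # second-finger tap fixes the cursor
                self.points = [self.cursor]
                return None
            return self.points              # a later tap completes the trace

    s = TraceSession()
    s.on_double_tap(100, 100); s.on_drag(105, 102); s.on_second_finger_tap()
    s.on_drag(110, 108); s.on_drag(118, 114)
    trace = s.on_second_finger_tap()        # list of traced (x, y) points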
Once the finger610is at the location of the cursor607, or at any other suitable location, the user can complete the manual tracing operation by employing a tap gesture (see, e.g., the tap gesture302; seeFIG.3) using the finger612. It is noted that such a manual tracing operation can be employed to trace any other suitable feature(s) and/or waveform(s), such as a pulsed wave Doppler (PWD) waveform. In one embodiment, the medical ultrasound imaging equipment100can be configured to perform any suitable calculation(s) and/or measurement(s) relating to such feature(s) and/or waveform(s), based at least in part on a manual tracing(s) of the respective feature(s)/waveform(s). As described above, the user can perform measurements and/or tracings of objects on a magnified portion of an original ultrasound image of a displayed object within a virtual window on the touch screen display104.FIGS.7A-7Cdepict an original ultrasound image of an exemplary object, namely, a liver702with a cystic lesion704, displayed on the touch screen display104of the medical ultrasound imaging equipment100(seeFIG.1).FIGS.7A-7Cfurther depict a virtual window706that provides a view of a magnified portion of the ultrasound image of the cystic lesion704, which is covered by one of the user's fingers, such as a finger710, pressed against the surface105of the touch screen display104. Using his or her fingers (see, e.g., fingers710,712;FIGS.7A-7C), the user can perform a size measurement of the cystic lesion704within the virtual window706by employing one or more multi-finger gestures on the surface105of the touch screen display104. For example, using his or her fingers (see, e.g., the fingers710,712;FIGS.7A-7C), the user can obtain a first cursor707(seeFIGS.7B,7C) by employing a double tap gesture (see, e.g., the double tap gesture310;FIG.3) on the surface105, and can move the first cursor707by employing a drag gesture (see, e.g., the drag gesture318;FIG.3) using one finger, such as the finger710, thereby moving the first cursor707to a desired location. Once the first cursor707is at the desired location, as determined by the location of the finger710, the user can fix the first cursor707at that location by employing a tap gesture (see, e.g., the tap gesture302; seeFIG.3) using another finger, such as the finger712. Similarly, the user can obtain a second cursor709(seeFIG.7C) by employing a double tap gesture (see, e.g., the double tap gesture310;FIG.3) on the surface105, and can move the second cursor709by employing a drag gesture (see, e.g., the drag gesture318;FIG.3) using the finger710, thereby moving the second cursor709to a desired location. Once the second cursor709is at the desired location, as determined by the location of the finger710, the user can fix the second cursor709at that location by employing a tap gesture (see, e.g., the tap gesture302; seeFIG.3) using the finger712. In one embodiment, the medical ultrasound imaging equipment100can be configured to perform any suitable size calculation(s) and/or measurement(s) relating to the cystic lesion704, based at least in part on the locations of the first and second cursors707,709. 
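Once two cursors are fixed, the size measurement reduces to the scaled distance between them. In the sketch below, the millimeters-per-pixel calibration constant is a hypothetical value; a real system would derive the scale from the imaging depth and scan geometry.

    import math

    MM_PER_PIXEL = 0.15  # hypothetical display calibration

    def cursor_distance_mm(c1, c2, mm_per_px=MM_PER_PIXEL):
        """Distance between two fixed cursors, scaled to millimeters."""
        return math.dist(c1, c2) * mm_per_px

    size_mm = cursor_distance_mm((140, 220), (212, 268))
    print(f"measured size: {size_mm:.1f} mm")  # ~13.0 mm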
FIGS.8A-8Cdepict an original ultrasound image of an exemplary object, namely, a liver802with a cystic lesion804, displayed on the touch screen display104of the medical ultrasound imaging equipment100(seeFIG.1).FIGS.8A-8Cfurther depict a virtual window806that provides a view of a magnified portion of the ultrasound image of the cystic lesion804, which is covered by one of the user's fingers, such as a finger810, pressed against the surface105of the touch screen display104. Using his or her fingers (see, e.g., fingers810,812;FIGS.8A-8C), the user can perform a caliper measurement of the cystic lesion804within the virtual window806by employing one or more multi-finger gestures on the surface105of the touch screen display104. For example, using his or her fingers (see, e.g., the fingers810,812;FIGS.8A-8C), the user can obtain a first cursor807(seeFIGS.8B,8C) by employing a double tap gesture (see, e.g., the double tap gesture310;FIG.3) on the surface105, and can move the cursor807by employing a drag gesture (see, e.g., the drag gesture318;FIG.3) using one finger, such as the finger810, thereby moving the cursor807to a desired location. Once the cursor807is at the desired location, as determined by the location of the finger810, the user can fix the cursor807at that location by employing a tap gesture (see, e.g., the tap gesture302; seeFIG.3) using another finger, such as the finger812. The user can then employ a press and drag gesture (see, e.g., the press and drag gesture322;FIG.3) to obtain a connecting line811(seeFIGS.8B,8C), and to extend the connecting line811from the first cursor807across the cystic lesion804to a desired location on another side of the cystic lesion804. Once the connecting line811is extended across the cystic lesion804to the desired location on the other side of the cystic lesion804, the user can employ a tap gesture (see, e.g., the tap gesture302; seeFIG.3) using the finger812to obtain and fix a second cursor809(seeFIG.8C) at that desired location. In one embodiment, the medical ultrasound imaging equipment100can be configured to perform any suitable caliper calculation(s) and/or measurement(s) relating to the cystic lesion804, based at least in part on the connecting line811extending between the locations of the first and second cursors807,809. FIG.9Ashows a system140in which a transducer housing150with an array of transducer elements152can be attached at connector114to housing102. Each probe150can have a probe identification circuit154that uniquely identifies the probe that is attached. When the user inserts a different probe with a different array, the system identifies the probe operating parameters. Note that preferred embodiments can include a display104having a touch sensor107which can be connected to a touch processor109that analyzes touchscreen data from the sensor107and transmits commands to both image processing operations and to a beamformer control processor (1116,1124). In a preferred embodiment, the touch processor can include a computer readable medium that stores instructions to operate an ultrasound touchscreen engine that is operable to control display and imaging operations described herein. FIG.9Bshows a software flowchart900of a typical transducer management module902within the ultrasound application program. When a TRANSDUCER ATTACH904event is detected, the Transducer Management Software Module902first reads the Transducer type ID906and hardware revision information from the IDENTIFICATION Segment.
The information is used to fetch the particular set of transducer profile data908from the hard disk and load it into the memory of the application program. The software then reads the adjustment data from the FACTORY Segment910and applies the adjustments to the profile data just loaded into memory912. The software module then sends a TRANSDUCER ATTACH Message914to the main ultrasound application program, which uses the transducer profile already loaded. After acknowledgment916, an ultrasound imaging sequence is performed and the USAGE segment is updated918. The Transducer Management Software Module then waits for either a TRANSDUCER DETACH event920, or the elapse of 5 minutes. If a TRANSDUCER DETACH event is detected921, a message924is sent and acknowledged926, the transducer profile data set is removed928from memory and the module goes back to wait for another TRANSDUCER ATTACH event. If a 5-minute time period expires without detecting a TRANSDUCER DETACH event, the software module increments a Cumulative Usage Counter in the USAGE Segment922, and waits for another 5-minute period or a TRANSDUCER DETACH event. The cumulative usage is recorded in memory for maintenance and replacement records. There are many types of ultrasound transducers. They differ by geometry, number of elements, and frequency response. For example, a linear array with a center frequency of 10 to 15 MHz is better suited for breast imaging, and a curved array with a center frequency of 3 to 5 MHz is better suited for abdominal imaging. It is often necessary to use different types of transducers for the same or different ultrasound scanning sessions. For ultrasound systems with only one transducer connection, the operator will change the transducer prior to the start of a new scanning session. In some applications, it is necessary to switch among different types of transducers during one ultrasound scanning session. In this case, it is more convenient to have multiple transducers connected to the same ultrasound system, so the operator can quickly switch among these connected transducers by hitting a button on the operator console, without having to physically detach and re-attach the transducers, which takes a longer time. Preferred embodiments of the invention can include a multiplexor within the tablet housing that can select between a plurality of probe connector ports within the tablet housing, or alternatively, the tablet housing can be connected to an external multiplexor that can be mounted on a cart as described herein. FIG.9Cis a perspective view of an exemplary needle sensing positioning system using ultrasound transducers without the requirement of any active electronics in the sensor assembly. The sensor transducer may include a passive ultrasound transducer element. The elements may be used in a similar way as a typical transducer probe, utilizing the ultrasound engine electronics. The system958includes the addition of ultrasound transducer elements960, added to a needle guide962, that is represented inFIG.9Cbut that may take any suitable form factor. The ultrasound transducer element960, and needle guide962, may be mounted using a needle guide mounting bracket966, to an ultrasound transducer probe acoustic handle or an ultrasound imaging probe assembly970. A disc mounted on the exposed end of the needle, the ultrasound reflector disc964, is reflective to ultrasonic waves. The ultrasound transducer element960, on the needle guide962, may be connected to the ultrasound engine.
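Referring back to the FIG.9B flow described above, the attach/detach cycle with its 5-minute usage counter can be sketched as follows. The event queue and the profile helper functions are hypothetical stand-ins for the actual module, included purely to illustrate the control flow.

    import queue

    USAGE_PERIOD_S = 5 * 60  # cumulative-usage tick interval from FIG.9B

    def manage_transducer(events, load_profile, apply_factory, send_msg):
        """Run one attach/detach cycle; events is a queue.Queue of dict events.
        Returns the minutes of cumulative usage recorded for the probe."""
        ev = events.get()                                # wait for ATTACH
        assert ev["type"] == "ATTACH"
        profile = load_profile(ev["transducer_id"])      # fetch profile by ID
        apply_factory(profile, ev["factory_segment"])    # FACTORY adjustments
        send_msg("TRANSDUCER ATTACH")
        usage_min = 0
        while True:
            try:
                nxt = events.get(timeout=USAGE_PERIOD_S)
                if nxt["type"] == "DETACH":
                    send_msg("TRANSDUCER DETACH")        # unload profile, done
                    return usage_min
            except queue.Empty:
                usage_min += 5    # 5 more minutes without a DETACH event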
The connection may be made through a separate cable to a dedicated probe connector on the engine, similar to sharing the pencil CW probe connector. In an alternate embodiment, a small short cable may be plugged into the larger image transducer probe handle or a split cable connecting to the same probe connector at the engine. In another alternate embodiment the connection may be made via an electrical connector between the image probe handle and the needle guide without a cable in between. In an alternate embodiment the ultrasound transducer elements on the needle guide may be connected to the ultrasound engine by enclosing the needle guide and transducer elements in the same mechanical enclosure of the imaging probe handle. FIG.9Dis a perspective view of a needle guide962, positioned with transducer elements960and the ultrasound reflector disc964. The position of the reflector disc964is located by transmitting ultrasonic wave972, from the transducer element960on the needle guide962. The ultrasound wave972travels through the air towards reflector disc964and is reflected by the reflector disc964. The reflected ultrasound wave974, reaches the transducer element960on the needle guide962. The distance976, between the reflector disc964, and the transducer element960is calculated from the time elapsed and the speed of sound in the air. FIG.9Eis a perspective view of an alternate embodiment of the exemplary needle sensing positioning system using ultrasound transducers without the requirement of any active electronics in the sensor assembly. The sensor transducer may include a passive ultrasound transducer element. The elements may be used in a similar way as a typical transducer probe, utilizing the ultrasound engine electronics. The system986includes needle guide962that may be mounted to a needle guide mounting bracket966that may be coupled to an ultrasound imaging probe assembly for imaging the patient's body982, or alternative suitable form factors. The ultrasound reflector disc964may be mounted at the exposed end of the needle956. In this embodiment a linear ultrasound acoustic array978, is mounted parallel to the direction of movement of the needle956. The linear ultrasound acoustic array978includes an ultrasound transducer array980positioned parallel to the needle956. In this embodiment an ultrasound imaging probe assembly982is positioned for imaging the patient's body. The ultrasound imaging probe assembly for imaging the patient body982is configured with an ultrasound transducer array984. In this embodiment, the position of the ultrasound reflector disc964can be detected by using the ultrasound transducer array980coupled to the linear ultrasound acoustic array978. The position of the reflector disc964is located by transmitting ultrasonic wave972, from the transducer array980on the linear ultrasound acoustic array978. The ultrasound wave972travels through the air towards reflector disc964and is reflected by the reflector disc964. The reflected ultrasound wave974, reaches the transducer array980on the linear ultrasound acoustic array978. The distance976, between the reflector disc964, and the transducer array980is calculated from the time elapsed and the speed of sound in the air. In an alternate embodiment an alternate algorithm may be used to sequentially scan the plurality of elements in the transducer array and analyze the reflections produced per transducer array element.
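The range calculation described above reduces to a time-of-flight computation. In the sketch below, the speed of sound in air (approximately 343 m/s at room temperature) is a textbook value and not a figure from this document.

    SPEED_OF_SOUND_AIR_M_S = 343.0  # approximate value at room temperature

    def reflector_distance_mm(round_trip_s):
        # the wave travels to the reflector disc and back, so halve the path
        return SPEED_OF_SOUND_AIR_M_S * round_trip_s / 2.0 * 1000.0

    print(reflector_distance_mm(291.5e-6))  # ~50 mm for a 291.5 us round trip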
In an alternate embodiment a plurality of scans may occur prior to forming an ultrasound image. FIG.10Aillustrates an exemplary method for monitoring the synchrony of a heart in accordance with exemplary embodiments. In the method, a reference template is loaded into memory and used to guide a user in identifying an imaging plane (per step930). Next a user identifies a desired imaging plane (per step932). Typically an apical 4-chamber view of the heart is used; however, other views may be used without departing from the spirit of the invention. At times, identification of endocardial borders may be difficult, and when such difficulties are encountered tissue Doppler imaging of the same view may be employed (per step934). A reference template for identifying the septal and lateral free wall is provided (per step936). Next, standard tissue Doppler imaging (TDI) with pre-set velocity scales of, say, ±30 cm/sec may be used (per step938). Then, a reference of the desired triplex image may be provided (per step940). Either B-mode or TDI may be used to guide the range gate (per step942). B-mode can be used for guiding the range gate (per step944) or TDI for guiding the range gate (per step946). Using TDI or B-mode for guiding the range gate also allows the use of a direction correction angle for allowing the Spectral Doppler to display the radial mean velocity of the septal wall. A first pulsed-wave spectral Doppler is then used to measure the septal wall mean velocity using duplex or triplex mode (per step948). The software used to process the data and calculate dyssynchrony can utilize a location (e.g., a center point) to automatically set an angle between gated locations on a heart wall to assist in simplifying the setting of parameters. A second range-gate position is also guided using a duplex image or a TDI (per step950), and a directional correction angle may be used if desired. After step950, the mean velocities of the septal wall and lateral free wall are tracked by the system. Time integration of the Spectral Doppler mean velocities952at regions of interest (e.g., the septum wall and the left ventricular free wall) then provides the displacement of the septal and left free wall, respectively. The above method steps may be utilized in conjunction with a high pass filtering means, analog or digital, known in the relevant arts for removing any baseline disturbance present in collected signals. In addition, the disclosed method employs multiple simultaneous PW Spectral Doppler lines for tracking movement of the interventricular septum and the left ventricular free wall. In addition, a multiple gate structure may be employed along each spectral line, thus allowing quantitative measurement of regional wall motion. Averaging over multiple gates may allow measurement of global wall movement. FIG.10Bis a detailed schematic block diagram for an exemplary embodiment of the integrated ultrasound probe1040, which can be connected to any PC1010through an interface unit1020. The ultrasound probe1040is configured to transmit ultrasound waves to and receive reflected ultrasound waves from one or more image targets1064. The transducer1040can be coupled to the interface unit1020using one or more cables1066,1068. The interface unit1020can be positioned between the integrated ultrasound probe1040and the host computer1010. The two-stage beam forming system1040and1020can be connected to any PC through a USB connection1022,1012.
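Returning to step952 above, the time integration of the spectral Doppler mean velocity that yields wall displacement can be illustrated numerically. The sampling rate and the velocity trace in the sketch below are synthetic values chosen only for illustration.

    import numpy as np

    fs = 100.0                                   # assumed mean-velocity sampling rate, Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    v_cm_s = 3.0 * np.sin(2 * np.pi * 1.2 * t)   # synthetic septal wall mean velocity

    # cumulative trapezoidal integration of velocity gives displacement in cm
    disp_cm = np.concatenate(([0.0], np.cumsum((v_cm_s[1:] + v_cm_s[:-1]) / 2.0) / fs))
    print(disp_cm.max())                         # peak wall displacement over the interval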
The ultrasound probe1040can include sub-arrays/apertures1052consisting of neighboring elements with an aperture smaller than that of the whole array. Returned echoes are received by the 1D transducer array1062and transmitted to the controller1044. The controller initiates formation of a coarse beam by transmitting the signals to memory1058,1046. The memory1058,1046transmits a signal to a transmit Driver11050, and Transmit Driver m1054. Transmit Driver11050and Transmit Driver m1054then send the signal to mux11048and mux m1056, respectively. The signal is transmitted to sub-array beamformer11052and sub-array beamformer n1060. The outputs of each coarse beam forming operation can undergo further processing through a second stage beam forming in the interface unit1020to convert the beam forming output to digital representation. The coarse beam forming operations can be coherently summed to form a fine beam output for the array. The signals can be transmitted from the ultrasound probe1040sub-array beam former11052and sub-array beam former n1060to the A/D convertors1030and1028within the interface unit1020. Within the interface unit1020there are A/D converters1028,1030for converting the first stage beam forming output to digital representation. The digital conversion can be received from the A/D convertors1030,1028by a custom ASIC such as an FPGA1026to complete the second stage beam forming. The FPGA Digital beam forming1026can transmit information to the system controller1024. The system controller can transmit information to a memory1032which may send a signal back to the FPGA Digital Beam forming1026. Alternatively, the system controller1024may transmit information to the custom USB3 Chipset1022. The USB3 Chipset1022may then transmit information to a DC-DC convertor1034. In turn, the DC-DC convertor1034may transmit power from the interface unit1020to the ultrasound probe1040. Within the ultrasound probe1040a power supply1042may receive the power signal and interface with the transmit driver11050to provide the power to the front end integration probe. The Interface unit1020custom or USB3 Chipset1022may be used to provide a communication link between the interface unit1020and the host computer1010. The custom or USB3 Chipset1022transmits a signal to the host computer's1010custom or USB3 Chipset1012. The custom or the USB3 Chipset1012then interfaces with the microprocessor1014. The microprocessor1014then may display information or send information to a device1075. In an alternate embodiment, a narrow band beamformer can be used. For example, an individual analog phase shifter is applied to each of the received echoes. The phase shifted outputs within each sub-array are then summed to form a coarse beam. The A/D converters can be used to digitize each of the coarse beams; a digital beam former is then used to form the fine beam. In another embodiment, forming a 64 element linear array may use eight adjacent elements to form a coarse beam output. Such an arrangement may utilize eight output analog cables connecting the outputs of the integrated probe to the interface units. The coarse beams may be sent through the cable to the corresponding A/D convertors located in the interface unit. The digital delay is used to form a fine beam output. Eight A/D convertors may be required to form the digital representation. In another embodiment, forming a 128 element array may use sixteen sub-array beam forming circuits.
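The two-stage coarse/fine beamforming described above can be sketched in NumPy. Integer-sample delays and random echo data are used purely for illustration; the actual system applies analog or sample-interpolated delays, and the delay values below are synthetic, not derived from this document.

    import numpy as np

    n_elem, sub = 64, 8                  # 64 elements grouped into 8 sub-arrays
    rf = np.random.randn(n_elem, 1000)   # stand-in per-element echo lines

    def delay(x, d):
        return np.roll(x, d)             # integer-sample delay (wraps; sketch only)

    coarse_delays = np.random.randint(0, 4, size=n_elem)        # first stage
    fine_delays = np.random.randint(0, 8, size=n_elem // sub)   # second stage

    # stage 1: delay-and-sum each sub-array into one coarse beam
    coarse = np.stack([
        sum(delay(rf[g * sub + i], coarse_delays[g * sub + i]) for i in range(sub))
        for g in range(n_elem // sub)
    ])
    # stage 2: coherently sum the delayed coarse beams into the fine beam
    fine_beam = sum(delay(coarse[g], fine_delays[g]) for g in range(len(coarse)))
    print(fine_beam.shape)               # (1000,)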
Each circuit may form a coarse beam from an adjacent eight element array provided in the first stage output to the interface unit. Such an arrangement may utilize sixteen output analog cables connecting the outputs of the integrated probe to the interface units to digitize the output. A PC microprocessor or a DSP may be used to perform the down conversion, base-banding, scan conversion and post image processing functions. The microprocessor or DSP can also be used to perform all the Doppler processing functions. FIG.10Cis a detailed schematic block diagram for an exemplary embodiment of the integrated ultrasound probe1040with the first stage sub-array beamforming circuit, in which the second stage beamforming circuits are integrated inside the host computer1082. The back end computer with the second stage beamforming circuit may be a PDA, tablet or mobile device housing. The ultrasound probe1040is configured to transmit ultrasound waves to and receive reflected ultrasound waves from one or more image targets1064. The transducer1040is coupled to the host computer1082using one or more cables1066,1068. Note that A/D circuit elements can also be placed in the transducer probe housing. The ultrasound probe1040includes subarray/apertures1052consisting of neighboring elements with an aperture smaller than that of the whole array. Returned echoes are received by the 1D transducer array1062and transmitted to the controller1044. The controller initiates formation of a coarse beam by transmitting the signals to memory1058,1046. The memory1058,1046transmits a signal to a transmit Driver11050, and Transmit Driver m1054. Transmit Driver11050and Transmit Driver m1054then send the signal to mux11048and mux m1056, respectively. The signal is transmitted to subarray beamformer11052and subarray beamformer n1060. The outputs of each coarse beam forming operation then go through a second stage beam forming in the host computer1082to convert the beam forming output to digital representation. The coarse beamforming operations are coherently summed to form a fine beam output for the array. The signals are transmitted from the ultrasound probe1040subarray beamformer11052and subarray beamformer n1060to the A/D convertors1030and1028within the host computer1082. Within the host computer1082there are A/D converters1028,1030for converting the first stage beamforming output to digital representation. The digital conversion is received from the A/D convertors1030,1028by a custom ASIC such as an FPGA1026to complete the second stage beamforming. The FPGA Digital beamforming1026transmits information to the system controller1024. The system controller transmits information to a memory1032which may send a signal back to the FPGA Digital Beam forming1026. Alternatively, the system controller1024may transmit information to the custom USB3 Chipset1022. The USB3 Chipset1022may then transmit information to a DC-DC convertor1034. In turn, the DC-DC convertor1034may transmit power from the host computer1082to the ultrasound probe1040. Within the ultrasound probe1040a power supply1042may receive the power signal and interface with the transmit driver11050to provide the power to the front end integration probe. The power supply can include a battery to enable wireless operation of the transducer assembly. A wireless transceiver can be integrated into the controller circuit or a separate communications circuit to enable wireless transfer of image data and control signals.
The host computer's1082custom or USB3 Chipset1022may be used to provide a communication link to the custom or USB3 Chipset1012, which transmits a signal to the microprocessor1014. The microprocessor1014then may display information or send information to a device1075. FIG.11is a detailed schematic block diagram of an exemplary embodiment of the ultrasound engine108(i.e., the front-end ultrasound specific circuitry) and an exemplary embodiment of the computer motherboard106(i.e., the host computer) of the ultrasound device illustrated inFIGS.1and2A. The components of the ultrasound engine108and/or the computer motherboard106may be implemented in application-specific integrated circuits (ASICs). Exemplary ASICs have a high channel count and can pack 32 or more channels per chip in some exemplary embodiments. One of ordinary skill in the art will recognize that the ultrasound engine108and the computer motherboard106may include more or fewer modules than those shown. For example, the ultrasound engine108and the computer motherboard106may include the modules shown inFIG.17. A transducer array152is configured to transmit ultrasound waves to and receive reflected ultrasound waves from one or more image targets1102. The transducer array152is coupled to the ultrasound engine108using one or more cables1104. The ultrasound engine108includes a high-voltage transmit/receive (TR) module1106for applying drive signals to the transducer array152and for receiving return echo signals from the transducer array152. The ultrasound engine108includes a pre-amp/time gain compensation (TGC) module1108for amplifying the return echo signals and applying suitable TGC functions to the signals. The ultrasound engine108includes a sampled-data beamformer1110that applies the delay coefficients used in each channel after the return echo signals have been amplified and processed by the pre-amp/TGC module1108. In some exemplary embodiments, the high-voltage TR module1106, the pre-amp/TGC module1108, and the sample-interpolate receive beamformer1110may each be a silicon chip having 8 to 64 channels per chip, but exemplary embodiments are not limited to this range. In certain embodiments, the high-voltage TR module1106, the pre-amp/TGC module1108, and the sample-interpolate receive beamformer1110may each be a silicon chip having 8, 16, 32, or 64 channels, and the like. As illustrated inFIG.11, an exemplary TR module1106, an exemplary pre-amp/TGC module1108and an exemplary beamformer1110may each take the form of a silicon chip including 32 channels. The ultrasound engine108includes a first-in first-out (FIFO) buffer module1112which is used for buffering the processed data output by the beamformer1110. The ultrasound engine108also includes a memory1114for storing program instructions and data, and a system controller1116for controlling the operations of the ultrasound engine modules. The ultrasound engine108interfaces with the computer motherboard106over a communications link112which can follow a standard high-speed communications protocol, such as the Fire Wire (IEEE 1394 Standards Serial Interface) or fast (e.g., 200-400 Mbits/second or faster) Universal Serial Bus (USB 2.0, USB 3.0) protocol. The standard communication link to the computer motherboard operates at least at 400 Mbits/second or higher, preferably at 800 Mbits/second or higher. Alternatively, the link112can be a wireless connection such as an infrared (IR) link.
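The pre-amp/TGC stage described above applies gain that grows with echo arrival time to offset depth-dependent tissue attenuation. The sketch below uses a typical textbook attenuation coefficient (0.5 dB/cm/MHz) and assumed probe parameters, which are illustrative values rather than figures from this document.

    import numpy as np

    fs = 40e6               # assumed RF sampling rate, Hz
    c = 1540.0              # speed of sound in tissue, m/s
    f_mhz, alpha = 3.5, 0.5 # probe frequency, MHz; attenuation, dB/cm/MHz

    n = 4096
    t = np.arange(n) / fs
    depth_cm = c * t / 2.0 * 100.0               # round-trip time -> depth
    gain_db = 2.0 * alpha * f_mhz * depth_cm     # compensate two-way attenuation
    tgc = 10.0 ** (gain_db / 20.0)

    rf_line = np.random.randn(n)                 # stand-in echo line
    compensated = rf_line * tgc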
The ultrasound engine108includes a communications chipset1118(e.g., a Fire Wire chipset) to establish and maintain the communications link112. Similarly, the computer motherboard106also includes a communications chipset1120(e.g., a Fire Wire chipset) to establish and maintain the communications link112. The computer motherboard106includes a core computer-readable memory1122for storing data and/or computer-executable instructions for performing ultrasound imaging operations. The memory1122forms the main memory for the computer and, in an exemplary embodiment, may store about 4 GB of DDR3 memory. The computer motherboard106also includes a microprocessor1124for executing computer-executable instructions stored on the core computer-readable memory1122for performing ultrasound imaging processing operations. An exemplary microprocessor1124may be an off-the-shelf commercial computer processor, such as an Intel Core-i5 processor. Another exemplary microprocessor1124may be a digital signal processor (DSP) based processor, such as one or more DaVinci™ processors from Texas Instruments. The computer motherboard106also includes a display controller1126for controlling a display device that may be used to display ultrasound data, scans and maps. Exemplary operations performed by the microprocessor1124include, but are not limited to, down conversion (for generating I, Q samples from received ultrasound data), scan conversion (for converting ultrasound data into a display format of a display device), Doppler processing (for determining and/or imaging movement and/or flow information from the ultrasound data), Color Flow processing (for generating, using autocorrelation in one embodiment, a color-coded map of Doppler shifts superimposed on a B-mode ultrasound image), Power Doppler processing (for determining power Doppler data and/or generating a power Doppler map), Spectral Doppler processing (for determining spectral Doppler data and/or generating a spectral Doppler map), and post signal processing. These operations are described in further detail in WO 03/079038 A2, filed Mar. 11, 2003, titled "Ultrasound Probe with Integrated Electronics," the entire contents of which are expressly incorporated herein by reference. To achieve a smaller and lighter portable ultrasound device, the overall packaging size and footprint of the circuit board providing the ultrasound engine108is reduced. To this end, exemplary embodiments provide a small and light portable ultrasound device that minimizes overall packaging size and footprint while providing a high channel count. In some embodiments, a high channel count circuit board of an exemplary ultrasound engine may include one or more multi-chip modules in which each chip provides multiple channels, for example, 32 channels. The term "multi-chip module," as used herein, refers to an electronic package in which multiple integrated circuits (IC) are packaged onto a unifying substrate, facilitating their use as a single component, i.e., as a larger IC. A multi-chip module may be used in an exemplary circuit board to enable two or more active IC components integrated on a High Density Interconnection (HDI) substrate to reduce the overall packaging size. In an exemplary embodiment, a multi-chip module may be assembled by vertically stacking a transmit/receive (TR) silicon chip, an amplifier silicon chip and a beamformer silicon chip of an ultrasound engine.
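The autocorrelation-based Color Flow processing noted above is commonly implemented with a lag-one (Kasai) estimator, one well-known realization of that approach. The sketch below uses synthetic slow-time I/Q data and assumed parameters; it illustrates the estimator, not this system's specific implementation.

    import numpy as np

    prf, f0, c = 4000.0, 3.5e6, 1540.0   # pulse rate (Hz), center freq, sound speed
    n_pulses = 12
    v_true = 0.3                         # m/s, synthetic axial blood velocity

    fd = 2.0 * v_true * f0 / c           # Doppler shift for the synthetic flow
    k = np.arange(n_pulses)
    iq = np.exp(2j * np.pi * fd * k / prf)   # slow-time I/Q ensemble at one gate

    r1 = np.sum(iq[1:] * np.conj(iq[:-1]))   # lag-one autocorrelation
    v_est = np.angle(r1) * prf * c / (4.0 * np.pi * f0)
    print(v_est)                             # ~0.3 m/s recovered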
A single circuit board of the ultrasound engine may include one or more of these multi-chip modules to provide a high channel count, while minimizing the overall packaging size and footprint of the circuit board. FIG.12depicts a schematic side view of a portion of a circuit board1200including a multi-chip module assembled in a vertically stacked configuration. Two or more layers of active electronic integrated circuit components are integrated vertically into a single circuit. The IC layers are oriented in spaced planes that extend substantially parallel to one another in a vertically stacked configuration. InFIG.12, the circuit board includes an HDI substrate1202for supporting the multi-chip module. A first integrated circuit chip1204including, for example, a first beamformer device is coupled to the substrate1202using any suitable coupling mechanism, for example, epoxy application and curing. A first spacer layer1206is coupled to the surface of the first integrated circuit chip1204opposite to the substrate1202using, for example, epoxy application and curing. A second integrated circuit chip1208having, for example, a second beamformer device is coupled to the surface of the first spacer layer1206opposite to the first integrated circuit chip1204using, for example, epoxy application and curing. A metal frame1210is provided for mechanical and/or electrical connection among the integrated circuit chips. An exemplary metal frame1210may take the form of a leadframe. The first integrated circuit chip1204may be coupled to the metal frame1210using wiring1212. The second integrated circuit chip1208may be coupled to the same metal frame1210using wiring1214. A packaging1216is provided to encapsulate the multi-chip module assembly and to maintain the multiple integrated circuit chips in substantially parallel arrangement with respect to one another. As illustrated inFIG.12, the vertical three-dimensional stacking of the first integrated circuit chip1204, the first spacer layer1206and the second integrated circuit chip1208provides high-density functionality on the circuit board while minimizing overall packaging size and footprint (as compared to an ultrasound engine circuit board that does not employ a vertically stacked multi-chip module). One of ordinary skill in the art will recognize that an exemplary multi-chip module is not limited to two stacked integrated circuit chips. Exemplary numbers of chips vertically integrated in a multi-chip module may include, but are not limited to, two, three, four, five, six, seven, eight, and the like. In one embodiment of an ultrasound engine circuit board, a single multi-chip module as illustrated inFIG.12is provided. In other embodiments, a plurality of multi-chip modules, as also illustrated inFIG.12, may be provided. In an exemplary embodiment, a plurality of multi-chip modules (for example, two multi-chip modules) may be stacked vertically on top of one another on a circuit board of an ultrasound engine to further minimize the packaging size and footprint of the circuit board. In addition to the need for reducing the footprint, there is also a need for decreasing the overall package height in multi-chip modules. Exemplary embodiments may employ wafer thinning to sub-hundred microns to reduce the package height in multi-chip modules. Any suitable technique can be used to assemble a multi-chip module on a substrate.
Exemplary assembly techniques include, but are not limited to, laminated MCM (MCM-L) in which the substrate is a multi-layer laminated printed circuit board, deposited MCM (MCM-D) in which the multi-chip modules are deposited on the base substrate using thin film technology, and ceramic substrate MCM (MCM-C) in which several conductive layers are deposited on a ceramic substrate and embedded in glass layers that are co-fired at high temperatures (HTCC) or low temperatures (LTCC). FIG.13is a flowchart of an exemplary method for fabricating a circuit board including a multi-chip module assembled in a vertically stacked configuration. In step1302, a HDI substrate is fabricated or provided. In step1304, a metal frame (e.g., leadframe) is provided. In step1306, a first IC layer is coupled or bonded to the substrate using, for example, epoxy application and curing. The first IC layer is wire bonded to the metal frame. In step1308, a spacer layer is coupled to the first IC layer using, for example, epoxy application and curing, so that the layers are stacked vertically and extend substantially parallel to each other. In step1310, a second IC layer is coupled to the spacer layer using, for example, epoxy application and curing, so that all of the layers are stacked vertically and extend substantially parallel to one another. The second IC layer is wire bonded to the metal frame. In step1312, a packaging is used to encapsulate the multi-chip module assembly. Exemplary chip layers in a multi-chip module may be coupled to each other using any suitable technique. For example, in the embodiment illustrated inFIG.12, spacer layers may be provided between chip layers to spacedly separate the chip layers. Passive silicon layers, die attach paste layers and/or die attach film layers may be used as the spacer layers. Exemplary spacer techniques that may be used in fabricating a multi-chip module are further described in Toh C H et al., "Die Attach Adhesives for 3D Same-Sized Dies Stacked Packages," the 58th Electronic Components and Technology Conference (ECTC2008), pp. 1538-43, Florida, US (27-30 May 2008), the entire contents of which are expressly incorporated herein by reference. Important requirements for the die attach (DA) paste or film are excellent adhesion to the passivation materials of adjacent dies. Also, a uniform bond-line thickness (BLT) is required for a large die application. In addition, high cohesive strength at high temperatures and low moisture absorption are preferred for reliability. FIGS.14A-14Care schematic side views of exemplary multi-chip modules, including vertically stacked dies, that may be used in accordance with exemplary embodiments. Both peripheral and center pad wire bond (WB) packages are illustrated and may be used in wire bonding exemplary chip layers in a multi-chip module.FIG.14Ais a schematic side view of a multi-chip module including four vertically stacked dies in which the dies are spacedly separated from one another by passive silicon layers with a 2-in-1 dicing die attach film (D-DAF).FIG.14Bis a schematic side view of a multi-chip module including four vertically stacked dies in which the dies are spacedly separated from one another by DA film-based adhesives acting as die-to-die spacers.FIG.14Cis a schematic side view of a multi-chip module including four vertically stacked dies in which the dies are spacedly separated from one another by DA paste or film-based adhesives acting as die-to-die spacers.
The DA paste or film-based adhesives may have wire penetrating capability in some exemplary embodiments. In the exemplary multi-chip module ofFIG.14C, film-over wire (FOW) is used to allow long wire bonding and center bond pads stacked die packages. FOW employs a die-attach film with wire penetrating capability that allows the same or similar-sized wire-bonded dies to be stacked directly on top of one another without passive silicon spacers. This solves the problem of stacking same or similar-sized dies directly on top of each other, which otherwise poses a challenge as there is no or insufficient clearance for the bond wires of the lower dies. The DA material illustrated inFIGS.14B and14Cpreferably maintains a bond-line thickness (BLT) with little to no voiding and bleed-out through the assembly process. Upon assembly, the DA materials sandwiched between the dies maintain excellent adhesion to the dies. The material properties of the DA materials are tailored to maintain high cohesive strength for high temperature reliability stressing without bulk fracture. The material properties of the DA materials are tailored to also minimize or preferably eliminate moisture accumulation that may cause package reliability failures (e.g., popcorning, whereby interfacial or bulk fractures occur as a result of pressure build-up from moisture in the package). FIG.15is a flowchart of certain exemplary methods of die-to-die stacking using (a) passive silicon layers with a 2-in-1 dicing die attach film (D-DAF), (b) DA paste, (c) thick DA-film, and (d) film-over wire (FOW) that employs a die-attach film with wire penetrating capability that allows the same or similar-sized wire-bonded dies to be stacked directly on top of one another without passive silicon spacers. Each method performs backgrinding of wafers to reduce the wafer thickness to enable stacking and high density packaging of integrated circuits. The wafers are sawed to separate the individual dies. A first die is bonded to a substrate of a multi-chip module using, for example, epoxy application and curing in an oven. Wire bonding is used to couple the first die to a metal frame. In method (a), a first passive silicon layer is bonded to the first die in a stacked manner using a dicing die-attach film (D-DAF). A second die is bonded to the first passive layer in a stacked manner using D-DAF. Wire bonding is used to couple the second die to the metal frame. A second passive silicon layer is bonded to the second die in a stacked manner using D-DAF. A third die is bonded to the second passive layer in a stacked manner using D-DAF. Wire bonding is used to couple the third die to the metal frame. A third passive silicon layer is bonded to the third die in a stacked manner using D-DAF. A fourth die is bonded to the third passive layer in a stacked manner using D-DAF. Wire bonding is used to couple the fourth die to the metal frame. In method (b), die attach (DA) paste dispensing and curing is repeated for multi-thin die stack application. DA paste is dispensed onto a first die, and a second die is provided on the DA paste and cured to the first die. Wire bonding is used to couple the second die to the metal frame. DA paste is dispensed onto the second die, and a third die is provided on the DA paste and cured to the second die. Wire bonding is used to couple the third die to the metal frame. DA paste is dispensed onto the third die, and a fourth die is provided on the DA paste and cured to the third die.
Wire bonding is used to couple the fourth die to the metal frame. In method (c), die attach films (DAF) are cut and pressed to a bottom die and a top die is then placed and thermally compressed onto the DAF. For example, a DAF is pressed to the first die and a second die is thermally compressed onto the DAF. Wire bonding is used to couple the second die to the metal frame. Similarly, a DAF is pressed to the second die and a third die is thermally compressed onto the DAF. Wire bonding is used to couple the third die to the metal frame. A DAF is pressed to the third die and a fourth die is thermally compressed onto the DAF. Wire bonding is used to couple the fourth die to the metal frame. In method (d), film-over wire (FOW) employs a die-attach film with wire penetrating capability that allows the same or similar-sized wire-bonded dies to be stacked directly on top of one another without passive silicon spacers. A second die is bonded and cured to the first die in a stacked manner. Film-over wire bonding is used to couple the second die to the metal frame. A third die is bonded and cured to the second die in a stacked manner. Film-over wire bonding is used to couple the third die to the metal frame. A fourth die is bonded and cured to the third die in a stacked manner. Film-over wire bonding is used to couple the fourth die to the metal frame. After the above-described steps are completed, in each method (a)-(d), wafer molding and post-mold curing (PMC) are performed. Subsequently, ball mount and singulation are performed. Further details on the above-described die attachment techniques are provided in Toh C H et al., "Die Attach Adhesives for 3D Same-Sized Dies Stacked Packages," the 58th Electronic Components and Technology Conference (ECTC2008), pp. 1538-43, Florida, US (27-30 May 2008), the entire contents of which are expressly incorporated herein by reference. FIG.16is a schematic side view of a multi-chip module1600including a TR chip1602, an amplifier chip1604and a beamformer chip1606vertically integrated in a vertically stacked configuration on a substrate1614. Any suitable technique illustrated inFIGS.12-15may be used to fabricate the multi-chip module. One of ordinary skill in the art will recognize that the particular order in which the chips are stacked may be different in other embodiments. First and second spacer layers1608,1610are provided to spacedly separate the chips1602,1604,1606. Each chip is coupled to a metal frame (e.g., a leadframe)1612. In certain exemplary embodiments, heat transfer and heat sink mechanisms may be provided in the multi-chip module to sustain high temperature reliability stressing without bulk failure. Other components ofFIG.16are described with reference toFIGS.12and14. In this exemplary embodiment, each multi-chip module may handle the complete transmit, receive, TGC amplification and beam forming operations for a large number of channels, for example, 32 channels. By vertically integrating the three silicon chips into a single multi-chip module, the space and footprint required for the printed circuit board is further reduced. A plurality of multi-chip modules may be provided on a single ultrasound engine circuit board to further increase the number of channels while minimizing the packaging size and footprint. For example, a 128 channel ultrasound engine circuit board108can be fabricated within exemplary planar dimensions of about 10 cm×about 10 cm, which is a significant improvement over the space requirements of conventional ultrasound circuits.
A single circuit board of an ultrasound engine including one or more multi-chip modules may have 16 to 128 channels in preferred embodiments. In certain embodiments, a single circuit board of an ultrasound engine including one or more multi-chip modules may have 16, 32, 64, or 128 channels, and the like. FIG.17is a detailed schematic block diagram of an exemplary embodiment of the ultrasound engine108(i.e., the front-end ultrasound specific circuitry) and an exemplary embodiment of the computer motherboard106(i.e., the host computer) provided as a single board complete ultrasound system. An exemplary single board ultrasound system as illustrated inFIG.17may have exemplary planar dimensions of about 25 cm×about 18 cm, although other dimensions are possible. The single board complete ultrasound system ofFIG.17may be implemented in the ultrasound device illustrated inFIGS.1,2A,2B, and9A, and may be used to perform the operations depicted inFIGS.3-8,9B, and10. The ultrasound engine108includes a probe connector114to facilitate the connection of at least one ultrasound probe/transducer. In the ultrasound engine108, a TR module, an amplifier module and a beamformer module may be vertically stacked to form a multi-chip module as shown inFIG.16, thereby minimizing the overall packaging size and footprint of the ultrasound engine108. The ultrasound engine108may include a first multi-chip module1710and a second multi-chip module1712, each including a TR chip, an ultrasound pulser and receiver, an amplifier chip including a time-gain control amplifier, and a sample-data beamformer chip vertically integrated in a stacked configuration as shown inFIG.16. The first and second multi-chip modules1710,1712may be stacked vertically on top of each other to further minimize the area required on the circuit board. Alternatively, the first and second multi-chip modules1710,1712may be disposed horizontally on the circuit board. In an exemplary embodiment, the TR chip, the amplifier chip and the beamformer chip are each a 32-channel chip, and each multi-chip module1710,1712has 32 channels. One of ordinary skill in the art will recognize that exemplary ultrasound engines108may include, but are not limited to, one, two, three, four, five, six, seven, or eight multi-chip modules. Note that in a preferred embodiment the system can be configured with a first beamformer in the transducer housing and a second beamformer in the tablet housing. The ASICs and the multi-chip module configuration enable a 128-channel complete ultrasound system to be implemented on a small single board in the size of a tablet computer format. An exemplary 128-channel ultrasound engine108, for example, can be accommodated within exemplary planar dimensions of about 10 cm×about 10 cm, which is a significant improvement over the space requirements of conventional ultrasound circuits. An exemplary 128-channel ultrasound engine108can also be accommodated within an exemplary area of about 100 cm2. The ultrasound engine108also includes a clock generation complex programmable logic device (CPLD)1714for generating timing clocks for performing an ultrasound scan using the transducer array. The ultrasound engine108includes an analog-to-digital converter (ADC)1716for converting analog ultrasound signals received from the transducer array into digital RF data from which receive beams are formed.
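The sample-data beamformer chips described above implement in silicon the delay-and-sum operation that turns these per-channel RF samples into focused beams. The following is only a minimal software sketch of that principle, not the ASIC design; the array geometry, sampling rate, and function names are assumptions made for illustration.

```python
import numpy as np

def delay_and_sum(rf, element_x, focus_x, focus_z, fs, c=1540.0):
    """Sum per-channel RF samples after applying focusing delays.

    rf:        (n_elements, n_samples) digitized RF data, one row per element
    element_x: (n_elements,) element positions along the array [m]
    focus_x/z: focal point coordinates [m]; fs: sampling rate [Hz]
    c:         assumed speed of sound in tissue [m/s]
    """
    # Round-trip time: a simple transmit reference (depth only) plus the
    # per-element receive path back to each transducer element.
    rx_dist = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2)
    delays = (focus_z + rx_dist) / c                 # seconds, per element
    idx = np.round(delays * fs).astype(int)          # nearest-sample delay
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()     # one focused sample

# Toy usage: 32 channels (one multi-chip module), 40 MHz sampling
rf = np.random.randn(32, 4096)                       # stand-in channel data
x = (np.arange(32) - 15.5) * 0.3e-3                  # 0.3 mm element pitch
sample = delay_and_sum(rf, x, focus_x=0.0, focus_z=0.03, fs=40e6)
```

Production beamformers typically add per-channel apodization weights and sub-sample (interpolated) delays, which are omitted here for brevity.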
The ultrasound engine108also includes one or more delay profile and waveform generator field programmable gate arrays (FPGA)1718for managing the receive delay profiles and generating the transmit waveforms. The ultrasound engine108includes a memory1720for storing the delay profiles for ultrasound scanning. An exemplary memory1720may be a single DDR3 memory chip. The ultrasound engine108includes a scan sequence control field programmable gate array (FPGA)1722configured to manage the ultrasound scan sequence, transmit/receive timing, storing and fetching of profiles to/from the memory1720, and buffering and moving of digital RF data streams to the computer motherboard106via a high-speed serial interface112. The high-speed serial interface112may include FireWire or another serial or parallel bus interface between the computer motherboard106and the ultrasound engine108. The ultrasound engine108includes a communications chipset1118(e.g., a FireWire chipset) to establish and maintain the communications link112. A power module1724is provided to supply power to the ultrasound engine108, manage a battery charging environment and perform power management operations. The power module1724may generate regulated, low noise power for the ultrasound circuitry and may generate high voltages for the ultrasound transmit pulser in the TR module. The computer motherboard106includes a core computer-readable memory1122for storing data and/or computer-executable instructions for performing ultrasound imaging operations. The memory1122forms the main memory for the computer and, in an exemplary embodiment, may provide about 4 GB of DDR3 memory. The memory1122may include a solid state drive (SSD) for storing an operating system, computer-executable instructions, programs and image data. An exemplary SSD may have a capacity of about 128 GB. The computer motherboard106also includes a microprocessor1124for executing computer-executable instructions stored on the core computer-readable memory1122for performing ultrasound imaging processing operations. Exemplary operations include, but are not limited to, down conversion, scan conversion, Doppler processing, Color Flow processing, Power Doppler processing, Spectral Doppler processing, and post signal processing. An exemplary microprocessor1124may be an off-the-shelf commercial computer processor, such as an Intel Core-i5 processor. Another exemplary microprocessor1124may be a digital signal processor (DSP) based processor, such as DaVinci™ processors from Texas Instruments. The computer motherboard106includes an input/output (I/O) and graphics chipset1704which includes a co-processor configured to control I/O and graphics peripherals such as USB ports, video display ports and the like. The computer motherboard106includes a wireless network adapter1702configured to provide a wireless network connection. An exemplary adapter1702supports 802.11g and 802.11n standards. The computer motherboard106includes a display controller1126configured to interface the computer motherboard106to the display104. The computer motherboard106includes a communications chipset1120(e.g., a FireWire chipset or interface) configured to provide fast data communication between the computer motherboard106and the ultrasound engine108. An exemplary communications chipset1120may be an IEEE 1394b 800 Mbit/sec interface. Other serial or parallel interfaces1706may alternatively be provided, such as USB3, Thunderbolt, PCIe, and the like.
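Of the processing operations listed above, down conversion is the most self-contained: the RF line is mixed to baseband and low-pass filtered to produce complex I/Q data, from which the B-mode envelope is the magnitude. The sketch below is a minimal NumPy illustration under assumed probe and sampling frequencies, not the motherboard's actual firmware.

```python
import numpy as np

def downconvert(rf, fs, f0, ntaps=101):
    """Quadrature demodulation: mix RF to baseband, then FIR low-pass."""
    t = np.arange(rf.size) / fs
    iq = rf * np.exp(-2j * np.pi * f0 * t)           # shift f0 down to DC
    # Windowed-sinc low-pass (cutoff ~f0) keeps baseband, rejects 2*f0 image
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(2 * f0 / fs * n) * np.hamming(ntaps)
    h /= h.sum()                                     # unity DC gain
    return np.convolve(iq, h, mode="same")           # complex baseband (I + jQ)

fs, f0 = 40e6, 5e6                                   # assumed 40 MHz ADC, 5 MHz probe
t = np.arange(2048) / fs
rf = np.cos(2 * np.pi * f0 * t)                      # toy RF line
iq = downconvert(rf, fs, f0)                         # B-mode envelope: np.abs(iq)
```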
A power module1708is provided to supply power to the computer motherboard106, manage a battery charging environment and perform power management operations. An exemplary computer motherboard106may be accommodated within exemplary planar dimensions of about 12 cm×about 10 cm. An exemplary computer motherboard106can be accommodated within an exemplary area of about 120 cm2. FIG.18is a perspective view of an exemplary portable ultrasound system100provided in accordance with exemplary embodiments. The system100includes a housing102that is in a tablet form factor as illustrated inFIG.18, but that may be in any other suitable form factor. An exemplary housing102may have a thickness below 2 cm and preferably between 0.5 and 1.5 cm. A front panel of the housing102includes a multi-touch LCD touch screen display104that is configured to recognize and distinguish one or more multiple and/or simultaneous touches on a surface of the touch screen display104. The surface of the display104may be touched using one or more of a user's fingers, a user's hand or an optional stylus1802. The housing102includes one or more I/O port connectors116which may include, but are not limited to, one or more USB connectors, one or more SD cards, one or more network mini display ports, and a DC power input. The embodiment of housing102inFIG.18can also be configured within a palm-carried form factor having dimensions of 150 mm×100 mm×15 mm (a volume of 225,000 mm3) or less. The housing102can have a weight of less than 200 g. Optionally, cabling between the transducer array and the display housing can include interface circuitry1020as described herein. The interface circuitry1020can include, for example, beamforming circuitry and/or A/D circuitry in a pod that dangles from the tablet. Separate connectors1025,1027can be used to connect the dangling pod to the transducer probe cable. The connector1027can include probe identification circuitry as described herein. The unit102can include a camera, a microphone and a speaker, as well as wireless telephone circuitry for voice and data communications, and voice activated software that can be used to control the ultrasound imaging operations described herein. The housing102includes or is coupled to a probe connector114to facilitate connection of at least one ultrasound probe/transducer150. The ultrasound probe150includes a transducer housing including one or more transducer arrays152. The ultrasound probe150is couplable to the probe connector114using a housing connector1804provided along a flexible cable1806. One of ordinary skill in the art will recognize that the ultrasound probe150may be coupled to the housing102using any other suitable mechanism, for example, an interface housing that includes circuitry for performing ultrasound-specific operations like beamforming. Other exemplary embodiments of ultrasound systems are described in further detail in WO 03/079038 A2, filed Mar. 11, 2003, titled "Ultrasound Probe with Integrated Electronics," the entire contents of which are expressly incorporated herein by reference. Preferred embodiments can employ a wireless connection between the hand-held transducer probe150and the display housing. Beamformer electronics can be incorporated into probe housing150to provide beamforming of subarrays in a 1D or 2D transducer array as described herein. The display housing can be sized to be held in the palm of the user's hand and can include wireless network connectivity to public access networks such as the internet.
FIG.19illustrates an exemplary view of a main graphical user interface (GUI)1900rendered on the touch screen display104of the portable ultrasound system100ofFIG.18. The main GUI1900may be displayed when the ultrasound system100is started. To assist a user in navigating the main GUI1900, the GUI may be considered as including four exemplary work areas: a menu bar1902, an image display window1904, an image control bar1906, and a tool bar1908. Additional GUI components may be provided on the main GUI1900to, for example, enable a user to close, resize and exit the GUI and/or windows in the GUI. The menu bar1902enables a user to select ultrasound data, images and/or videos for display in the image display window1904. The menu bar1902may include, for example, GUI components for selecting one or more files in a patient folder directory and an image folder directory. The image display window1904displays ultrasound data, images and/or videos and may, optionally, provide patient information. The tool bar1908provides functionalities associated with an image or video display including, but not limited to, a save button for saving the current image and/or video to a file, a save Loop button that saves a maximum allowed number of previous frames as a Cine loop, a print button for printing the current image, a freeze image button for freezing an image, a playback toolbar for controlling aspects of playback of a Cine loop, and the like. Exemplary GUI functionalities that may be provided in the main GUI1900are described in further detail in WO 03/079038 A2, filed Mar. 11, 2003, titled "Ultrasound Probe with Integrated Electronics," the entire contents of which are expressly incorporated herein by reference. The image control bar1906includes touch controls that may be operated by touch and touch gestures applied by a user directly to the surface of the display104. Exemplary touch controls may include, but are not limited to, a 2D touch control408, a gain touch control410, a color touch control412, a storage touch control414, a split touch control416, a PW imaging touch control418, a beamsteering touch control20, an annotation touch control422, a dynamic range operations touch control424, a Teravision™ touch control426, a map operations touch control428, and a needle guide touch control428. These exemplary touch controls are described in further detail in connection withFIGS.4a-4c. FIG.20depicts an illustrative embodiment of exemplary medical ultrasound imaging equipment2000, implemented in the form factor of a tablet in accordance with the invention. The tablet may have the dimensions of 12.5″×1.25″×8.75″ or 31.7 cm×3.175 cm×22.22 cm, but it may also be in any other suitable form factor having a volume of less than 2500 cm3and a weight of less than 8 lbs. As shown inFIG.20, the medical ultrasound imaging equipment2000includes a housing2030and a touch screen display2010, wherein ultrasound images2010and ultrasound data2040can be displayed and ultrasound controls2020are configured to be controlled by the touchscreen display2010. The housing2030may have a front panel2060and a rear panel2070. The touchscreen display2010forms the front panel2060and includes a multi-touch LCD touch screen that can recognize and distinguish one or more multiple and/or simultaneous touches of the user on the touchscreen display2010. The touchscreen display2010may have a capacitive multi-touch and AVAH LCD screen.
For example, the capacitive multi-touch and AVAH LCD screen may enable a user to view the image from multiple angles without losing resolution. In another embodiment, the user may utilize a stylus for data input on the touch screen. The tablet can include an integrated foldable stand that permits a user to swivel the stand from a storage position that conforms to the tablet form factor so that the device can lie flat on the rear panel, or alternatively, the user can swivel the stand to enable the tablet to stand at an upright position at one of a plurality of oblique angles relative to a support surface. The capacitive touchscreen module comprises an insulator, for example glass, coated with a transparent conductor, such as indium tin oxide. The manufacturing process may include a bonding process among glass, x-sensor film, y-sensor film and a liquid crystal material. The tablet is configured to allow a user to perform multi-touch gestures such as pinching and stretching while wearing a dry or a wet glove. The surface of the screen registers the electrical conductor making contact with the screen. The contact distorts the screen's electrostatic field, resulting in measurable changes in capacitance. A processor then interprets the change in the electrostatic field. Increasing levels of responsiveness are enabled by reducing the layers and by producing touch screens with "in-cell" technology. "In-cell" technology eliminates layers by placing the capacitors inside the display. Applying "in-cell" technology reduces the visible distance between the user's finger and the touchscreen target, thereby creating a more direct contact with the content displayed and enabling taps and gestures to be more responsive. FIG.21illustrates a preferred cart system for a modular ultrasound imaging system in accordance with the invention. The cart system2100uses a base assembly2122including a docking bay that receives the tablet. The cart configuration2100is configured to dock tablet2104, including a touch screen display2102, to a cart2108, which can include a full operator console2124. After the tablet2104is docked to the cart stand2108, the system forms a full-feature roll-about system. The full-feature roll-about system may include an adjustable height device2106, a gel holder2110, a storage bin2114, a plurality of wheels2116, a hot probe holder2120, and the operator console2124. The control devices may include a keyboard2112on the operator console2124that may also have other peripherals added such as a printer or a video interface or other control devices. FIG.22illustrates a preferred cart system for use in embodiments with a modular ultrasound imaging system in accordance with the invention. The cart system2200may be configured with a vertical support member2212coupled to a horizontal support member2028. An auxiliary device connector2018, having a position for auxiliary device attachment2014, may be configured to connect to the vertical support member2212. A 3-port probe MUX connection device2016may also be configured to connect to the tablet. A storage bin2224can be configured to attach by a storage bin attachment mechanism2222to the vertical support member2212. The cart system may also include a cord management system2226configured to attach to the vertical support member. The cart assembly2200includes the support beam2212mounted on a base2228having wheels2232and a battery2230that provides power for extended operation of the tablet.
The assembly can also include an accessory holder2224mounted with height adjustment device2226. Holders2210,2218can be mounted on beam2212or on console panel2214. The multiport probe multiplex device2216connects to the tablet to provide simultaneous connection of several transducer probes, which the user can select in sequence with the displayed virtual switch. A moving touch gesture, such as a three-finger flick on the displayed image, or touching of a displayed virtual button or icon can switch between connected probes. FIG.23illustrates a preferred cart mount system for a modular ultrasound imaging system in accordance with the invention. Arrangement2300depicts the tablet2302coupled to the docking station2304. The docking station2304is affixed to the attachment mechanism2306. The attachment mechanism2306may include a hinged member2308, allowing the user display to be tilted into a user-desired position. The attachment mechanism2306is attached to the vertical member2312. A tablet2302as described herein can be mounted on the base docking unit2304which is mounted to a mount assembly2306on top of beam2212. The base unit2304includes cradle2310, electrical connectors2305and a port2307to connect the system2302to battery2230and multiplexor device2216. FIG.24illustrates a preferred cart system2400for a modular ultrasound imaging system in accordance with the invention in which tablet2402is connected on mounting assembly2406with connector2404. Arrangement2400depicts the tablet2402coupled to the vertical support member2408via attachment mechanism2404without the docking element2304. Attachment mechanism2404may include a hinged member2406for display adjustment. FIGS.25A and25Billustrate a multi-function docking station.FIG.25Aillustrates docking station2502and tablet2504having a base assembly2506that mates to the docking station2502. The tablet2504and the docking station2502may be electrically connected. The tablet2504may be released from docking station2502by engaging the release mechanism2508. The docking station2502may contain a transducer port2512for connection of a transducer probe2510. The docking station2502can contain 3 USB 3.0 ports, a LAN port, a headphone jack and a power connector for charging.FIG.25Billustrates a side view of the tablet2504and docking station2502having a stand in accordance with the preferred embodiments of the present invention. The docking station may include an adjustable stand/handle2526. The adjustable stand/handle2526may be tilted for multiple viewing angles. The adjustable stand/handle2526may be flipped up for transport purposes. The side view also illustrates a transducer port2512and a transducer probe connector2510. FIG.26illustrates a 2D imaging mode of operation with a modular ultrasound imaging system in accordance with the invention. The touch screen of tablet2504may display images obtained by a 2-dimensional transducer probe using 256 digital beamformer channels. The 2-dimensional image window2602depicts a 2-dimensional image scan2604. The 2-dimensional image may be obtained using flexible frequency scans2606, wherein the control parameters are represented on the tablet. FIG.27illustrates a motion mode of operation with a modular ultrasound imaging system in accordance with the invention. The touch screen display of tablet2700may display images obtained by a motion mode of operation. The touch screen display of tablet2700may simultaneously display 2-dimensional2706and motion mode imaging2708.
The touch screen display of tablet2700may display a 2-dimensional image window2704with a 2-dimensional image2706. Flexible frequency controls2702displayed with the graphical user interface can be used to adjust the frequency from 2 MHz to 12 MHz. FIG.28illustrates a color Doppler mode of operation with a modular ultrasound imaging system in accordance with the invention. The touch screen display of tablet2800displays images obtained by a color Doppler mode of operation. A 2-dimensional image window2806is used as the base display. The color coded information2808is overlaid on the 2-dimensional image2810. Ultrasound-based imaging of red blood cells is derived from the received echo of the transmitted signal. The primary characteristics of the echo signal are the frequency and the amplitude. Amplitude depends on the amount of moving blood within the volume sampled by the ultrasound beam. A high frame rate or high resolution can be adjusted with the display to control the quality of the scan. Higher frequencies may be generated by rapid flow and can be displayed in lighter colors, while lower frequencies are displayed in darker colors. Flexible frequency controls2804and color Doppler scan information2802may be displayed on the tablet display2800. FIG.29illustrates a pulsed wave Doppler mode of operation with a modular ultrasound imaging system in accordance with the invention. The touch screen display of tablet2900may display images obtained by a pulsed wave Doppler mode of operation. Pulsed wave Doppler scans produce a series of pulses used to analyze the motion of blood flow in a small region along a desired ultrasound cursor called the sample volume or sample gate2012. The tablet display2900may depict a 2-dimensional image2902, wherein the sample volume/sample gate2012is overlaid. The tablet display2900may use a mixed mode of operation2906to depict a 2-dimensional image2902and a time/Doppler frequency shift2910. The time/Doppler frequency shift2910can be converted into velocity and flow if an appropriate angle between the beam and blood flow is known. Shades of gray2908in the time/Doppler frequency shift2910may represent the strength of the signal. The thickness of the spectral signal may be indicative of laminar or turbulent flow. The tablet display2900can depict adjustable frequency controls2904. FIG.30illustrates a triplex scan mode of operation with a modular ultrasound imaging system in accordance with the invention. The tablet display3000may include a 2-dimensional window3002capable of displaying 2-dimensional images alone or in combination with the color Doppler or directional Doppler features. The touch screen display of tablet3000may display images obtained by a color Doppler mode of operation. A 2-dimensional image window3002is used as the base display. The color coded information3004is overlaid3006on the 2-dimensional image3016. The pulsed wave Doppler feature may be used alone or in combination with 2-dimensional imaging or the color Doppler imaging. The tablet display3000may include a pulsed wave Doppler scan represented by a sample volume/sample gate3008overlaid over 2-dimensional images3016, or the color code overlaid3006, either alone or in combination. The tablet display3000may depict a split screen representing the time/Doppler frequency shift3012. The time/Doppler frequency shift3012can be converted into velocity and flow if an appropriate angle between the insonating beam and blood flow is known.
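The velocity conversion referred to above follows the standard pulsed-Doppler relation v = c·fd/(2·f0·cos θ), where fd is the measured frequency shift, f0 the transmit frequency, θ the beam-to-flow angle, and c the speed of sound in tissue. A short sketch with illustrative numbers (not values taken from this document):

```python
import math

def doppler_velocity(f_shift_hz, f0_hz, angle_deg, c=1540.0):
    """Blood velocity from Doppler shift: v = c * fd / (2 * f0 * cos(theta))."""
    theta = math.radians(angle_deg)
    return c * f_shift_hz / (2.0 * f0_hz * math.cos(theta))

# A 1.3 kHz shift at a 4 MHz transmit frequency and 60 degree angle -> ~0.5 m/s
v = doppler_velocity(1.3e3, 4e6, 60.0)
```

The cos θ term is why the angle must be known: as θ approaches 90 degrees the measurable shift vanishes, regardless of the true flow speed.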
Shades of gray3014in the time/Doppler frequency shift3012may represent the strength of the signal. The thickness of the spectral signal may be indicative of laminar or turbulent flow. The tablet display3000also may depict flexible frequency controls3010. FIG.31illustrates a GUI home screen interface3100for a user mode of operation with a modular ultrasound imaging system in accordance with the invention. The screen interface for a user mode of operation3100may be displayed when the ultrasound system is started. To assist a user in navigating the GUI home screen3100, the home screen may be considered as including three exemplary work areas: a menu bar3104, an image display window3102, and an image control bar3106. Additional GUI components may be provided on the main GUI home screen3100to enable a user to close, resize and exit the GUI home screen and/or windows in the GUI home screen. The menu bar3104enables users to select ultrasound data, images and/or video for display in the image display window3102. The menu bar may include components for selecting one or more files in a patient folder directory and an image folder directory. The image control bar3106includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, depth control touch controls3108, a 2-dimensional gain touch control3110, a full screen touch control3112, a text touch control3114, a split screen touch control3116, an ENV touch control3118, a CD touch control3120, a PWD touch control3122, a freeze touch control3124, a store touch control3126, and an optimize touch control3128. FIG.32illustrates a GUI menu screen interface3200for a user mode of operation with a modular ultrasound imaging system in accordance with the invention. The screen interface for a user mode of operation3200may be displayed when the menu selection mode is triggered from the menu bar3204, thereby initiating operation of the ultrasound system. To assist a user in navigating the GUI menu screen3200, the menu screen may be considered as including three exemplary work areas: a menu bar3204, an image display window3202, and an image control bar3220. Additional GUI components may be provided on the main GUI menu screen3200to enable a user to close, resize and exit the GUI menu screen and/or windows in the GUI menu screen, for example. The menu bar3204enables users to select ultrasound data, images and/or video for display in the image display window3202. The menu bar3204may include touch control components for selecting one or more files in a patient folder directory and an image folder directory. Depicted in an expanded format, the menu bar may include exemplary touch controls such as a patient touch control3208, a pre-sets touch control3210, a review touch control3212, a report touch control3214, and a setup touch control3216. The image control bar3220includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, depth control touch controls3222, a 2-dimensional gain touch control3224, a full screen touch control3226, a text touch control3228, a split screen touch control3230, a needle visualization ENV touch control3232, a CD touch control3234, a PWD touch control3236, a freeze touch control3238, a store touch control3240, and an optimize touch control3242.
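Architecturally, the image control bars recurring across these screens amount to a mapping from touch-control identifiers to handler callbacks. The sketch below illustrates that dispatch pattern only; the class and control names are hypothetical and are not the product's actual software.

```python
from typing import Callable, Dict

class ImageControlBar:
    """Dispatch touch events on named controls to registered callbacks."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[], None]] = {}

    def register(self, control: str, handler: Callable[[], None]) -> None:
        self._handlers[control] = handler

    def on_touch(self, control: str) -> None:
        # Ignore touches on controls that have no registered action
        if control in self._handlers:
            self._handlers[control]()

bar = ImageControlBar()
bar.register("freeze", lambda: print("image frozen"))
bar.register("store", lambda: print("frame stored"))
bar.on_touch("freeze")   # -> "image frozen"
```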
FIG.33illustrates a GUI patient data screen interface3300for a user mode of operation with a modular ultrasound imaging system in accordance with the invention. The screen interface for a user mode of operation3300may be displayed when the patient selection mode is triggered from the menu bar3302after the ultrasound system is started. To assist a user in navigating the GUI patient data screen3300, the patient data screen may be considered as including five exemplary work areas: a new patient touch screen control3304, a new study touch screen control3306, a study list touch screen control3308, a work list touch screen control3310, and an edit touch screen control3312. Within each touch screen control, further information entry fields3314,3316are available. For example, patient information section3314and study information section3316may be used to record data. Within the patient data screen3300, the image control bar3318includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, an accept study touch control3320, a close study touch control3322, a print touch control3324, a print preview touch control3326, a cancel touch control3328, a 2-dimensional touch control3330, a freeze touch control3332, and a store touch control3334. FIG.34illustrates a GUI pre-sets screen interface3400for a user mode of operation with a modular ultrasound imaging system in accordance with the invention. The screen interface for a user mode of operation3400may be displayed when the pre-sets selection mode3404is triggered from the menu bar3402after the ultrasound system is started. Within the pre-sets screen3400, the image control bar3408includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, a save settings touch control3410, a delete touch control3412, a CD touch control3414, a PWD touch control3416, a freeze touch control3418, a store touch control3420, and an optimize touch control3422. FIG.35illustrates a GUI review screen interface3500for a user mode of operation with a modular ultrasound imaging system in accordance with the invention. The screen interface for a user mode of operation3500may be displayed when the expanded review selection mode3504is triggered from the menu bar3502after the ultrasound system is started. Within the review screen3500, the image control bar3516includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, a thumbnail settings touch control3518, a sync touch control3520, a selection touch control3522, a previous image touch control3524, a next image touch control3526, a 2-dimensional image touch control3528, a pause image touch control3530, and a store image touch control3532. An image display window3506may allow the user to review images in a plurality of formats. Image display window3506may allow a user to view images3508,3510,3512,3514in combination or as a subset, or allow any image3508,3510,3512,3514to be viewed individually. The image display window3506may be configured to display up to four images3508,3510,3512,3514simultaneously.
FIG.36illustrates a GUI Report Screen Interface for a user mode of operation with a modular ultrasound imaging system in accordance with the invention. The screen interface for a user mode of operation3600may be displayed when the expanded report review3604is triggered from the menu bar3602after the ultrasound system is started. The display screen3606contains the ultrasound report information3626. The user may use the worksheet section within the ultrasound report3626to enter comments, patient information and study information. Within the report screen3600, the image control bar3608includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, a save touch control3610, a save as touch control3612, a print touch control3614, a print preview touch control3616, a close study touch control3618, a 2-dimensional image touch control3620, a freeze image touch control3622, and a store image touch control3624. FIG.37illustrates a GUI Setup Screen Interface for a user mode of operation with a modular ultrasound imaging system in accordance with the invention. The screen interface for a user mode of operation3700may be displayed when the expanded setup review3704is triggered from the menu bar3702after the ultrasound system is started. Within the setup expanded screen3704, the setup control bar3744includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, a general touch control3706, a display touch control3708, a measurements touch control3710, an annotation touch control3712, a print touch control3714, a store/acquire touch control3716, a DICOM touch control3718, an export touch control3720, and a study information image touch control3722. The touch controls may contain a display screen that allows the user to enter configuration information. For example, the general touch control3706contains a configuration screen3724wherein the user may enter configuration information. Additionally, the general touch control3706contains a section allowing user configuration of the soft key docking position3726.FIG.37Bdepicts the soft key controls3752with a right side alignment.FIG.37Bfurther illustrates that activation of the soft key control arrow3750will change the key alignment to the opposite side, in this case, left side alignment.FIG.37Cdepicts left side alignment of the soft key controls3762; the user may activate an orientation change by using the soft key control arrow3760to change the position to right side alignment. Within the setup screen3700, the image control bar3728includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, a thumbnail settings touch control3730, a sync touch control3732, a selection touch control3734, a previous image touch control3736, a next image touch control3738, a 2-dimensional image touch control3740, and a pause image touch control3742. FIG.38illustrates a GUI Setup Screen Interface for a user mode of operation with a modular ultrasound imaging system in accordance with the invention.
The screen interface for a user mode of operation3800may be displayed when the expanded setup review3804is triggered from the menu bar3802after the ultrasound system is started. Within the setup expanded screen3804, the setup control bar3844includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, a plurality of icons such as a general touch control3806, a display touch control3808, a measurements touch control3810, an annotation touch control3812, a print touch control3814, a store/acquire touch control3816, a DICOM touch control3818, an export touch control3820, and a study information image touch control3822. The touch controls can contain a display screen that allows the user to enter store/acquire information. For example, the store/acquire touch control3816contains a configuration screen3802wherein the user may enter configuration information. The user can actuate a virtual keyboard allowing the user to enter alphanumeric characters in different touch activated fields. Additionally, the store/acquire touch control3802contains a section allowing user enablement of retrospective acquisition3804. When the user enables the store function, the system defaults to storing prospective cine loops. If the user enables retrospective capture, the store function may collect the cine loop retrospectively. Within the setup screen3800, the image control bar3828includes touch controls that may be operated by touch and touch gestures applied by the user directly to the surface of the display. Exemplary touch controls may include, but are not limited to, a thumbnail settings touch control3830, a synchronize touch control3832, a selection touch control3834, a previous image touch control3836, a next image touch control3838, a 2-dimensional image touch control3840, and a pause image touch control3842. FIGS.39A and39Billustrate an XY bi-plane probe consisting of two one-dimensional, multi-element arrays. The arrays may be constructed with one array on top of the other and with the polarization axis of each array aligned in the same direction. The elevation axes of the two arrays can be at a right angle or orthogonal to one another. Exemplary embodiments can employ transducer assemblies such as those described in U.S. Pat. No. 7,066,887, the entire contents of which is incorporated herein by reference, or transducers sold by Vernon of Tours Cedex, France, for example. Illustrated byFIG.39A, the array orientation is represented by arrangement3900. The polarization axes3908of both arrays point in the z-axis3906direction. The elevation axis of the bottom array points in the y-direction3902, and the elevation axis of the top array is in the x-direction3904. Further illustrated byFIG.39B, a one-dimensional multi-element array forms an image as depicted in arrangement3912. A one-dimensional array with an elevation axis3910in the y-direction3902forms the ultrasound image3914on the plane defined by the x-axis3904and the z-axis3906. A one-dimensional array with the elevation axis3910in the x-direction3904forms the ultrasound image3914on the plane defined by the y-axis3902and the z-axis3906. A one-dimensional transducer array with elevation axis3910along a y-axis3902and polarization axis3908along a z-axis3906will thus result in an ultrasound image3914formed along the x3904and the z3906plane.
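The orientation rule ofFIGS.39A-39B — with both arrays polarized along z, the image forms in the plane spanned by z and the axis orthogonal to the array's elevation axis — can be restated compactly in code. This is an illustrative restatement of the geometry only, not code from the patent:

```python
def image_plane(elevation_axis: str) -> str:
    """For a 1D array polarized along z, return the plane it images.

    An elevation axis along y yields an x-z image; along x, a y-z image.
    """
    if elevation_axis == "y":
        return "x-z"
    if elevation_axis == "x":
        return "y-z"
    raise ValueError("elevation axis must be 'x' or 'y'")

assert image_plane("y") == "x-z"   # bottom array of the XY probe
assert image_plane("x") == "y-z"   # top array
```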
An alternate embodiment illustrated byFIG.39Cdepicts a one-dimensional transducer array with an elevation axis3920in the x-direction3904and a polarization axis3922in the z-axis3906direction. The ultrasound image3924is formed on the y3902and the z3906plane. FIG.40illustrates the operation of a bi-plane image forming xy-probe where array4012has a high voltage applied for forming images. High voltage driving pulses4006,4008,4010may be applied to the bottom array4004, with a y-axis elevation. This application may result in generation of transmission pulses for forming the received image on the XZ plane, while keeping the elements of the top array4002at a grounded level. Such probes enable a 3D imaging mode using simpler electronics than a full 2D transducer array. A touchscreen activated user interface as described herein can employ screen icons and gestures to actuate 3D imaging operations. Such imaging operations can be augmented by software running on the tablet data processor that processes the image data into 3D ultrasound images. This image processing software can employ filtering, smoothing and/or interpolation operations known in the art. Beamsteering can also be used to enable 3D imaging operations. A preferred embodiment uses a plurality of 1D sub-array transducers arranged for bi-plane imaging. FIG.41illustrates the operation of a bi-plane image forming xy-probe.FIG.41illustrates an array4110that has a high voltage applied to it for forming images. High voltage pulses4102,4104,4106may be applied to the top array4112, with elevation in the x-axis, generating transmission pulses for forming the received image on the yz-plane, while keeping the elements of the bottom array4014grounded4108. This embodiment can also utilize orthogonal 1D transducer arrays operated using sub-array beamforming as described herein. FIG.42illustrates the circuit requirements of a bi-plane image forming xy-probe. The receive beamforming requirements are depicted for a bi-plane probe. A connection to the receive electronics4202is made. Then elements from the selected bottom array4204and selected top array4208are connected to share one channel of the receive electronics4202. A two-to-one mux circuit can be integrated on the high voltage driver4206,4210. The two-to-one multiplexor circuit can be integrated into high voltage driver4206,4212. One receive beam is formed for each transmit beam. The bi-plane system requires a total of 256 transmit beams, of which 128 transmit beams are used for forming an XZ-plane image and the other 128 transmit beams are used for forming a YZ-plane image. A multiple-received-beam forming technique can be used to improve the frame rate. An ultrasound system with dual receive beam capability can form two received beams for each transmit beam. With such a system, the bi-plane probe only needs a total of 128 transmit beams for forming the two orthogonal plane images, in which 64 transmit beams are used to form the XZ-plane image with the other 64 transmit beams for the YZ-plane image. Similarly, for an ultrasound system with a quad or 4 receive beam capability, the probe requires only 64 transmit beams to form the two orthogonal-plane images. FIGS.43A-43Billustrate an application for simultaneous bi-plane evaluation. The ability to measure LV mechanical dyssynchrony with echocardiography can help identify patients that are more likely to benefit from Cardiac Resynchronization Therapy.
LV parameters that need to be quantified are Ts-(lateral-septal), Ts-SD, Ts-peak, etc. The Ts-(lateral-septal) can be measured on a 2D apical 4-chamber view Echo image, while the Ts-SD, Ts-peak (medial), Ts-onset (medial), Ts-peak (basal), Ts-onset (basal) can be obtained on two separate parasternal short-axis views with 6 segments at the level of the mitral valve and at the papillary muscle level, respectively, providing a total of 12 segments.FIGS.43A-43Bdepict an xy-probe providing apical four chamber4304and apical two chamber4302images to be viewed simultaneously. FIGS.44A-44Billustrate ejection fraction probe measurement techniques. The biplane probe provides for EF measurement, as visualization of two orthogonal planes ensures on-axis views are obtained. An auto-border detection algorithm provides quantitative Echo results to select implant responders and guide the AV delay parameter setting. As depicted inFIG.44A, the XY probe acquires real-time simultaneous images from two orthogonal planes and the images4402,4404are displayed on a split screen. A manual contour tracing or automatic border tracing technique can be used to trace the endocardial border at both end-systolic and end-diastolic times, from which the EF is calculated. The LV areas in the apical 2CH4402and 4CH4404views, A1 and A2 respectively, are measured at the end of diastole and the end of systole. The LVEDV (left ventricular end-diastolic volume) and LVESV (left ventricular end-systolic volume) are calculated using the biplane area-length formula V = (8·A1·A2)/(3π·L), where L is the length of the LV long axis. The ejection fraction is then calculated as EF = (LVEDV − LVESV)/LVEDV. It is noted that the operations described herein are purely exemplary, and imply no particular order. Further, the operations can be used in any sequence, when appropriate, and/or can be partially used. Exemplary flowcharts are provided herein for illustrative purposes and are non-limiting examples of methods. One of ordinary skill in the art will recognize that exemplary methods may include more or fewer steps than those illustrated in the exemplary flowcharts, and that the steps in the exemplary flowcharts may be performed in a different order than shown. In describing exemplary embodiments, specific terminology is used for the sake of clarity. For purposes of description, each specific term is intended to at least include all technical and functional equivalents that operate in a similar manner to accomplish a similar purpose. Additionally, in some instances where a particular exemplary embodiment includes a plurality of system elements or method steps, those elements or steps may be replaced with a single element or step. Likewise, a single element or step may be replaced with a plurality of elements or steps that serve the same purpose. Further, where parameters for various properties are specified herein for exemplary embodiments, those parameters may be adjusted up or down by 1/20th, 1/10th, ⅕th, ⅓rd, ½, etc., or by rounded-off approximations thereof, unless otherwise specified. With the above illustrative embodiments in mind, it should be understood that such embodiments can employ various computer-implemented operations involving data transferred or stored in computer systems. Such operations are those requiring physical manipulation of physical quantities. Typically, though not necessarily, such quantities take the form of electrical, magnetic, and/or optical signals capable of being stored, transferred, combined, compared, and/or otherwise manipulated.
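A worked version of the two ejection-fraction formulas given above, using illustrative traced areas and lengths rather than any measured values from this document:

```python
import math

def lv_volume(a1_cm2: float, a2_cm2: float, l_cm: float) -> float:
    """Biplane area-length volume: V = 8*A1*A2 / (3*pi*L), in mL (cm^3)."""
    return 8.0 * a1_cm2 * a2_cm2 / (3.0 * math.pi * l_cm)

def ejection_fraction(lvedv_ml: float, lvesv_ml: float) -> float:
    """EF = (LVEDV - LVESV) / LVEDV."""
    return (lvedv_ml - lvesv_ml) / lvedv_ml

# Illustrative 2CH/4CH traced areas (cm^2) and long-axis lengths (cm)
lvedv = lv_volume(35.0, 33.0, 8.0)    # end-diastole -> ~122.6 mL
lvesv = lv_volume(22.0, 20.0, 7.0)    # end-systole  -> ~53.4 mL
ef = ejection_fraction(lvedv, lvesv)  # ~0.56, i.e., within the normal range
```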
Further, any of the operations described herein that form part of the illustrative embodiments are useful machine operations. The illustrative embodiments also relate to a device or an apparatus for performing such operations. The apparatus can be specially constructed for the required purpose, or can incorporate general-purpose computer devices selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines employing one or more processors coupled to one or more computer readable media can be used with computer programs written in accordance with the teachings disclosed herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations. The foregoing description has been directed to particular illustrative embodiments of this disclosure. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their associated advantages. Moreover, the procedures, processes, and/or modules described herein may be implemented in hardware, software, embodied as a computer-readable medium having program instructions, firmware, or a combination thereof. For example, one or more of the functions described herein may be performed by a processor executing program instructions out of a memory or other storage device. It will be appreciated by those skilled in the art that modifications to and variations of the above-described systems and methods may be made without departing from the inventive concepts disclosed herein. Accordingly, the disclosure should not be viewed as limited except as by the scope and spirit of the appended claims. | 115,977 |
11857364 | DETAILED DESCRIPTION In the following description, numerous details are set forth to provide a more thorough explanation of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. An ultrasound transducer probe having a transducer assembly and a system for using the same are disclosed. Embodiments of the ultrasound transducer probe offer one or more improvements over probes of the prior art. First, the probe construction greatly improves thermal management, provides enhanced electromagnetic interference (EMI) control, and increases structural integrity. These improvements have been achieved while eliminating many labor-intensive production processes which are often subject to human error, such as, for example, hand casting, hand-wrapping components with copper foil and soldering around electrical components. Because many labor-intensive production processes have been eliminated, embodiments of the ultrasound transducer probe provide for more repeatable production steps without batch variations, which cut production assembly time significantly. FIG.1illustrates one embodiment of an ultrasound transducer probe having an ultrasound transducer assembly configured in accordance with an embodiment of the disclosed technology. Referring toFIG.1, ultrasound transducer probe100includes an enclosure110extending between a distal end portion112and a proximal end portion114. In one embodiment, enclosure110of ultrasonic transducer probe100has a transparent cover that surrounds an inner shell. In one embodiment, the inner shell comprises a metal material (e.g., diecast aluminum, etc.). In one embodiment, the transparent cover comprises transparent plastic (e.g., polysulfone) overmolded on the die cast metal inner shell. In one embodiment, the outer cover and the inner shell create enclosure110and work together to transfer heat out of the probe. Enclosure110is configured to carry or house system electronics (e.g., one or more processors, integrated circuits, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), beamformers, batteries and/or other power sources) disposed in an interior portion or cavity of enclosure110. The system electronics (not shown) are electrically coupled to an ultrasound imaging system130via a cable118that is attached to the proximal end of the probe. At the probe tip, a transducer assembly120having one or more transducer elements is electrically coupled to the system electronics. In operation, transducer assembly120transmits ultrasound energy from the one or more transducer elements toward a subject and receives ultrasound echoes from the subject. The ultrasound echoes are converted into electrical signals by transmit receive circuitry and electrically transmitted to the system electronics and to electronics (e.g., one or more processors, memory modules, beamformers, FPGAs, etc.) in ultrasound imaging system130configured to process the electrical signals and form one or more ultrasound images. Capturing ultrasound data from a subject using an exemplary transducer assembly (e.g., the transducer assembly120) generally includes generating ultrasound, transmitting ultrasound into the subject, and receiving ultrasound reflected by the subject.
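The transmit-receive cycle just described rests on the basic pulse-echo range equation: a reflector's depth is half the round-trip echo time multiplied by the speed of sound. A one-line sketch, assuming the nominal soft-tissue sound speed of about 1540 m/s (an assumption, not a value taken from this patent):

```python
def echo_depth_m(round_trip_s: float, c: float = 1540.0) -> float:
    """Depth of a reflector: d = c * t / 2, where t is transmit-to-receive time."""
    return c * round_trip_s / 2.0

# An echo arriving 65 microseconds after transmit sits ~5 cm deep
print(echo_depth_m(65e-6))  # 0.05005 m
```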
A wide range of frequencies of ultrasound may be used to capture ultrasound data, such as, for example, low frequency ultrasound (e.g., less than 15 MHz) and/or high frequency ultrasound (e.g., greater than or equal to 15 MHz). Those of ordinary skill in the art can readily determine which frequency range to use based on factors such as, for example, but not limited to, depth of imaging and/or desired resolution. In one embodiment, ultrasound imaging system130includes ultrasound control subsystem131having one or more processors. At least one processor causes electrical currents to be sent to the transducer(s) of probe100to emit sound waves and also receives the electrical pulses from the probe that were created from the returning echoes. A processor processes the raw data associated with the received electrical pulses and forms an image that is sent to ultrasound imaging subsystem132, which displays the image on display screen133. Thus, display screen133displays ultrasound images from the ultrasound data processed by the processor of ultrasound control subsystem131. In one embodiment, the ultrasound system also has one or more user input devices (e.g., a keyboard, a cursor control device, etc.) that input data and allow measurements to be taken from the display of the ultrasound display subsystem, a disk storage device (e.g., hard, floppy, compact disks (CD), digital video discs (DVDs)) for storing the acquired images, and a printer that prints the image from the displayed data. These also have not been shown inFIG.1to avoid obscuring the techniques disclosed herein. In one embodiment, the ultrasound transducer probe has integrated components for improved thermal management, electromagnetic interference (EMI) mitigation and structural integrity. The thermal properties are achieved by encapsulating long heat fins within the probe. These heat fins extend into the handle of the ultrasonic probe to transfer heat away from the heat producing components in the probe tip. The thermal transfer is further enhanced by a highly conductive inner enclosure (e.g., a metal enclosure, an enclosure comprising ceramic or other materials that offer both improved thermal and structural integrity) which contacts the long heat fins and conducts heat from the heat fins to the outer surface of the transducer probe. In one embodiment, the inner case has overlapping joints between halves of the enclosure for additional structural integrity, as well as EMI control. In one embodiment, an outer surface of the probe includes a plastic case that encapsulates the inner enclosure. The outer plastic case offers protection against electrical hazards, liquid ingress, and cosmetic damage. In one embodiment, ultrasound transducer probe100comprises a probe array assembly having a probe tip, a first enclosure disposed around a portion of the probe array comprising a thermally conductive material (e.g., metal (e.g., diecast aluminum), etc.), and one or more thermally conductive heat fins contained within the first enclosure. In one embodiment, each thermally conductive heat fin has an end enclosed within the probe array assembly and has a portion that extends away from the probe array assembly (e.g., a planar extension) that is in thermal contact with an inner surface of the first enclosure to transfer heat from the probe tip located proximally to one opening in the first enclosure.
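The fin-to-shell conduction path described above can be bounded with a one-dimensional Fourier's-law estimate: the transferred power scales with the thermal conductivity and contact area and inversely with the conduction path length, which is why fin material, fin length, and fin-to-shell contact matter. All numbers below are illustrative assumptions, not measurements of the disclosed probe.

```python
def conduction_watts(k: float, area_m2: float, dT: float, path_m: float) -> float:
    """Fourier's law for a 1D conduction path: Q = k * A * dT / L."""
    return k * area_m2 * dT / path_m

# Aluminum fin (k ~ 205 W/m-K), 2 cm^2 contact area, 5 K temperature
# difference over a 3 cm path from probe tip into the handle
q = conduction_watts(205.0, 2e-4, 5.0, 0.03)   # ~6.8 W
```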
In one embodiment, material within ultrasound transducer probe100(e.g., elastomer or other foam pads) forces the heat fins toward and in contact with the inner surface of the first enclosure. In one embodiment, a second enclosure (e.g., an overmolded plastic enclosure), or cover, is disposed around the first enclosure and operates to overlap and cover a substantial portion, if not all, of the outside surface of the first enclosure. In one embodiment, the first enclosure comprises top and bottom clamshell halves and the second enclosure comprises top and bottom clamshell halves. The top clamshell half of the second enclosure is coupled to and overlaps the top clamshell half of the first enclosure, while the bottom clamshell half of the second enclosure is coupled to and overlaps the bottom clamshell half of the first enclosure. In one embodiment, the coupling between these respective clamshell halves is accomplished, at least in part, using mechanical interlocks. Embodiments of enclosure110provide a number of features. In one embodiment, the features include one or more of improved heat transfer, an electromagnetic compatibility (EMC) Faraday cage (to the extent possible with a probe tip), a ruggedized structural enclosure, and a simple final assembly. For example, using one or more of these features, embodiments of ultrasound transducer probe100have been shown to provide a temperature decrease at the probe tip of 1.2° C. or more relative to other probes. FIGS.2and3illustrate one embodiment of a probe case. In one embodiment, the probe case comprises an upper pair of clamshell halves consisting of upper cover201and inner shell202and a lower pair of clamshell halves consisting of lower cover204and inner shell203. Inner shell202and inner shell203are coupled together to form a first enclosure, and upper cover201and lower cover204are coupled together to form a second enclosure that covers a substantial portion of the outer surface of the first enclosure. In one embodiment, the combined enclosures operate as a housing that includes a handle portion at the end having the cable assembly for the ultrasound transducer probe (e.g., the proximal end114ofFIG.1) and the transducer array housing near the other end (e.g., the distal end112ofFIG.1). In one embodiment, inner shell202and inner shell203comprise a thermally conductive material. In one embodiment, inner shell202and inner shell203also comprise an electrically conductive material, such as, for example, but not limited to, die cast aluminum, other metals, composite alloys, etc. Inner shell202and inner shell203may comprise other metal materials or non-metal materials (e.g., ceramic). Note that as a die cast aluminum inner shell case, the first enclosure created by inner shell202and inner shell203provides structural integrity, provides thermal transfer from heat fins (as described later) and acts as an EMC enclosure, such that no copper foil is needed for the electronics. In alternative embodiments, inner shells202and203are created using methods other than die casting, such as, for example, but not limited to, well-known methods such as pressure forming, investment casting, and computerized numerical control (CNC) milling, or less well-known methods. In one embodiment, upper cover201and lower cover204are coupled together to form an overmolded plastic case (e.g., an injection molded polysulfone case).
Note that upper cover201and lower cover204may comprise materials other than overmolded polysulfone, such as, for example, but not limited to, a polymer material, an insulating material, etc. In one embodiment, upper cover201and lower cover204are permanently overmolded onto inner shell202and inner shell203, respectively. In one embodiment, inner shell202includes a number of thru-holes, such as, for example, thru-holes242. In one embodiment, when inner shell202is coupled to and enclosed within (e.g., covered by) upper cover201and inner shell203is coupled to and enclosed within (e.g., covered by) lower cover204, the plastic that is injection molded to form upper cover201and lower cover204proceeds through thru-holes242and injection molded tab interlocks are formed on the opposite side of inner shell202and inner shell203. These tab interlocks hold upper cover201to inner shell202and hold inner shell203to lower cover204. That is, in one embodiment, injection molding of the outer cover (upper cover201and lower cover204) is allowed to go through thru-holes242in halves of the inner enclosure (e.g., a metal case) and overlap in the interior of the inner enclosure, thereby creating tab interlocks that form a mechanical interlock between the outer cover formed by upper cover201and lower cover204and the inner enclosure formed by inner shell202and inner shell203. The mechanical interlock produced by thru-holes242provides a strong mechanical lock to keep the plastic from delaminating. Examples of such tab interlocks are shown as tab interlocks260and261in lower cover204. The tab interlocks will be shown in additional figures described in more detail below. In one embodiment, inner shell202and inner shell203are coupled together using lap joint280and one or more bosses (e.g., C-bosses) to form a metal case that connects with the cable enclosure at the end of the ultrasound transducer probe. In one embodiment, lap joint280provides EMC integrity (e.g., prevents leakage of electromagnetic waves) and structural integrity, such that when coupled together, a substantially complete Faraday cage is created except for the acoustic portion of the transducer at the front opening (at the proximal end) and rear opening (at the distal end). In one embodiment, a more complete Faraday cage is created where the lens includes an additional electrically conductive path that is connected to a gasket in the transducer assembly. In such a case, the gasket is coupled to at least one heat fin extending into the probe handle. Examples of such a gasket and heat fin are EMI gasket401and heat fins403ofFIG.4, which are described in more detail below. In one embodiment, this electrically conductive path comprises a foil embedded in the lens. In another embodiment, this electrically conductive path comprises a metal overlay (e.g., sputtered metal) that is over a portion of the transducer. In one embodiment, there are four bosses251that couple inner shell202and inner shell203together via a compliant interference fit, though more or fewer than four bosses may be used. In one embodiment, bosses251are C-bosses that include solid bosses with ribs that mate with hollow C-bosses. The bosses provide precision alignment, structural interlock, compliant tolerance allowance, and a larger surface area. In one embodiment, upper cover201and/or lower cover204include heat ribs. Examples of heat ribs are shown as heat ribs211in lower cover204.
In one embodiment, there are multiple heat ribs that protrude and extend up from the inner surface of lower cover204, or protrude and extend down from the inner surface of upper cover201. When upper cover201is coupled to inner shell202, the heat ribs extend into heat rib slots, such as heat rib slots210, on inner shell202. In one embodiment, similar heat rib slots are included in inner shell203. The heat rib slots knit upper cover201to inner shell202and knit inner shell203to lower cover204to enhance heat transfer and structural integrity. The size of the heat ribs and/or the heat rib slots varies for different embodiments. However, increasing the surface area of the heat ribs and/or heat slots and reducing the spacing of the heat ribs and/or heat slots to make a tight or closer fit increases thermal transfer. Therefore, the surface area and the spacing of the heat ribs and/or heat slots may be adapted or optimized for different transducers operating at different frequencies, to control temperature reductions associated with the different transducers. In one embodiment, to transfer heat more efficiently, the heat ribs are increased in size to have more surface area when using transducers of higher frequency. FIG.3illustrates another view of inner shell202and inner shell203. As shown, inner shell202includes thru-holes242to provide holes for an injection molded material used to secure inner shell202to upper cover201(FIG.2) and inner shell203to lower cover204. As shown, inner shell203includes heat rib slots210to receive heat ribs protruding from lower cover204(as is the case with heat rib slots of inner shell202receiving heat ribs protruding from upper cover201) and provide thermal transfer from heat fins within the probe (described in more detail below in conjunction withFIGS.4and6A-10) to the outer surface of the ultrasonic transducer probe. FIG.4illustrates one embodiment of a probe case of an ultrasonic transducer probe near final assembly. Referring toFIG.4, upper cover201is attached to and covers an outer surface of inner shell202, and lower cover204is attached to and covers an outer surface of inner shell203. In one embodiment, inner shell203has overlapped joints with lower cover204and inner shell202has overlapped joints with upper cover201such that when the integrated halves are coupled together, a substantially complete Faraday cage is created (except for the front and rear openings of the ultrasonic probe). As discussed above, thru-holes242in inner shell202and inner shell203allow for plastic flow to create a mechanical interlock between inner shell202and upper cover201and between inner shell203and lower cover204. Note that in one embodiment, the shape of thru-holes242to create the tab interlocks depends on the material being used for the over molding of the covers (e.g., upper cover201, lower cover204). If the material for the over molding is more viscous, the shape of thru-holes242may require a larger diameter. Also, in one embodiment, the width-to-thickness ratio is chosen for the shape of thru-holes242to ensure the diameter of thru-holes242permits flow through of the over molding material. The ultrasonic transducer probe also includes a transducer array404that includes the transducer elements. In one embodiment, array404is injection molded. This eliminates the need for labor-intensive and error-prone hand casting. Array404includes exposed return plane402that mates with EMI gaskets401on the top and bottom of array404.
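Returning briefly to the heat rib sizing discussed above: the stated relationship (more rib/slot surface area and a tighter fit yield more thermal transfer) follows from one-dimensional conduction across the rib-to-slot interface, Q = kAΔT/g. The sketch below illustrates this with hypothetical numbers; none of the areas, gaps, or temperatures are taken from this description.

```python
# 1-D conduction estimate across the thin gap between a heat rib and its
# slot: heat flow grows with contact area and shrinks with gap width.
def interface_heat_flow_w(k_w_mk: float, area_mm2: float,
                          gap_mm: float, delta_t_c: float) -> float:
    """Q = k * A * dT / g with unit conversions from mm to m."""
    return k_w_mk * (area_mm2 * 1e-6) * delta_t_c / (gap_mm * 1e-3)

DELTA_T = 10.0   # assumed temperature drop across the interface, deg C

for area_mm2 in (50.0, 100.0):      # rib/slot overlap area (assumed)
    for gap_mm in (0.10, 0.02):     # tighter fit -> smaller gap (assumed)
        q_air = interface_heat_flow_w(0.026, area_mm2, gap_mm, DELTA_T)
        q_paste = interface_heat_flow_w(3.0, area_mm2, gap_mm, DELTA_T)
        print(f"area {area_mm2:5.1f} mm^2, gap {gap_mm:4.2f} mm: "
              f"{q_air:5.2f} W (air) / {q_paste:6.1f} W (paste)")
```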
In one embodiment, exposed return plane402comprises a metal such as, but not limited to, copper. In one embodiment, EMI gaskets401are coupled to the array404via pins405. In one embodiment, exposed return plane402couples directly with the EMI gaskets401. In one embodiment, EMI gaskets401are electrically conductive and make contact with the inner surface of the first enclosure created by inner shell202and inner shell203. By doing so, EMI gaskets401allow direct contact from return plane402of array404to the inner shells202and203. In one embodiment, the ultrasound transducer probe also includes one or more thermally conductive heat fins, such as heat fins403. In one embodiment, heat fins403include a pair of heat fins that comprise a thermally conductive material (e.g., copper, aluminum, etc.) or another material that thermally conducts heat away from the probe tip. In alternative embodiments, only one heat fin or more than two heat fins are included. In one embodiment, each of heat fins403has one end enclosed within the portion of the transducer array assembly that contains array404. In one embodiment, the backing block for array404includes not only the flex circuits but also heat fins403which are embedded in the backing block. In one embodiment, each of heat fins403includes a flat planar section (e.g., a planar extension) that extends away from the probe array assembly having array404toward the handle, or distal portion (114), of the ultrasound transducer probe. In one embodiment, heat fins403extend to a location in the probe near where the cable in the cable assembly is exposed away from its wire mesh. In cases where there is direct or indirect contact between heat fins403and the cable, a highly conductive cable may improve the thermal transfer from the transducer face through the full length of the probe. At least a portion of the planar extension of each of heat fins403is in thermal contact with the interior surface of the enclosure created by coupling inner shells202and203. As inner shells202and203are thermally conductive, in one embodiment, heat fins403transfer heat from the probe tip where array404is located at the proximal end (112) to inner shells202and203and ultimately to the outer portion/exposed cable assembly of the probe that includes portions of the cover created by upper cover201and lower cover204. That is, in one embodiment, the thermal transfer occurs from one end of the probe enclosure to the other end of the probe enclosure. In one embodiment, heat fins403are embedded in the backing block to extend to within eighteen to twenty thousandths of an inch of the probe tip and conduct heat from the probe tip all the way to the back of the probe handle. Note that heat transfer occurs not only along the x and y axes spanning the plane of a heat fin but also along the z axis, which is perpendicular to the heat fin. This enables thermal transfer towards the outer surface of the enclosure created by inner shells and outer covers. In one embodiment, heat fins403are forced into thermal contact with the inner surface of inner shells202and203using one or more pads (e.g., elastomer foam pads) or one or more other internal structures or mechanisms. In one embodiment, a thermal paste or other substance is inserted between heat fins403and the inner surface of inner shells202and203to enhance thermal coupling. FIG.5illustrates the upper half and the lower half of the probe case connected together.
Referring toFIG.5, upper cover201and lower cover204are connected via lap joints280on inner shells202and203. Examples of mechanical interlocks between the outer case (formed by upper cover201and lower cover204) and the inner enclosure (formed by inner shells202and203) are illustrated via thru-holes242and tab interlocks260. Also illustrated is the mating between heat ribs211of the outer cover formed by upper cover201and lower cover204and the heat rib slots210of the inner enclosure formed by inner shells202and203. As shown inFIG.5, inner shell202has an inner flat220. In one embodiment, RTV (room temperature vulcanized) sealant is used around the coupling of upper cover201and lower cover204. Instead of using an RTV sealant around the sides, a gasket could be used. FIGS.6A-6B and7A-7Billustrate embodiments of the backing block with heat fins. Referring toFIGS.6A and6B, heat fins403are coupled with signal flex circuits601. In one embodiment, two heat fins403and two signal flexes601are pre-bonded in a backing block region and sandwiched together at the midsection with no gaps in-between. In one embodiment, the prebonding is performed using an adhesive. In one embodiment, the bonded configuration of heat fins403and signal flexes601is coupled to extension604. In one embodiment, extension604comprises a thermally conductive material such as, for example, copper, aluminum, etc. Extension604abuts support ribs603. Return planes602include holes for alignment with pins405and rest on support ribs603. Support ribs603include a number of rib blocks (e.g., rectangular blocks) coupled together with smaller blocks such that the upper blocks extend beyond the inner blocks that couple them together. In one embodiment, support ribs603are pre-molded, support the return planes, allow low-pressure injection molding flow, and provide structural support while resisting resin shrinkage. In one embodiment, the transducer array (e.g., array404ofFIG.4) undergoes low pressure injection molding to create backing block710. The low pressure injection molding of backing block710eliminates labor-intensive hand casting. In one embodiment, the center sandwich of the flex circuits601and heat fins403is injection molded with the pre-molded plastic support ribs603. Both the sandwiched flex circuits and heat fins are retained at the mold parting line. Backing block710also includes an exposed outer return ground plane701. In one embodiment, plastic locks702lock in ground plane701. In one embodiment, lock702is an overmolded plastic lock. In one embodiment, exposed ground plane701(e.g., copper ground plane) mates with a conductive elastomer gasket (e.g., gasket401ofFIG.4) for direct EMC to the inner enclosure (i.e., shells202and203). In this way, no solder tails or copper foil is needed. FIG.8Aillustrates a side section view of one embodiment of a probe case with thermal fins. Referring toFIG.8A, backing block803is shown coupled to heat fins801and flex circuits811. As shown inFIGS.8A and8B, the probe case includes an upper cover802. As shown inFIG.8B, pads820are inserted between flex circuits811(and other centrally located electronics) and heat fins801, thereby forcing heat fins801against the inner surfaces of inner shell810(e.g., shells202and203). In one embodiment, the pads are elastomer pads.
Where inner shell810(e.g., inner shells202and203) comprises a metallic material, heat fins801are forced against the metal inner case, and in this combination, the metal inner case offers a large surface contact with long thermal fins to enhance thermal transfer. In one embodiment, a thermal paste is applied between the metal case and the thermal fins as a way to eliminate air gaps, which are detrimental to thermal transfer. FIG.8Billustrates a side section view of one embodiment of a probe case ofFIG.8Athat illustrates elastomer pads to force the fins into contact with an inner shell. Referring toFIG.8B, elastomer pads820are shown forcing heat fins801against the inner surfaces of inner shell810(e.g., shells202and203). As discussed above, thermal paste is used to thermally connect thermal fins801to inner shell810. Note also thatFIG.8Billustrates transducer electronics870(e.g., printed circuit boards (PCBs), etc.) associated with the transducer array coupled to the cable of the probe. FIG.9illustrates a side section view of another embodiment of a probe case with thermal fins. Referring toFIG.9, the cable from a cable assembly is coupled to signal flex circuits900. The cable has a metal woven mesh buried along its entire length for shielding. During assembly, a portion of this woven mesh near the strain relief element910is peeled back from the cable and touches the two halves of the inner shell to enclose the probe as shown. Note that heat fins901and902are pushed into thermal contact with the inner surfaces of the inner metal enclosure and transfer heat as shown by the arrows. FIG.10illustrates one embodiment of a heat fin. Referring toFIG.10, in one embodiment, heat fin1000has a width of 1.48 inches at its widest point, which would be contained in the backing block (e.g., backing block710). Heat fin1000also has an extension that extends from the backing block. In one embodiment, the extension has a width of 0.70 inches and a length, measured from the pins (e.g., pins405) in the backing block, of 3.00 inches. Note that other sizes of heat fins may be used. WhileFIG.10illustrates one embodiment of a heat fin and its shape, heat fins may have other shapes. For example, in one embodiment, the portion of the heat fin that extends through the probe handle may have different dimensions. In other words, the portion of the heat fin that extends through the probe would not have a uniform shape such as that shown inFIG.10. For example, the extension of the heat fin may have a first portion and a second portion, wherein the dimensions of the first portion are different than the second portion. Furthermore, while the heat fin depicted inFIG.10is uniform in shape, particularly the extension, this is not required. Each heat fin may have various shapes and sizes that are based on the internal components and features of the probe. For example, a heat fin may have contours, cut-out areas, and/or shaped features that are needed to avoid contact with internal components in the probe while still providing the necessary thermal path through the probe. In yet another embodiment, the shape of a heat fin is designed to coincide with and contact one or more internal electrical components that generate heat within the probe to improve, and potentially optimize, the thermal transfer within, and ultimately to the exterior of, the probe. In one embodiment, the thickness of each heat fin ranges from 0.005″ to 0.050″ (e.g., 0.010″).
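Using the extension dimensions just given (0.70 inches wide, 3.00 inches long) and the stated 0.005″ to 0.050″ thickness range, the lengthwise conduction resistance of the fin extension can be estimated as R = L/(k·w·t). The sketch below assumes a copper fin with k ≈ 390 W/m·K; copper is one of the materials named earlier, but the resistance figures are illustrative estimates, not disclosed values.

```python
# Lengthwise conduction resistance of the heat fin extension described
# above. Width and length come from the text; conductivity is assumed.
IN_TO_M = 0.0254
K_COPPER = 390.0              # W/m-K, typical copper value (assumption)
LENGTH_M = 3.00 * IN_TO_M     # fin extension length (from the text)
WIDTH_M = 0.70 * IN_TO_M      # fin extension width (from the text)

def fin_resistance_k_per_w(thickness_in: float) -> float:
    """R = L / (k * w * t) for a rectangular fin extension."""
    return LENGTH_M / (K_COPPER * WIDTH_M * thickness_in * IN_TO_M)

for t in (0.005, 0.010, 0.050):   # the thickness range given in the text
    print(f"t = {t:.3f} in -> R ~ {fin_resistance_k_per_w(t):5.1f} K/W")
```

On these assumptions, the tenfold thickness range corresponds to roughly a tenfold change in conduction resistance along the fin, which is one way to read the trade-off between thermal transfer and the internal space limits discussed next.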
Note that heat fins of other thicknesses may be used and their size selected based on the desired thermal transfer properties of the heat fins, the desired amount of heat reduction at the transducer face, and space limitations within the probe itself (e.g., the size of the inner cavity of the probe). In another alternative embodiment, the heat fin has multiple layers of the same material (e.g., copper). Furthermore, in one embodiment, the thickness of the heat fin changes from one portion of the heat fin to the next. For example, one portion of the heat fin may have one thickness while another portion or portions of the heat fin have a different thickness. In one embodiment, the heat fin has a thicker section embedded within the transducer array near the lens, with a thinner portion of the heat fin extending outward from the transducer array into the handle area. Thus, the thickness of the heat fin changes from a first portion to a second portion. In one embodiment, the thicker section is metal or ceramic formed by diecasting, investment casting, CNC machining, etc. The use of multi-layers can provide additional thermal transfer benefits. In yet another alternative embodiment, the heat fin has various sized portions, such as described above, and multiple layers, where there are different combinations of layers used in the different portions of the heat fin. For example, in one embodiment, the portion of the heat fin in the lens portion has more layers than in the handle portion, thereby appearing to taper off as the heat fin extends into the probe handle. In another embodiment, the portion of the heat fin in the lens portion has fewer layers than in the handle portion, such that the heat fin grows in size as the heat fin extends into the probe handle. FIGS.11-13illustrate thermal images of probes of the prior art and probes utilizing the techniques described herein. Referring to each ofFIGS.11-13, a pair of probes is shown with a prior art probe configuration on the left and a probe constructed with features disclosed above on the right. Note that in each instance, the thermal energy spreads into and dissipates more into the probe handle for the probes on the right, which include features disclosed herein, in comparison to the probes on the left. A number of example embodiments are described herein. Example 1 is an ultrasound transducer probe comprising: a probe array assembly having a probe tip; a first enclosure disposed around a portion of the probe array assembly, the first enclosure having first and second openings and comprising a thermally conductive material; and one or more thermally conductive fins contained within the first enclosure, each of the one or more thermally conductive fins having one end enclosed within the probe array assembly and a portion extending away from the probe array assembly and in thermal contact with an inner surface of the first enclosure to create a thermal path from the first opening to the second opening in the first enclosure. Example 2 is the ultrasound transducer probe of example 1 that may optionally include that the one or more thermally conductive fins comprises metal. Example 3 is the ultrasound transducer probe of example 1 that may optionally include a plurality of material pieces to force the one or more thermally conductive fins toward the inner surface of the first enclosure.
Example 4 is the ultrasound transducer probe of example 3 that may optionally include thermal paste thermally coupling the one or more thermally conductive fins with the inner surface of the first enclosure. Example 5 is the ultrasound transducer probe of example 3 that may optionally include that the plurality of material pieces comprises a plurality of pads. Example 6 is the ultrasound transducer probe of example 1 that may optionally include a second enclosure disposed around the first enclosure. Example 7 is the ultrasound transducer probe of example 6 that may optionally include that the second enclosure comprises a non-electrically conductive material. Example 8 is the ultrasound transducer probe of example 6 that may optionally include that the first enclosure comprises first and second clamshell halves and the second enclosure comprises third and fourth clamshell halves, wherein the third clamshell half of the second enclosure is coupled to the first clamshell half of the first enclosure using mechanical interlocks, wherein the fourth clamshell half of the second enclosure is coupled to the second clamshell half of the first enclosure using mechanical interlocks. Example 9 is the ultrasound transducer probe of example 8 that may optionally include that the first and second clamshell halves comprise a plurality of thru-holes, and non-electrically conductive material forming the third and fourth clamshell halves extends into and forms overlaps onto interior surfaces of the first and second clamshell halves. Example 10 is the ultrasound transducer probe of example 6 that may optionally include that the first and second clamshell halves are coupled together via lap joints along sides of the first and second clamshell halves. Example 11 is the ultrasound transducer probe of example 6 that may optionally include that heat ribs protruding from the inner surface of the third clamshell half extend into and mate with slots of the first clamshell half and heat ribs protruding from the inner surface of the fourth clamshell half extend into and mate with slots of the second clamshell half. Example 12 is the ultrasound transducer probe of example 1 that may optionally include one or more electrically conductive electromagnetic interference (EMI) gaskets coupled to one or more return planes of the probe array assembly, the one or more electrically conductive EMI gaskets coupled to inner surfaces of the first enclosure. Example 13 is the ultrasound transducer probe of example 12 that may optionally include that the one or more EMI gaskets provide direct contact from the one or more return planes to the inner surfaces of the first enclosure. Example 14 is the ultrasound transducer probe of example 12 that may optionally include that the first enclosure comprises metal and comprises overlapping joints and operates with the EMI gaskets to create a full Faraday cage except for the first and second openings. Example 15 is the ultrasound transducer probe of example 14 that may optionally include that the second opening of the first enclosure is operable to electrically connect and provide a thermal coupling to a metal woven mesh of a cable enclosure.
Example 16 is an ultrasound transducer probe comprising: a probe array assembly having a probe tip; a first enclosure disposed around a portion of the probe array assembly, the first enclosure having first and second openings, wherein the first enclosure comprises first and second metal clamshell halves coupled together via lap joints along sides of the first and second clamshell halves; a second enclosure disposed around the first enclosure and having third and fourth clamshell halves, wherein the third clamshell half of the second enclosure is coupled to the first clamshell half of the first enclosure using mechanical interlocks, wherein the fourth clamshell half of the second enclosure is coupled to the second clamshell half of the first enclosure using mechanical interlocks; one or more thermally conductive fins contained within the first enclosure and comprising metal, each of the one or more thermally conductive fins having one end enclosed within the probe array assembly and a portion extending away from the probe array assembly and in thermal contact with an inner surface of the first enclosure to create a thermal path from the first opening to the second opening in the first enclosure; and one or more electrically conductive electromagnetic interference (EMI) gaskets coupled to one or more return planes of the probe array assembly, the one or more electrically conductive EMI gaskets coupled to inner surfaces of the first enclosure to provide direct contact from the one or more return planes to the inner surfaces of the first enclosure to create a full Faraday cage with the exception of the first and second openings. Example 17 is the probe of example 16 that may optionally include a plurality of material pieces to force the one or more thermally conductive fins toward the inner surface of the first enclosure. Example 18 is the probe of example 16 that may optionally include that the first and second clamshell halves comprise a plurality of thru-holes, and non-electrically conductive material forming the third and fourth clamshell halves extends into and forms overlaps onto interior surfaces of the first and second clamshell halves. Example 19 is the probe of example 16 that may optionally include that the second enclosure comprises a non-electrically conductive material, and further wherein the third clamshell half of the second enclosure is overmolded onto the first clamshell half of the first enclosure using mechanical interlocks, wherein the fourth clamshell half of the second enclosure is overmolded onto the second clamshell half of the first enclosure. Example 20 is the probe of example 16 that may optionally include that the second opening of the first enclosure is operable to electrically connect and provide a thermal coupling to a metal woven mesh of a cable enclosure. Example 21 is the probe of example 16 that may optionally include a plurality of bosses connecting the first and second clamshell halves together.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list. Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims which in themselves recite only those features regarded as essential to the invention. | 38,731 |
11857365 | DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. The invention improves the ability of health care providers (HCPs) to sample tissue or deliver substances with real-time feedback. Referring now toFIG.1, a bronchoscope system10includes a bronchoscope12with an insertion tube14, a radial ultrasound system16and an access device20. The radial ultrasound system16includes a signal processor24, a display device18and a radial ultrasound probe22. The radial ultrasound probe22and a medical device30, such as a needle for sampling and/or medicant delivery, are received within the bronchoscope12via a handle component and a sheath/catheter component of the access device20. The display device18is in wired or wireless signal communication with the bronchoscope12and/or the signal processor24. The display device18presents images generated based on information received from the bronchoscope12and/or the signal processor24that receives image information from a radial ultrasound transducer at the distal end of the radial ultrasound probe22. A diagnostic endoscope (e.g., BF series produced by Olympus®) is an example of the bronchoscope12and the radial endobronchial ultrasound (rEBUS) probes produced by Olympus® are examples of the radial ultrasound device16. FIG.2illustrates an access device20-1that includes a two-port handle component32that attaches to a dual-lumen catheter40. Attached at a distal end of the dual-lumen catheter40is a cap42. As will be shown in more detail below, the dual-lumen catheter40includes a fully enclosed lumen that is in communication with a first access port44of the two-port handle component32and an open trough that is in communication with a second access port46of the two-port handle component32. The first access port44is configured to receive a medical device48, such as a needle, and direct the medical device into the enclosed lumen within the dual-lumen catheter40. The second access port46is configured to receive and direct the radial ultrasound probe22into the open trough of the dual-lumen catheter40. In one embodiment, the two-port handle component32may include a hinged door or removable section that allows access to an internal lumen that connects the second access port46to the open trough of the dual-lumen catheter40. When the hinged door or removable section is in an opened configuration, the internal lumen is made accessible such that the radial ultrasound probe22may be placed directly into the internal lumen. This allows one to simultaneously place a shaft of the radial ultrasound probe22into the internal lumen of the two-port handle32and into the open trough of the dual-lumen catheter40. FIGS.3and4illustrate views of an exemplary dual-lumen catheter60. The dual-lumen catheter60includes a first enclosed lumen62configured to receive a medical device. The dual-lumen catheter60includes a second lumen64sized to receive the radial ultrasound probe22. The second lumen64is not fully enclosed by the material of the dual-lumen catheter60. The second lumen64has a cross-section similar to that of a trench, trough or open channel. The second lumen64includes a central axis that is offset from a central axis of the dual-lumen catheter60.
The position of the central axis of the second lumen64relative to the central axis of the dual-lumen catheter60is selected such that when the radial ultrasound probe22is inserted into the second lumen64, the outer surface of the radial ultrasound probe22that is exposed is at a distance from the central axis of the dual-lumen catheter60that ranges from slightly less than to slightly greater than the actual radius of the dual-lumen catheter60. Thus, because the dual-lumen catheter60does not occupy the space fully around the second lumen64, the overall diameter of the dual-lumen catheter60can be optimized. In other words, the size of the first and second lumens62,64can be maximized. In one embodiment, the position of the central axis of the second lumen64relative to the central axis of the dual-lumen catheter60and dimensions of the second lumen64are selected so that the radius of the second lumen64extends at least as far radially as the radius of the dual-lumen catheter60. In one embodiment, the edges of the dual-lumen catheter60that define the opening of the second lumen64are made of a material that allows them to expand to a more open configuration to allow the radial ultrasound probe22to be inserted collaterally versus being slid into the second lumen64from the proximal end. The material may include a variety of thermoplastics (Pebax, polyurethane, PEEK, etc.) or thermosets (silicone, PTFE). The material around the second lumen64trough should be rigid enough to hold the probe22, yet flexible enough to allow a snapping-in of the probe22. The insertion of the radial ultrasound probe22into the second lumen64may cause a clicking action or noise caused by the edges of the dual-lumen catheter60snapping back after being expanded to allow for the diameter of the probe22. As shown inFIGS.5and6, a dual-lumen catheter70includes an open lumen72and a medical device lumen74. At a distal end of the dual-lumen catheter70an exit port78provides access to the medical device lumen74. A ramp80at the exit port78allows a medical device82to be directed out of the exit port78. In one embodiment, a cap section90is attachable to the distal end of the dual-lumen catheter70. The cap section90includes a lumen94for receiving a radial ultrasound probe and one or more lumens for receiving one or more orientation pins92. In one embodiment, a ramp80-1may also be included in the cap section90instead of being included in the catheter. The orientation pins provide an echogenic feature, thus providing more visibility in the ultrasound image and thus alerting the user to the rotational orientation of the distal end of the access device20and the needle relative to a target. Embodiments A. A catheter comprising: a first lumen; and a second lumen adjacent to the first lumen; wherein material of the catheter occupies an arc around the first lumen, wherein the arc is greater than 180° and less than 360°, thus creating a lengthwise opening to the first lumen. B. The catheter of A, wherein the catheter material surrounds the second lumen in a cross-sectional dimension. C. The catheter of A or B, wherein the second lumen has a smaller inner diameter than the first lumen. D. The catheter of any of A-C, wherein the catheter is configured to be slidably received within a working channel of an endoscope. E. The catheter of any of A-D, wherein the second lumen is configured to slidably receive a medical device. F. The catheter of any of A-E, wherein the first lumen is configured to receive an imaging device. G.
The catheter of F, wherein the imaging device comprises a radial ultrasound probe. H. The catheter of any of A-G, wherein the lengthwise opening of the first lumen is defined by a first flexible edge and a second flexible edge, wherein a first chord measurement between the first and second edges has a first value when the imaging device is not occupying the first lumen, wherein a second chord measurement between the first and second edges has a second value when the imaging device is occupying the first lumen, wherein the first chord measurement is less than the second chord measurement. I. The catheter of H, wherein the first and second edges exhibit a snap-like action upon receiving the imaging device. J. A catheter system comprising: a flexible shaft comprising: a first lumen; and a second lumen adjacent to the first lumen; wherein material of the catheter occupies an arc around the first lumen, wherein the arc is greater than 180° and less than 360°, thus creating a lengthwise opening to the first lumen; a cap portion configured to be received at a distal end of the flexible shaft, the cap portion comprising: a first lumen; and a second lumen; and a handle device configured to attach to a proximal end of the flexible shaft, the handle device comprising: a first access port; a second access port; a first lumen in communication with the first access port; and a second lumen in communication with the second access port, wherein the first lumen of the handle device is colinear with the first lumen of the flexible shaft and the second lumen of the handle device is colinear with the second lumen of the flexible shaft. K. The catheter system of J, wherein the catheter material surrounds the second lumen in a cross-sectional dimension. L. The catheter system of J or K, wherein the second lumen is smaller than the first lumen. M. The catheter system of any of J-L, wherein the catheter is configured to be slidably received within a working channel of an endoscope. N. The catheter system of any of J-M, wherein the second lumen of the flexible shaft and the second access port and the second lumen of the handle device are configured to slidably receive a medical device. O. The catheter system of any of J-N, wherein the first lumen of the flexible shaft and the first access port and the first lumen of the handle device are configured to receive an imaging device. P. The catheter system of O, wherein the imaging device comprises a radial ultrasound probe. Q. The catheter system of any of J-P, wherein the lengthwise opening of the first lumen is defined by a first flexible edge and a second flexible edge, wherein a first chord measurement between the first and second edges has a first value when the imaging device is not occupying the first lumen of the flexible shaft, wherein a second chord measurement between the first and second edges has a second value when the imaging device is occupying the first lumen of the flexible shaft, wherein the first chord measurement is less than the second chord measurement. R. The catheter system of Q, wherein the first and second edges exhibit a snap-like action upon receiving the imaging device. The description of the invention is merely exemplary in nature and variations that do not depart from the gist of the invention are intended to be within the scope of the invention. Such variations are not to be regarded as a departure from the spirit and scope of the invention. | 10,271 |
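As a closing illustration of the snap-fit geometry in embodiments A, H, and Q above: because the catheter material encloses the first lumen over an arc greater than 180° and less than 360°, the lengthwise opening is a chord narrower than the lumen diameter, c = 2r·sin(θ_open/2), so a probe sized to the lumen must deflect the flexible edges to enter, producing the snap described earlier. The sketch below is a hypothetical worked example; the radius and arc values are not taken from this description.

```python
import math

def opening_chord_mm(lumen_radius_mm: float, enclosed_arc_deg: float) -> float:
    """Chord width of the lengthwise opening left by the catheter material.

    The embodiments require the enclosed arc to be >180 and <360 degrees,
    so the opening chord is always smaller than the lumen diameter.
    """
    open_arc_rad = math.radians(360.0 - enclosed_arc_deg)
    return 2.0 * lumen_radius_mm * math.sin(open_arc_rad / 2.0)

LUMEN_RADIUS_MM = 1.0   # hypothetical first-lumen radius sized to the probe

for arc_deg in (200.0, 240.0, 270.0):
    chord = opening_chord_mm(LUMEN_RADIUS_MM, arc_deg)
    print(f"enclosed arc {arc_deg:5.1f} deg -> opening {chord:4.2f} mm "
          f"vs probe diameter {2 * LUMEN_RADIUS_MM:4.2f} mm")
```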
11857366 | DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, a preferred embodiment of an ultrasound endoscope according to the invention will be described referring to the accompanying drawings. FIG.1is a schematic configuration diagram showing an example of an ultrasonography system10that uses an ultrasound endoscope12of an embodiment. As shown inFIG.1, the ultrasonography system10comprises the ultrasound endoscope12, an ultrasound processor device14that generates an ultrasound image, an endoscope processor device16that generates an endoscope image, a light source device18that supplies illumination light, with which the inside of a body cavity is illuminated, to the ultrasound endoscope12, and a monitor20that displays the ultrasound image and the endoscope image. The ultrasonography system10comprises a water supply tank21athat stores cleaning water or the like, and a suction pump21bthat sucks aspirates inside the body cavity. The ultrasound endoscope12has an insertion part22that is inserted into the body cavity of the subject, an operating part24that is consecutively provided in a proximal end portion of the insertion part22and is used by an operator to perform an operation, and a universal cord26that has one end connected to the operating part24. In the operating part24, an air and water supply button28athat opens and closes an air and water supply pipe line (not shown) from the water supply tank21a, and a suction button28bthat opens and closes a suction pipe line (not shown) from the suction pump21bare provided side by side. In the operating part24, a pair of angle knobs29and29and a treatment tool insertion port30are provided. In the other end portion of the universal cord26, an ultrasound connector32athat is connected to the ultrasound processor device14, an endoscope connector32bthat is connected to the endoscope processor device16, and a light source connector32cthat is connected to the light source device18are provided. The ultrasound endoscope12is attachably and detachably connected to the ultrasound processor device14, the endoscope processor device16, and the light source device18respectively through the connectors32a,32b, and32c. The connector32ccomprises an air and water supply tube34athat is connected to the water supply tank21a, and a suction tube34bthat is connected to the suction pump21b. The insertion part22has, in order from a distal end side, a distal end part40that has an ultrasound observation part36and an endoscope observation part38, a bending part42that is consecutively provided on a proximal end side of the distal end part40, and a flexible part43that couples a proximal end side of the bending part42and the distal end side of the operating part24. The bending part42is remotely bent and operated by rotationally moving and operating a pair of angle knobs29and29provided in the operating part24. With this, the distal end part40can be directed in a desired direction. The ultrasound processor device14generates and supplies an ultrasound signal for making an ultrasound transducer array50of an ultrasound transducer unit46(seeFIG.2) of the ultrasound observation part36described below generate an ultrasonic wave. The ultrasound processor device14receives and acquires an echo signal reflected from an observation target part irradiated with the ultrasonic wave, by the ultrasound transducer array50and executes various kinds of signal processing on the acquired echo signal to generate an ultrasound image that is displayed on the monitor20. 
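The description above leaves the "various kinds of signal processing" in the ultrasound processor device14unspecified. For orientation only, a conventional B-mode chain applies envelope detection and log compression to each received scan line before display; the sketch below shows that generic chain on synthetic data and should not be read as the processing actually used by this system.

```python
# Generic B-mode processing for one RF scan line (illustrative only):
# envelope detection via the Hilbert magnitude, then log compression.
import numpy as np
from scipy.signal import hilbert

def bmode_line(rf_line: np.ndarray, dynamic_range_db: float = 60.0) -> np.ndarray:
    envelope = np.abs(hilbert(rf_line))
    envelope /= envelope.max() + 1e-12            # normalize peak to 0 dB
    db = 20.0 * np.log10(envelope + 1e-12)
    db = np.clip(db, -dynamic_range_db, 0.0)      # clip to display range
    return ((db + dynamic_range_db) / dynamic_range_db * 255.0).astype(np.uint8)

# Synthetic RF line: a pulse-like echo buried in low-level noise.
t = np.linspace(0.0, 1.0, 2048)
rf = np.exp(-((t - 0.4) ** 2) / 1e-4) * np.sin(2 * np.pi * 200 * t)
rf += 0.01 * np.random.default_rng(0).standard_normal(t.size)
print(bmode_line(rf)[:10])
```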
The endoscope processor device16receives and acquires a captured image signal acquired from the observation target part illuminated with illumination light from the light source device18in the endoscope observation part38and executes various kinds of signal processing and image processing on the acquired image signal to generate an endoscope image that is displayed on the monitor20. The ultrasound processor device14and the endoscope processor device16are configured with two devices (computers) provided separately. Note that the invention is not limited thereto, and both the ultrasound processor device14and the endoscope processor device16may be configured with one device. To image an observation target part inside a body cavity using the endoscope observation part38to acquire an image signal, the light source device18generates illumination light, such as white light including light of three primary colors of red light, green light, and blue light or light of a specific wavelength. Light propagates through a light guide (not shown) and the like in the ultrasound endoscope12, and is emitted from the endoscope observation part38, and the observation target part inside the body cavity is illuminated with light. The monitor20receives video signals generated by the ultrasound processor device14and the endoscope processor device16and displays an ultrasound image and an endoscope image. In regard to the display of the ultrasound image and the endoscope image, only one image may be appropriately switched and displayed on the monitor20or both images may be displayed simultaneously. In the embodiment, although the ultrasound image and the endoscope image are displayed on one monitor20, a monitor for ultrasound image display and a monitor for endoscope image display may be provided separately. Alternatively, the ultrasound image and the endoscope image may be displayed in a display form other than the monitor20, for example, in a form of being displayed on a display of a terminal carried with the operator. Next, the configuration of the distal end part40will be described referring toFIGS.2to4. FIG.2is a partial enlarged plan view showing the distal end part40shown inFIG.1and the vicinity thereof.FIG.3is a cross-sectional view taken along the line shown inFIG.2, and is a longitudinal sectional view of the distal end part40taken along a center line thereof in a longitudinal axis direction.FIG.4is a cross-sectional view taken along the line IV-IV shown inFIG.3, and is a cross-sectional view of the ultrasound transducer array50of the ultrasound observation part36of the distal end part40taken along a center line of an arc structure.
A treatment tool (not shown) inserted from the treatment tool insertion port30ofFIG.1is let out from the treatment tool lead-out port44into the body cavity through the treatment tool channel45. As shown inFIGS.2to4, the ultrasound observation part36comprises the ultrasound transducer unit46, an exterior member41that holds the ultrasound transducer unit46, and a cable100that is electrically connected to the ultrasound transducer unit46through a substrate60. The exterior member41is made of a rigid member, such as rigid resin, and configures a part of the distal end part40. The ultrasound transducer unit46has the ultrasound transducer array50that consists of a plurality of ultrasound transducers48, an electrode52that is provided on an end portion side of the ultrasound transducer array50in a width direction (a direction perpendicular to the longitudinal axis direction of the insertion part22), a backing material layer54that supports each ultrasound transducer48from a lower surface side, the substrate60that is disposed along a side surface of the backing material layer54in the width direction and is connected to the electrode52, and a filler layer80with which an internal space55between the exterior member41and the backing material layer54is filled. As long as the substrate60can electrically connect a plurality of ultrasound transducers48and the cable100, the structure thereof is not particularly limited. It is preferable that the substrate60is configured with, for example, a wiring substrate, such as a flexible substrate (flexible print substrate (also referred to as a flexible printed circuit (FPC)) having flexibility, a printed wiring circuit substrate (also referred to as a printed circuit board (PCB)) made of a rigid substrate having high rigidity with no flexibility, or a printed wiring substrate (also referred to as a printed wired board (PWB)). The ultrasound transducer unit46has an acoustic matching layer76laminated on the ultrasound transducer array50, and an acoustic lens78laminated on the acoustic matching layer76. That is, the ultrasound transducer unit46is configured as a laminate47having the acoustic lens78, the acoustic matching layer76, the ultrasound transducer array50, and the backing material layer54. The ultrasound transducer array50is configured with a plurality of rectangular parallelepiped ultrasound transducers48arranged in a convex arc shape outward. The ultrasound transducer array50is an array of 48 to 192 channels consisting of 48 to 192 ultrasound transducers48, for example. Each of the ultrasound transducers48has a piezoelectric body49. The ultrasound transducer array50has the electrode52. The electrode52has an individual electrode52aindividually and independently provided for each ultrasound transducer48, and a transducer ground52bthat is a common electrode common to all the ultrasound transducers48. InFIG.4, a plurality of individual electrodes52aare disposed on lower surfaces of end portions of a plurality of ultrasound transducers48, and the transducer ground52bis disposed on upper surfaces of the end portions of the ultrasound transducers48. The substrate60has 48 to 192 wirings (not shown) that are electrically connected to the individual electrodes52aof the 48 to 192 ultrasound transducers48, respectively, and a plurality of electrode pads62that are connected to the ultrasound transducers48through the wirings, respectively.
The ultrasound transducer array50has a configuration in which a plurality of ultrasound transducers48are arranged at a predetermined pitch in a one-dimensional array as an example. The ultrasound transducers48configuring the ultrasound transducer array50are arranged at regular intervals in a convex bent shape along an axial direction of the distal end part40(the longitudinal axis direction of the insertion part22) and are sequentially driven based on drive signals input from the ultrasound processor device14(seeFIG.1). With this, convex electronic scanning is performed with a range where the ultrasound transducers48shown inFIG.2are arranged, as a scanning range. The acoustic matching layer76is a layer that is provided for taking acoustic impedance matching between the subject and the ultrasound transducers48. The acoustic lens78is a lens that is provided for converging the ultrasonic waves emitted from the ultrasound transducer array50toward the observation target part. The acoustic lens78is formed of, for example, silicon-based resin (millable type silicon rubber, liquid silicon rubber, or the like), butadiene-based resin, or polyurethane-based resin. In the acoustic lens78, powder, such as titanium oxide, alumina, or silica, is mixed as necessary. With this, the acoustic lens78can take acoustic impedance matching between the subject and the ultrasound transducers48in the acoustic matching layer76, and can increase the transmittance of the ultrasonic waves. As shown inFIGS.3and4, the backing material layer54is disposed on an inside with respect to the arrangement surface of a plurality of ultrasound transducers48, that is, a rear surface (lower surface) of the ultrasound transducer array50. The backing material layer54is made of a layer of a member made of a backing material. The backing material layer54has a role of mechanically and flexibly supporting the ultrasound transducer array50and attenuating ultrasonic waves propagated to the backing material layer54side among ultrasound signals emitted from a plurality of ultrasound transducers48or reflected and propagated from the observation target. For this reason, the backing material is made of a material having rigidity, such as hard rubber, and an ultrasonic wave attenuation material (ferrite, ceramics, or the like) is added as needed. The filler layer80is a layer with which the internal space55between the exterior member41and the backing material layer54is filled, and has a role of fixing the substrate60, the non-coaxial cables110, and various wiring portions. It is preferable that the acoustic impedance of the filler layer80matches the acoustic impedance of the backing material layer54with given accuracy or higher such that the ultrasound signals propagated from the ultrasound transducer array50to the backing material layer54side are not reflected at a boundary surface between the filler layer80and the backing material layer54. It is preferable that the filler layer80is made of a member having heat dissipation to increase efficiency in dissipating heat generated in a plurality of ultrasound transducers48. In a case where the filler layer80has heat dissipation, heat is received from the backing material layer54, the substrate60, the non-coaxial cables110, and the like, and thus, heat dissipation efficiency can be improved.
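The convex arrangement just described (transducers at a fixed pitch along an arc) can be written down directly: with pitch p on an arc of radius R, element i sits at angle (i − (N−1)/2)·p/R from the array center. The sketch below uses a hypothetical 128-element array (within the stated 48 to 192 channel range) and assumed pitch and radius values, since neither is given in this description.

```python
import math

def convex_element_positions(n_elements: int, pitch_mm: float,
                             radius_mm: float) -> list[tuple[float, float]]:
    """(x, z) positions of elements at a fixed pitch along a convex arc.

    The arc is centered on the z axis with its center of curvature at the
    origin, matching a convex array that scans a fan-shaped region.
    """
    positions = []
    for i in range(n_elements):
        theta = (i - (n_elements - 1) / 2.0) * (pitch_mm / radius_mm)
        positions.append((radius_mm * math.sin(theta),
                          radius_mm * math.cos(theta)))
    return positions

N, PITCH_MM, RADIUS_MM = 128, 0.2, 10.0   # hypothetical values
elems = convex_element_positions(N, PITCH_MM, RADIUS_MM)
arc_deg = math.degrees((N - 1) * PITCH_MM / RADIUS_MM)
print(f"{len(elems)} elements spanning a {arc_deg:.1f} degree arc")
```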
With the ultrasound transducer unit46configured as described above, in a case where each ultrasound transducer48of the ultrasound transducer array50is driven, and a voltage is applied to the electrode52of the ultrasound transducer48, the piezoelectric body49vibrates to sequentially generate ultrasonic waves, and the irradiation of the ultrasonic waves is performed toward the observation target part of the subject. Then, as a plurality of ultrasound transducers48are sequentially driven by an electronic switch, such as a multiplexer, scanning with ultrasonic waves is performed in a scanning range along a curved surface on which the ultrasound transducer array50is disposed, for example, a range of about several tens mm from the center of curvature of the curved surface. In a case where the echo signal reflected from the observation target part is received, the piezoelectric body49vibrates to generate a voltage and outputs the voltage as an electric signal corresponding to the received ultrasound echo to the ultrasound processor device14. Then, the electric signal is subjected to various kinds of signal processing in the ultrasound processor device14and is displayed as an ultrasound image on the monitor20. In the embodiment, the substrate60shown inFIG.4has, at one end, a plurality of electrode pads62that are electrically connected to a plurality of individual electrodes52a, and a ground electrode pad64that is electrically connected to the transducer ground52b. InFIG.4, the cable100is omitted. Electrical bonding of the substrate60and the individual electrodes52acan be established by, for example, a resin material having conductivity. Examples of the resin material include an anisotropic conductive film (ACF) or an anisotropic conductive paste (ACP) obtained by mixing thermosetting resin with fine conductive particles and forming the mixture into a film. As another resin material, for example, a resin material in which a conductive filler, such as metallic particles, is dispersed into binder resin, such as epoxy or urethane, and the filler forms a conductive path after adhesion may be used. Examples of this resin material include a conductive paste, such as a silver paste. As shown inFIG.3, the cable100comprises a plurality of non-coaxial cables110, and an outer coat102with which a plurality of non-coaxial cables110are coated. Signal wires included in the non-coaxial cable110are electrically bonded to the electrode pads62of the substrate60. Next, a connection structure of the substrate60and the cable100will be described referring to the drawings. FIG.5is an enlarged view of a portion including the substrate60and the cable100.FIG.6is a cross-sectional view taken along the line VI-VI.FIG.7is a cross-sectional view taken along the line VII-VII. As shown inFIG.5, the substrate60has a plurality of electrode pads62disposed along a side60aon a proximal end side, and the ground electrode pad64disposed between a plurality of electrode pads62and the side60a. The ground electrode pad64is disposed in parallel to the side60a. The cable100is disposed at a position facing the side60aof the substrate60. The cable100comprises a plurality of non-coaxial cables110, and the outer coat102that covers a plurality of non-coaxial cables110. The electrode pads62and signal wires112of the non-coaxial cables110are electrically bonded. The non-coaxial cables110are disposed in parallel with a side60band a side60cperpendicular to the side60a. 
Note that a positional relationship between the substrate60and the non-coaxial cables110is not particularly limited. Next, the structure of the non-coaxial cables110will be described. As shown inFIG.6, the non-coaxial cable110has a plurality of signal wires112and a plurality of ground wires114. Each signal wire112is made of, for example, a conductor112a, and an insulating layer112bwith which the periphery of the conductor112ais coated. The conductor112ais made of, for example, an element wire, such as copper or copper alloy. The element wire is subjected to, for example, plating processing, such as tin plating or silver plating. The conductor112ahas a diameter of 0.03 mm to 0.04 mm. The insulating layer112bcan be made of, for example, a resin material, such as fluorinated-ethylene-propylene (FEP) or perfluoroalkoxy (PFA). The insulating layer112bhas a thickness of 0.015 mm to 0.025 mm. Each ground wire114is made of a conductor having the same diameter as the signal wire112. The ground wire114is made of an element wire, such as copper or copper alloy, or a stranded wire obtained by stranding a plurality of element wires, such as copper or copper alloy. A first cable bundle116is configured by stranding a plurality of signal wires112and a plurality of ground wires114. Each non-coaxial cable110comprises a first shield layer118with which the periphery of the first cable bundle116is coated. The first shield layer118can be made of an insulating film obtained by laminating metallic foils through an adhesive. The insulating film is made of a polyethylene terephthalate (PET) film. The metallic foil is made of an aluminum foil or a copper foil. The non-coaxial cable110is shielded by the first shield layer118with a plurality of signal wires112as one set. The signal wires112are handled in a unit of the non-coaxial cable110. As shown inFIG.6, in the non-coaxial cable110of the embodiment, the first cable bundle116is configured by stranding seven wires in total of four signal wires112and three ground wires. One signal wire112of the four signal wires112is disposed at the center. The remaining three signal wires112and the three ground wires114are disposed adjacently in the periphery of the signal wire112at the center. Note that the number of signal wires112, the number of ground wires114, and the disposition of the wires in the first cable bundle116are not limited to the structure ofFIG.6. Next, the structure of the cable100will be described. As shown inFIG.7, the cable100comprises a plurality of non-coaxial cables110. A second cable bundle104is configured with a plurality of non-coaxial cables110. The second cable bundle104is coated with the outer coat102. The outer coat102can be made of a fluorine-based resin material, such as extruded and coated PFA, FEP, an ethylene/ethylene tetrafluoride copolymer (ETFE), or polyvinyl chloride (PVC). The outer coat102can be made of a wound resin tape (PET tape). The coating of the second cable bundle104with the outer coat102includes a case where the outside of the second cable bundle104is coated directly and a case where the outside of the second cable bundle104is coated indirectly. Indirect coating includes disposing another layer between the outer coat102and the second cable bundle104. The cable100of the embodiment comprises, in order from the inside, a resin layer106and a second shield layer108between the outer coat102and the second cable bundle104. The second cable bundle104is coated with the resin layer106. 
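The dimensions given above allow a rough cross-section estimate for one non-coaxial cable: an insulated signal wire's outer diameter is the conductor diameter plus twice the insulation thickness, and a seven-wire stranded bundle such as the four-signal/three-ground arrangement ofFIG.6packs to roughly three wire diameters across. The sketch below uses midpoints of the stated ranges; the hexagonal-packing factor of three and the omission of the shield layer thickness are simplifying assumptions.

```python
# Rough cross-section arithmetic for one non-coaxial cable, using
# midpoints of the stated ranges (conductor 0.03-0.04 mm diameter,
# insulation 0.015-0.025 mm thick). Shield and jacket are ignored.
CONDUCTOR_D_MM = 0.035
INSULATION_T_MM = 0.020

signal_wire_od = CONDUCTOR_D_MM + 2 * INSULATION_T_MM   # insulated wire OD
bundle_od = 3 * signal_wire_od   # 7-wire hex pack spans ~3 wire diameters

print(f"signal wire OD ~ {signal_wire_od:.3f} mm")
print(f"7-wire bundle OD ~ {bundle_od:.3f} mm (4 signal + 3 ground)")
```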
The resin layer106can be made of, for example, the fluorine-based resin material or the resin tape described above. The second shield layer108may be configured by, for example, braiding a plurality of element wires. The element wire is made of a copper wire, a copper alloy wire, or the like subjected to plating processing (tin plating or silver plating). Other than the above-described configuration, the cable100may comprise neither the resin layer106nor the second shield layer108, or may comprise only one of the resin layer106and the second shield layer108. The cable100of the embodiment includes 16 non-coaxial cables110and 64 signal wires112. The number of non-coaxial cables110and the number of signal wires112are not limited to these numerical values. As described above, the non-coaxial cable110included in the cable100does not comprise a shield layer and an outer coat for each signal wire112, unlike the coaxial cable in the related art. In particular, in a case where the cable100is configured with a plurality of non-coaxial cables110, the cable100can be reduced in diameter compared to the coaxial cable in the related art. In a case where the outer diameter is the same as the outer diameter of the coaxial cable, the cable100can comprise a greater number of signal wires112than the coaxial cable in the related art. Next, a connection structure of the substrate60and the non-coaxial cables110will be described in detail. As shown inFIG.5, on the proximal end side of the substrate60, the resin layer106(not shown), the second shield layer108(not shown), and the outer coat102of the cable100are removed, and a plurality of non-coaxial cables110are exposed. On the proximal end side of the substrate60, the first shield layer118of each non-coaxial cable110is removed, and the first cable bundle116is exposed. The first shield layer118is positioned on the substrate60, and the substrate60and the first shield layer118overlap at least partially as viewed from a direction perpendicular to a principal surface of the substrate60(hereinafter referred to as plan view). The first cable bundle116is exposed only over the substrate60, and in plan view the substrate60and the exposed first cable bundle116overlap only on the substrate60. A part of the first cable bundle116may protrude from the substrate60. The substrate60and the first cable bundle116are fixed by a fixing part130, so that the relative positions of the substrate60and each first cable bundle116are fixed. The position and the size of the fixing part130are not limited as long as the relative positions of the substrate60and each first cable bundle116are fixed. The first cable bundle116, configured with a stranded wire of a plurality of signal wires112and a plurality of ground wires114, is unstranded into the respective signal wires112at a distal end116a. Each unstranded signal wire112is electrically bonded to the electrode pad62disposed on the substrate60. The distal end116ais the start position where each signal wire112is unstranded. InFIG.5, the fixing part130is omitted from some first cable bundles116for ease of understanding. In the embodiment, the substrate60and the first cable bundle116are fixed by the fixing part130. Accordingly, when stress is applied to the cable100or the non-coaxial cable110, the stress is prevented from being transmitted to a bonded portion of the electrode pad62and the signal wire112, and disconnection of the signal wire112can be prevented.
The fixing part130is not particularly limited as long as the relative positional relationship between the substrate60and the first cable bundle116can be fixed, and for example, any one of an adhesive, solder, or a clamp member, or a combination thereof can be applied. The fixing part130can individually fix the substrate60and the first cable bundle116or can fix the substrate60and a plurality of first cable bundles116in a lump. The ground wires114of each first cable bundle116are electrically bonded to the ground electrode pad64of the substrate60. At least one ground wire114included in each first cable bundle116is electrically bonded to the ground electrode pad64. A plurality of ground wires114are in contact with each other in the first cable bundle116. Accordingly, in a case where at least one ground wire114of each first cable bundle116is electrically bonded to the ground electrode pad64, the ground potentials of a plurality of first cable bundles116can be set to the same potential. A region occupied by the wires can be reduced by reducing the number of ground wires114that are electrically bonded to the ground electrode pad64. As a result, it is possible to achieve a reduction in diameter of the distal end part40. In the connection structure shown inFIG.5, the electrode pads62corresponding to each non-coaxial cable110are collectively disposed. That is, four electrode pads62that are electrically bonded to the four signal wires112are collectively disposed on the substrate60. It is preferable that the electrode pads62corresponding to the non-coaxial cable110are the electrode pads62that are disposed substantially in an extension direction of the non-coaxial cable110. It is preferable that the signal wires112of each non-coaxial cable110are not electrically bonded to the electrode pads62of an adjacent non-coaxial cable110. This makes it possible to prevent stress from being applied to the signal wires112. FIG.8is a block diagram showing the configuration of an ultrasound processor device. As shown inFIG.8, the ultrasound processor device14has a multiplexer140, a reception circuit142, a transmission circuit144, an A/D converter146, an application specific integrated circuit (ASIC)148, a cine memory150, a central processing unit (CPU)152, and a digital scan converter (DSC)154. The reception circuit142and the transmission circuit144are electrically connected to the ultrasound transducer array50of the ultrasound endoscope12. The multiplexer140selects a maximum of m drive target ultrasound transducers from among n ultrasound transducers48and opens the corresponding channels. The transmission circuit144has a field programmable gate array (FPGA), a pulser (pulse generation circuit158), a switch (SW), and the like, and is connected to the multiplexer140(MUX). An application-specific integrated circuit (ASIC), instead of the FPGA, may be used. The transmission circuit144is a circuit that supplies a drive voltage for ultrasonic wave transmission to the drive target ultrasound transducers48selected by the multiplexer140in response to a control signal sent from the CPU152for transmission of ultrasonic waves from the ultrasound transducer unit46. The drive voltage is a pulsed voltage signal (transmission signal) and is applied to the electrodes of the drive target ultrasound transducers48through the universal cord26and the cable100. In detail, the drive voltage is applied to the electrode through the signal wires112of the non-coaxial cable110of the cable100.
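The role of the multiplexer140can be pictured with a short sketch. The following Python fragment is a minimal illustration only, not the device's firmware: the element count n, the aperture size m, and the contiguous sliding-aperture policy are assumptions made for the example.

```python
# Minimal sketch of multiplexer channel selection: for each scan line, at
# most m contiguous drive-target transducers are opened out of the
# n-element array. The values of n and m and the sliding-aperture policy
# are illustrative assumptions.

def select_drive_targets(line_index: int, n: int = 64, m: int = 16) -> list:
    """Element numbers opened by the multiplexer for one scan line."""
    # Center the aperture on the scan line, clamping at the array ends so
    # that no more than m channels are ever open at once.
    start = min(max(line_index - m // 2, 0), n - m)
    return list(range(start, start + m))

# The open aperture slides across the array as scanning proceeds:
for line in (0, 32, 63):
    print(line, select_drive_targets(line))
```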
The transmission circuit144has a pulse generation circuit158that generates a transmission signal based on a control signal. Under the control of the CPU152, the transmission circuit144generates a transmission signal for driving a plurality of ultrasound transducers48to generate ultrasonic waves using the pulse generation circuit158and supplies the transmission signal to a plurality of ultrasound transducers48. In a case of performing ultrasonography, the transmission circuit144generates the transmission signal of the drive voltage for performing ultrasonography using the pulse generation circuit158under the control of the CPU152. The reception circuit142is a circuit that receives electric signals output from the drive target ultrasound transducers48, which receive the ultrasonic waves (echoes), that is, reception signals. The reception circuit142comprises an amplifier that amplifies the reception signal, and as needed, an attenuator that attenuates the reception signals. A gain value of the amplifier that amplifies the reception signals is set in response to a control signal of the CPU152. An attenuation value of the attenuator that attenuates the reception signal is set in response to a control signal of the CPU152. The reception circuit142amplifies the reception signals received from the ultrasound transducers48in response to a control signal sent from the CPU152and delivers the signals after amplification to the A/D converter146. The A/D converter146is connected to the reception circuit142, converts the reception signals received from the reception circuit142from analog signals to digital signals, and outputs the digital signals after conversion to the ASIC148. The ASIC148is connected to the A/D converter146. As shown inFIG.8, the ASIC148configures a phase matching unit160, a B mode image generation unit162, a PW mode image generation unit164, a CF mode image generation unit166, and a memory controller151. In the embodiment, although the above-described functions (specifically, the phase matching unit160, the B mode image generation unit162, the PW mode image generation unit164, the CF mode image generation unit166, and the memory controller151) are realized by a hardware circuit, such as the ASIC148, the invention is not limited thereto. The above-described functions may be realized by the cooperation of a central processing unit (CPU) and software (computer program) for executing various kinds of data processing. The phase matching unit160executes processing of giving a delay time to the reception signals (reception data) digitized by the A/D converter146and performing phasing addition (performing addition after matching the phases of the reception data). With the phasing addition processing, sound ray signals in which the focus of the ultrasound echo is narrowed are generated. The B mode image generation unit162, the PW mode image generation unit164, and the CF mode image generation unit166generate an ultrasound image based on the electric signals (strictly, sound ray signals generated by phasing addition on the reception data) output from the drive target ultrasound transducers among a plurality of ultrasound transducers48when the ultrasound transducer unit46receives the ultrasonic waves. A brightness (B) mode is a mode in which amplitude of an ultrasound echo is converted into brightness and a tomographic image is displayed. 
A pulse wave (PW) mode is a mode in which a speed (for example, a rate of a blood flow) of an ultrasound echo source detected based on transmission and reception of a pulse wave is displayed. A color flow (CF) mode is a mode in which an average blood flow rate, flow fluctuation, intensity of a flow signal, flow power, and the like are mapped to various colors and displayed on a B mode image in a superimposed manner. The B mode image generation unit162is an image generation unit that generates a B mode image as a tomographic image of the inside (the inside of the body cavity) of the patient. The B mode image generation unit162performs correction of attenuation due to a propagation distance on each of the sequentially generated sound ray signals according to a depth of a reflection position of the ultrasonic wave through sensitivity time gain control (STC). Furthermore, the B mode image generation unit162executes envelope detection processing and logarithm (Log) compression processing on the sound ray signal after correction to generate a B mode image (image signal). The PW mode image generation unit164is an image generation unit that generates an image indicating a rate of a blood flow in a predetermined direction. The PW mode image generation unit164extracts a frequency component by performing fast Fourier transform on a plurality of sound ray signals in the same direction among the sound ray signals sequentially generated by the phase matching unit160. Thereafter, the PW mode image generation unit164calculates the rate of the blood flow from the extracted frequency component and generates a PW mode image (image signal) indicating the calculated rate of the blood flow. The CF mode image generation unit166is an image generation unit that generates an image indicating information regarding a blood flow in a predetermined direction. The CF mode image generation unit166generates an image signal indicating information regarding the blood flow by obtaining autocorrelation of a plurality of sound ray signals in the same direction among the sound ray signals sequentially generated by the phase matching unit160. Thereafter, the CF mode image generation unit166generates a CF mode image (image signal) as a color image, in which information relating to the blood flow is superimposed on the B mode image signal generated by the B mode image generation unit162, based on the above-described image signal. The above-described ultrasound image generation modes are merely an example, and modes other than the above-described three kinds of modes, for example, an amplitude (A) mode, a motion (M) mode, and a contrast radiography mode may further be included, or a mode in which a Doppler image is obtained may be included. The memory controller151stores the image signal generated by the B mode image generation unit162, the PW mode image generation unit164, or the CF mode image generation unit166in the cine memory150. The DSC154is connected to the ASIC148, converts (raster conversion) the signal of the image generated by the B mode image generation unit162, the PW mode image generation unit164, or the CF mode image generation unit166into an image signal compliant with a normal television signal scanning system, executes various kinds of necessary image processing, such as gradation processing, on the image signal, and then outputs the image signal to the monitor20. The cine memory150has a capacity for accumulating an image signal for one frame or image signals for several frames.
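The receive chain described above, phasing addition followed by STC, envelope detection, and log compression, can be summarized in a short sketch. The following Python/NumPy fragment is a minimal illustration under assumed values: the sampling frequency, speed of sound, array geometry, and linear STC slope are assumptions for the example, not parameters of the embodiment.

```python
import numpy as np
from scipy.signal import hilbert

# Minimal sketch of the receive chain: phasing addition (delay-and-sum),
# depth-dependent attenuation correction (STC), envelope detection, and
# log compression. fs, c, and the STC slope are illustrative assumptions.

fs = 40e6      # sampling frequency [Hz] (assumed)
c = 1540.0     # speed of sound [m/s] (assumed)

def phasing_addition(rf, depths_m, elem_x, line_x):
    """Delay-and-sum one sound ray. rf is (n_elements, n_samples) channel
    data; elem_x holds lateral element positions [m]; line_x is the ray
    position [m]; depths_m is an array of axial depths [m]."""
    n_el, n_s = rf.shape
    ray = np.zeros(len(depths_m))
    for iz, z in enumerate(depths_m):
        # Two-way delay: straight-down transmit plus per-element receive path.
        t = (z + np.sqrt(z ** 2 + (elem_x - line_x) ** 2)) / c
        idx = np.clip(np.round(t * fs).astype(int), 0, n_s - 1)
        ray[iz] = rf[np.arange(n_el), idx].sum()  # phase-matched addition
    return ray

def b_mode(ray, depths_m, alpha_db_per_m=50.0):
    stc = 10 ** (alpha_db_per_m * depths_m / 20.0)  # STC gain vs. depth
    env = np.abs(hilbert(ray * stc))                # envelope detection
    return 20 * np.log10(env / env.max() + 1e-12)   # log compression [dB]
```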
An image signal generated by the ASIC148is output to the DSC154and is stored in the cine memory150by the memory controller151. In a freeze mode, the memory controller151reads out the image signal stored in the cine memory150and outputs the image signal to the DSC154. With this, an ultrasound image (static image) based on the image signal read from the cine memory150is displayed on the monitor20. The CPU152functions as a controller (control circuit) that controls each unit of the ultrasound processor device14, is connected to the reception circuit142, the transmission circuit144, the A/D converter146, and the ASIC148, and controls these circuits. In a case where the ultrasound endoscope12is connected to the ultrasound processor device14through the ultrasound connector32a, the CPU152automatically recognizes the ultrasound endoscope12by a method, such as Plug and Play (PnP). Incidentally, the cable100used in the embodiment includes a plurality of non-coaxial cables110. As shown inFIG.6, the non-coaxial cable110is not provided with a shield layer for each signal wire112, unlike a coaxial cable. As a result, in the case of the non-coaxial cable110, the magnitude of the static capacitance of each signal wire112may vary depending on its disposition in the first cable bundle116. For example, the static capacitance of the signal wire112disposed at the center of the non-coaxial cable110is smaller than the static capacitance of the plurality of signal wires112disposed in the periphery. The static capacitance of the signal wire112affects the transmission and reception sensitivity of the ultrasound transducer48to which the signal wire112is electrically connected. A difference (variation) in static capacitance between the signal wires112results in a difference in sensitivity, and as a result, there is a concern that image quality deterioration (for example, image quality unevenness) of an ultrasound image occurs. Here, the transmission and reception sensitivity is defined as a ratio of the amplitude of the electric signal output from the ultrasound transducer48upon reception of the ultrasonic wave to the amplitude of the ultrasonic wave transmitted from the ultrasound transducer48. FIG.9Ais a graph showing a relationship between an ultrasound transducer and static capacitance. The vertical axis indicates static capacitance (pF), and the horizontal axis indicates an element number of an ultrasound transducer. The graph shows the static capacitance of the signal wire connected to each ultrasound transducer. The element number is a number allocated to identify each ultrasound transducer. As shown in the graph ofFIG.9A, the static capacitance of each signal wire is not constant, and there is a periodic difference in static capacitance between the signal wires in units of the first cable bundle. Brackets in the drawing indicate the signal wires included in each first cable bundle (the same applies toFIGS.10and11). FIG.9Bis a graph showing a relationship between an ultrasound transducer and transmission and reception sensitivity. The vertical axis indicates transmission and reception sensitivity (dB), and the horizontal axis indicates an element number of an ultrasound transducer. As shown in the graph ofFIG.9B, the transmission and reception sensitivity of each ultrasound transducer is not constant due to the static capacitance of the signal wire.
The magnitude of the transmission and reception sensitivity differs periodically between a plurality of ultrasound transducers for each first cable bundle. As shown inFIG.9B, compared with the static capacitance ofFIG.9A, in a case where the static capacitance of the signal wire decreases, the transmission and reception sensitivity increases, and in a case where the static capacitance of the signal wire increases, the transmission and reception sensitivity decreases. In a case where there is a difference in transmission and reception sensitivity between a plurality of ultrasound transducers, there is a concern that image quality deterioration of an ultrasound image occurs. In the ultrasonography system10of the embodiment, to decrease the difference in transmission and reception sensitivity between a plurality of ultrasound transducers48, the transmission and reception sensitivity of the ultrasound transducers48is corrected. The correction of the transmission and reception sensitivity will be described referring to the block diagram ofFIG.8. To correct the transmission and reception sensitivity, static capacitance data indicating the static capacitance of the signal wire112(not shown) included in the first cable bundle116is stored in, for example, an endoscope-side memory58as an example of a memory, where the static capacitance data is associated with the element number of the ultrasound transducer48. The static capacitance data is acquired, for example, by measuring the static capacitance of each signal wire112. The static capacitance of the signal wire112can be measured, for example, before shipment, after the ultrasound endoscope12is assembled. The relationship between the static capacitance of each signal wire112of the first cable bundle116and the ultrasound transducer48is different for each ultrasound endoscope12, and thus the static capacitance data is stored in the endoscope-side memory58provided in the ultrasound endoscope12. Note that the memory that stores the static capacitance data is not limited to the endoscope-side memory58, and may be a memory provided in the ultrasound processor device14. In that case, the static capacitance data of the ultrasound endoscope12is first stored in the memory provided in the ultrasound processor device14. In a case where the ultrasound processor device14recognizes the connection of the ultrasound endoscope12, the static capacitance data corresponding to the connected ultrasound endoscope12may be read. In a case where the ultrasound endoscope12is connected to the ultrasound processor device14through the ultrasound connector32a, the CPU152automatically recognizes the ultrasound endoscope12. The CPU152can access the static capacitance data stored in the endoscope-side memory58of the ultrasound endoscope12. The CPU152as a processor periodically corrects the transmission and reception sensitivity of each ultrasound transducer48based on the static capacitance data stored in the endoscope-side memory58, and decreases the difference in transmission and reception sensitivity of each ultrasound transducer48compared to before correction. The transmission and reception sensitivity is periodically corrected, and the difference in transmission and reception sensitivity is decreased, whereby image quality deterioration of an ultrasound image is suppressed.
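The correction step can be sketched in a few lines. The following Python fragment illustrates how per-element static capacitance data, as would be read from the endoscope-side memory, might be turned into a per-element sensitivity correction; the linear dB-per-pF model, the slope, the example capacitances, and the 60 V rated drive voltage in the usage lines are illustrative assumptions, not values from the embodiment.

```python
import numpy as np

# Minimal sketch of turning stored static capacitance data into per-element
# corrections. The linear dB-per-pF model and all numbers are assumptions.

def sensitivity_correction_db(capacitance_pf, slope_db_per_pf=0.05):
    """Correction in dB, indexed by ultrasound transducer element number."""
    cap = np.asarray(capacitance_pf, dtype=float)
    # High-capacitance wires lose sensitivity and get a positive correction;
    # low-capacitance wires get a negative one, flattening the response.
    return slope_db_per_pf * (cap - np.median(cap))

cap = np.array([95.0, 102.0, 110.0, 98.0])    # hypothetical data [pF]
corr_db = sensitivity_correction_db(cap)
tx_voltage = 60.0 * 10 ** (corr_db / 20.0)    # applied as drive voltage [V]
rx_gain_db = corr_db                          # or applied as receive gain
```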
An example where the transmission and reception sensitivity of each ultrasound transducer48is periodically corrected based on the static capacitance data of the signal wire112has been described. In addition, it is preferable that the transmission and reception sensitivity is also corrected for differences in the inherent sensitivity of each ultrasound transducer48. For example, the sensitivity of the ultrasound transducer48is stored in the endoscope-side memory58in addition to the static capacitance data of the signal wire112. The CPU152corrects the transmission and reception sensitivity of each ultrasound transducer48based on the static capacitance data and the sensitivity data stored in the accessible endoscope-side memory58, and decreases the difference in transmission and reception sensitivity of each ultrasound transducer48compared to before correction. The transmission and reception sensitivity is corrected based on the static capacitance data and the sensitivity data, and the difference in transmission and reception sensitivity is decreased, whereby image quality deterioration of an ultrasound image is suppressed. The sensitivity of the ultrasound transducer48can be acquired by measuring the ultrasound transducer48or from characteristic data obtained when the ultrasound transducer48is procured. Next, a preferred first form for periodically correcting the transmission and reception sensitivity will be described. The first form is a case where the transmission circuit144is used. The CPU152drives the ultrasound transducer48connected to the signal wire112having high static capacitance with a transmission signal of a higher voltage than the ultrasound transducer48connected to the signal wire112having low static capacitance. First, the CPU152specifies the ultrasound transducer48connected to the signal wire112having high static capacitance and the ultrasound transducer48connected to the signal wire112having low static capacitance based on the static capacitance data stored in the endoscope-side memory58. Under the control of the CPU152, for the ultrasound transducer48connected to the signal wire112having high static capacitance, the pulse generation circuit158generates a transmission signal of a higher voltage than the transmission signal for driving the ultrasound transducer48connected to the signal wire112having low static capacitance. The transmission circuit144supplies, for example, a transmission signal of a rated voltage to the ultrasound transducer48connected to the signal wire112having low static capacitance, supplies a transmission signal of a higher voltage than the rated voltage to the ultrasound transducer48connected to the signal wire112having high static capacitance, and drives a plurality of ultrasound transducers48. For example, in a case where the drive voltage of the transmission signal of the ultrasound transducer48is 60 V, the drive voltage of the transmission signal of the ultrasound transducer48connected to the signal wire112having high static capacitance is set to 63 V. FIG.10is a graph showing a relationship between the ultrasound transducers and the transmission and reception sensitivity after correction. The transmission and reception sensitivity of the ultrasound transducer48can be increased by increasing the drive voltage of the transmission signal of the ultrasound transducer48connected to the signal wire112having high static capacitance (transmission signal increase).
As a result, the difference in transmission and reception sensitivity between the ultrasound transducers48can be decreased compared to before correction. Next, a preferred second form for periodically correcting the transmission and reception sensitivity will be described. The second form is a case where the reception circuit142is used. The CPU152applies a higher gain value to the reception signal from the ultrasound transducer48connected to the signal wire112having high static capacitance than to the reception signal from the ultrasound transducer48connected to the signal wire112having low static capacitance. First, the CPU152specifies the ultrasound transducer48connected to the signal wire112having high static capacitance and the ultrasound transducer48connected to the signal wire112having low static capacitance based on the static capacitance data stored in the endoscope-side memory58. Under the control of the CPU152, the amplifier of the reception circuit142sets the gain value for the ultrasound transducer48connected to the signal wire112having high static capacitance higher than the gain value set for the ultrasound transducer48connected to the signal wire112having low static capacitance. The reception circuit142applies a given gain value to the ultrasound transducer48connected to the signal wire112having low static capacitance, applies a gain value higher than the given gain value to the ultrasound transducer48connected to the signal wire112having high static capacitance, and amplifies the reception signals received from the ultrasound transducers48. FIG.11is a graph showing a relationship between the ultrasound transducers and the transmission and reception sensitivity after correction. The transmission and reception sensitivity of the ultrasound transducer48can be increased by increasing the gain value for the ultrasound transducer48connected to the signal wire112having high static capacitance (gain value increase). As a result, the difference in transmission and reception sensitivity between the ultrasound transducers48can be decreased compared to before correction. Next, a preferred third form for periodically correcting the transmission and reception sensitivity will be described. The third form is also a case where the reception circuit142is used. The CPU152applies a higher attenuation value to the reception signal from the ultrasound transducer48connected to the signal wire112having low static capacitance than to the reception signal from the ultrasound transducer48connected to the signal wire112having high static capacitance. First, the CPU152specifies the ultrasound transducer48connected to the signal wire112having high static capacitance and the ultrasound transducer48connected to the signal wire112having low static capacitance based on the static capacitance data stored in the endoscope-side memory58. Under the control of the CPU152, the attenuator of the reception circuit142sets the attenuation value for the ultrasound transducer48connected to the signal wire112having low static capacitance higher than the attenuation value set for the ultrasound transducer48connected to the signal wire112having high static capacitance.
The reception circuit142applies a given attenuation value to the ultrasound transducer48connected to the signal wire112having high static capacitance, applies an attenuation value greater than the given attenuation value to the ultrasound transducer48connected to the signal wire112having low static capacitance, and attenuates the reception signals received from the ultrasound transducers48. FIG.12is a graph showing a relationship between the ultrasound transducers and the transmission and reception sensitivity after correction. The transmission and reception sensitivity of the ultrasound transducer48can be decreased by increasing the attenuation value for the ultrasound transducer48connected to the signal wire112having low static capacitance (attenuation value increase). As a result, the difference in transmission and reception sensitivity between the ultrasound transducers48can be decreased compared to before correction. In regard to the third form, the attenuation value may be decided at the time of shipment of the ultrasound endoscope12. For example, the endoscope-side memory58stores the signal wire112, the ultrasound transducer48, and the attenuation value in association with the static capacitance of the signal wire112. The attenuation value can then be applied to the reception signal based on the stored ultrasound transducer48and attenuation value. It is preferable that the difference in transmission and reception sensitivity between the ultrasound transducers48is equal to or less than 2 dB. In a case where this range is observed, image quality deterioration of an ultrasound image can be prevented. Next, a preferred disposition of the ultrasound transducers48in the ultrasound transducer array50will be described.FIG.13Ais a conceptual diagram of scanning lines corresponding to the ultrasound transducers of the ultrasound transducer array.FIG.13Bis a graph showing a relationship between the disposed ultrasound transducers and the static capacitance. As shown inFIG.13A, the ultrasound transducer array50is configured with a plurality of ultrasound transducers48of, for example, an element number1to an element number n.FIG.13Ashows scanning lines corresponding to the element numbers. In the ultrasound transducer array50, the ultrasound transducers48connected to the signal wires112having low static capacitance are disposed on the center side. The ultrasound transducers48connected to the signal wires112having high static capacitance are disposed at the ends positioned on both sides of the center. As shown in the graph ofFIG.13B, the static capacitance of the ultrasound transducer48disposed on the center side is lower than the static capacitance of the ultrasound transducer48disposed at the end. In a case where an ultrasound image is generated with the ultrasound endoscope12, the ultrasound image generated by the ultrasound transducers48at the center of the ultrasound transducer array50is important. Accordingly, the ultrasound transducers48having high transmission and reception sensitivity, for which correction is not needed, are disposed on the center side of the ultrasound transducer array50, whereby an ultrasound image can be generated with higher accuracy there than with the ultrasound transducers48for which correction is needed. Although the invention has been described, the invention is not limited to the above-described example, and various improvements or modifications may of course be made without departing from the spirit and scope of the invention.
EXPLANATION OF REFERENCES
10: ultrasonography system
12: ultrasound endoscope
14: ultrasound processor device
16: endoscope processor device
18: light source device
20: monitor
21a: water supply tank
21b: suction pump
22: insertion part
24: operating part
26: universal cord
28a: air and water supply button
28b: suction button
29: angle knob
30: treatment tool insertion port
32a: connector
32b: connector
32c: connector
34a: air and water supply tube
34b: suction tube
36: ultrasound observation part
38: endoscope observation part
40: distal end part
41: exterior member
42: bending part
43: flexible part
44: treatment tool lead-out port
45: treatment tool channel
46: ultrasound transducer unit
47: laminate
48: ultrasound transducer
49: piezoelectric body
50: ultrasound transducer array
52: electrode
52a: individual electrode
52b: transducer ground
54: backing material layer
55: internal space
58: endoscope-side memory
60: substrate
60a: side
60b: side
60c: side
62: electrode pad
64: ground electrode pad
76: acoustic matching layer
78: acoustic lens
80: filler layer
82: observation window
84: objective lens
86: solid-state imaging element
88: illumination window
90: cleaning nozzle
92: wiring cable
100: cable
102: outer coat
104: second cable bundle
106: resin layer
108: second shield layer
110: non-coaxial cable
112: signal wire
112a: conductor
112b: insulating layer
114: ground wire
116: first cable bundle
116a: distal end
118: first shield layer
130: fixing part
140: multiplexer
142: reception circuit
144: transmission circuit
146: A/D converter
148: ASIC
150: cine memory
151: memory controller
152: CPU
154: DSC
158: pulse generation circuit
160: phase matching unit
162: B mode image generation unit
164: PW mode image generation unit
166: CF mode image generation unit | 53,122 |
11857367 | DESCRIPTION OF THE EMBODIMENTS Before describing one particular embodiment in detail, a general overview of methods and devices utilising the concepts described is provided. It is recognised throughout imaging systems that an extended aperture has the potential to improve imaging performance [1]. When using ultrasound as an analysis tool, particularly in a clinical context, aperture size can be limited by the complexity and expense associated with an extended aperture system. Furthermore, ultrasound transducers having large physical dimensions to allow for a large aperture have limited adaptability to different applications. Taking clinical ultrasound imaging as one example, typical clinical ultrasound probes are controlled and moved by a physician to adapt to the contours and shapes of a human body. Physical ultrasound transducer size becomes a compromise between cost, ergonomics and image performance. Providing a method by which ultrasound image quality may be improved without altering dimensions of conventional ultrasound probes may be useful. Improvements associated with a wider coherent aperture have been shown in synthetic aperture ultrasound imaging [2], [3]. In those arrangements, an extended aperture is obtained by mechanically moving and tracking an ultrasound transducer. Detailed position and orientation tracking information is used to identify a relative position and orientation of obtained ultrasound images, which are then merged together into a final image [4]. However, tracking system noise and calibration errors propagate to coherent image reconstruction, causing image degradation. In practical terms, subwavelength localization accuracy is required to merge information from multiple poses. Such accuracy is challenging to achieve in conventional ultrasound calibration. For a practical implementation, a more accurate calibration technique is required [3], [5]. In addition, viability of the technique in-vivo is limited by long acquisition times (>15 minutes per image) which may break down a coherent aperture [6]. Resolution suffers from motion artefacts, tissue deformation and tissue aberration, all of which worsen with increased effective aperture size [7]. Methods according to some aspects and embodiments may provide a fully coherent multi-transducer ultrasound imaging system. That system can be formed from a plurality of ultrasound transducers which are synchronized, freely disposed in space and configured to transmit plane waves (PW). By coherently integrating different transducers, a larger effective aperture, in both transmit and receive, can be obtained and an improved final image can be formed. As described previously, coherent combination of information obtained by the different transducers requires the position of transmitters and receivers within the system to be known to subwavelength accuracy. In general, a method is described which can achieve an accurate subwavelength localization of ultrasound transmitters (and receivers) within a multi-transmitter system. Based on a spatial coherence function of backscattered echoes originating from a common point source and received by the same transducer, multiple transducers of a multi-transducer ultrasound imaging system can be localized without use of an external tracking device. Using plane waves (PW) generates a higher energy wavefield than in a synthetic aperture approach, therefore improving penetration. Use of PW also enables higher frame rates [8].
The principles of classic PW imaging are summarized below, together with the nomenclature used and an overview of multiple transducer beamforming. A method to accurately calculate the spatial location of the different transducers is described. Experimental phantom measurements are described and corresponding results, obtained using a multi-transducer system, are shown. Results are compared to conventional PW imaging using a single transducer and to incoherently compounded images from the plurality of transducers.

Theory

Ultrasound image quality improves by reducing the F number, which represents a ratio of focusing depth to aperture size. Expanding an aperture is a direct way to improve imaging performance. Hence, if information from different transducers can be coherently combined, significantly increasing the aperture size of a system, an enhanced image is expected. In one possible coherent multi-transducer method, a single transducer is used for each transmission to produce a plane wave (PW) that insonifies the entire Field of View (FoV) of the transmit transducer. Resulting echoes scattered from a medium are recorded using all transducers forming part of the multi-transducer system. A data collection sequence is performed by transmitting from each individual transducer in turn. Knowing the location of each transducer (and taking into account full transmit and receive path lengths), coherent summation of collected data from multiple transducers can be used to form a larger aperture and obtain an image, following a conventional PW imaging approach.

Multi-Transducer Notation and Beamforming

A 3-D framework consisting of N matrix arrays, freely disposed in space, having a partly shared field of view (FoV) is considered. Such a framework represents positioning of a plurality of ultrasound transducers. Other than an at least partly overlapping field of view, the transducers can be considered to be otherwise at arbitrary positions in space. The transducers are synchronized (in other words, in this arrangement, trigger and sampling times in both transmit and receive mode of the ultrasound transducers are the same). The ultrasound transducers are configured to take turns to transmit a plane wave into a medium. The arrangement is such that each transmitted wave is received by all transducers, including the transmitting one. Thus, a single plane wave shot yields N RF datasets, one associated with each receiving transducer. The framework is described using the following nomenclature:
- Points are noted in upper case letters (e.g. P);
- Vectors representing relative positions are represented in bold lowercase (e.g. r);
- Unit vectors are noted with a "hat"; and
- Matrices are written in bold uppercase (e.g. R).

Index convention is to use i for the transmitting transducer, j for the receiving transducer, h for transducer elements, and k for scatterers. Other indices are described when used. The set-up is defined by N matrix array transducers $T_i$, $i=1\ldots N$, with H elements, as illustrated inFIG.1. The position and orientation of $T_i$ is represented by the axes $\{\hat{x}_i,\hat{y}_i,\hat{z}_i\}$ and the origin $O_i$ defined at the centre of the transducer surface, with the $\hat{z}_i$ direction orthogonal to the transducer surface and directed away from transducer i. A plane wave transmitted by transducer $T_i$ is defined by the plane $P_i$, which can be characterized through the normal to the plane $\hat{n}_i$ and the origin $O_i$.
The RF data received by transducer j on element h at time t is noted $T_iR_j(h,t)$. The resulting image and all transducer coordinates are defined in a world coordinate system arbitrarily located in space, unless specifically referred to a transducer's local coordinate system, in which case the superscript i is used.

FIG.1is a geometric representation of a multi-transducer beamforming scheme. In the example shown inFIG.1, transducer T1transmits a plane wave and T2receives the echo scattered from $Q_k$ on element h. Using the notation set out above, plane wave imaging beamforming [8] can be extended to the multi-transducer scheme shown inFIG.1. Assuming that transducer $T_i$ transmits a plane wave, the image point to be beamformed located at $Q_k$ can be computed from the echoes received at transducer $T_j$ as:

$$s_{i,j}(Q_k)=\sum_{h=1}^{H}T_iR_j\left(h,\,t_{i,h,j}(Q_k)\right)=\sum_{h=1}^{H}T_iR_j\left(h,\,\frac{D_{i,h,j}(Q_k)}{c}\right)\tag{1}$$

where c is the speed of sound of the medium and D is the distance travelled by the wave, which can be split into the transmit and the receive distances:

$$D_{i,h,j}(Q_k)=d_T(Q_k,P_i)+d_{R,h}(Q_k,O_j+\mathbf{r}_h)\tag{2}$$

with $d_T$ measuring the distance between a point and a plane (transmit distance), and $d_{R,h}$ being the distance between a point and the receive element (receive distance). These distances can be computed as follows:

$$d_T(Q_k,P_i)=\left|(O_i-Q_k)\cdot\hat{n}_i\right|\tag{3}$$

and

$$d_{R,h}(Q_k,O_j+\mathbf{r}_h)=\left\|Q_k-(O_j+\mathbf{r}_h)\right\|=\left\|Q_k-(O_j+\mathbf{R}_j\mathbf{r}_h^{\,j})\right\|\tag{4}$$

where $\|\cdot\|$ is the usual Euclidean distance, and $\mathbf{R}_j=[\hat{x}_j\;\hat{y}_j\;\hat{z}_j]$ is a 3×3 matrix parameterized through three rotation angles $\phi_j=\{\phi_x,\phi_y,\phi_z\}_j$ that, together with the offset $O_j$, characterize the position and orientation of transducer $T_j$ with 6 parameters [9]. With the total distances computed, equation (1) can be evaluated for each pair of transmit-receive transducers, and the total beamformed image $S(Q_k)$ can be obtained by coherently adding the individually beamformed images:

$$S(Q_k)=\sum_{i=1}^{N}\sum_{j=1}^{N}s_{i,j}(Q_k)\tag{5}$$

Calculation of the Transducer Locations

In order to carry out the coherent multi-transducer compounding described above, the position and orientation of each imaging transducer is required. This then allows for computation of the travel time of a transmitted wave to any receiving transducer. This section describes one method to accurately calculate those positions by exploiting consistency of received RF data when transducers receive simultaneously from the same transmitted (and scattered) wave. The method described assumes the medium is substantially homogeneous except for K point scatterers located at positions $Q_k$, $k=1\ldots K$, and all transducers are considered identical. The following transmit sequence is considered: a plane wave is transmitted by transducer $T_i$ and received by the N transducers forming the multi-transducer system; a plane wave is then transmitted by $T_j$ and also received by all transducers; the process continues until the N transducers have transmitted in turn. During the time in which each transmitter operates in turn, it is assumed that the system and medium under study remain perfectly still. The wavefield resulting from the same scatterer and received by the same transducer $T_j$, when transmitting with all transducers, must be correlated or have spatial covariance [10]. That is to say, for each element h, the only difference in timing is the transmit time (the receive time is equal since the receiving transducer is the same). The received signals at element h will be time correlated when the difference in transmit time is compensated for.
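A minimal sketch of equations (1)-(5) in code may help make the distance bookkeeping concrete. The following Python/NumPy fragment assumes a 2-D geometry, nearest-sample lookup instead of proper interpolation, and toy data layouts (rf[i][j] holding the TiRj channel data); it illustrates the computation rather than reproducing any particular implementation.

```python
import numpy as np

# Sketch of coherent multi-transducer PW beamforming, eqs. (1)-(5), in 2-D.
# rf[i][j] is the T_iR_j dataset of shape (H, n_samples); origins, rotations
# (2x2 matrices) and plane normals encode each transducer's pose in world
# coordinates. All shapes and fs are assumptions for illustration.

def beamform_point(Qk, rf, origins, rotations, normals, elem_local,
                   c=1540.0, fs=39e6):
    """Return S(Qk): the coherent sum over all transmit-receive pairs."""
    N = len(origins)
    H = elem_local.shape[0]
    S = 0.0
    for i in range(N):                                 # transmitter T_i
        dT = abs(np.dot(origins[i] - Qk, normals[i]))  # eq. (3)
        for j in range(N):                             # receiver T_j
            # World positions of receive elements, eq. (4).
            elems = origins[j] + elem_local @ rotations[j].T
            dR = np.linalg.norm(Qk - elems, axis=1)
            t = (dT + dR) / c                          # eq. (2) travel times
            idx = np.clip(np.round(t * fs).astype(int),
                          0, rf[i][j].shape[1] - 1)
            S += rf[i][j][np.arange(H), idx].sum()     # eqs. (1) and (5)
    return S
```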
One method comprises finding the "optimal" parameters for which the time correlation between received RF datasets sharing a receive transducer is at a maximum for all scatterers in the common FoV. Since the reception time depends also on the speed of sound in the medium c and on the position of the scatterers $Q_k$, the unknown parameters are:

$$\theta=\{c,\,Q_1,\ldots,Q_K,\,\phi_1,O_1,\ldots,\phi_N,O_N\}\tag{6}$$

Note that, since the parameters that define transducer locations in space depend on the definition of the world coordinate system, the vector of unknown parameters can be reduced by defining the world coordinate system the same as the local coordinate system of one transducer. The similarity between signals received by the same element can be computed using the normalized cross-correlation NCC:

$$\mathrm{NCC}\left(y_{i,h,j,k}(\tau),y_{j,h,j,k}(\tau)\right)=\frac{\sum_{\tau=0}^{T}\left(y_{i,h,j,k}(\tau)-\bar{y}_{i,h,j,k}\right)\left(y_{j,h,j,k}(\tau)-\bar{y}_{j,h,j,k}\right)}{\left[\sum_{\tau=0}^{T}\left(y_{i,h,j,k}(\tau)-\bar{y}_{i,h,j,k}\right)^{2}\sum_{\tau=0}^{T}\left(y_{j,h,j,k}(\tau)-\bar{y}_{j,h,j,k}\right)^{2}\right]^{1/2}}\tag{7}$$

where $y_{i,h,j,k}$ represents the signal backscattered from $Q_k$ and received by element h on transducer j when transmitting from $T_i$, and can be calculated as:

$$y_{i,h,j,k}(\tau;\theta)=T_iR_j\left(h,\,\tau+t_{i,h,j}(Q_k;\theta)\right)\quad\text{with }\tau\in[0,T]\tag{8}$$

T being the transmit pulse length. Then, the total similarity, $X_{j,k}$, between RF data received by the same transducer j can be calculated taking into account all the elements as:

$$X_{j,k}(\theta)=\sum_{i}^{N}\sum_{h}^{H}\mathrm{NCC}\left(G_{i,h,j,k}(\tau;\theta),G_{j,h,j,k}(\tau;\theta)\right)W_{i,h,j,k}(\theta)\,W_{j,h,j,k}(\theta)\tag{9}$$

where $G_{i,h,j,k}=\sqrt{y_{i,h,j,k}^{2}+\mathcal{H}\{y_{i,h,j,k}\}^{2}}$ is the envelope of the signal $y_{i,h,j,k}$, with $\mathcal{H}\{\cdot\}$ denoting the Hilbert transform, and $W_{i,h,j,k}$ is defined as:

$$W_{i,h,j,k}(\theta)=\frac{1}{2}+\frac{1}{2H}\sum_{h_b\neq h}^{H}\mathrm{NCC}\left(y_{i,h,j,k}(\tau;\theta),\,y_{i,h_b,j,k}(\tau;\theta)\right)\quad\text{with }h,h_b\in[1,\ldots,H]\tag{10}$$

The function $W_{i,h,j,k}$ is an element-wise weight that represents how well each element correlates with the rest of the elements in the same transducer j. If intra-transducer channel correlation is not considered, the undesired scenario where the wave receive times are erroneous but in a similar manner for different transmitting transducers could yield a low dissimilarity value for the wrong parameters. Summing over all receiving transducers and scatterers yields a final cost function to be maximized:

$$X(\theta)=\sum_{j}^{N}\sum_{k}^{K}X_{j,k}(\theta)\tag{11}$$

The "optimal" parameters $\bar{\theta}$, which include the relative position and orientation of all transducers involved, the speed of sound in the medium, and the position of the scatterers within the medium, can be found by applying a search algorithm that maximizes the cost functional X:

$$\bar{\theta}=\arg\max_{\theta}X(\theta)\tag{12}$$

Equation (12) can be solved by using gradient-based optimization methods [11].

Methods

FIG.2illustrates schematically an experimental setup comprising two ultrasound transducers. The method was tested experimentally using 2 identical linear arrays having a partly shared field of view (FoV) of an ultrasound phantom. The identical linear arrays were located on the same plane (y=0). In such a 2-D framework, the parameters that define the position and orientation of the transducers are reduced to one rotation angle and one 2-D translation [9]. The experimental sequence starts with transducer1transmitting a plane wave into the region of interest, in which 5 scatterers are located in the common FoV of transducers1and2. The backscattered ultrasound field is received by both transducers in the system (T1R1and T1R2). Under the same conditions, the sequence is repeated, transmitting with transducer2and acquiring the backscattered echoes with both transducers, T2R1and T2R2.

Phantom

Acquisitions were performed on a custom-made wire target phantom (200 μm diameter) submersed in distilled water.
The phantom was positioned within the overlapping imaging region of the transducers, so that all scatterers were in the common FoV.

Experimental Setup

The experimental setup comprises two synchronized 256-channel Ultrasound Advanced Open Platform (ULA-OP 256) systems (MSD Lab, University of Florence, Italy) [12]. Each ULA-OP 256 system was used to drive an ultrasonic linear array made of 144 piezoelectric elements with a 6 dB bandwidth ranging from 2 MHz to 7.5 MHz (imaging transducer LA332, Esaote, Firenze, Italy). Before acquisition, the probes were carefully aligned to be located in the same elevational plane using a precise optomechanical setup. Each probe was held by a 3-D printed shell structure connected to a double-tilt and rotation stage and then mounted on an xyz translation and rotation stage (Thorlabs, USA). The imaging plane of both transducers (y=0) was that defined by two parallel wires immersed in the water tank. FIG.3illustrates the experimental setup ofFIG.2in more detail. Components shown inFIG.3are labelled with letters: (A) Linear array. (B) 3-D printed probe holder. (C) Double-tilt and rotation stage. (D) Rotation stage. (E) xyz translation stage.

Pulse Sequencing and Experimental Protocol

Two independent experiments were carried out. First, a stationary acquisition was performed, in which both probes were mounted and fixed in the optomechanical setup described above. The second experiment consisted of a free-hand demonstration. In this case, both probes were held and controlled by an operator. The transducer movements were carefully restricted to the same elevational plane, i.e. y=0, and to keep two common targets in the shared FoV. Two different types of pulse sequences were used. During the stationary experiment, for each probe and in an alternating sequence (only one transducer transmitting at a time while both probes receive), 121 plane waves, covering a total sector angle of 60° (from −30° to 30°, 0.5° step), were transmitted from the 144 elements of each probe at 3 MHz with a pulse repetition frequency equal to 4000 Hz. The total sector angle between transmitted plane waves was chosen to be approximately the same as the angle defined between the probes. RF raw data scattered up to 77 mm deep were acquired at a sampling frequency of 39 MHz. No apodization was applied either on transmission or reception. The total time for this sequence was 60.5 ms. During the free-hand demonstration, 21 plane wave angles (from −5° to 5°, 0.5° step) were transmitted from each probe and RF raw data backscattered up to 55 mm deep were acquired. The remaining settings were identical to the fixed probe experiment. The total acquisition time using this sequence was 1 s.

Data Processing

An initial estimate of the parameters $\theta_0=\{c,\,Q_1,\ldots,Q_K,\,\phi_1,O_1,\phi_2,O_2\}$ needed to start the optimization algorithm was chosen as follows:
- The speed of sound of the propagation medium was chosen according to the literature; in the case of water this is c=1496 m/s [13].
- Considering the world coordinate system to be the same as the local coordinate system of transducer1($\phi_1=0$, $O_1=[0, 0]$), the parameters $\{\phi_2, O_2\}$ that define the position of transducer2were calculated by using point-based image registration [14].
- For the scatterer positions $Q_k$, the initial value was calculated using a best-fit one-way geometric delay for the echoes returning from the targets, as described in [15].

Optimization was done using all the targets within the shared FoV.
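A minimal sketch of the cost of equations (7)-(11), for one receiving transducer j and one scatterer k, is given below. The window extraction around the predicted arrival times, the data layout, and the use of signal envelopes throughout are simplifying assumptions; the total cost X(θ) of equation (11) would sum this quantity over all receiving transducers and scatterers.

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of X_{j,k}(theta), eqs. (7)-(11). windows[i][h] is the RF segment
# received on element h of transducer j when transmitting from T_i, already
# cut around the predicted arrival time t_{i,h,j}(Q_k; theta). The windowing
# and data layout are assumed for illustration.

def ncc(a, b):
    """Zero-lag normalized cross-correlation, eq. (7)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def coherence_cost_jk(windows, j):
    """Eq. (9) with the element-wise weights of eq. (10)."""
    N, H = len(windows), len(windows[0])
    W = np.zeros((N, H))
    for i in range(N):
        for h in range(H):
            # Eq. (10): agreement of element h with the other elements of
            # the same receiving transducer for the same transmitter i.
            s = sum(ncc(windows[i][h], windows[i][hb])
                    for hb in range(H) if hb != h)
            W[i, h] = 0.5 + s / (2 * H)
    # Envelopes G of eq. (9) computed via the Hilbert transform.
    G = [[np.abs(hilbert(w)) for w in row] for row in windows]
    return sum(ncc(G[i][h], G[j][h]) * W[i, h] * W[j, h]
               for i in range(N) for h in range(H))
```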
For the stationary experiment, since there was no motion, only one set of optimal parameters is needed, and all RF data corresponding to plane waves transmitted at different angles can be beamformed using the same optimal parameters. However, to validate the optimization algorithm, 121 optimal parameter sets were calculated, one per transmit angle. For the free-hand demonstration, each frame was generated using a different set of optimal parameters, where each subsequent optimization was initialized with the optimum value of the previous frame. The proposed method was compared with conventional B-mode imaging using a single transducer and with the incoherent compounding of the B-mode images acquired by two independent transducers. The images acquired during the stationary experiment were used for this image performance analysis. A final image was obtained using equation (5), by coherently adding the totality of the individual images acquired in one sequence (T1R1, T1R2, T2R1, T2R2):

$$S(Q_k)=s_{1,1}(Q_k)+s_{1,2}(Q_k)+s_{2,1}(Q_k)+s_{2,2}(Q_k)\tag{13}$$

Spatial resolution was calculated from the point spread function (PSF) on a single scatterer. An axial-lateral plane for 2-D PSF analysis was chosen by finding the location of the peak value in the elevation dimension from the envelope-detected data. Lateral and axial PSF profiles were taken from the centre of the point target. The lateral resolution was then assessed by measuring the width of the PSF at the −6 dB level, and the axial resolution as the dimension of the PSF at the −6 dB level in the axial (depth) direction. In addition, the performance of the proposed multitransducer system, in terms of image quality such as resolution, was described using a frequency domain or k-space representation. Axial-lateral RF PSFs were extracted from the beamformed data and the k-space representation was calculated using a 2-D Fourier transform. While the axial resolution is determined by the transmitted pulse length and the transmit aperture function, the lateral response of the system can be predicted by the convolution of the transmit and receive aperture functions [16].

Results

The 121 optimal parameter sets calculated for each of the transmit angles in the stationary experiment converged to the same results. The initial and optimal values obtained are summarized in Table I below.

TABLE I
INITIAL ESTIMATE AND OPTIMUM VALUES OF THE SYSTEM PARAMETERS

Parameter | Initial value | Optimum value
c | 1496 m/s | 1450.4 m/s
Q1 | [8.54, 28.48] mm | [8.66, 28.16] mm
Q2 | [3.78, 37.31] mm | [3.84, 36.87] mm
Q3 | [−1.10, 45.05] mm | [−1.15, 45.41] mm
Q4 | [−6.00, 54.07] mm | [−6.03, 53.94] mm
Q5 | [−10.68, 62.00] mm | [−10.67, 62.12] mm
Φ2 | 55.33° | 56.73°
O2 | [39.55, 22.83] mm | [38.80, 23.06] mm

FIG.4shows graphically coherent multi-transducer images obtained using initial estimates of parameters and optimum values, the data corresponding to that shown in Table I. It can be seen that a blurring effect on a PSF in an image obtained using initial estimates of positional parameters may be compensated after optimization methods are implemented. The convergence illustrated in Table I and inFIG.4is also validated by results originating from the free-hand experiment. In this case, each transmit angle was optimized over the total acquisition time. After calculating an initial estimate of positional parameters for the first transmitted PW, each subsequent optimization was initialized with the optimum value of the previous transmission event.
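The warm-started optimization loop used for the free-hand data can be sketched as follows, assuming a callable total_coherence(theta, frame) standing in for the cost X(θ) of equation (11) evaluated on one frame's RF data; the choice of BFGS (with numerical gradients) is an assumption consistent with the gradient-based methods mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the warm-started free-hand loop: each frame's parameter search
# starts from the previous frame's optimum, so the solver stays within the
# capture range set by the pulse length. total_coherence is an assumed
# callable evaluating X(theta) on one frame's RF data.

def track_parameters(frames, theta0, total_coherence):
    theta = np.asarray(theta0, dtype=float)  # registration-based estimate
    history = []
    for frame in frames:
        res = minimize(lambda th: -total_coherence(th, frame), theta,
                       method="BFGS")        # maximize X by minimizing -X
        theta = res.x                        # warm start for the next frame
        history.append(theta)
    return history
```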
FIG.5is a box-plot of the normalized values of the optimal parameters which define the rigid-body transformation between coordinate systems and the speed of sound over the duration of the experiment. As could be predicted, the rotation and translation parameters present the largest value range, whilst the speed of sound in the medium can be considered substantially constant. The averaged value of the optimal speed of sound over the acquisition time was 1466.00 m/s and the standard deviation 0.66 m/s. FIG.6shows images of a wire phantom obtained using a single transducer (T1R1), incoherently combined collected data (envelope-detected images T1R1, T2R2), and coherently combined collected data (T1R1, T1R2, T2R1, T2R2) from two ultrasound transducers. Comparing the resulting images from a single transducer with those from the multi-transducer method, it can be seen that the reconstructed images of the wire targets were clearly improved. The PSFs of the three images can be compared.FIGS.7and8show a corresponding transverse cut of the PSF at the scatterer depth indicated inFIG.6for each of the images, using a single PW at 0° and compounding 121 PW over a total angle range of 60°, respectively. To analyse the multi-transducer method, a world coordinate system that leads to the best resolution and a more conventional PSF shape is used. This coordinate system is defined by rotating the local coordinate system of transducer T1by the bisector angle between the two transducers. In this coordinate system, the best possible resolution is aligned with the x-axis. The incoherent multitransducer results show benefit from the optimization, since the optimum parameters were used to incoherently compound the envelope-detected sub-images T1R1and T2R2. The effect of apodization on the multi-coherent PSF, accentuating the low lateral frequencies, was analysed in the PSF generated compounding 121 PW over a total angle range of 60°. The performance of all of them is summarized in Table II.

TABLE II
IMAGING PERFORMANCE FOR THE DIFFERENT METHODS

Method | Axial resolution [mm] | Lateral resolution [mm] | 1st sidelobe [dB] | 2nd sidelobe [dB]
PW Conventional | 0.9445 | 0.6674 | −14.96 | −20.79
Multi Incoherent | 0.9474 | 0.7837 | −20.87 | —
Multi Coherent | 0.8109 | 0.1817 | −11.46 | −7.01
PW Conventional (121 angles) | 0.9002 | 0.6546 | −20.22 | —
Multi Coherent w/o apodization (121 angles) | 0.8246 | 0.1911 | −9.94 | −9.64
Multi Coherent w/ apodization (121 angles) | 0.8391 | 0.2278 | −20.73 | −9.45

It can be seen that the coherent multi-transducer acquisition results in the best lateral resolution, and the worst lateral resolution corresponds to the incoherent image generated by combining the independent images acquired by both transducers. Large differences are observed in the behaviour of the side lobes, which are higher in the coherent multi-transducer method. When a single PW is used, the biggest difference is between the second side lobes, being raised by 13 dB for the coherent multi-transducer method compared to the conventional single transducer method, while the difference of the first side lobes is 3.5 dB. This suggests that whilst significant image improvements can be achieved, the image may suffer from the effects of side lobes. Apodization results in a significant reduction of the first side lobe and a resolution improvement of 65% compared to a conventional image acquired by a single transducer. FIG.9shows a comparison of envelope-detected PSFs and k-space representation obtained using a single transducer and a coherent multi-transducer.
The PSFs obtained using a single transducer (T1R1) and coherently compounding the images acquired by both transducers were analysed in the k-space representation.FIG.9shows the corresponding results using a single PW at 0°. Images are represented in the local coordinate system of transducer1. An important consequence of the linear system is that the superposition principle can be applied. As expected, the total k-space representation shows an extended lateral region which corresponds to the sum of the four individual k-spaces that form an image in the coherent multi-transducer method. It will be appreciated that since both transducers are identical but have different spatial locations, they exhibit the same k-space response (identical transmit and receive aperture functions) but in different spatial locations. The discontinuity in the aperture of the system, given by the separation between the transducers, leads to gaps in the spatial frequency space. The discontinuity can be filled by compounding PW over an angle range similar to the angle defined by the two transducers. FIG.10illustrates envelope-detected PSFs and k-space representations of a multitransducer ultrasound method, compounding 121 plane waves covering a total angle range of 60°, without and with apodization. In particular,FIG.10shows the resulting PSF after compounding 121 angles with a separation of 0.5°, which define a total sector of 60°, and the corresponding continuous k-space. The topography of the continuous k-space can be re-shaped by weighting the data from the different images which are combined to form a final image. A more conventional transfer function, displaying reduced side lobes, can be created by accentuating the low lateral spatial frequencies, which are mostly defined by the sub-images T1R2and T2R1.FIG.10shows a PSF and its corresponding k-space representation generated by weighting the sub-images T1R1, T1R2, T2R1and T2R2with the vector [1; 2; 2; 1].

Discussion

The study described introduces a new synchronized multi-transducer ultrasound system and method which is capable of significantly outperforming conventional PW ultrasound imaging by coherently adding all individual images acquired by different transducers. In addition to the extended FoV that the use of multiple transducers allows for, improvements in resolution have been experimentally shown. Furthermore, a final image formed from a coherent combination of sub-images may present different characteristics to those shown in the individual images. For example, a final image may have areas with optimal performance in a common FoV of multiple transducers, and its quality may deteriorate outside this region where the number of transducers with a shared FoV decreases. The worst regions of a final image will typically be defined by the performance of individual images and correspond to the parts of the combined "final" image with no overlapping FoV. Different transmit beam profiles (such as diverging waves) may increase the overlapped FoV and extend the high-resolution areas of a final image. The significant differences between the k-space representations for the single and the multi-transducer methods shown in the Figures further explain the differences in imaging performance. The more extended the k-space representation, the higher the resolution [17]. The appearance of the total response of a multi-transducer system can be explained using the rotation and translation properties of the 2-D Fourier transform.
The total extent of the k-space representation determines the highest spatial frequencies present in the image and therefore dictates resolution. The relative amplitudes of the spatial frequencies present, i.e. the topography of k-space, determine the texture of imaged targets. Weighting the data from the different transducers can reshape the k-space, accentuating certain spatial frequencies, and allows for the creation of a more conventional system response. The presence of uniformly spaced unfilled areas in a system's k-space response may indicate the presence of grating lobes in the system's spatial impulse response [16]. A sparse array (such as the two-transducer system described above) creates gaps in the k-space response. If a k-space has negligible gaps, the k-space magnitude response becomes smooth and continuous over a finite region. This is a motivation to find and use a good spatial distribution for the transducers in a system, and suggests that while it may be beneficial to compound PW at different angles, it may not always be necessary in order to produce an improved image.

Wavefront aberration caused by an inhomogeneous medium can limit the quality of ultrasound images and is one significant barrier to achieving diffraction-limited resolution with large aperture transducers [18]. The method and apparatus described above have been tested in relation to a homogeneous medium, with the speed of sound constant along the propagation path. However, since the speed of sound is a parameter which may be optimised, the method described can be adapted to apply to non-homogeneous media in which the speed of sound varies in space. In this case, for example, the medium could be modelled by piecewise continuous layers. The optimization method could be applied in a recursive manner, dividing the FoV into appropriate sub-areas with different speeds of sound. More accurate speed of sound estimation may allow for improved beamforming and allow for higher order phase aberration correction. Furthermore, speed of sound maps are of great interest in tissue characterization [19], [20].

To successfully improve the PSF, the multi-transducer method described above requires coherent alignment of the backscattered echoes from multiple transmit and receive positions. This requirement is achieved by a precise knowledge of all transducer positions, which in practice is not possible to achieve by manual measurements or using electromagnetic or optical trackers [21]. The method described above allows for precise and robust transducer location based upon the spatial coherence of backscattered echoes coming from the same scatterer and being received by the same transducer. The precise location of the transducers required for coherent image creation is calculated by optimizing spatial coherence. The use of gradient-descent methods requires an initial estimate of the parameters close enough to the global maximum of the cost function. The distance between maxima, which corresponds to the pulse length, dictates this tolerance. For the experimental configuration described above, this is approximately 1.5 μs (equivalent to 2.19 mm). This tolerance value can be achieved by image registration [14]. In practice, in a free-hand situation, and assuming that at some initial instant the registration is accurate, the initial guess can be ensured if the transducers move relatively little in the time between two transmissions. The method has been validated in a free-hand demonstration.
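A possible shape for this optimization is sketched below (Python; delay_and_gather, the stand-in data and the target list are hypothetical placeholders, and the derivative-free Nelder-Mead simplex method [11] is used here merely as one optimizer choice, not as the described method). The search adjusts the candidate pose of the second transducer and the speed of sound so as to maximise the normalised cross-correlation (NCC) of echoes from the same scatterer:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
rf_t1 = rng.standard_normal((144, 2048))  # stand-in channel data, T1 transmitting
rf_t2 = rng.standard_normal((144, 2048))  # stand-in channel data, T2 transmitting
targets = [(0.0, 0.05)]                   # hypothetical point-target position(s) [m]

def ncc(a, b):
    # Normalised cross-correlation of two aligned echo windows.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def delay_and_gather(rf_a, rf_b, target, phi2, o2, c):
    # Hypothetical placeholder: a real implementation would apply the
    # candidate geometry (phi2, o2) and speed of sound c to delay the raw
    # channel data and extract the echo window belonging to `target`.
    return rf_a[0, :64], rf_b[0, :64]

def cost(params):
    phi2, ox, oy, c = params
    # Negative total NCC over all targets (minimised, hence NCC is maximised).
    return -sum(ncc(*delay_and_gather(rf_t1, rf_t2, t, phi2, (ox, oy), c))
                for t in targets)

x0 = np.array([np.deg2rad(50.0), 0.040, 0.025, 1480.0])  # guess from image registration
result = minimize(cost, x0, method="Nelder-Mead")

The initial guess x0 must fall within the tolerance discussed above (roughly one pulse length) for the search to converge to the global maximum.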
It will be appreciated that the experimental setup and associated method described above are limited in that they assume all transducers are located on the same plane, i.e. that they share the same imaging plane. An alignment procedure was performed before image acquisition to obtain the images shown in the Figures. The use of a 3-D matrix array allows those limitations to be overcome and can be used to build up higher-resolution volumes than current ultrasound transducer aperture sizes allow. It will also be appreciated that for convergence of the optimization algorithm described to a unique solution, N point scatterers (the same as the number of transducers) may be needed in the common FoV. In reality, a plurality of notable scatterers within a medium are likely, so the limitation is not significant. Whilst the method has been validated for point scatterers, different scatterers may require a different approach. Different transmit and receive paths experience unique clutter effects [22], generating spatially incoherent noise and PSF distortions that can form the basis for further work.

In conventional PW imaging, frame rate is limited by travel and attenuation times, which depend on the speed of sound and the attenuation coefficient. For the experimental setup described above, the minimum time between two insonifications is around 94 μs. Hence the maximum frame rate is limited to 10.7 kHz, which is reduced when different compounding angles are used. In the case of a multi-transducer method, the frame rate is reduced by the number of transducers as Fmax/N.
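The frame-rate figures quoted above can be checked with simple arithmetic (a worked example only, not part of the described system):

t_min = 94e-6                    # minimum time between two insonifications [s]
f_max = 1.0 / t_min              # ~10.6e3 Hz, i.e. the ~10.7 kHz quoted above
n_transducers = 2
f_multi = f_max / n_transducers  # multi-transducer frame rate Fmax/N, ~5.3 kHz
print(f_max, f_multi)

The ~5.3 kHz value matches the 5350 Hz frame rate reported below for the multi-coherent acquisition.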
FIG. 11 shows a set of individual sub-images forming a final "multi coherent" image. These were obtained by individually beamforming the 4 RF datasets acquired from one complete sequence, i.e. transmitting a PW at 0° with probe T1 and simultaneously receiving with both probes (T1R1, T1R2) and repeating the transmission with probe T2 (T2R1, T2R2). The optimum parameters used to reconstruct the images are φ2=53.05°; O2=[41.10, 25.00] mm; c=1437.3 m/s. Lines indicate the fields of view of transducers T1 (upright) and T2 (slanted).

FIG. 12 shows experimental images of a contrast phantom obtained by different methods. FIG. 12(a) shows coherent plane wave compounding of 41 PW with transducer T1; FIG. 12(b) shows coherent plane wave compounding of 41 PW with transducer T2; FIG. 12(c) shows the coherent multi-transducer method with transmission of a single PW at 0° from each transducer; FIG. 12(d) shows the coherent multi-transducer method with additional compounding, each transducer emitting 41 PW. The optimum parameters used to reconstruct the multi-coherent images are φ2=53.05°; O2=[41.10, 25.00] mm; c=1437.3 m/s. Lines indicate the fields of view of transducers T1 (upright) and T2 (slanted).

The results obtained from the anechoic lesion phantom are presented in FIGS. 11 and 12, where the field of view (FoV) of each transducer is indicated by upright and slanted lines (T1 and T2 respectively). FIG. 11 shows the individual sub-images that form the final multi coherent image and that are obtained by beamforming the 4 RF datasets acquired in a single cycle of the imaging process, i.e. transmitting a PW at 0° with probe T1 and simultaneously receiving with both probes (T1R1, T1R2) and repeating the transmission with probe T2 (T2R1, T2R2). Reconstruction of these sub-images is possible after finding, through optimization, the relative positions of the probes.

A direct result of the combination of these 4 sub-images is the extended FoV of the multi coherent image. FIG. 12(c) shows a multi coherent image obtained by coherently compounding the 4 sub-images. It can be seen that, as predicted by the k-space representation, any overlapping regions in the sub-images will contribute to improved resolution in the final multi coherent image because of the effective enlarged aperture created. Images acquired using coherent PW compounding with a single transducer (T1R1 and T2R2, compounding 41 PW angles) and coherently compounding the RF data acquired by both transducers (using equation (6)), each transmitting a single PW at 0° and each transmitting 41 PW, are compared in FIG. 12.

TABLE III
IMAGING PERFORMANCE FOR THE DIFFERENT METHODS ASSESSED USING THE CONTRAST PHANTOM

Method | Lateral resolution [mm] | Contrast [dB] | CNR [−] | Frame rate [Hz]
Single T1R1 (1 PW at 0°) | 2.633 | −6.708 | 0.702 | 10700
Single T1R1 compounding (41 PW, sector 20°) | 1.555 | −8.260 | 0.795 | 260
Multi Coherent (1 PW at 0°) | 0.713 | −7.251 | 0.721 | 5350
Multi Coherent compounding (41 PW per array, sector 20°) | 0.693 | −8.608 | 0.793 | 130

Table III above shows the corresponding imaging metrics in terms of lateral resolution, contrast, CNR and frame rate. To reconstruct the coherent multi-transducer images, the initial estimate of the parameters was chosen as described above and 3 strong scatterers generated by nylon wires were used in the optimization. It can be seen that, in general, the multi coherent image has better defined edges, making the border easier to delineate than in an image obtained by a single transducer. The reconstructed images of the wire targets are clearly improved, the speckle size is reduced and the anechoic region is easily identifiable from the phantom background. Resolution significantly improved in the coherent multi-transducer method without frame rate sacrifice and at a small expense of contrast. For the single transducer with coherent compounding, the lateral resolution measured at the first target position is 1.555 mm (measured at a frame rate of 260 Hz). Using the multi-probe image (without additional compounding) the resolution improved to 0.713 mm (with an improved frame rate of 5350 Hz). In the single transducer case, a lesion is visible with a contrast of −8.26 dB and a CNR of 0.795, while both metrics are slightly reduced in the multi-transducer coherent image (without additional compounding) to −7.251 dB and 0.721, respectively. Using compounding with 41 PW per probe, these improve to −8.608 dB and 0.793. These results suggest that target detectability is a function of both resolution and contrast.

The dependence of the imaging depth on the angle between both probes has also been investigated. FIG. 13 shows a spatial representation of the FoV of two linear arrays and the depth of the common FoV, measured at the intersection of the centres of both individual fields of view. The depth of the common FoV as a function of the angle between both probes when transmitting plane waves at 0° is described. It can be seen from FIG. 13 that imaging depth increases at larger angles between the probes.

Described arrangements introduce a coherent multi-transducer ultrasound system that significantly outperforms single transducer arrangements through coherent combination of signals acquired by different synchronized transducers that have a shared FoV. Although the experiments described were performed as a demonstration in 2-D using linear arrays, the framework proposed encompasses the 3rd spatial dimension.
The use of matrix arrays capable of volumetric acquisitions may enable a true 3-D demonstration. Since the multi-coherent image is formed from 4 RF datasets that are acquired in two consecutive transmissions, it will be appreciated that tissue and/or probe motion must not break the coherence between consecutive acquisitions. To ensure this is the case, high frame rate acquisition is useful. Whilst described arrangements use plane waves, different transmit beam profiles such as diverging waves may increase the overlapped FoV, extending the final high-resolution image. Indeed, there is a complex interplay between FoV and resolution gain as probes are moved relative to one another. In the method presented, overlap of the insonated regions allows the relative probe positions to be determined. Any overlap in either transmit or receive sensitivity fields contributes to improved resolution because of the enlarged aperture of the combination of transducers. The final image achieves an extended FoV, but the resolution will only improve in regions of overlapping fields. This is best towards the centre, where the overlap includes transmission and reception for both individual probes. There is also an improvement (albeit lesser) in regions where the overlap is only on transmit or receive fields (see FIGS. 11 and 12). Thus, there are net benefits, but of different kinds, in different locations. In a similar way, this also will determine the imaging depth achieved by described methods. Whilst the relative positions of the individual transducers and the angles of the transmitted plane waves determine the depth of the common FoV (see FIG. 13), an improvement of imaging sensitivity in deep regions is expected since the effective receive aperture is larger than in a single probe system.

Improvements in resolution are primarily determined by the effective extended aperture rather than by compounding PW at different angles. Results show that in the coherent multi-transducer method there is a trade-off between resolution and contrast [18]. While a large gap between the probes will result in an extended aperture which improves resolution, the contrast may be compromised due to the effects of sidelobes associated with the creation of a discontinuous aperture. Further coherent compounding can be used to improve the contrast by reducing sidelobes. FIG. 12 illustrates that target detectability is determined by both resolution and contrast [29]. The differences between the k-space representations for the single and the coherent multi-transducer methods further explain the differences in imaging performance; the more extended the k-space representation, the higher the resolution [30]. The relative amplitudes of the spatial frequencies present, i.e. the topography of k-space, determine the texture of imaged targets. Weighting the individual data from the different transducers can reshape the k-space, accentuating certain spatial frequencies, and so can potentially create a more conventional response for the system. Moreover, the presence of uniformly spaced unfilled areas in a system's k-space response may indicate the presence of grating lobes in the system's spatial impulse response [28]. A sparse array may create gaps in the k-space response. Only with minimal separation between transducers will the k-space magnitude response become smooth and continuous over an extended region.
This suggests that there is an interplay between the relative spatial positioning of the individual transducers and the angles of the transmitted plane waves; either one or both of these can determine the resolution and contrast achievable in the final image [18]. Relative position data can be used to decide what range of PW angles to use and to change these in real time to adaptively change system performance. In real-life applications, resolution and contrast will be influenced by a complex combination of probe separation and angle, aperture width, fired PW angle and imaging depth. It will be appreciated that different factors may determine the imaging performance of the system. Image enhancements related to increasing aperture size are well described [12]. Nevertheless, in clinical practice the aperture is limited because extending it often implies increasing system cost and complexity. Described implementations use conventional equipment and image-based calibration to extend the effective aperture size while increasing the amount of received RF data (data × N). The estimated time for "first" initialization of a system in accordance with described arrangements is less than 1 minute, which is comparable to other calibration methods [31], [32]. Once the algorithm has been correctly initialized, the subsequent running times for the optimization can be significantly decreased. For example, in the free-hand experiment, where each optimization was initialized with the output from the previous acquisition, the optimization was up to 4 times faster than the first one. Regarding the amount of data, similar to 3-D and 4-D ultrafast imaging where the data volumes are significantly large [33], in the proposed multi-transducer method computation may be a bottleneck for real-time imaging. Graphical processing unit (GPU)-based platforms and high-speed buses are key to future implementation of these new imaging modes [34].

In addition to the system complexity, large-aperture arrays present ergonomic problems for the operator and have limited flexibility to adapt to different applications. In described arrangements, an extended aperture is the result of adding multiple freely placed transducers together, which allows more flexibility. Small arrays are easy to couple to the skin and adapt to the body shape. Whilst use of multiple probes may increase the operational difficulty for an individual performing the scan, it is possible to manipulate multiple probes using a single, potentially adjustable, multi-probe holder that would allow the operator to hold multiple probes with only one hand while keeping them directed at the same region of interest. Such a probe holder has been demonstrated as a potential device for incoherent combination of multiple images for extended FoV imaging [4]. Approaches and arrangements described may provide a different strategy in ultrasound, according to which large assemblies of individual arrays may be operated coherently together. To successfully improve the PSF, multi-transducer methods according to arrangements require coherent alignment of backscattered echoes from multiple transmit and receive positions. This can be achieved through precise knowledge of all transducer positions, which in practice is not achievable by manual measurements or using electromagnetic or optical trackers [35].
Approaches described provide methods for precise and robust transducer location by maximizing the coherence of backscattered echoes arising from the same point scatterer and received by the same transducer, using sequential transmissions from each transducer of the system. As in free-hand tracked ultrasound for image-guided applications [31], [32], spatial calibration helps to guarantee performance of described multi-coherent ultrasound methods. It will be appreciated that use of gradient-descent methods requires an initial estimate of the parameters close enough to the global maximum of the cost function, including the position of calibration targets. The distance between maxima, which depends on the NCC and corresponds to the pulse length, dictates this tolerance. This is approximately 1.5 μs (equivalent to 2.19 mm) for the experimental configuration described above. This tolerance value can be realistically achieved through image registration [27]. In practice, in a free-hand situation, and assuming that at some initial instant the registration is accurate, this initial guess can be ensured if the transducers move relatively little in the time between two transmissions and share a common FoV.

In PW imaging, the frame rate is only limited by the round-trip travel time, which depends on the speed of sound and the depth. For the experimental setup described, the minimum time between two insonifications is around 94 μs. Hence the maximum frame rate is limited to Fmax=10.7 kHz, which in the case of the described multi-transducer coherent method is reduced by the number of probes as Fmax/N. To guarantee free-hand performance of the described implementation of a multi-transducer method, perfect coherent summation must be achieved over consecutive transmissions of the N transducers of the system. However, when the object under insonification moves between transmit events, this condition is no longer achieved. In other words, the free-hand performance is limited by the maximum velocity at which the probes move. Considering that coherence breaks for a velocity at which the observed displacement is larger than half a pulse wavelength per frame [26], the maximum velocity of the probes is Vmax=λFmax/(2N), which in the example shown here is 1.33 m/s. This speed far exceeds typical operator hand movements in a regular scanning session and hence the coherent summation over two consecutive transmissions is achieved. The method has been validated in a free-hand demonstration.
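The maximum probe velocity quoted above follows directly from these quantities (a worked check only, assuming the wavelength is taken at the 3 MHz transmit frequency in water at approximately 1496 m/s):

c = 1496.0                     # speed of sound [m/s]
f0 = 3e6                       # transmit centre frequency [Hz]
f_max = 10.7e3                 # maximum PW frame rate [Hz]
n = 2                          # number of transducers
lam = c / f0                   # pulse wavelength, ~0.5 mm
v_max = lam * f_max / (2 * n)  # Vmax = lambda*Fmax/(2N), ~1.33 m/s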
Wavefront aberration caused by an inhomogeneous medium can significantly limit the quality of medical ultrasound images and is the major barrier to achieving diffraction-limited resolution with large aperture transducers [36]. The technique described in this work has been tested in a scattering medium, with the assumption of a constant speed of sound along the propagation path. However, since the speed of sound is a parameter in the optimization, the technique could be adapted for non-homogeneous media where the speed of sound varies in space [18]. In this case, the medium could be modelled through piecewise continuous layers. The optimization method could be applied in a recursive way, dividing the FoV into sub-areas with different speeds of sound. More accurate speed of sound estimation would improve beamforming and allow higher order phase aberration correction. It will be appreciated that "speed of sound" maps would be of great interest in tissue characterization [37], [38].

In addition, the use of multiple transducers allows multiple interrogations from different angles, which might give insight into the aberration problem and help to test new algorithms to remove the clutter. The approach presented here has been formulated and validated for detectable and isolated point scatterers within the shared imaging region, which in practice may not always be possible. Whilst the theory has been presented in relation to point-like scatterers, approaches rely on a measure of coherence which may well be more tolerant, as indicated in the contrast phantom demonstration in FIG. 12. This suggests that the method may work when there are identifiable prominent local features, and the concept of maximizing the coherence of data received by each receiver array when insonated by different transmitters could allow wider usage. Indeed, an optimization based on spatial coherence might be more robust in the case where point targets are not available, due to the expected decorrelation of speckle with receiver location [39]-[41]. This may also lead to improvements in computational efficiency. Measures of spatial coherence have been used previously in applications such as phase aberration correction [42], flow measurements [43], and beamforming [44]. On the other hand, isolated point scatterers can be artificially generated by other techniques, for instance by inclusion of microbubble contrast agents [45]. Ultrasound super-resolution imaging recognises that spatially isolated individual bubbles can be considered as point scatterers in the acoustic field and accurately localized [47]. The feasibility of the coherent multi-transducer method in complex media, including a new approach mainly based on spatial coherence [20], and the potential use of microbubbles, remain subjects for further investigation.

Arrangements described may provide a new coherent multi-transducer ultrasound imaging system and a robust method to accurately localize the multiple transducers. The subwavelength localization accuracy required to merge information from multiple probes is achieved by optimizing the coherence function of the backscattered echoes coming from the same point scatterer, insonated sequentially by all transducers and received by the same one, without the use of an external tracking device. The theory described has application with a multiplicity of 2-D arrays placed in 3-D, and the method was experimentally validated in a 2-D framework using a pair of linear arrays and ultrasound phantoms. The improvements in imaging quality have been shown. Overall, the performance of the multi-transducer approach is better than PW imaging with a single linear array. Results suggest that coherent multi-transducer imaging has the potential to improve ultrasound image quality in a wide range of scenarios.

As described above, a coherent multi-transducer ultrasound imaging system (CMTUS) enables an extended effective aperture (super-aperture) through coherent combination of multiple transducers. As described above, an improved quality image can be obtained by coherently combining the radio frequency (RF) data acquired by multiple synchronized transducers that take turns to transmit plane waves (PW) into a common FoV. In such a coherent multi-transducer ultrasound (CMTUS) method, the optimal beamforming parameters, which include the transducer locations and an average speed of sound in a medium under study, can be deduced by maximizing the coherence of the received RF data by cross-correlation techniques.
As a result, a discontinuous large effective aperture (super-aperture) is created, significantly improving imaging resolution. While the use of multiple arrays to create a large aperture instead of using a single big array may be more flexible for different situations, such as typical intercostal imaging applications where the acoustic windows are narrow, the discontinuities dictated by the spatial separation between the multiple transducers may determine the global performance of the CMTUS method. It will be appreciated that as a consequence of the discontinuous aperture there is a trade-off between resolution and contrast. Arrangements recognise that since the average speed of sound in the medium under study is optimized by the CMTUS method, an improvement in beam formation with some higher order phase aberration correction is expected.

Inhomogeneous Media

The k-Wave Matlab toolbox was used to simulate the non-linear wave propagation through an inhomogeneous medium (Treeby and Cox, 2010; Treeby et al., 2012). A CMTUS system formed by two identical linear arrays, similar to the ones experimentally available, was simulated as follows. Each of the arrays had a central frequency of 3 MHz and 144 active elements in both transmit and receive, with an element pitch of 240 μm and a kerf of 40 μm. For plane waves, the modelled transducer had an axial focus at infinity with all 144 elements firing simultaneously. The apodisation across the transducer was modelled by applying a Hanning filter across the transducer width. Table IV summarizes the simulation parameters that define each of the linear arrays.

TABLE IV

Parameter | Value
Number of elements | 144
Pitch | 240 μm
Kerf | 40 μm
Central frequency | 3 MHz
Transmit pulse cycles | 3
Sampling frequency (downsampled) | 30.8 MHz

A simulation was performed for each transmit event, i.e. each plane wave at a certain angle. In total, 7 transmit simulations per linear array were performed to produce a plane wave data set, which covers a total sector angle of 30° (from −15° to 15°, in 5° steps). In the case of CMTUS this results in 14 transmit events in total (7 plane waves per array). This plane wave sequence was chosen to match in resolution a focused system with F-number 1.9, decimating the required number of angles by a factor of 6 to optimize the simulation time without affecting resolution. The spatial grid was fixed at 40 μm (six grid points per wavelength) with a time step corresponding to a Courant-Friedrichs-Lewy (CFL) condition of 0.05 relative to a propagation speed of 1540 m/s. Received signals were downsampled at 30.8 MHz. Channel noise was introduced to the simulated RF data as Gaussian noise with an SNR of 35 dB at 50 mm imaging depth.
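For illustration, the transmit setup of Table IV can be expressed as follows (a sketch only; the delay law t(x) = x·sin(θ)/c is standard plane-wave steering and the variable names are illustrative, not k-Wave code):

import numpy as np

n_elements = 144
pitch = 240e-6                                  # element pitch [m]
c0 = 1540.0                                     # assumed propagation speed [m/s]
x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch

angles = np.deg2rad(np.arange(-15, 16, 5))      # 7 PW over a 30 degree sector
delays = x[None, :] * np.sin(angles)[:, None] / c0
delays -= delays.min(axis=1, keepdims=True)     # shift so all delays are non-negative

apodization = np.hanning(n_elements)            # Hanning window across the aperture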
The ultrasound pulses were propagated through heterogeneous scattering media using tissue maps (speed of sound, density, attenuation and nonlinearity). A medium defined only with the properties of general soft tissue was used as the control case. To model the scattering properties observed in vivo, sub-resolution scatterers were added to the tissue maps. A total of 15 scatterers of 40 μm diameter, with random spatial position and amplitude (defined by a 5% difference in speed of sound and density from the surrounding medium), were added per resolution cell, in order to fully develop speckle. Three point-like targets and an anechoic lesion were included in the media to allow measurement of the basic metrics for comparing the imaging quality in the different scenarios.

A circular anechoic lesion of 12 mm diameter, located at the centre of the aperture of both arrays (common FoV), was modelled as a region without scatterers. The point-like targets were simulated as circles of 0.2 mm diameter with a 25% difference in speed of sound and density from the surrounding tissue to generate appreciable reflection. The same realization of scatterers was superimposed on all maps and throughout the different simulations to keep the speckle pattern in the CMTUS system, so that any changes in the image quality metrics are due to changes in the overlying tissues, the imaging depth and the acoustic field.

The k-Wave Matlab toolbox uses a Fourier collocation method to compute spatial derivatives and numerically solve the governing model equations, which requires discretisation of the simulation domain into an orthogonal grid. Consequently, continuously defined acoustic sources and media need to be sampled on this computational grid, introducing staircasing errors when sources do not exactly align with the simulation grid. To minimize these staircasing errors, the transmit array was always aligned to the computational grid, i.e. simulations were performed in the local coordinate system of the transmit array. This implies that to simulate a sequence in which the array T2 transmits, the propagation medium, including the sub-resolution scatterers, was converted into the local coordinate system of probe T2 using the same transformation matrix that defines the relative position of both transducers in space. A sample tissue map with the transducers, point-like targets and anechoic lesion locations, represented in both local coordinate systems, is shown in FIG. 14.

FIG. 14 illustrates an example of a speed of sound map of a propagation medium with a muscle layer of 8 mm thickness and a fat layer of 25 mm. Locations of the ultrasound probes, point-like targets and anechoic lesion are shown. FIG. 14(a) shows the medium expressed in the local coordinate system of the array T1 and used to simulate the RF data T1R12, i.e. when the array T1 transmits. FIG. 14(b) shows the medium expressed in the local coordinate system of the array T2 and used to simulate the RF data T2R12, i.e. when the array T2 transmits. In this example, the angle between the probes that defines their position in space is 60° and the corresponding imaging depth is 75 mm.
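The change of coordinates used to align the transmit array with the simulation grid can be sketched as a standard rigid-body transform (illustrative values and names only):

import numpy as np

phi2 = np.deg2rad(60.0)                  # example angle between the probes
o2 = np.array([0.0411, 0.0250])          # example origin of T2 in the T1 frame [m]

R = np.array([[np.cos(phi2), -np.sin(phi2)],
              [np.sin(phi2),  np.cos(phi2)]])

# Stand-in scatterer positions expressed in the T1 local frame [m].
scatterers_t1 = np.array([[0.000, 0.075],
                          [0.010, 0.080]])

# If p_T1 = R p_T2 + O2, then p_T2 = R^T (p_T1 - O2); rows are points.
scatterers_t2 = (scatterers_t1 - o2) @ R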
CMTUS Discontinuous Effective Aperture

It is demonstrated above that the discontinuous effective aperture obtained by CMTUS determines the quality of the resulting image. To investigate the effects of the discontinuous aperture, determined by the relative location of the CMTUS arrays in space, different CMTUS systems with the arrays located at different spatial locations were modelled. Simulations were performed in the same control medium, where only soft tissue material was considered. To modify the relative location of the probes while keeping the imaging depth fixed (at 75 mm), the angle between the arrays was changed. The array T1 was always positioned at the centre of the x-axis of the simulation grid while the array T2 was rotated around the centre of the propagation medium. Then, different cases of CMTUS with the two arrays located at different angles, from 30° to 75° in steps of 15°, were simulated.

FIG. 15 shows a schematic representation of the probes in space, where the different spatial parameters (the angle between the probes, θ, and the gap, Gap, in the resulting effective aperture, Ef) are labelled. Note that, at larger angles, both the effective aperture of the system defined by both probes and the gap between them increase. The relationships between probe position and the resulting effective aperture and gap are shown in FIG. 15.

CMTUS Image Penetration

The image penetration of CMTUS was investigated by changing the local orientation of the arrays and using the same control propagation medium (only soft tissue). For a given effective aperture (fixed gap), each probe was rotated around its centre by the same angle but in opposite directions. In this way, a given rotation, for example negative in T1 and positive in T2, will result in a deeper common FoV, and the counter-rotation in a shallower one. FIG. 16 shows the imaging depth dependence on the transducer orientation (defined by the position of the common FoV of both arrays). Using this scheme, four different imaging depths were simulated: 57.5 mm, 75 mm, 108 mm and 132 mm.

FIG. 16 shows a schematic representation of the spatial location of the two linear arrays, T1 and T2, and their fields of view at different imaging depths. The imaging depth is obtained by steering the linear arrays by the same angle but in opposite directions. Three different cases are shown: (a) 57.5 mm imaging depth; (b) 75 mm imaging depth; and (c) 108 mm imaging depth. The circle indicates the centre of the common field of view, which defines the imaging depth in CMTUS.

CMTUS Through Aberrating Media

To investigate the effect of aberrating inhomogeneities in the medium, three different kinds of tissue were defined in the propagation media (general soft tissue, fat and muscle). The imaging depth was set to 75 mm with a configuration of the arrays in space that defines an effective aperture of 104.7 mm with a 45.3 mm gap. The acoustic properties assigned to each tissue type were chosen from the literature and are listed below:

Tissue type | Speed of sound [m/s] | Density [kg/m3] | Attenuation [dB/MHz/cm] | Nonlinearity B/A
Soft tissue | 1540 | 1000 | 0.75 | 6
Fat | 1478 | 950 | 0.63 | 10
Muscle | 1547 | 1050 | 0.15 | 7.4

A medium defined only with the soft tissue properties was used as the control case. Then, clutter effects were analysed by using heterogeneous media in which two layers with the acoustic properties of muscle and fat were introduced into the control case medium. In the different studied cases, the thickness of the muscle layer was set to 8 mm while the fat layer ranged from 5 to 35 mm in thickness. FIG. 14 shows an example of the propagation medium with a muscle layer of 8 mm and a fat layer of 25 mm.

In-Vitro Experiments

A sequence similar to the one used in simulations was used to image a phantom. The imaging system consisted of two 256-channel Ultrasound Advanced Open Platform (ULA-OP 256) systems (MSD Lab, University of Florence, Italy). The systems were synchronized, i.e. with the same trigger and sampling times in both transmit and receive mode. Each ULA-OP 256 system was used to drive an ultrasonic linear array made of 144 piezoelectric elements with a 6 dB bandwidth ranging from 2 MHz to 7.5 MHz (imaging transducer LA332, Esaote, Firenze, Italy). The two probes were mounted on an xyz translation and rotation stage (Thorlabs, USA) and were carefully aligned in the same elevational plane (y=0). For each probe in an alternating sequence, i.e. with only one probe transmitting at a time while both probes receive, 7 PW covering a total sector angle of 30° (from −15° to 15°, in 5° steps) were transmitted at 3 MHz and a pulse repetition frequency (PRF) of 1 kHz. RF data backscattered from up to 135 mm deep were acquired at a sampling frequency of 19.5 MHz.
No apodization was applied on either transmission or reception. A subset of the simulated results was experimentally validated in-vitro. A custom-made phantom with three point-like targets and an anechoic region was imaged with the imaging system and pulse sequences described above. The average speed of sound of the phantom was 1450 m/s. The phantom was immersed in a water tank to guarantee good acoustic coupling. To induce aberration, a layer of paraffin wax of 20 mm thickness was placed between the probes and the phantom. The measured speed of sound of the paraffin wax was 1300 m/s. The control experiment was performed first, without the paraffin wax sample present. After the control scan, the paraffin wax sample was positioned over the phantom without movement of the phantom or tank. Then, the target was scanned as before. The paraffin wax sample was positioned to sit immediately over the phantom, coupled to the transducers by water. A final control scan was performed to verify registration of the phantom, tank and transducers after the paraffin wax sample was scanned and removed.

Data Processing

The RF data, both simulated and experimentally acquired, were processed in different combinations to study image quality. For a single probe system, beamforming of the RF data was performed using the conventional delay-and-sum method for coherent plane wave compounding. The multi-transducer beamforming was performed as described above. For each simulated case, the optimum beamforming parameters, calculated by maximizing the cross-correlation of backscattered signals from common targets acquired by individual receive elements as described above, were used to generate the CMTUS images. For the simulated RF data, where the actual position of the arrays in space is known, an additional image, denoted 2-probes, was beamformed by assuming a speed of sound of 1540 m/s and using the spatial locations of the array elements. Note that in the experimental case this is not possible, because the actual position of the arrays in space is not accurately known a priori. Finally, the data corresponding to the sequence in which the array T1 transmits and receives, i.e. T1R1, denoted here as 1-probe, was used as a baseline for array performance, providing a point of comparison to the current coherent plane wave compounding method in both simulated and experimental scenarios. Note that for all cases except CMTUS, an assumed value of the speed of sound was used to beamform the data (1540 m/s for simulated data and 1450 m/s for experimental data).

In order to achieve as fair a comparison as possible between imaging modalities in terms of transmitted energy, the CMTUS and the 2-probes images are obtained by compounding only 6 different PW, while the 1-probe system images are generated by compounding the total number of transmitted plane waves, i.e. 7 PW from −15° to 15° in 5° steps. In that vein, the CMTUS and 2-probes images are the result of compounding the RF data when the array T1 transmits PW at zero and positive angles (0°, 5°, 10°) and the array T2 transmits PW at zero and negative angles (0°, −5°, −10°). An even number of transmissions was set because the CMTUS optimization is based on a pair of transmissions, one per array. In addition, firing at opposite angles with the 2 arrays guarantees the CMTUS performance, since an overlap of the insonated regions is mandatory to determine the relative probe-to-probe position.
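A minimal delay-and-sum sketch for a single steered plane wave is given below to illustrate the conventional beamformer referred to above (Python; nearest-sample interpolation and the function name are simplifications for the example, not the actual implementation):

import numpy as np

def das_plane_wave(rf, x_elem, theta, c, fs, x_grid, z_grid):
    # rf: (n_elements, n_samples) channel data for one plane-wave transmit.
    img = np.zeros((z_grid.size, x_grid.size))
    for iz, z in enumerate(z_grid):
        for ix, xp in enumerate(x_grid):
            # One-way plane-wave transmit delay plus per-element return path.
            t_tx = (z * np.cos(theta) + xp * np.sin(theta)) / c
            t_rx = np.sqrt(z ** 2 + (x_elem - xp) ** 2) / c
            idx = np.round((t_tx + t_rx) * fs).astype(int)
            valid = (idx >= 0) & (idx < rf.shape[1])
            img[iz, ix] = rf[np.flatnonzero(valid), idx[valid]].sum()
    return img

Compounded images are then formed by summing the beamformed RF images over transmit angles (and, for the multi-transducer case, over transmit-receive pairs) before envelope detection.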
For each resulting image, lateral resolution (LR), contrast and contrast-to-noise ratio (CNR) were measured to quantify the impact of both the aperture size and the clutter. LR was calculated from the point-spread-function (PSF) of the middle point-like target. An axial-lateral plane for 2-D PSF analysis was chosen by finding the location of the peak value in the elevation dimension from the envelope-detected data. Lateral and axial PSF profiles were taken from the centre of the point target and aligned with the principal resolution directions. LR was then assessed by measuring the width of the PSF at the −6 dB level. The contrast and CNR were measured from the envelope-detected images. Contrast and CNR were calculated as:

Contrast=20 log10(μi/μo)

CNR=|μi−μo|/√(μi²+μo²)

where μi and μo are the means of the signal inside and outside of the region, respectively. All image metrics were computed before log-compression was applied.

Results

A. Simulation Results

Control Case: Conventional Aperture Imaging

The conventional aperture image, corresponding to the sequence in which the array T1 transmits and receives, i.e. T1R1 (1-probe), provides the baseline for imaging quality through the different scenarios.

FIG. 17 illustrates the resulting image at 75 mm depth and without any aberrating layer in the propagation medium. A speed of sound of 1540 m/s was used to reconstruct these images. The point target (FIG. 17(b)) has a lateral resolution of 1.78 mm and the lesion (FIG. 17(c)) is visible with a contrast of −16.78 dB and a CNR of 0.846. Note that, while the lesion is easily identified from the background, it is difficult to delineate its edges.

CMTUS Discontinuous Effective Aperture

FIG. 18 shows simulated PSF and lesion images from the same non-aberrating medium and for increasing effective aperture and gap of the CMTUS system. It can be seen that the PSF depends on the size of the effective aperture and the gap between the probes. As expected, the central lobe of the PSF reduces in width with increasing size of the effective aperture. However, while at extended apertures the width of the main lobe decreases, the amplitude of the side lobes increases with the corresponding gap in the aperture, affecting contrast, as can be seen in the lesion images. The effects of the side lobes on the image quality can be seen in FIG. 18, where an effective aperture with a gap of 64.1 mm significantly raises the amplitude of the side lobes close to that of the main lobe and affects the lesion image.

FIG. 19 compares the corresponding computed image quality metrics (LR, contrast and CNR) as a function of the obtained effective aperture. Results show that both the main lobe width of the PSF and the lateral resolution decrease with larger effective aperture size. Since an increasing effective aperture also represents a larger gap between the probes, contrast and resolution follow opposite trends. In general, compared with the 1-probe system, CMTUS produces the best lateral resolution in all cases but shows degradation in contrast at the particular imaging depth of 75 mm. At the maximum effective aperture simulated, resolution is best at 0.34 mm, while the contrast and CNR drop to a minimum of −15.51 dB and 0.82, respectively.
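The metrics defined above translate directly into short routines (a sketch; the region selections are hypothetical and the formulas follow the definitions given in the text):

import numpy as np

def contrast_db(inside, outside):
    # Contrast = 20 log10(mu_i / mu_o), on envelope data before log-compression.
    return 20 * np.log10(inside.mean() / outside.mean())

def cnr(inside, outside):
    # CNR = |mu_i - mu_o| / sqrt(mu_i^2 + mu_o^2), as defined above.
    mu_i, mu_o = inside.mean(), outside.mean()
    return abs(mu_i - mu_o) / np.sqrt(mu_i ** 2 + mu_o ** 2)

def lateral_resolution(profile, dx, level_db=-6.0):
    # Width of the lateral PSF profile at the -6 dB level; dx is the pixel pitch.
    p = 20 * np.log10(profile / profile.max())
    above = np.flatnonzero(p >= level_db)
    return (above[-1] - above[0]) * dx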
FIG. 19 shows the lateral point spread functions extracted from FIG. 18 at the depth of peak point intensity and in the principal direction, together with the corresponding computed quality metrics as a function of the effective aperture size in CMTUS: lateral resolution (LR) measured at −6 dB from the lateral point spread function, and contrast and contrast-to-noise-ratio (CNR) measured from FIG. 18.

CMTUS Image Penetration

FIG. 20 compares CMTUS images with the 1-probe system at two different imaging depths (100 mm and 155 mm). Image degradation with depth is clearly observed in all cases. However, at larger depths the 1-probe system shows a greater level of degradation. At the maximum imaging depth shown (155 mm), the point targets and the lesion can still be identified in the CMTUS image, while they are not obvious in the 1-probe image.

FIG. 21 summarises the computed image metrics as a function of imaging depth. As expected, in both systems all image metrics worsen at larger imaging depths. Nevertheless, results show that their dependence on the imaging depth differs between the 1-probe and the CMTUS cases. The slope of the LR-depth curve is significantly higher in the 1-probe system than in the CMTUS method, which suggests that the loss in resolution with imaging depth is faster at smaller apertures. While at reduced imaging depths (<100 mm) contrast and CNR seem to be affected in a similar way in both systems, the loss in contrast metrics is less accentuated in the CMTUS system at depths larger than 100 mm, where the CMTUS method exceeds the performance of the 1-probe system not only in terms of resolution but also in contrast. The extended effective aperture created by CMTUS consequently increases the sensitivity of the imaging system, particularly at large imaging depths.

CMTUS Through Aberrating Media

FIG. 22 is a comparison of simulated images acquired by a conventional aperture 1-probe (a-d), 2-probes (e-h) and the CMTUS method (i-l) through aberrating layers of increasing thickness (the thickness of the fat layer increases from 0 mm, 10 mm, 25 mm to 35 mm); 1-probe images using 7 PW transmissions; 2-probes and CMTUS images using 6 PW transmissions.

FIG. 22 shows the simulated images for the control case (propagation medium with only soft tissue) and for imaging through aberrating layers of different thickness. The different methods, i.e. 1-probe, 2-probes and CMTUS, are compared. It can be seen that, in the presence of aberration, the PSF and contrast of the 2-probes image significantly degrade compared with the control case. This effect is clearly seen in the point targets imaged through a fat layer of 35 mm thickness, where results show that if aberration is not corrected, extended apertures do not show benefits in terms of resolution. Indeed, in the presence of aberration, it is not possible to coherently reconstruct the image using the two separate transducers (the 2-probes system case).

FIG. 23 shows simulated delayed RF data for a medium with a fat layer of 35 mm thickness, backscattered from a point-like target, obtained by coherently adding the 4 delayed backscattered echoes from the same point-like target (T1R1; T1R2; T2R1; T2R2) with different beamforming parameters: FIG. 23(a) 2-probes; FIG. 23(b) CMTUS.

FIG. 23 shows an example of the delayed echoes from the point-like target for the 2-probes and CMTUS cases, corresponding to a propagation medium with a fat layer of 35 mm thickness. These flat backscattered echoes are obtained by coherently adding the 4 delayed backscattered echoes from the same point-like target (T1R1; T1R2; T2R1; T2R2) and the corresponding beamforming parameters.
It is worth pointing out that in the 2-probes case the different echoes do not properly align, creating interference when they are coherently added together. However, after optimizing the beamforming parameters in CMTUS, all echoes align better and can be coherently added together, minimizing the consequences of aberration. Similar effects are seen in the anechoic lesion. While differences in the background speckle pattern are observed between the different imaging methods, a higher loss of contrast due to aberration can be appreciated only in the 2-probes images. Nevertheless, no significant changes in imaging quality due to aberration are observed in either the 1-probe or CMTUS systems. Although both systems are able to image through aberrating layers, they show clear differences. CMTUS shows more detailed images than the 1-probe system. The speckle size is reduced and the different tissue layers are only visible in the CMTUS images.

FIG. 24 is a comparison of computed quality metrics across the different imaging methods. FIG. 24 shows the computed quality metrics, lateral resolution (LR), contrast and contrast-to-noise-ratio (CNR), as a function of the clutter thickness (fat layer). Three different methods are compared: 1-probe coherent plane wave compounding using 7 PW transmissions, 2-probes using 6 PW transmissions, and CMTUS using 6 PW transmissions. Imaging metrics as a function of fat layer thickness are shown. As expected, in the absence of aberration, resolution improves with increasing aperture size. In this case, the worst lateral resolution corresponds to the 1-probe system with 1.78 mm, which is the one with the smallest aperture size, while the 2-probes and CMTUS images are similar with 0.40 mm. The trends show that if aberration is not corrected, there are no significant improvements in the imaging metrics related to the aperture size for thicker fat layers. At clutter thicknesses larger than 10 mm, the image quality of the system formed by 2 transducers without aberration correction (2-probes) is significantly degraded, while the CMTUS imaging metrics are not affected by aberration errors, following the same trend as a conventional aperture (1-probe) and providing a constant value of resolution over clutter thickness without any significant loss of contrast. At the thickest fat layer simulated, resolution is 1.7 mm and 0.35 mm for the 1-probe and CMTUS images, respectively, while in the case of the 2-probes images it is no longer possible to reconstruct the point target to measure resolution. Contrast and CNR also show a similar significant loss for the 2-probes image, which presents a contrast of −10.84 dB and a CNR of 0.69, while those values are significantly better for the 1-probe (−18.44 dB contrast and 0.87 CNR) and CMTUS (−17.41 dB contrast and 0.86 CNR) images.

Experimental Results

Coherent plane wave imaging with a conventional aperture (using a single probe) provides the reference for image quality with and without the paraffin wax layer. To reconstruct these images, the reference speed of sound in water of 1496 m/s was used and 7 PW were compounded.

FIG. 25 shows experimental images of the control (a,c) and paraffin (b,d) cases. Two different methods are compared: 1-probe coherent plane wave compounding using 7 PW transmissions (a,b) and CMTUS using 6 PW transmissions (c,d). FIG. 25 shows a comparison of the phantom images acquired with 1-probe and CMTUS in the control case and through a paraffin wax sample.
The CMTUS images were reconstructed using the optimum beamforming parameters, which include the average speed of sound, and compounding 6 PW. All images are shown with the same dynamic range of −60 dB. In both cases, 1-probe and CMTUS images, little variation is observed between the control and the paraffin images, which agrees with the simulation results. The values of the optimum beamforming parameters used to reconstruct the CMTUS images are {c=1488.5 m/s; θ2=30.04°; r2=[46.60, 12.33] mm} for the control case and {c=1482.6 m/s; θ2=30.00°; r2=[46.70, 12.37] mm} for the paraffin case. There are slight changes in all the values and a drop in the average speed of sound, which agrees with the lower speed of sound of the paraffin wax.

FIG. 26 shows a comparison of the computed quality metrics, lateral resolution (LR), contrast and contrast-to-noise-ratio (CNR), experimentally measured for two different acquisition techniques. Two different methods are compared: 1-probe coherent plane wave compounding using 7 PW transmissions and CMTUS using 6 PW transmissions. FIG. 26 summarizes the computed image metrics for both the control and the paraffin cases. Little variation was observed in all the imaging metrics. Although minimal image degradation by aberrating layers was observed in CMTUS, the overall image quality improved compared with the conventional single aperture, and the observed image degradation follows the same trend.

FIG. 27 compares experimental point target images. The first point target, located at mm depth, was described using its lateral PSF with and without the paraffin wax layer. No significant effects due to the aberration are observed in the PSF in any of the cases. The PSF shape is similar with and without the paraffin wax layer and agrees with that observed in simulations. In general, the CMTUS method leads to a PSF with a significantly narrower main lobe, but also with side lobes of larger amplitude than the 1-probe conventional imaging system.

FIG. 27 shows experimental point target images. Column (a) corresponds to the control and column (b) to the paraffin. The first row corresponds to the 1-probe system and the middle row to CMTUS. The bottom row shows the corresponding lateral point spread functions for the two cases displayed: 1-probe system (dashed line) and CMTUS (solid line); 1-probe images using 7 PW transmissions, CMTUS images using 6 PW transmissions.

FIG. 28 shows the coherent summation of the delayed echoes from the point-like target before and after optimization. The effects of the paraffin layer are clearly seen. When the beamforming parameters, including the average speed of sound, are optimized by the CMTUS method, all echoes align better, minimizing the aberrating effects of the paraffin. FIG. 28 shows experimental delayed RF data acquired from the phantom with the paraffin wax sample: the CMTUS flat backscattered echo from a point-like target, obtained by coherently adding the 4 delayed backscattered echoes from the same point-like target (T1R1; T1R2; T2R1; T2R2) using different beamforming parameters: (a) initial guess values; (b) optimum values.

Discussion

The implications of imaging using the CMTUS method with two linear arrays have been investigated here with simulations and experiments. The analysis shows that the performance of CMTUS depends on the relative location of the arrays, that the CMTUS sensitivity advantage increases with the imaging depth, and that the resulting extended aperture is preserved in the presence of aberration.
These findings show that, if the separation between transducers is limited, the extended effective aperture created by CMTUS confers benefits in resolution and contrast that improve image quality at large imaging depths, even in the presence of acoustic clutter imposed by tissue layers of different speeds of sound. Unlike the improvement achieved in resolution, the benefits in contrast are not so significant. Simulation results suggest that the discontinuous effective aperture may degrade contrast when the gap in the aperture is bigger than a few centimeters. In probe design, there is a requirement of half-wavelength spacing between elements in order to avoid the occurrence of unwanted grating lobes in the array response. Moreover, previous studies indicated that, unlike resolution, contrast does not continue to increase uniformly at larger aperture sizes. Nevertheless, while the contrast may be degraded by big discontinuities in the aperture, the main lobe resolution continues to improve at larger effective apertures. Since lesion detectability is a function of both contrast and resolution, overall there are benefits from an extended aperture size, even when contrast is limited. A narrow main lobe allows fine sampling of high resolution targets, providing improved visibility of the edges of clinically relevant targets. In addition, when imaging at larger depths, an extended aperture has the potential to improve the attenuation-limited image quality. In those challenging cases at large imaging depths, CMTUS shows improvements not only in resolution but also in contrast.

Results agree with the hypothesis that, in the absence of aberration, the aperture size determines resolution. However, previous work suggests that despite predicted gains in resolution, there are practical limitations to the gains made at larger aperture sizes. Inhomogeneities caused changes in the side lobes and focal distance, limiting the improvement in resolution. The resulting degradation is primarily thought to be due to arrival time variation, termed phase aberration. The outer elements on a large transducer suffer from severe phase errors due to an aberrating layer of varying thickness, placing limits on the gains to be made from large arrays. Findings presented here agree with these previous studies, and in the presence of aberrating clutter, aperture size will be limited in practice. Nevertheless, the CMTUS method takes into account the average speed of sound in the medium and shows promise for extending the effective aperture beyond this practical limit imposed by the clutter. More accurate speed of sound estimation would improve beamforming and allow higher order phase aberration correction. However, other challenges imposed by aberration still remain. Both phase aberration and reverberation can be primary contributors to degraded image quality. While phase aberration effects are caused by variations in sound speed due to tissue inhomogeneity, reverberation is caused by multiple reflections within an inhomogeneous medium, generating clutter that distorts the appearance of the wavefronts from the region of interest. For fundamental imaging, reverberations have been shown to be a significant cause of image quality degradation and are the principal reason why harmonic ultrasound imaging is better than fundamental imaging. It is envisaged that the role of redundancy in the large array, in averaging multiple realizations of the reverberation signal, may provide a mechanism for clutter reduction.
Whilst some choices made in the design of the described experiments may not directly translate to clinical practice, it will be appreciated that they do not compromise the conclusions drawn from the results set out above. For example, the available experimental setup drove the selection of the frequency, which is higher than is traditionally used in abdominal imaging (1-2 MHz). In addition, although both the simulated and experimental phantoms are a simplistic model of real human tissue, they are able to capture the main potential causes that degrade ultrasound images, including attenuation, gross sound speed error, phase aberration, and reverberation clutter.

Although illustrative embodiments of the invention have been disclosed in detail herein, with reference to the accompanying drawings, it is understood that the invention is not limited to the precise embodiment and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope of the invention as defined by the appended claims and their equivalents.

REFERENCES

[1] M. Moshfeghi and R. Waag, "In vivo and in vitro ultrasound beam distortion measurements of a large aperture and a conventional aperture focussed transducer," Ultrasound in Medicine and Biology, vol. 14, no. 5, pp. 415-428, 1988.
[2] N. Bottenus, W. Long, M. Morgan, and G. Trahey, "Evaluation of large-aperture imaging through the ex vivo human abdominal wall," Ultrasound in Medicine & Biology, 2017.
[3] H. K. Zhang, A. Cheng, N. Bottenus, X. Guo, G. E. Trahey, and E. M. Boctor, "Synthetic tracked aperture ultrasound imaging: design, simulation, and experimental evaluation," Journal of Medical Imaging, vol. 3, no. 2, pp. 027 001-027 001, 2016.
[4] J. A. Jensen, O. Holm, L. Jerisen, H. Bendsen, S. I. Nikolov, B. G. Tomov, P. Munk, M. Hansen, K. Salomonsen, J. Hansen et al., "Ultrasound research scanner for real-time synthetic aperture data acquisition," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 52, no. 5, pp. 881-891, 2005.
[5] N. Bottenus, W. Long, H. K. Zhang, M. Jakovljevic, D. P. Bradway, E. M. Boctor, and G. E. Trahey, "Feasibility of swept synthetic aperture ultrasound imaging," IEEE Transactions on Medical Imaging, vol. 35, no. 7, pp. 1676-1685, 2016.
[6] H. K. Zhang, R. Finocchi, K. Apkarian, and E. M. Boctor, "Co-robotic synthetic tracked aperture ultrasound imaging with cross-correlation based dynamic error compensation and virtual fixture control," in Ultrasonics Symposium (IUS), 2016 IEEE International. IEEE, 2016, pp. 1-4.
[7] K. L. Gammelmark and J. A. Jensen, "2-D tissue motion compensation of synthetic transmit aperture images," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 61, no. 4, pp. 594-610, 2014.
[8] G. Montaldo, M. Tanter, J. Bercoff, N. Benech, and M. Fink, "Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 56, no. 3, pp. 489-506, 2009. [Online]. Available: http://ieeexplore.ieee.org/document/4816058/
[9] A. W. Fitzgibbon, "Robust registration of 2d and 3d point sets," Image and Vision Computing, vol. 21, no. 13-14, pp. 1145-1153, 2003.
[10] R. Mallart and M. Fink, "The van Cittert-Zernike theorem in pulse echo measurements," The Journal of the Acoustical Society of America, vol. 90, no. 5, pp. 2718-2727, 1991.
[11] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E.
| 94,971
11857368 | DETAILED DESCRIPTION
The present disclosure describes aspects of a “universal” ultrasound device configured to image a subject at multiple different frequency ranges. The universal ultrasound device includes multiple ultrasonic transducers, at least some of which can operate at different frequency ranges, thereby enabling the use of a single ultrasound device to generate medically-relevant images of a subject at different depths. As a result, a single device (the universal ultrasound device described herein) may be used by medical professionals or other users to perform different imaging tasks that presently require use of multiple conventional ultrasound probes. Some embodiments are directed to an ultrasound device comprising an ultrasound probe. The ultrasound probe comprises a semiconductor die; a plurality of ultrasonic transducers integrated on the semiconductor die, the plurality of ultrasonic transducers configured to operate in a first mode associated with a first frequency range and a second mode associated with a second frequency range, wherein the first frequency range is at least partially non-overlapping with the second frequency range; and control circuitry. The control circuitry is configured to control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the first frequency range, in response to receiving an indication to operate the ultrasound probe in the first mode, and control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the second frequency range, in response to receiving an indication to operate the ultrasound probe in the second mode. The inventors have recognized that conventional ultrasound probes are limited because each of them operates at just a single one of several medically-relevant frequency ranges. For example, some conventional ultrasound probes operate only at frequencies in the range of 1-3 MHz (e.g., for applications such as obstetric, abdominal, and gynecological imaging), whereas other conventional probes operate only at frequencies in the range of 3-7 MHz (e.g., for applications such as breast, vascular, thyroid, and pelvic imaging). Still other conventional ultrasound probes operate only at frequencies in the range of 7-15 MHz (e.g., for applications such as musculoskeletal and superficial vein and mass imaging). Since higher-frequency ultrasound signals attenuate faster in tissue than lower-frequency ultrasound signals, conventional probes operating only at higher frequencies are used for generating images of a patient at shallow depths (e.g., 5 cm or less) for applications such as central line placement or the aforementioned imaging of superficial masses located just beneath the skin. On the other hand, conventional probes operating only at lower frequencies are used to generate images of a patient at greater depths (e.g., 10-25 cm) for applications such as cardiac and kidney imaging. As a result, a medical professional needs to use multiple different probes, which is inconvenient and expensive, as it requires procuring multiple different probes configured to operate at different frequency ranges. By contrast, the universal ultrasound device, developed by the inventors and described herein, is configured to operate at multiple different medically-relevant frequency ranges and image patients at a sufficiently high resolution for forming medically-relevant images at a wide range of depths.
As such, multiple conventional ultrasound probes can all be replaced by the single universal ultrasound device described herein, and medical professionals or other users may use a single universal ultrasound probe to perform multiple imaging tasks instead of using a multitude of conventional ultrasound probes each having limited applicability. Accordingly, some embodiments provide for a wideband ultrasound probe having multiple ultrasonic transducers configured to operate in each of multiple modes including a first mode associated with a first frequency range and a second mode associated with a second frequency range, which is at least partially non-overlapping with the first frequency range. The multi-frequency ultrasound probe further comprises control circuitry that is configured to control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the first frequency range, in response to receiving an indication to operate the ultrasound probe in the first mode, and control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the second frequency range, in response to receiving an indication to operate the ultrasound probe in the second mode. The ultrasonic transducers may be integrated on a single substrate such as a single complementary metal oxide semiconductor (CMOS) chip, or may be on multiple chips within an ultrasound probe (e.g., as shown inFIGS.5G and5H). In some embodiments, the first frequency range may include frequencies in the range of 1-5 MHz. For example, the first frequency range may be contained entirely within a range of 1-5 MHz (e.g., within a range of 2-5 MHz, 1-4 MHz, 1-3 MHz, and/or 3-5 MHz). Accordingly, when the ultrasonic transducers of the universal ultrasound probe are operated to generate and/or detect ultrasound signals having frequencies in the first frequency range, ultrasound signals detected by the ultrasonic transducers may be used to form an image of a subject up to target depths within the subject, the target depths being in a range of 10-25 cm (e.g., within a range of 10-20 cm, 15-25 cm, 10-15 cm, 15-20 cm, and/or 20-25 cm). In some embodiments, the second frequency range may be contained entirely within a range of 5-12 MHz (e.g., within a range of 5-10 MHz, 7-12 MHz, 5-7 MHz, 5-9 MHz, 6-8 MHz, 7-10 MHz, and/or 6-9 MHz). Accordingly, when the ultrasonic transducers of the universal ultrasound probe are operated to generate and/or detect ultrasound signals having frequencies in the second frequency range, ultrasound signals detected by the ultrasonic transducers may be used to form an image of a subject up to target depths within the subject, the target depths being in a range of 1-10 cm (e.g., within a range of 1-5 cm, 5-10 cm, 3-8 cm, 3-6 cm, and/or 3-5 cm). In some embodiments, the multiple modes of the universal ultrasound probe in combination span at least 10 MHz or between 8-15 MHz. For this reason, a universal ultrasound probe may sometimes be called a “wideband” probe, a multi-modal probe (having multiple frequency range modes), and/or a multi-frequency probe. It should be appreciated that a universal ultrasound probe is not limited to operating in only two modes and may operate in any suitable number of modes (e.g., 3, 4, 5, etc.) with each of the modes being associated with a respective frequency range.
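To make the mode-dispatch behavior concrete, the following minimal sketch (in Python) mirrors how control circuitry might map an operating-mode indication onto a frequency range. The mode names, frequency values, and function name here are illustrative assumptions, not the device's actual interface:

    # Minimal sketch of the mode dispatch described above. Mode names,
    # frequency values, and the function name are illustrative assumptions.
    MODES = {
        "deep":    (1.0, 5.0),   # MHz; e.g., abdominal/cardiac depths
        "shallow": (5.0, 12.0),  # MHz; e.g., vascular/superficial depths
    }

    def configure_transducers(mode):
        """Return the frequency range (MHz) to program for the mode."""
        if mode not in MODES:
            raise ValueError("unknown mode: %r" % (mode,))
        return MODES[mode]

    # An indication to operate in the "shallow" mode selects 5-12 MHz.
    assert configure_transducers("shallow") == (5.0, 12.0)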
For example, in some embodiments, the universal ultrasound probe may operate in first, second, and third modes associated with first, second, and third frequency ranges, respectively. The first, second, and third frequency ranges may be any suitable set of three ranges that, pairwise, do not entirely overlap one another. For example, the first frequency range may be contained entirely within a range of 1-3 MHz, the second frequency range may be contained entirely within a range of 3-7 MHz, and the third frequency range may be contained entirely within a range of 7-12 MHz. As another example, the first frequency range may be contained entirely within a range of 1-5 MHz, the second frequency range may be contained entirely within a range of 3-7 MHz, and the third frequency range may be contained entirely within a range of 5-10 MHz. In addition, each mode may also have different elevational focal regions, a feature not possible with a single 1D array using an elevational focusing acoustic lens. Each mode may also have a different pitch of elements based on the frequency of operation. The different pitch may be implemented, for example, by subset selection and combinations of transducer cells. As may be appreciated from the foregoing examples of frequency ranges, an operating mode of the ultrasound probe may be associated with a frequency bandwidth of at least 1 MHz, in some embodiments. In other embodiments, an operating mode of the ultrasound probe may be associated with a bandwidth of at least 2 MHz, at least 3 MHz, or at least 4 MHz or higher, as aspects of the technology described herein are not limited in this respect. At least some of the transducers of the ultrasound probe, and in some embodiments each transducer, may not only operate at different frequency ranges, but also may operate in a particular frequency range (e.g., at a center frequency of the frequency range) with a wide bandwidth. In other embodiments (e.g., for Doppler imaging), an operating mode of the ultrasound probe may span bandwidths narrower than 1 MHz. As described, ultrasound devices in accordance with one or more of the various aspects described herein may be used for Doppler imaging, that is, in a Doppler mode. The ultrasound device may measure velocities in a range from about 1 cm/s to 1 m/s, or any other suitable range. When operating in a particular mode, ultrasonic transducers of a probe may generate ultrasound signals having the largest amount of power at a peak power frequency for the mode (e.g., which may be a center frequency of the frequency range associated with the mode). For example, when operating in a mode associated with a frequency range of 1-5 MHz, the ultrasonic transducers may be configured to generate ultrasound signals having the largest amount of power at 3 MHz. Therefore, the peak power frequency for this mode is 3 MHz in this example. As another example, when operating in a mode associated with a frequency range of 5-9 MHz, the ultrasonic transducers may be configured to generate ultrasound signals having the largest amount of power at 7 MHz, which is the peak power frequency in this example. As may be appreciated from the foregoing examples of frequency ranges, a universal ultrasound probe may be configured to operate in multiple modes including a first mode associated with a first frequency range having a first peak power frequency and a second mode associated with a second frequency range having a second peak power frequency.
In some instances, the difference between the first and second peak power frequencies is at least a threshold amount (e.g., at least 1 MHz, at least 2 MHz, at least 3 MHz, at least 4 MHz, at least 5 MHz, etc.). It should be appreciated that, when operating in a frequency range, an ultrasonic transducer may, in some embodiments, generate signals at frequencies outside of the operating frequency range. However, such signals would be generated at less than a fraction (e.g., ½, ⅓, ⅕, etc.) of the largest power at which a signal at a center frequency of the range is generated, for example 3 dB or 6 dB down from the maximum power. The universal ultrasound probe described herein may be used for a broad range of medical imaging tasks including, but not limited to, imaging a patient's liver, kidney, heart, bladder, thyroid, carotid artery, and lower extremity veins, and performing central line placement. Multiple conventional ultrasound probes would have to be used to perform all these imaging tasks. By contrast, a single universal ultrasound probe may be used to perform all these tasks by operating, for each task, at a frequency range appropriate for the task, as shown in Table 1 together with corresponding depths at which the subject is being imaged.

TABLE 1
Illustrative depths and frequencies at which a universal ultrasound probe implemented in accordance with embodiments described herein can image a subject.

Organ                      Frequencies           Depth (up to)
Liver/Right Kidney         2-5 MHz               15-20 cm
Cardiac (adult)            1-5 MHz               20 cm
Bladder                    2-5 MHz; 3-6 MHz      10-15 cm; 5-10 cm
Lower extremity venous     4-7 MHz               4-6 cm
Thyroid                    7-12 MHz              4 cm
Carotid                    5-10 MHz              4 cm
Central Line Placement     5-10 MHz              4 cm

It should be appreciated that Table 1 provides a non-limiting example of some organs for imaging at respective depths and frequencies. However, other organs or targets may correspond to the listed frequency ranges. For instance, the 2-5 MHz range may generally be used for abdominal, pelvic, and thoracic sonography. Further examples of anatomical targets within this frequency range include the gallbladder, bile ducts, pancreas, gastrointestinal tract, urinary tract, spleen, adrenal glands, abdominal aorta, groin, anterior abdominal wall, peritoneum, breast, and pelvic muscles. Additionally, the 2-5 MHz range or 3-6 MHz range may generally be used for obstetrics, such as fetal imaging or imaging of the placenta. Additionally, in the 7-12 MHz range, examples of anatomical targets other than those listed in Table 1 include the parathyroid, breast, scrotum, rotator cuff, tendons, and extracranial cerebral vessels. It should be appreciated that this list of examples is non-limiting, and any suitable organ and frequency range combination may be used herein. FIG.1Afurther illustrates how a universal ultrasound probe may operate in different modes, associated with different frequency ranges, to image a subject at different depths. As shown inFIG.1A, ultrasound probe100is being used to image subject101. When operating in a first mode, associated with a first frequency range (e.g., 1-3 MHz), the ultrasonic transducers in probe100may be configured to image the subject at or about a point109, also labeled P2, located at a depth D2(e.g., 15-20 cm) from the subject's skin. When operating in a second mode, associated with a second frequency range (e.g., 6-8 MHz), the ultrasonic transducers in probe100may be configured to image the subject at or about a point107, also labeled P1, located at a depth D1(e.g., 1-5 cm) from the subject's skin.
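The frequency-depth pairings in Table 1 and FIG.1Afollow from tissue attenuation. As a hedged back-of-envelope illustration (the attenuation coefficient and dynamic-range budget below are common textbook rules of thumb, not device specifications), the attenuation-limited depth can be estimated as follows:

    # Rough attenuation-limited depth estimate; all constants are
    # illustrative rules of thumb, not device specifications.
    ALPHA_DB_PER_CM_MHZ = 0.7   # soft-tissue attenuation, one-way
    BUDGET_DB = 100.0           # assumed usable system dynamic range

    def max_depth_cm(freq_mhz):
        """Depth at which round-trip attenuation consumes the budget."""
        return BUDGET_DB / (2.0 * ALPHA_DB_PER_CM_MHZ * freq_mhz)

    for f in (3.0, 7.0, 10.0):
        print("%4.1f MHz -> ~%4.1f cm" % (f, max_depth_cm(f)))
    # 3 MHz -> ~23.8 cm; 7 MHz -> ~10.2 cm; 10 MHz -> ~7.1 cm,
    # consistent with the deep-to-superficial trend in Table 1.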
In some embodiments, the distance D2is greater than the distance D1by at least a threshold distance (e.g., at least 5 cm, at least 7 cm, between 3 and 7 cm, or any range or number within such ranges). Ultrasound probe100may be configured to transmit data collected by the probe to one or more external devices for further processing. For example, as shown inFIG.1A, ultrasound probe100may be configured to transmit data collected by probe100via wired connection103to computing device105(a laptop in this non-limiting example), which may process the data to generate and display an image111of the subject101on a display. Various factors contribute to the ability of the universal ultrasound probe to operate in multiple modes associated with different and medically-relevant frequency ranges. One such factor is that the ultrasonic transducers may be formed by capacitive micromachined ultrasonic transducers (CMUTs) and, in some embodiments, at least some (and in some embodiments each) of the multiple ultrasonic transducers in the universal ultrasound probe are configured to operate in collapsed mode and in non-collapsed mode. As described herein, a “collapsed mode” refers to a mode of operation in which at least one portion of a CMUT ultrasonic transducer membrane is mechanically fixed and at least one portion of the membrane is free to vibrate based on a changing voltage differential between the electrode and the membrane. When operating in collapsed mode, a CMUT ultrasonic transducer is capable of generating more power at higher frequencies. Switching operation of multiple ultrasonic transducers from non-collapsed mode into collapsed mode (and vice versa) allows the ultrasound probe to change the frequency range at which the highest power ultrasound signals are being emitted. Accordingly, in some embodiments, an ultrasound probe operates in a first mode associated with a first frequency range (e.g., 1-5 MHz, with a peak power frequency of 3 MHz) by operating its transducers in non-collapsed mode, and operates in a second mode associated with a second frequency range (e.g., 5-9 MHz, with a peak power frequency of 7 MHz) by operating its transducers in collapsed mode. In some embodiments, the ultrasound probe includes control circuitry (e.g., circuitry108shown inFIG.1B) configured to control the probe to operate in either the first mode or the second mode and, to this end, may apply appropriate voltages to the ultrasonic transducers to cause them to operate in collapsed mode or in non-collapsed mode. For example, in some embodiments, the control circuitry is configured to cause ultrasonic transducers in the probe to operate in collapsed mode by applying a voltage to the transducers that exceeds a threshold voltage, which is sometimes called a “collapse” voltage. The collapse voltage may be in the range of 30-110 Volts and, in some embodiments, may be approximately 50 Volts. It should be noted that, while in some embodiments operating a probe's transducers in collapsed and non-collapsed modes may be a factor that helps the probe to operate in multiple frequency range modes, there may also be other factors that allow the probe to do so (e.g., an analog receiver capable of broadband signal amplification of about 1-15 MHz).
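The bias logic just described can be sketched as follows; the 50-volt threshold is taken from the text, while the function name, mode names, and margin factors are illustrative assumptions:

    # Sketch of collapse-voltage mode switching; margins are assumptions.
    COLLAPSE_VOLTAGE = 50.0  # volts; the text gives a 30-110 V range

    def bias_for_mode(mode):
        """Return an illustrative DC bias (V) for the requested mode."""
        if mode == "high_frequency":       # collapsed mode, e.g., 5-9 MHz
            return 1.2 * COLLAPSE_VOLTAGE  # safely above the threshold
        if mode == "low_frequency":        # non-collapsed, e.g., 1-5 MHz
            return 0.8 * COLLAPSE_VOLTAGE  # below the threshold
        raise ValueError("unknown mode: %r" % (mode,))

    assert bias_for_mode("high_frequency") > COLLAPSE_VOLTAGE
    assert bias_for_mode("low_frequency") < COLLAPSE_VOLTAGE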
Another factor that contributes to the ability of the universal ultrasound probe to operate in multiple modes associated with different and medically-relevant frequency ranges is that the ultrasonic transducers may be arranged in an array having a pitch adequate for both high-frequency and low-frequency scanning. For example, in some embodiments, at least some of the ultrasonic transducers may be spaced apart from their nearest neighbors at a distance less than half of a wavelength corresponding to the highest frequency at which the probe is designed to operate, to reduce (e.g., eliminate) aliasing effects. At least some, and in some cases each, of the modes may also have a different pitch of elements based on the frequency of operation. The different pitch is enabled by subset selection and combining of CMOS ultrasonic transducer (CUT) cells. Adequate pitches for a frequency are generally spaced between about λ and λ/4, where λ is the wavelength at the specified frequency. Exemplary pitches may include, but are not limited to, 500 microns (μm) (very low frequencies), 200 μm (moderate frequencies), and 125 μm (high frequencies). Also, in certain embodiments, pitches may be made wider due to element directivity helping to suppress aliasing artifacts (e.g., on the order of λ). The previously listed pitches are non-limiting, as other pitches are possible. In some embodiments, the pitch may be within a range of about 150 to 250 microns (including any value within that range) per transducer for sector scanning. For example, a 208 micron pitch may correspond to 3.7 MHz operation. Another factor that contributes to the ability of the universal ultrasound probe to operate in multiple modes associated with different and medically-relevant frequency ranges is that the ultrasound transducers may be arranged in an array having an aperture (determined by the width and height of the array) that allows for both shallow and deep scans to be performed. For example, each mode may have a different active aperture. The total aperture accommodates the largest field-of-view needed to cover the application space of any one probe. Examples include all combinations of 1 cm, 2 cm, 3 cm, 4 cm, 5 cm in the azimuth direction and 1 cm, 2 cm, 3 cm, 4 cm, 5 cm in the elevation direction. Another factor that contributes to the ability of the universal ultrasound probe to operate in multiple modes associated with different and medically-relevant frequency ranges is the selection of a CUT cell size. Grouping CUT cells together increases both directivity and sensitivity. In addition, directivity increases with frequency as the element remains fixed in size. Thus, grouping CUT cells together for lower frequencies can be balanced with less grouping for higher frequencies to maintain a consistent directivity. Another factor that contributes to the ability of the universal ultrasound probe to operate in multiple modes associated with different and medically-relevant frequency ranges is that, in addition to being capable of operating in multiple frequency ranges, ultrasonic transducers in the probe are capable of generating low-frequency and high-frequency acoustic waveforms having a broad bandwidth (e.g., at least 100 kHz, at least 500 kHz, at least 1 MHz, at least 2 MHz, at least 5 MHz, at least 7 MHz, at least 15 MHz, at least 20 MHz, etc.).
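The half-wavelength pitch rule can be checked numerically; a nominal soft-tissue sound speed of 1540 m/s is assumed here:

    # Worked check of the lambda/2 pitch rule discussed above.
    C_TISSUE_M_S = 1540.0  # assumed nominal soft-tissue sound speed

    def half_wavelength_pitch_um(freq_mhz):
        """Element pitch (microns) equal to lambda/2 at freq_mhz."""
        wavelength_um = C_TISSUE_M_S / (freq_mhz * 1e6) * 1e6
        return wavelength_um / 2.0

    print(half_wavelength_pitch_um(3.7))   # ~208 um, matching the example
    print(half_wavelength_pitch_um(15.0))  # ~51 um, near the 52 um pitch
                                           # cited below for 1-15 MHz operation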
Another factor that contributes to the ability of the universal ultrasound probe to operate in multiple modes associated with different and medically-relevant frequency ranges is that, in some embodiments, the probe may include programmable delay mesh circuitry that allows for transmit beamforming to focus at multiple depths, including depths in the range of 2-35 cm. Programmable delay mesh circuitry is further described in U.S. Pat. No. 9,229,097, assigned to the assignee of the present application, the contents of which are incorporated by reference herein in their entirety. Still another factor that contributes to the ability of the universal ultrasound probe to operate in multiple modes associated with different and medically-relevant frequency ranges is that, in some embodiments, the probe may include circuitry that allows for receive beamforming to focus at multiple depths, including depths in the range of 2-35 cm. In one exemplary embodiment, a universal ultrasound probe may include an array of 576×256 ultrasonic transducers, spaced at a pitch of 52 μm, and having an array aperture of about 3 cm×1.33 cm. At least some of the transducers can operate in a frequency range of 1-15 MHz with a bandwidth of 0.1-12 MHz. In another exemplary embodiment, a universal ultrasound probe may include an array of 64×140 transducers spaced at 208 μm, and having an array aperture of about 3 cm×1.33 cm, operating in a frequency range of 1.5-5 MHz, and from 5-12 MHz. In some embodiments, a universal ultrasound probe (e.g., probe100) may be implemented in any of numerous physical configurations, and incorporates the capabilities to perform imaging in the modes associated with two or more of the following: a linear probe, a sector probe, a phased array probe, a curvilinear probe, a convex probe, and/or a 3D imaging probe. Additionally, in some embodiments, the ultrasound probe may be embodied in a hand-held device. The hand-held device may include a screen to display obtained images (e.g., as shown inFIGS.6A-6B). Additionally or alternatively, the hand-held device may be configured to transmit (via a wireless or a wired connection) data to an external device for further processing (e.g., to form one or more ultrasound images). As another example, in some embodiments, the ultrasound probe may be embodied in a pill (e.g., as shown inFIGS.5A-5H) to be swallowed by a subject and configured to image the subject as it is traveling through his/her digestive system. As another example, in some embodiments, the ultrasound probe may be embodied in a patch configured to be affixed to the subject (e.g., as shown inFIGS.7A-D). The aspects and embodiments described above, as well as additional aspects and embodiments, are described further below. These aspects and/or embodiments may be used individually, all together, or in any combination of two or more, as the technology described herein is not limited in this respect. FIG.1Bshows an illustrative example of a monolithic ultrasound device100embodying various aspects of the technology described herein. As shown, the device100may include one or more transducer arrangements (e.g., arrays)102, transmit (TX) circuitry104, receive (RX) circuitry106, a timing & control circuit108, a signal conditioning/processing circuit110, a power management circuit118, and/or a high-intensity focused ultrasound (HIFU) controller120. In the embodiment shown, all of the illustrated elements are formed on a single semiconductor die112.
It should be appreciated, however, that in alternative embodiments one or more of the illustrated elements may instead be located off-chip. In addition, although the illustrated example shows both TX circuitry104and RX circuitry106, in alternative embodiments only TX circuitry or only RX circuitry may be employed. For example, such embodiments may be employed in a circumstance where one or more transmission-only devices100are used to transmit acoustic signals and one or more reception-only devices100are used to receive acoustic signals that have been transmitted through or reflected off of a subject being ultrasonically imaged. It should be appreciated that communication between one or more of the illustrated components may be performed in any of numerous ways. In some embodiments, for example, one or more high-speed busses (not shown), such as that employed by a unified Northbridge, or one or more high-speed serial links (e.g., 1 Gbps, 2.5 Gbps, 5 Gbps, 10 Gbps, 20 Gbps) with any suitable combined bandwidth (e.g., 10 Gbps, 20 Gbps, 40 Gbps, 60 Gbps, 80 Gbps, 100 Gbps, 120 Gbps, 150 Gbps, 240 Gbps) may be used to allow high-speed intra-chip communication or communication with one or more off-chip components. In some embodiments, the communication with off-chip components may be in the analog domain, using analog signals. The one or more transducer arrays102may take on any of numerous forms, and aspects of the present technology do not necessarily require the use of any particular type or arrangement of transducer cells or transducer elements. Indeed, although the term “array” is used in this description, it should be appreciated that in some embodiments the transducer elements may not be organized in an array and may instead be arranged in some non-array fashion. In various embodiments, each of the transducer elements in the array102may, for example, include one or more capacitive micromachined ultrasonic transducers (CMUTs), one or more CMOS ultrasonic transducers (CUTs), one or more piezoelectric micromachined ultrasonic transducers (PMUTs), one or more broadband crystal transducers, and/or one or more other suitable ultrasonic transducer cells. In some embodiments, the transducer elements of the transducer array102may be formed on the same chip as the electronics of the TX circuitry104and/or RX circuitry106or, alternatively, integrated onto the chip having the TX circuitry104and/or RX circuitry106. In still other embodiments, the transducer elements of the transducer array102, the TX circuitry104and/or RX circuitry106may be tiled on multiple chips. The transducer arrays102, TX circuitry104, and RX circuitry106may be, in some embodiments, integrated in a single ultrasound probe. In some embodiments, the single ultrasound probe may be a hand-held probe including, but not limited to, the hand-held probes described below with reference toFIGS.6A-Band8. In other embodiments, the single ultrasound probe may be embodied in a patch that may be coupled to a patient.FIGS.7A-Dprovide a non-limiting illustration of such a patch. The patch may be configured to transmit, wirelessly, data collected by the patch to one or more external devices for further processing. In other embodiments, the single ultrasound probe may be embodied in a pill that may be swallowed by a patient. The pill may be configured to transmit, wirelessly, data collected by the ultrasound probe within the pill to one or more external devices for further processing.FIGS.5A-5Hillustrate non-limiting examples of such a pill.
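To see why the multi-gigabit serial links mentioned above (and the on-chip data reduction described later) matter, consider this illustrative link-budget arithmetic; the channel count, sample rate, and bit depth are assumptions chosen for the example, not device specifications:

    # Illustrative raw-data-rate arithmetic; all figures are assumptions.
    def raw_data_rate_gbps(channels, sample_rate_mhz, bits_per_sample):
        return channels * sample_rate_mhz * 1e6 * bits_per_sample / 1e9

    # 1024 simultaneously digitized channels at 40 MHz, 12 bits/sample:
    print(raw_data_rate_gbps(1024, 40.0, 12))  # ~491.5 Gbps raw,
    # far beyond a single 10-20 Gbps serial lane, motivating on-chip
    # multiplexing and data reduction before the data leaves the die.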
A CUT may include, for example, a cavity formed in a CMOS wafer, with a membrane overlying the cavity, and in some embodiments sealing the cavity. Electrodes may be provided to create a transducer cell from the covered cavity structure. The CMOS wafer may include integrated circuitry to which the transducer cell may be connected. The transducer cell and CMOS wafer may be monolithically integrated, thus forming an integrated ultrasonic transducer cell and integrated circuit on a single substrate (the CMOS wafer). Such embodiments are further described with reference toFIG.4below, and additional information regarding microfabricated ultrasonic transducers may also be found in U.S. Pat. No. 9,067,779 and U.S. Patent Application Publication 2016/0009544 A1, both assigned to the assignee of the present application, and the contents of both of which are incorporated by reference herein in their entireties. It should be appreciated that the foregoing is just one example of an ultrasonic transducer. In some embodiments, the ultrasonic transducer (e.g., a CMUT) may be formed on a wafer separate from a substrate with circuitry. The wafer with the ultrasonic transducers may be bonded to an electrical substrate, which may be an interposer, a printed circuit board (PCB), an application-specific integrated circuit (ASIC) substrate, a substrate with analog circuitry, a substrate having integrated CMOS circuitry (a CMOS substrate), or any other substrate with electrical functionality. In some embodiments, the ultrasonic transducers may not be formed on a wafer. For example, broadband crystal transducers may be individually placed on a suitable substrate and coupled to an electrical substrate. Further alternatives are possible. The TX circuitry104(if included) may, for example, generate pulses that drive the individual elements of, or one or more groups of elements within, the transducer array(s)102so as to generate acoustic signals to be used for imaging. The RX circuitry106, on the other hand, may receive and process electronic signals generated by the individual elements of the transducer array(s)102when acoustic signals impinge upon such elements. In some embodiments, the timing & control circuit108may be, for example, responsible for generating all timing and control signals that are used to synchronize and coordinate the operation of the other elements in the device100. In the example shown, the timing & control circuit108is driven by a single clock signal CLK supplied to an input port116. The clock signal CLK may be, for example, a high-frequency clock used to drive one or more of the on-chip circuit components. In some embodiments, the clock signal CLK may, for example, be a 1.5625 GHz or 2.5 GHz clock used to drive a high-speed serial output device (not shown inFIG.1) in the signal conditioning/processing circuit110, or a 20 MHz, 40 MHz, 100 MHz, 200 MHz, 250 MHz, 500 MHz, 750 MHz, or 1000 MHz clock used to drive other digital components on the die112, and the timing & control circuit108may divide or multiply the clock CLK, as necessary, to drive other components on the die112. In other embodiments, two or more clocks of different frequencies (such as those referenced above) may be separately supplied to the timing & control circuit108from an off-chip source.
The power management circuit118may be, for example, responsible for converting one or more input voltages VIN from an off-chip source into voltages needed to carry out operation of the chip, and for otherwise managing power consumption within the device100. In some embodiments, for example, a single voltage (e.g., 0.4V, 0.9V, 1.5V, 1.8V, 2.5V, 3.3V, 5V, 12V, 80V, 100V, 120V, etc.) may be supplied to the chip and the power management circuit118may step that voltage up or down, as necessary, using a charge pump circuit or via some other DC-to-DC voltage conversion mechanism. In other embodiments, multiple different voltages may be supplied separately to the power management circuit118for processing and/or distribution to the other on-chip components. As shown inFIG.1B, in some embodiments, a HIFU controller120may be integrated on the die112so as to enable the generation of HIFU signals via one or more elements of the transducer array(s)102. In other embodiments, a HIFU controller for driving the transducer array(s)102may be located off-chip, or even within a device separate from the device100. That is, aspects of the present disclosure relate to provision of ultrasound-on-a-chip HIFU systems, with and without ultrasound imaging capability. It should be appreciated, however, that some embodiments may not have any HIFU capabilities and thus may not include a HIFU controller120. Moreover, it should be appreciated that the HIFU controller120may not represent distinct circuitry in those embodiments providing HIFU functionality. For example, in some embodiments, the remaining circuitry ofFIG.1B(other than the HIFU controller120) may be suitable to provide ultrasound imaging functionality and/or HIFU, i.e., in some embodiments the same shared circuitry may be operated as an imaging system and/or for HIFU. Whether or not imaging or HIFU functionality is exhibited may depend on the power provided to the system. HIFU typically operates at higher powers than ultrasound imaging. Thus, providing the system a first power level (or voltage level) appropriate for imaging applications may cause the system to operate as an imaging system, whereas providing a higher power level (or voltage level) may cause the system to operate for HIFU. Such power management may be provided by off-chip control circuitry in some embodiments. In addition to using different power levels, imaging and HIFU applications may utilize different waveforms. Thus, waveform generation circuitry may be used to provide suitable waveforms for operating the system as either an imaging system or a HIFU system. In some embodiments, the system may operate as both an imaging system and a HIFU system (e.g., capable of providing image-guided HIFU). In some such embodiments, the same on-chip circuitry may be utilized to provide both functions, with suitable timing sequences used to control the operation between the two modalities. In the example shown, one or more output ports114may output a high-speed serial data stream generated by one or more components of the signal conditioning/processing circuit110. Such data streams may be, for example, generated by one or more USB 2.0, 3.0 and 3.1 modules, and/or one or more 1 Gb/s, 10 Gb/s, 40 Gb/s, or 100 Gb/s Ethernet modules, integrated on the die112. In some embodiments, the signal stream produced on output port114can be fed to a computer, tablet, or smartphone for the generation and/or display of 2-dimensional, 3-dimensional, and/or tomographic images. 
It should be appreciated that the listed images are only examples of possible image types. Other examples may include 1-dimensional images, 0-dimensional spectral Doppler images, and time-varying images, including images combining 3D with time (time-varying 3D images). In embodiments in which image formation capabilities are incorporated in the signal conditioning/processing circuit110, even relatively low-power devices, such as smartphones or tablets which have only a limited amount of processing power and memory available for application execution, can display images using only a serial data stream from the output port114. As noted above, the use of on-chip analog-to-digital conversion and a high-speed serial data link to offload a digital data stream is one of the features that helps facilitate an “ultrasound on a chip” solution according to some embodiments of the technology described herein. Devices100such as that shown inFIGS.1A and1Bmay be used in any of a number of imaging and/or treatment (e.g., HIFU) applications, and the particular examples discussed herein should not be viewed as limiting. In one illustrative implementation, for example, an imaging device including an N×M planar or substantially planar array of CMUT elements may itself be used to acquire an ultrasonic image of a subject, e.g., a person's abdomen, by energizing some or all of the elements in the array(s)102(either together or individually) during one or more transmit phases, and receiving and processing signals generated by some or all of the elements in the array(s)102during one or more receive phases, such that during each receive phase the CMUT elements sense acoustic signals reflected by the subject. In other implementations, some of the elements in the array(s)102may be used only to transmit acoustic signals and other elements in the same array(s)102may be simultaneously used only to receive acoustic signals. Moreover, in some implementations, a single imaging device may include a P×Q array of individual devices, or a P×Q array of individual N×M planar arrays of CMUT elements, which components can be operated in parallel, sequentially, or according to some other timing scheme so as to allow data to be accumulated from a larger number of CMUT elements than can be embodied in a single device100or on a single die112.

Transmit and Receive Circuitry
FIG.2is a block diagram illustrating how, in some embodiments, the TX circuitry104and the RX circuitry106for a given transducer element204may be used either to energize the transducer element204to emit an ultrasonic pulse, or to receive and process a signal from the transducer element204representing an ultrasonic pulse sensed by it. In some implementations, the TX circuitry104may be used during a “transmission” phase, and the RX circuitry may be used during a “reception” phase that is non-overlapping with the transmission phase. As noted above, in some embodiments, a device100may alternatively employ only TX circuitry104or only RX circuitry106, and aspects of the present technology do not necessarily require the presence of both such types of circuitry.
In various embodiments, TX circuitry104and/or RX circuitry106may include a TX circuit and/or an RX circuit associated with a single transducer cell (e.g., a CUT or CMUT), a group of two or more transducer cells within a single transducer element204, a single transducer element204comprising a group of transducer cells, a group of two or more transducer elements204within an array102, or an entire array102of transducer elements204. In the example shown inFIG.2, the TX circuitry104/RX circuitry106includes a separate TX circuit and a separate RX circuit for each transducer element204in the array(s)102, but there is only one instance of each of the timing & control circuit108and the signal conditioning/processing circuit110. Accordingly, in such an implementation, the timing & control circuit108may be responsible for synchronizing and coordinating the operation of all of the TX circuitry104/RX circuitry106combinations on the die112, and the signal conditioning/processing circuit110may be responsible for handling inputs from all of the RX circuitry106on the die112. In other embodiments, timing and control circuit108may be replicated for each transducer element204or for a group of transducer elements204. As shown inFIG.2, in addition to generating and/or distributing clock signals to drive the various digital components in the device100, the timing & control circuit108may output either a “TX enable” signal to enable the operation of each TX circuit of the TX circuitry104, or an “RX enable” signal to enable operation of each RX circuit of the RX circuitry106. In the example shown, a switch202in the RX circuitry106may always be opened while the TX circuitry104is enabled, so as to prevent an output of the TX circuitry104from driving the RX circuitry106. The switch202may be closed when operation of the RX circuitry106is enabled, so as to allow the RX circuitry106to receive and process a signal generated by the transducer element204. As shown, the TX circuitry104for a respective transducer element204may include both a waveform generator206and a pulser208. The waveform generator206may, for example, be responsible for generating a waveform that is to be applied to the pulser208, so as to cause the pulser208to output a driving signal to the transducer element204corresponding to the generated waveform. In the example shown inFIG.2, the RX circuitry106for a respective transducer element204includes an analog processing block210, an analog-to-digital converter (ADC)212, and a digital processing block214. The ADC212may, for example, comprise a 5-bit, 6-bit, 7-bit, 8-bit, 10-bit, 12-bit or 14-bit, and 5 MHz, 20 MHz, 25 MHz, 40 MHz, 50 MHz, or 80 MHz ADC. The ADC timing may be adjusted to run at sample rates corresponding to the mode-based needs of the application frequencies. For example, a 1.5 MHz acoustic signal may be detected with a setting of 20 MHz. The choice of a higher ADC rate favors sensitivity, at the cost of power and data rate, whereas a lower ADC rate reduces the data rate and power consumption. Therefore, lower ADC rates facilitate faster pulse repetition frequencies, increasing the acquisition rate in a specific mode and, in at least some embodiments, reducing the memory and processing requirements while still allowing for high resolution in shallow modes.
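A hedged sketch of such mode-dependent rate selection follows; the listed rates come from the text, while the oversampling rule is an illustrative guess, not the device's actual policy:

    # Sketch of mode-based ADC rate selection; the selection rule is an
    # illustrative assumption.
    ADC_RATES_MHZ = (5, 20, 25, 40, 50, 80)  # rates listed in the text

    def pick_adc_rate(signal_mhz, oversample=8.0):
        """Slowest listed rate giving the requested oversampling."""
        needed = signal_mhz * oversample
        for rate in ADC_RATES_MHZ:
            if rate >= needed:
                return rate
        return ADC_RATES_MHZ[-1]

    # The text's example: a 1.5 MHz acoustic signal detected at 20 MHz.
    assert pick_adc_rate(1.5) == 20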
After undergoing processing in the digital processing block214, the outputs of all of the RX circuits on the die112(the number of which, in this example, is equal to the number of transducer elements204on the chip) are fed to a multiplexer (MUX)216in the signal conditioning/processing circuit110. In other embodiments, the number of transducer elements is larger than the number of RX circuits, and several transducer elements provide signals to a single RX circuit. The MUX216multiplexes the digital data from the RX circuits, and the output of the MUX216is fed to a multiplexed digital processing block218in the signal conditioning/processing circuit110, for final processing before the data is output from the die112, e.g., via one or more high-speed serial output ports114. The MUX216is optional, and in some embodiments parallel signal processing is performed, for example where the output of each RX circuit is fed into a suitable dedicated digital processing block. A high-speed serial data port may be provided at any interface between or within blocks, any interface between chips and/or any interface to a host. Various components in the analog processing block210and/or the digital processing block214may reduce the amount of data that needs to be output from the die112via a high-speed serial data link or otherwise. In some embodiments, for example, one or more components in the analog processing block210and/or the digital processing block214may serve to allow the RX circuitry106to receive transmitted and/or scattered ultrasound pressure waves with an improved signal-to-noise ratio (SNR) and in a manner compatible with a diversity of waveforms. The inclusion of such elements may thus further facilitate and/or enhance the disclosed “ultrasound-on-a-chip” solution in some embodiments. Although particular components that may optionally be included in the analog processing block210are described below, it should be appreciated that digital counterparts to such analog components may additionally or alternatively be employed in the digital processing block214. The converse is also true. That is, although particular components that may optionally be included in the digital processing block214are described below, it should be appreciated that analog counterparts to such digital components may additionally or alternatively be employed in the analog processing block210.

Layout of Ultrasonic Transducers
FIG.3shows substrate302(e.g., a semiconductor die) of an ultrasound device having multiple ultrasound circuitry modules304formed thereon. As shown, an ultrasound circuitry module304may comprise multiple ultrasound elements306. An ultrasound element306may comprise multiple ultrasonic transducers308, sometimes termed ultrasonic transducer cells. In the illustrated embodiment, substrate302comprises 144 modules arranged as an array having two rows and 72 columns.
However, it should be appreciated that a substrate of a single substrate ultrasound device may comprise any suitable number of ultrasound circuitry modules (e.g., at least two modules, at least ten modules, at least 100 modules, at least 400 modules, at least 1000 modules, at least 5000 modules, at least 10,000 modules, at least 25,000 modules, at least 50,000 modules, at least 100,000 modules, at least 250,000 modules, at least 500,000 modules, between two and a million modules, or any number or range of numbers within such ranges) that may be arranged as a two-dimensional array of modules having any suitable number of rows and columns or in any other suitable way. In the illustrated embodiment, each ultrasound circuitry module304comprises 64 ultrasound elements arranged as an array having 32 rows and two columns. However, it should be appreciated that an ultrasound circuitry module may comprise any suitable number of ultrasound elements (e.g., one ultrasound element, at least two ultrasound elements, at least four ultrasound elements, at least eight ultrasound elements, at least 16 ultrasound elements, at least 32 ultrasound elements, at least 64 ultrasound elements, at least 128 ultrasound elements, at least 256 ultrasound elements, at least 512 ultrasound elements, between two and 1024 elements, at least 2500 elements, at least 5,000 elements, at least 10,000 elements, at least 20,000 elements, between 5000 and 15000 elements, between 8000 and 12000 elements, between 1000 and 20,000 elements, or any number or range of numbers within such ranges) that may be arranged as a two-dimensional array of ultrasound elements having any suitable number of rows and columns or in any other suitable way. In the illustrated embodiment, each ultrasound element306comprises 16 ultrasonic transducers arranged as a two-dimensional array having four rows and four columns. However, it should be appreciated that an ultrasound element may comprise any suitable number and/or groupings of ultrasonic transducer cells (e.g., one, at least two, four, at least four, 9, at least 9, at least 16, 25, at least 25, at least 36, at least 49, at least 64, at least 81, at least 100, between one and 200, or any number or range of numbers within such ranges) that may be arranged as a two-dimensional array having any suitable number of rows and columns (square or rectangular) or in any other suitable way. In addition, the transducer cells may include shapes such as circular, oval, square, hexagonal, or other regular or irregular polygons, for example. It should be appreciated that any of the components described above (e.g., ultrasound transmission units, ultrasound elements, ultrasonic transducers) may be arranged as a one-dimensional array, as a two-dimensional array, or in any other suitable manner. In some embodiments, an ultrasound circuitry module may comprise circuitry in addition to one or more ultrasound elements. For example, an ultrasound circuitry module may comprise one or more waveform generators and/or any other suitable circuitry. In some embodiments, module interconnection circuitry may be integrated with the substrate302and configured to connect ultrasound circuitry modules to one another to allow data to flow among the ultrasound circuitry modules. For example, the device module interconnection circuitry may provide for connectivity among adjacent ultrasound circuitry modules.
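For concreteness, the arithmetic implied by the module/element/cell hierarchy described above is:

    # Counts for the illustrative FIG. 3 hierarchy described above.
    MODULES = 2 * 72              # module grid: 2 rows x 72 columns = 144
    ELEMENTS_PER_MODULE = 32 * 2  # element grid per module: 64
    CELLS_PER_ELEMENT = 4 * 4     # 4 x 4 transducer cells per element

    elements = MODULES * ELEMENTS_PER_MODULE   # 9,216 ultrasound elements
    cells = elements * CELLS_PER_ELEMENT       # 147,456 transducer cells
    print(elements, cells)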
Through such module interconnection circuitry, an ultrasound circuitry module may be configured to provide data to and/or receive data from one or more other ultrasound circuitry modules on the device.

Ultrasonic Transducers
The ultrasonic transducers of a universal ultrasound probe may be formed in any of numerous ways and, in some embodiments, may be formed as described with reference toFIG.4. FIG.4is a cross-sectional view of an ultrasound device including a CMOS wafer integrated with an engineered substrate having sealed cavities, according to a non-limiting embodiment of the present application. The device400may be formed in any suitable way and, for example, by implementing the methods described in the aforementioned U.S. Pat. No. 9,067,779. The device400includes an engineered substrate402integrated with a CMOS wafer404. The engineered substrate402includes a plurality of cavities406formed between a first silicon device layer408and a second silicon device layer410. A silicon oxide (SiO2) layer412(e.g., a thermal silicon oxide, i.e., a silicon oxide formed by thermal oxidation of silicon) may be formed between the first and second silicon device layers408and410, with the cavities406being formed therein. In this non-limiting example, the first silicon device layer408may be configured as a bottom electrode and the second silicon device layer410may be configured as a membrane. Thus, the combination of the first silicon device layer408, second silicon device layer410, and cavities406may form an ultrasonic transducer (e.g., a CMUT), of which six are illustrated in this non-limiting cross-sectional view. To facilitate operation as a bottom electrode or membrane, one or both of the first silicon device layer408and second silicon device layer410may be doped to act as conductors, and in some cases are highly doped (e.g., having a doping concentration of 10¹⁵ dopants/cm³ or greater). In some embodiments, the silicon oxide layer412containing the formed cavities may be formed as a plurality of insulating layers. For example, the silicon oxide layer412may comprise a first layer, with the formed cavities, and a second continuous layer with no cavities as an insulating layer for collapsed mode operation, for example. The engineered substrate402may further include an oxide layer414on top of the second silicon device layer410, which may represent the BOX (buried oxide) layer of a silicon-on-insulator (SOI) wafer used to form the engineered substrate402. The oxide layer414may function as a passivation layer in some embodiments and, as shown, may be patterned to be absent over the cavities406. Contacts424and a passivation layer430may be included on the engineered substrate402. The passivation layer430may be patterned to allow access to one or more contacts424, and may be formed of any suitable passivating material. In some embodiments, the passivation layer430is formed of silicon nitride (Si3N4) and in some embodiments is formed by a stack of SiO2 and Si3N4, although alternatives are possible. The engineered substrate402and CMOS wafer404may be bonded together at bond points416aand416b. The bond points may represent eutectic bond points, for example formed by a eutectic bond of a layer on engineered substrate402with a layer on CMOS wafer404, or may be any other suitable bond type described herein (e.g., a silicide bond or thermocompression bond). In some embodiments, the bond points416aand416bmay be conductive, for example being formed of metal.
The bond points416amay function solely as bond points in some embodiments, and in some embodiments may form a seal ring, for example hermetically sealing the ultrasonic transducers of the device400and improving device reliability. In some embodiments, the bond points416amay define a seal ring that also provides electrical connection between the engineered substrate and CMOS wafer. Similarly, the bond points416bmay serve a dual purpose in some embodiments, for example serving as bond points and also providing electrical connection between the ultrasonic transducers of the engineered substrate402and the IC of the CMOS wafer404. In those embodiments in which the engineered substrate is not bonded with a CMOS wafer, the bond points416bmay provide electrical connection to any electrical structures on a substrate to which the engineered substrate is bonded.

The CMOS wafer404includes a base layer (e.g., a bulk silicon wafer)418, an insulating layer420(e.g., SiO2), and a metallization422. The metallization422may be formed of aluminum, copper, or any other suitable metallization material, and may represent at least part of an integrated circuit formed in the CMOS wafer. For example, metallization422may serve as a routing layer, may be patterned to form one or more electrodes, or may be used for other functions. In practice, the CMOS wafer404may include multiple metallization layers and/or post-processed redistribution layers, but for simplicity only a single metallization is illustrated. The bond points416bmay provide electrical connection between the metallization422of CMOS wafer404and the first silicon device layer408of the engineered substrate. In this manner, the integrated circuitry of the CMOS wafer404may communicate with (e.g., send electrical signals to and/or receive electrical signals from) the ultrasonic transducer electrodes and/or membranes of the engineered substrate. In the illustrated embodiment, a separate bond point416bis illustrated as providing electrical connection to each sealed cavity (and therefore to each ultrasonic transducer), although not all embodiments are limited in this manner. For example, in some embodiments, the number of electrical contacts provided may be less than the number of ultrasonic transducers.

Electrical contact to the ultrasonic transducer membranes represented by second silicon device layer410is provided in this non-limiting example by contacts424, which may be formed of metal or any other suitable conductive contact material. In some embodiments, an electrical connection may be provided between the contacts424and the bond pad426on the CMOS wafer. For example, a wire bond425may be provided, or a conductive material (e.g., metal) may be deposited over the upper surface of the device and patterned to form a conductive path from the contacts424to the bond pad426. However, alternative manners of connecting the contacts424to the IC on the CMOS wafer404may be used. In some embodiments, an embedded via (not shown inFIG.4) may be provided from the first silicon device layer408to a bottom side of the second silicon device layer410, thus obviating any need for the contacts424on the topside of the second silicon device layer410. In such embodiments, suitable electrical isolation may be provided relative to any such via to avoid electrically shorting the first and second silicon device layers.
The device400also includes isolation structures (e.g., isolation trenches)428configured to electrically isolate groups of ultrasonic transducers (referred to herein as "ultrasonic transducer elements") or, as shown inFIG.4, individual ultrasonic transducers. The isolation structures428may include trenches through the first silicon device layer408that are filled with an insulating material in some embodiments. Alternatively, the isolation structures428may be formed by suitable doping. Isolation structures428are optional.

Various features of the device400are now noted. For instance, it should be appreciated that the engineered substrate402and CMOS wafer404may be monolithically integrated, thus providing for monolithic integration of ultrasonic transducers with CMOS ICs. In the illustrated embodiment, the ultrasonic transducers are positioned vertically (or stacked) relative to the CMOS IC, which may facilitate formation of a compact ultrasound device by reducing the chip area required to integrate the ultrasonic transducers and CMOS IC. Additionally, the engineered substrate402includes only two silicon layers408and410, with the cavities406being formed between them. The first silicon device layer408and second silicon device layer410may be thin, for example each being less than 50 microns in thickness, less than 30 microns in thickness, less than 20 microns in thickness, less than 10 microns in thickness, less than 5 microns in thickness, less than 3 microns in thickness, or approximately 2 microns in thickness, among other non-limiting examples. In some embodiments it is preferable for one of the two wafers (e.g., silicon layer408or silicon layer410) of the engineered substrate to be sufficiently thick to minimize or prevent vibration, or to shift the frequency of unwanted vibration to a range outside of the operating range of the device, thereby preventing interference. Through modeling of the geometries in the physical stack of the transducer integrated with the CMOS, the thicknesses of all layers can be optimized for transducer center frequency and bandwidth, with minimal interfering vibration. This may include, but is not limited to, changing layer thicknesses and features in the transducer engineered substrate and changing the thickness of the CMOS wafer418. These layer thicknesses are also chosen to provide uniformity across the area of the array, and therefore tighter frequency uniformity, using commercially available wafers. The array may be substantially flat, in that the substrate may lack curvature. Still, as described herein, multiple ultrasound imaging modes may be achieved, including those for which curved transducer arrays are typically used. The lack of curvature of the substrate may be quantified in some embodiments as the substrate deviating from planar by no more than 0.5 cm across the array, e.g., a deviation of 0.2 cm, 0.1 cm, or less.
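The dependence of center frequency on membrane thickness and lateral cavity dimension can be illustrated with the classical clamped-circular-plate approximation. The sketch below is illustrative only and is not taken from this disclosure: the material constants and dimensions are assumed values, and a real CMUT's frequency is further shifted by electrostatic spring softening, fluid loading, and collapsed-mode operation.

```python
import math

def clamped_plate_f0(radius_m: float, thickness_m: float,
                     youngs_pa: float = 170e9,      # single-crystal Si (assumed)
                     density_kg_m3: float = 2330.0,  # silicon
                     poisson: float = 0.22) -> float:
    """Fundamental resonance (Hz) of a clamped circular plate, in vacuo."""
    # Flexural rigidity of the membrane
    flex = youngs_pa * thickness_m**3 / (12.0 * (1.0 - poisson**2))
    lam2 = 10.22  # eigenvalue for the fundamental clamped-edge mode
    return (lam2 / (2.0 * math.pi * radius_m**2)) * math.sqrt(
        flex / (density_kg_m3 * thickness_m))

# Assumed example: a ~2 um membrane over a 50-um-wide cavity (radius 25 um)
# lands in the low tens of MHz, thinner or wider membranes land lower.
print(f"{clamped_plate_f0(radius_m=25e-6, thickness_m=2e-6) / 1e6:.1f} MHz")
```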
Thus, while the engineered substrate may be thin, it may have a thickness of at least, for example, 4 microns in some embodiments, at least 5 microns in some embodiments, at least 7 microns in some embodiments, at least 10 microns in some embodiments, or other suitable thickness to prevent unwanted vibration. Such dimensions contribute to achieving a small device and may facilitate making electrical contact to the ultrasonic transducer membrane (e.g., second silicon device layer410) without the need for thru-silicon vias (TSVs). TSVs are typically complicated and costly to implement, and thus avoiding use of them may increase manufacturing yield and reduce device cost. Moreover, forming TSVs requires special fabrication tools not possessed by many commercial semiconductor foundries, and thus avoiding the need for such tools can improve the supply chain for forming the devices, making them more commercially practical than if TSVs were used.

The engineered substrate402as shown inFIG.4may be relatively thin, for example being less than 100 microns in total thickness, less than 50 microns in total thickness, less than 30 microns in total thickness, less than 20 microns in total thickness, less than 10 microns in total thickness, or any other suitable thickness. The significance of such thin dimensions is that free-standing layers of such initial thinness would lack structural integrity, making it impractical to perform various types of fabrication steps (e.g., wafer bonding, metallization, lithography, and etch) on them directly. Thus, it is noteworthy that such thin dimensions may be achieved in the device400via the process sequence described. Also, the silicon device layers408and410may be formed of single crystal silicon. The mechanical and electrical properties of single crystal silicon are stable and well understood, and thus the use of such materials in an ultrasonic transducer (e.g., as the membrane of a CMUT) may facilitate design and control of the ultrasonic transducer behavior.

In one embodiment, there is a gap between parts of the CMOS wafer404and the first silicon device layer408, since the two are bonded at discrete bond points416brather than by a bond covering the entire surface of the CMOS wafer404. The significance of this gap is that the first silicon device layer408may vibrate if it is sufficiently thin. Such vibration may be undesirable, for instance representing unwanted vibration in contrast to the desired vibration of the second silicon device layer410. Accordingly, it is beneficial in at least some embodiments for the first silicon device layer408to be sufficiently thick to minimize or avoid vibration, or to shift the frequency of any unwanted vibration outside of the operating frequency range of the device. In alternative embodiments, it may be desirable for both the first and second silicon device layers408and410to vibrate. For instance, they may be constructed to exhibit different resonance frequencies, thus creating a multi-frequency device. The multiple resonance frequencies (which may be related as harmonics in some embodiments) may be used, for example, in different operating states of an ultrasonic transducer. For example, the first silicon device layer408may be configured to resonate at half the center frequency of the second silicon device layer410. In still another embodiment, the strength of the bond between silicon device layer410and silicon oxide layer412allows for cavities406formed within silicon oxide layer412to have a larger diameter than would be possible with a weaker bond between layers410and412. The diameter of a cavity is indicated as "w" inFIG.4. The bond strength is provided at least in part by using a fabrication process in which the engineered substrate402is formed by bonding (e.g., at a temperature less than about 400° C.) of two wafers, one containing silicon device layer408and the other containing silicon device layer410, followed by a high temperature anneal (e.g., about 1000° C.).
Ultrasonic transducers implemented using wide cavities may generate ultrasonic signals having more power at a particular frequency than ultrasonic signals generated at the same particular frequency by ultrasonic transducers implemented using cavities having a smaller diameter. In turn, higher power ultrasonic signals penetrate deeper into a subject being imaged, thereby enabling high-resolution imaging of a subject at greater depths than possible with ultrasonic transducers having smaller cavities. For example, conventional ultrasound probes may use high frequency ultrasound signals (e.g., signals having frequencies in the 7-12 MHz range) to generate high-resolution images, but only at shallow depths due to the rapid attenuation of high-frequency ultrasound signals in the body of a subject being imaged. However, increasing the power of the ultrasonic signals emitted by an ultrasound probe (e.g., as enabled through the use of cavities having a larger diameter, as made possible by the strength of the bond between layers410and412) allows the ultrasonic signals to penetrate deeper into the subject, resulting in high-resolution images of the subject at greater depths than previously possible with conventional ultrasound probes. Additionally, an ultrasonic transducer formed using a larger diameter cavity may generate lower frequency ultrasound signals than an ultrasonic transducer having a cavity with a smaller diameter. This extends the range of frequencies across which the ultrasonic transducer may operate. An additional technique may be to selectively etch and thin portions of the transducer top membrane410. This introduces spring softening in the transducer membrane, thereby lowering the center frequency. This may be done on all, some, or none of the transducers in the array, in any combination of patterns.

Forms of Universal Ultrasound Device

A universal ultrasound device may be implemented in any of a variety of physical configurations including, for example, as part of an internal imaging device, such as a pill to be swallowed by a subject or a pill mounted on an end of a scope or catheter, as part of a handheld device including a screen to display obtained images, as part of a patch configured to be affixed to the subject, or as part of a hand-held probe.

In some embodiments, a universal ultrasound probe may be embodied in a pill to be swallowed by a subject. As the pill travels through the subject, the ultrasound probe within the pill may image the subject and wirelessly transmit obtained data to one or more external devices for processing the data received from the pill and generating one or more images of the subject. For example, as shown inFIG.5A, pill502comprising an ultrasound probe may be configured to communicate wirelessly (e.g., via wireless link501) with external device500, which may be a desktop, a laptop, a handheld computing device, and/or any other device external to pill502and configured to process data received from pill502. A person may swallow pill502and, as pill502travels through the person's digestive system, pill502may image the person from within and transmit data obtained by the ultrasound probe within the pill to external device500for further processing. In some embodiments, the pill502may comprise an onboard memory and the pill502may store the data on the onboard memory such that the data may be recovered from the pill502once it has exited the person.
In some embodiments, a pill comprising an ultrasound probe may be implemented by potting the ultrasound probe within an outer case, as illustrated by an isometric view of pill504shown inFIG.5B.FIG.5Cis a section view of pill504shown inFIG.5B, exposing views of the electronic assembly and batteries. In some embodiments, a pill comprising an ultrasound probe may be implemented by encasing the ultrasound probe within an outer housing, as illustrated by an isometric view of pill506shown inFIG.5D.FIG.5Eis an exploded view of pill506shown inFIG.5D, showing outer housing portions510aand510bused to encase electronic assembly510c.

In some embodiments, the ultrasound probe implemented as part of a pill may comprise one or multiple ultrasonic transducer (e.g., CMUT) arrays, one or more image reconstruction chips, an FPGA, communications circuitry, and one or more batteries. For example, as shown inFIG.5F, pill508amay include multiple ultrasonic transducer arrays shown in sections508band508c, multiple image reconstruction chips as shown in sections508cand508d, a Wi-Fi chip as shown in section508d, and batteries as shown in sections508dand508e. FIGS.5G and5Hfurther illustrate the physical configuration of electronics module506cshown inFIG.5E. As shown inFIGS.5G and5H, electronics module506cincludes four CMUT arrays512(though more or fewer CMUT arrays may be used in other embodiments), bond wire encapsulant514, four image reconstruction chips516(though more or fewer image reconstruction chips may be used in other embodiments), flex circuit518, Wi-Fi chip520, FPGA522, and batteries524. Each of the batteries may be of size 13 PR48. Each of the batteries may be a 300 mAh 1.4V battery. Other batteries may be used, as aspects of the technology described herein are not limited in this respect.

In some embodiments, the ultrasonic transducers of an ultrasound probe in a pill are physically arranged such that the field of view of the probe within the pill is equal to or as close to 360 degrees as possible. For example, as shown inFIGS.5G and5H, each of the four CMUT arrays may have a field of view of approximately 60 degrees (30 degrees on each side of a vector normal to the surface of the CMUT array), or a field of view in a range of 40-80 degrees, such that the pill consequently has a field of view of approximately 240 degrees, or a field of view in a range of 160-320 degrees. In some embodiments, the field of view may be linear directly under the array, rectilinear under the probe space, and trapezoidal out to 30 degrees or, as a non-limiting example, out to any value between 15 degrees and 60 degrees.
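The combined field of view follows from simple arithmetic, assuming the four arrays face outward and their individual fields of view abut without overlap:

```python
# Per-array field of view of ~60 degrees (30 degrees on either side of the
# array normal); four outward-facing arrays, assumed non-overlapping.
N_ARRAYS = 4
for fov_per_array_deg in (40.0, 60.0, 80.0):   # per-array range given above
    print(N_ARRAYS * fov_per_array_deg)        # -> 160.0, 240.0, 320.0 degrees
```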
In some embodiments, a universal ultrasound probe may be embodied in a handheld device602illustrated inFIGS.6A and6B. Handheld device602may be held against (or near) a subject600and used to image the subject. Handheld device602may comprise an ultrasound probe (e.g., a universal ultrasound probe) and display604, which in some embodiments may be a touchscreen. Display604may be configured to display images of the subject generated within handheld device602using ultrasound data gathered by the ultrasound probe within device602. In some embodiments, handheld device602may be used in a manner analogous to a stethoscope. A medical professional may place handheld device602at various positions along a patient's body. The ultrasound probe within handheld device602may image the patient. The data obtained by the ultrasound probe may be processed and used to generate image(s) of the patient, which image(s) may be displayed to the medical professional via display604. As such, a medical professional could carry the handheld device (e.g., around their neck or in their pocket) rather than carrying around multiple conventional probes, which is burdensome and impractical.

In some embodiments, a universal ultrasound probe may be embodied in a patch that may be coupled to a patient. For example,FIGS.7A and7Billustrate a patch710coupled to patient712. The patch710may be configured to transmit, wirelessly for example, data collected by the patch710to one or more external devices (not shown) for further processing. For purposes of illustration, a top housing of the patch710is depicted in a transparent manner to depict exemplary locations of various internal components of the patch. FIGS.7C and7Dshow exploded views of patch710. As particularly illustrated inFIG.7C, patch710includes upper housing714, lower housing716, and circuit board718. Circuit board718may be configured to support various components, such as, for example, heat sink720, battery722, and communications circuitry724. In one embodiment, communication circuitry724includes one or more short- or long-range communication platforms. Exemplary short-range communication platforms include Bluetooth (BT), Bluetooth Low Energy (BLE), and Near-Field Communication (NFC). Exemplary long-range communication platforms include Wi-Fi and cellular. While not shown, the communication platform may include a front-end radio, an antenna, and other processing circuitry configured to communicate radio signals to an auxiliary device (not shown). The radio signal may include ultrasound imaging information obtained by patch710. In an exemplary embodiment, the communication circuitry transmits periodic beacon signals according to IEEE 802.11 or other prevailing standards. The beacon signal may include a BLE advertisement. Upon receipt of the beacon signal or the BLE advertisement, an auxiliary device (not shown) may respond to patch710. That is, the response to the beacon signal may initiate a communication handshake between patch710and the auxiliary device. The auxiliary device may include a laptop, desktop, smartphone, or any other device configured for wireless communication. The auxiliary device may act as a gateway to cloud or internet communication. In an exemplary embodiment, the auxiliary device may include the patient's own smart device (e.g., smartphone), which communicatively couples to patch710and periodically receives ultrasound information from patch710. The auxiliary device may then communicate the received ultrasound information to external sources. Circuit board718may comprise processing circuitry, including one or more controllers to direct communication through communication circuitry724. For example, circuit board718may engage communication circuitry periodically or on an as-needed basis to communicate information with one or more auxiliary devices. Ultrasound information may include signals and information defining an ultrasound image captured by patch710. Ultrasound information may also include control parameters communicated from the auxiliary device to patch710. The control parameters may dictate the scope of the ultrasound image to be obtained by patch710.
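The advertise, respond, handshake, and relay sequence described above may be sketched abstractly as follows. The sketch is purely illustrative: the classes and method names are hypothetical stand-ins rather than any real BLE or Wi-Fi API, and no actual radio transport is modeled.

```python
import time

class Patch:
    """Illustrative stand-in for the patch-side communication circuitry."""
    def __init__(self, patch_id: str):
        self.patch_id = patch_id
        self.peer = None

    def advertisement(self) -> dict:
        # Periodic beacon (e.g., a BLE-advertisement-like payload)
        return {"type": "ADV", "id": self.patch_id, "ts": time.time()}

    def on_response(self, msg: dict) -> None:
        if msg.get("type") == "CONNECT_REQ":
            self.peer = msg["peer"]          # handshake complete

    def send_frames(self, frames):
        assert self.peer is not None, "no auxiliary device connected"
        for frame in frames:                 # e.g., ultrasound image data
            yield {"to": self.peer, "payload": frame}

class AuxiliaryDevice:
    """E.g., the patient's smartphone acting as a gateway to the cloud."""
    def __init__(self, name: str):
        self.name = name

    def on_advertisement(self, adv: dict) -> dict:
        # Responding to the beacon initiates the handshake
        return {"type": "CONNECT_REQ", "peer": self.name}

    def relay(self, msg: dict) -> None:
        print("relaying to cloud:", msg["payload"])

patch, phone = Patch("patch710"), AuxiliaryDevice("phone")
patch.on_response(phone.on_advertisement(patch.advertisement()))
for message in patch.send_frames(["frame-0", "frame-1"]):
    phone.relay(message)
```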
In one embodiment, the auxiliary device may store ultrasound information received from patch710. In another embodiment, the auxiliary device may relay ultrasound information received from patch710to another station. For example, the auxiliary device may use Wi-Fi to communicate the ultrasound information received from patch710to a cloud-based server. The cloud-based server may be a hospital server or a server accessible to the physician directing ultrasound imaging. In another exemplary embodiment, patch710may send sufficient ultrasound information to the auxiliary device such that the auxiliary device may construct an ultrasound image therefrom. In this manner, communication bandwidth and power consumption may be minimized at patch710. In still another embodiment, the auxiliary device may engage patch710through radio communication (i.e., through communication circuitry724) to actively direct operation of patch710. For example, the auxiliary device may direct patch710to produce ultrasound images of the patient at periodic intervals. The auxiliary device may direct the depth of the ultrasound images taken by patch710. In still another example, the auxiliary device may control the manner of operation of the patch so as to preserve power consumption at battery722. Upon receipt of ultrasound information from patch710, the auxiliary device may direct the patch to cease imaging or to increase its imaging rate, or may communicate an alarm to the patient or to a third party (e.g., a physician or emergency personnel). It should be noted that the communication platform described in relation toFIG.7may also be implemented in the other form-factors disclosed herein. For example, the communication platform (including control circuitry and any interface) may be implemented in the ultrasound pill as illustrated inFIGS.5A-5H, the handheld device as illustrated inFIGS.6A-6B, or the handheld probe as illustrated inFIG.8.

As shown inFIG.7C, a plurality of through vias726(e.g., copper) may be used for a thermal connection between heat sink720and one or more image reconstruction chips (e.g., CMOS) (not shown inFIG.7C). As further depicted inFIG.7C, patch710may also include dressing728that provides an adhesive surface for both the patch housing and the skin of a patient. One non-limiting example of such a dressing728is Tegaderm™, a transparent medical dressing available from 3M Corporation. Lower housing716includes a generally rectangular shaped opening730that aligns with another opening732in dressing728. Referring toFIG.7D, another "bottom up" exploded view of the patch710illustrates the location of the ultrasonic transducers and integrated CMOS chip (generally indicated by734) on circuit board718. An acoustic lens736mounted over the transducers/CMOS734is configured to protrude through openings730,732to make contact with the skin of a patient. Although the embodiment ofFIGS.7A-7Ddepicts an adhesive dressing728as a means of affixing patch710to patient712, it will be appreciated that other fastening arrangements are also contemplated. For example, a strap (not shown) may be used in lieu of (or in addition to) dressing728in order to secure the patch710at a suitable imaging location.

In some embodiments, a universal ultrasound probe may be embodied in hand-held probe800shown inFIG.8. Hand-held probe800may be configured to transmit data collected by the probe800wirelessly to one or more external host devices (not shown inFIG.8) for further processing.
In other embodiments, hand-held probe800may be configured to transmit data collected by the probe800to one or more external devices using one or more wired connections, as aspects of the technology described herein are not limited in this respect.

Some embodiments of the technology described herein relate to an ultrasound device that may be configured to operate in any one of multiple operating modes. Each of the operating modes may be associated with a respective configuration profile that specifies a plurality of parameter values used for operating the ultrasound device. In some embodiments, the operating mode of the ultrasound device may be selected by a user, for example, via a graphical user interface presented by a mobile computing device communicatively coupled to the ultrasound device. In turn, an indication of the operating mode selected by the user may be communicated to the ultrasound device, and the ultrasound device may: (1) access a configuration profile associated with the selected operating mode; and (2) use parameter values specified by the accessed configuration profile to operate in the selected operating mode.

Accordingly, some embodiments provide for a system comprising: (1) an ultrasound device (e.g., a handheld ultrasound probe or a wearable ultrasound probe) having a plurality of ultrasonic transducers and control circuitry; and (2) a computing device (e.g., a mobile computing device such as a smartphone) that allows a user to select an operating mode for the ultrasound device (e.g., via a graphical user interface presented to the user via a display coupled to and/or integrated with the computing device) and provides an indication of the selected operating mode to the ultrasound device. In turn, the control circuitry in the ultrasound device may: (1) receive the indication of the selected operating mode; (2) responsive to receiving an indication of a first operating mode, obtain a first configuration profile specifying a first set of parameter values associated with the first operating mode, and control, using the first configuration profile, the ultrasound device to operate in the first operating mode; and (3) responsive to receiving an indication of a second operating mode, obtain a second configuration profile specifying a second set of parameter values associated with the second operating mode, and control, using the second configuration profile, the ultrasound device to operate in the second operating mode.

In some embodiments, different configuration profiles for different operating modes include different parameter values for one or more parameters used for operating the ultrasound device. For example, in some embodiments, different configuration profiles may specify different azimuth aperture values, elevation aperture values, azimuth focus values, elevation focus values, transducer bias voltage values, transmit peak-to-peak voltage values, transmit center frequency values, receive center frequency values, polarity values, ADC clock rate values, decimation rate values, and/or receive duration values. It should be appreciated that other parameters may be used in the configuration profiles, such as receive start time, receive offset, transmit spatial amplitude, transmit waveform, pulse repetition interval, axial resolution, lateral resolution at focus, elevational resolution at focus, and signal gain.
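To make the notion of a configuration profile concrete, a minimal sketch of one as a plain data record follows. The field names are invented for illustration, and the receive-side example values are placeholders rather than values taken from this disclosure; only the transmit-side example values come from the "abdomen" row of Tables 2-3 below.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConfigurationProfile:
    name: str
    tx_frequency_hz: float        # transmit center frequency
    rx_frequency_hz: float        # receive center frequency
    azimuth_aperture_m: float
    elevation_aperture_m: float
    azimuth_focus_m: float        # float("inf") for an unfocused (plane) wave
    elevation_focus_m: float
    bias_voltage_v: float         # transducer bias (may select collapsed mode)
    tx_peak_to_peak_v: float
    bipolar: bool                 # pulser polarity: bipolar vs. unipolar
    adc_clock_rate_hz: float
    decimation_rate: int
    rx_duration_us: float

# Transmit-side values from the "abdomen" row of Tables 2-3; the receive-side
# values below are placeholders, since Table 4's numbers are not reproduced here.
ABDOMEN = ConfigurationProfile(
    name="abdomen", tx_frequency_hz=3.5e6, rx_frequency_hz=3.5e6,
    azimuth_aperture_m=0.025, elevation_aperture_m=0.013,
    azimuth_focus_m=0.10, elevation_focus_m=0.07,
    bias_voltage_v=70.0, tx_peak_to_peak_v=31.0, bipolar=True,
    adc_clock_rate_hz=40e6, decimation_rate=4, rx_duration_us=200.0)
```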
It should be appreciated that two different configuration profiles for two different operating modes may differ in any suitable number of parameter values (e.g., at least one parameter value, at least two parameter values, at least five parameter values, etc.), as aspects of the technology described herein are not limited in this respect. The configuration profiles may differ in one or more of the above-described parameter values and/or any other suitable parameter values. Examples of parameter values for different operating modes are provided in Tables 2-4 below. Each of the rows of Tables 2-4 indicates illustrative parameter values for a particular configuration profile associated with a respective operating mode.

As one example, a first configuration profile for a first operating mode may specify a first azimuth aperture value and a second configuration profile for a second operating mode may specify a second azimuth aperture value different from the first azimuth aperture value. As another example, a first configuration profile for a first operating mode may specify a first elevation aperture value and a second configuration profile for a second operating mode may specify a second elevation aperture value different from the first elevation aperture value. The azimuth and elevation aperture values for an operating mode may control the size of the active aperture of the transducer array in the ultrasound probe. The physical aperture of the array, which is determined by the width and height of the array, may be different from the active aperture of the array as used in a particular operating mode. Indeed, the transducer arrangement may be configured to provide multiple possible active apertures. For example, only some of the transducers may be used for transmitting/receiving ultrasound signals, which results in the active aperture of the array being different from what the aperture would be if all the ultrasound transducers were used. In some embodiments, the azimuth and elevation aperture values may be used to determine which subset of transducer elements is to be used for transmitting/receiving ultrasound signals. In some embodiments, the azimuth and elevation aperture values may indicate, as a function of length, an extent of the transducer array used in the azimuthal and elevational orientations, respectively.

As another example, a first configuration profile for a first operating mode may specify a first azimuth focus value and a second configuration profile for a second operating mode may specify a second azimuth focus value different from the first azimuth focus value. As another example, a first configuration profile for a first operating mode may specify a first elevation focus value and a second configuration profile for a second operating mode may specify a second elevation focus value different from the first elevation focus value. The azimuthal and elevational focus values may be used to control the focal point of the transducer array independently in two dimensions. As such, different foci may be selected for the elevation and azimuthal dimensions. The focal point may be varied as between different operating modes, either independently or together. The azimuth and elevation focus values may be used to control the programmable delay mesh circuitry in order to operate the ultrasound probe to have the focal point defined by the azimuth and elevation focus values.
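As a rough illustration of how aperture and focus values could translate into element selection and per-element transmit timing, consider the sketch below. The array geometry (column count and pitch) is hypothetical, and the programmable delay mesh is abstracted to a simple geometric delay computation under an assumed speed of sound.

```python
import math

SPEED_OF_SOUND = 1540.0  # m/s, assumed soft-tissue average

def active_positions(n_cols: int, pitch_m: float, aperture_m: float) -> list:
    """Centered element positions (m) spanning the requested active aperture."""
    n_active = min(n_cols, max(1, round(aperture_m / pitch_m)))
    return [(i - (n_active - 1) / 2) * pitch_m for i in range(n_active)]

def focus_delays_s(positions_m: list, focus_m: float) -> list:
    """Per-element transmit delays for a focal depth; edge elements fire first."""
    if math.isinf(focus_m):              # INF focus -> unfocused plane wave
        return [0.0] * len(positions_m)
    path = [math.hypot(x, focus_m) for x in positions_m]
    return [(max(path) - p) / SPEED_OF_SOUND for p in path]

# Hypothetical geometry (140 columns at 200 um pitch) with "thyroid"-like
# values from Table 2: 0.011 m azimuth aperture focused at 0.045 m depth.
xs = active_positions(140, 200e-6, aperture_m=0.011)
print(len(xs), f"{max(focus_delays_s(xs, 0.045)) * 1e9:.0f} ns center delay")
```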
As another example, a first configuration profile for a first operating mode may specify a first bias voltage value (for biasing the voltage across one or more ultrasonic transducers of the ultrasound probe) and a second configuration profile for a second operating mode may specify a second bias voltage value different from the first bias voltage value. As described herein, ultrasonic transducers may operate in collapsed mode or in non-collapsed mode. In some embodiments, application of at least a threshold bias voltage (a "collapse" voltage) across one or more ultrasound transducers may cause these transducers to operate in collapsed mode.

As another example, a first configuration profile for a first operating mode may specify a first transmit peak-to-peak voltage value (e.g., the voltage value for the electrical signal representing the transmit waveform) and a second configuration profile for a second operating mode may specify a second transmit peak-to-peak voltage value different from the first transmit peak-to-peak voltage value. The transmit peak-to-peak voltage value may represent the peak-to-peak voltage swing in amplitude (in Volts) for the transducer driver. Different peak-to-peak voltage swings may be used in different operating modes. For example, an operating mode for near-field imaging may use a smaller peak-to-peak voltage swing (than what might be used in other operating modes) to prevent saturating the receivers in the near-field range. A higher peak-to-peak voltage swing may be used for deeper imaging, for generation of tissue harmonics, or for imaging with diverging or plane waves, for example.

As another example, a first configuration profile for a first operating mode may specify a first transmit center frequency value (e.g., the center frequency of an ultrasound signal transmitted by the ultrasound transducers) and a second configuration profile for a second operating mode may specify a second transmit center frequency value different from the first transmit center frequency value. In some embodiments, the difference between the first and second center frequencies may be at least 1 MHz, at least 2 MHz, or between 5 MHz and 10 MHz. In some embodiments, the first center frequency value may be within the 1-5 MHz range and the second center frequency value may be within the 5-9 MHz range. In some embodiments, the first center frequency value may be within the 2-4 MHz range and the second center frequency value may be within the 6-8 MHz range. As another example, a first configuration profile for a first operating mode may specify a first receive center frequency value (e.g., the center frequency of an ultrasound signal received by the ultrasound transducers) and a second configuration profile for a second operating mode may specify a second receive center frequency value different from the first receive center frequency value. In some embodiments, a configuration profile may specify a transmit center frequency value that is equal to the receive center frequency value. In other embodiments, a configuration profile may specify a transmit center frequency value that is not equal to the receive center frequency value. For example, the receive center frequency value may be a multiple of the transmit center frequency value (e.g., twice the transmit center frequency value, as may be the case in the context of harmonic imaging).
In some embodiments, the transducers may be capable of harmonic imaging using various pressures within about 0.1 to 1 MPa (including any value within that range) over a depth range of about 5 to 15 cm. The pressure may induce a harmonic vibration in the tissue, and the receiver may receive the signal at the harmonic mode and/or filter out the fundamental frequency.

As another example, a first configuration profile for a first operating mode may specify a first polarity value and a second configuration profile for a second operating mode may specify a second polarity value different from the first polarity value. In some embodiments, the polarity parameter may indicate whether to operate the pulsers (e.g., pulser208) in the transmit chain in unipolar mode or in bipolar mode. Operating pulsers in bipolar mode may be advantageous as it results in lower second harmonic distortion for some tissues. On the other hand, operating pulsers in unipolar mode may provide for greater transducer acoustic power for certain bias voltages.

As another example, a first configuration profile for a first operating mode may specify a first ADC clock rate value (e.g., the clock rate at which to operate one or more analog-to-digital converters on the ultrasound device) and a second configuration profile for a second operating mode may specify a second ADC clock rate value different from the first ADC clock rate value. The ADC clock rate value may be used to set the rate at which to operate one or more ADCs in the receive circuitry of the ultrasound probe (e.g., ADC212, part of receive circuitry106shown inFIG.2). As another example, a first configuration profile for a first operating mode may specify a first decimation rate value and a second configuration profile for a second operating mode may specify a second decimation rate value different from the first decimation rate value. In some embodiments, the decimation rate value may be used to set the rate of decimation performed by one or more components in the receive circuitry of the ultrasound probe. The decimation rate value and the ADC clock rate value together determine the bandwidth of the receiver in an operating mode, which bandwidth in turn defines the axial resolution of the operating mode. In addition, the ratio of the ADC rate and the decimation rate provides the effective sampling rate of the receiver in the operating mode. As another example, a first configuration profile for a first operating mode may specify a first receive duration value and a second configuration profile for a second operating mode may specify a second receive duration value different from the first receive duration value. The receive duration value indicates a length of time over which a receiver acquires samples.
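The arithmetic relating the ADC clock rate, decimation rate, effective sampling rate, receiver bandwidth, and axial resolution can be sketched as follows. The numeric values are illustrative rather than taken from the tables below, and the axial-resolution line uses the standard c/(2B) approximation, which is not specified by this disclosure.

```python
SPEED_OF_SOUND_M_S = 1540.0      # assumed soft-tissue average

adc_clock_rate_hz = 40e6         # illustrative values, not from Table 4
decimation_rate = 8

# Effective sampling rate = ADC rate / decimation rate (as stated above)
effective_sampling_rate_hz = adc_clock_rate_hz / decimation_rate   # 5 MHz
receive_bandwidth_hz = effective_sampling_rate_hz  # order-of-magnitude bound
axial_resolution_m = SPEED_OF_SOUND_M_S / (2.0 * receive_bandwidth_hz)

print(f"{effective_sampling_rate_hz / 1e6:.1f} MHz effective sampling rate")
print(f"~{axial_resolution_m * 1e3:.2f} mm axial resolution")   # ~0.15 mm
```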
In some embodiments, an ultrasound probe may be configured to operate in an operating mode for cardiac imaging, an operating mode for abdominal imaging, an operating mode for kidney imaging, an operating mode for liver imaging, an operating mode for ocular imaging, an operating mode for imaging the carotid artery, an operating mode for imaging the inferior vena cava, and/or an operating mode for small parts imaging. In some embodiments, an ultrasound probe may be configured to operate in at least some (e.g., at least two, at least three, at least five, all) of these operating modes such that a user may use a single ultrasound probe to perform multiple different types of imaging. This allows a single ultrasound probe to perform imaging tasks that, conventionally, could be accomplished only by using multiple different ultrasound probes.

In some embodiments, a user may select an operating mode for an ultrasound probe through a graphical user interface presented by a mobile computing device (e.g., a smartphone) coupled to the ultrasound probe (via a wired or a wireless connection). For example, the graphical user interface may present the user with a menu of operating modes (e.g., as shown inFIGS.12A and12B) and the user may select (e.g., by tapping on a touchscreen, clicking with a mouse, etc.) one of the operating modes in the graphical user interface. In turn, the mobile computing device may provide an indication of the selected operating mode to the ultrasound device. The ultrasound device may obtain a configuration profile associated with the selected operating mode (e.g., by accessing it in a memory on the ultrasound device or receiving it from the mobile computing device) and use the parameter values specified therein to operate in the selected operating mode. In some embodiments, the ultrasound device may provide, to the mobile computing device, data obtained through operation in a particular operating mode. The mobile computing device may process the received data to generate one or more ultrasound images and may present the generated ultrasound image(s) to the user through the display of the mobile computing device.

In some embodiments, the plurality of ultrasonic transducers includes a plurality of metal oxide semiconductor (MOS) ultrasonic transducers (e.g., CMOS ultrasonic transducers). In some embodiments, a MOS ultrasonic transducer may include a cavity formed in a MOS wafer, with a membrane overlying and sealing the cavity. In some embodiments, the plurality of ultrasonic transducers includes a plurality of micromachined ultrasonic transducers (e.g., capacitive micromachined ultrasonic transducers). In some embodiments, the plurality of ultrasonic transducers includes a plurality of piezoelectric ultrasonic transducers.

In some embodiments, a selection of an operating mode may be provided to a handheld ultrasound probe through a mobile computing device coupled (e.g., via a wired connection) to the ultrasound probe. In other embodiments, the selection of an operating mode may be provided to the ultrasound probe directly. For example, the ultrasound probe may comprise a mechanical control mechanism (e.g., a switch, a button, a wheel, etc.) for selecting an operating mode. As another example, the ultrasound probe may comprise a display (e.g., as shown inFIGS.6A-B) and use the display to present a GUI to a user through which the user may select an operating mode for the ultrasound probe.

FIG.9is a diagram illustrating how a universal ultrasound device may be used to image a subject, in accordance with some embodiments of the technology described herein. In particular,FIG.9shows an illustrative ultrasound system900comprising an ultrasound device902communicatively coupled to computing device904via communication link912. The ultrasound device902may be used to image subject901in any of a plurality of operating modes, examples of which are provided herein. In some embodiments, the operating mode in which to operate ultrasound device902may be selected by a user of computing device904.
For example, in the illustrated embodiment, computing device904comprises a display906and is configured to present, via display906, a graphical user interface comprising a menu910of different operating modes (further examples are shown inFIGS.12A-B). The graphical user interface may comprise a GUI element (e.g., an icon, an image, text, etc.) for each of the operating modes that may be selected. A user may select one of the displayed menu options by tapping the screen of the computing device (when the display comprises a touch screen), using a mouse, a keyboard, voice input, or in any other suitable way. After receiving the user's selection, the computing device904may provide an indication of the selected operating mode to the ultrasound device902via communication link912.

In some embodiments, responsive to receiving an indication of a selected operating mode from computing device904, ultrasound device902may access a configuration profile associated with the operating mode. The configuration profile may specify values of one or more parameters, which are to be used for configuring the ultrasound probe to function in the selected operating mode. Examples of such parameter values are provided herein. In some embodiments, the configuration profile for the selected mode may be stored onboard ultrasound probe902(e.g., in the configuration profile memory1302shown inFIG.13). In other embodiments, the configuration profile for the selected mode may be provided, via communication link912, to the ultrasound probe from the computing device904. In yet other embodiments, one or more of the parameter values of a configuration profile may be stored onboard the ultrasound probe and one or more other parameter values of the configuration profile may be provided, via communication link912, to the ultrasound probe from the computing device904.

In some embodiments, data obtained by the ultrasound device902during operation in a particular operating mode may be provided, via communication link912, to computing device904. The computing device904may process the received data to generate one or more ultrasound images and display the generated ultrasound image(s) via display906(e.g., as shown inFIG.14). In some embodiments, ultrasound probe902may be a handheld ultrasound probe of any suitable type described herein including, for example, the ultrasound probe illustrated inFIG.8. In some embodiments, the handheld ultrasound probe may comprise a display and, for example, may be an ultrasound probe of the kind illustrated inFIGS.6A-6B(in such embodiments, some or all of the functionality performed by the computing device904may be performed onboard the ultrasound probe). In other embodiments, ultrasound probe902may be a wearable ultrasound probe and, for example, may be a skin-mountable ultrasound patch such as the patch illustrated inFIGS.7A-7D.

FIG.13shows a block diagram of ultrasound device902, according to some embodiments. As shown inFIG.13, ultrasound device902may include components shown and described with reference toFIG.1B, including one or more transducer arrangements (e.g., arrays)102, transmit (TX) circuitry104, receive (RX) circuitry106, a timing & control circuit108, a signal conditioning/processing circuit110, a power management circuit118, and/or a high-intensity focused ultrasound (HIFU) controller120. In the embodiment shown, all of the illustrated elements are formed on a single semiconductor die112.
It should be appreciated, however, that in alternative embodiments one or more of the illustrated elements may instead be located off-chip. Additionally, as shown in the embodiment ofFIG.13, ultrasound device902may comprise a configuration profile memory1302, which may store one or more configuration profiles for a respective one or more operating modes. For example, in some embodiments, configuration profile memory1302may store parameter values for each of one or more configuration profiles. In some embodiments, control circuitry (e.g., circuitry108) may be configured to access, in the configuration profile memory1302, parameter values for a selected configuration profile and configure one or more other components of the ultrasound probe (e.g., transmit circuitry, receive circuitry, ultrasound transducers, etc.) to operate in accordance with the accessed parameter values.

In some embodiments, computing device904may be a portable device. For example, computing device904may be a mobile phone, a smartphone, a tablet computer, or a laptop. The computing device904may comprise a display, which may be of any suitable type, and/or may be communicatively coupled to a display external to the computing device904. In other embodiments, the computing device904may be a fixed device (e.g., a desktop computer, a rackmount computer, etc.). In some embodiments, communication link912may be a wired link. In other embodiments, communication link912may be a wireless link (e.g., a Bluetooth or Wi-Fi connection).

FIG.10is a flowchart of an illustrative process1000for operating a universal ultrasound device, in accordance with some embodiments of the technology described herein. Illustrative process1000may be performed by any suitable device(s) and, for example, may be performed by ultrasound device902and computing device904described with reference toFIG.9. As another example, in some embodiments, an ultrasound device may perform all acts of process1000.

Process1000begins at act1002, where a graphical user interface (GUI) showing multiple operating modes is shown on a display. The display may be part of a computing device communicatively coupled to an ultrasound probe (e.g., the display of a mobile smartphone). The GUI may include a GUI element (e.g., an icon, an image, a text portion, a menu item, etc.) for each of the multiple operating modes. In some embodiments, each GUI element representing an operating mode may be selectable by a user, for example, through tapping with a finger or stylus (when the display is a touchscreen) and/or clicking. Additionally or alternatively, a GUI element may be selected through keyboard input and/or voice input, as aspects of the technology described herein are not limited in this respect. In some embodiments, the GUI may be generated by an application program executing on the computing device. For example, the GUI may be generated by an application program ("app") executing on a mobile smartphone. The application program may be configured to not only generate and display the GUI, but also receive a user's selection of the operating mode at act1004and provide an indication of the user's selection to the ultrasound device at act1006. Additionally, in some embodiments, the application program may be configured to receive data gathered by the ultrasound device, generate one or more ultrasound images using the data, and display the generated ultrasound image(s) using the display of the mobile computing device.
In other embodiments, the application program may receive ultrasound image(s) generated onboard an ultrasound device (rather than generating the ultrasound image(s) itself) and display them.

FIG.11shows an example GUI1100that may be displayed as part of act1002. The GUI1100comprises a first portion1102containing GUI elements1104,1106,1108, and1110representing different operating modes. Although the GUI1100shows a menu of four operating modes, this is merely for illustration and not by way of limitation, as a GUI may show any suitable number of operating modes. Furthermore, in some embodiments, a GUI may show some of the operating modes and allow the user to reveal additional operating modes, for example, by scrolling or navigating the GUI in any other suitable way. The illustrative GUI1100also comprises a second portion1112showing a GUI element corresponding to the "Cancel" option, which allows a user to not select any of the operating modes shown in first portion1102.

FIG.12Ashows another example GUI1202that may be displayed as part of act1002. The GUI1202comprises multiple selectable GUI elements corresponding to respective operating modes, including GUI elements1204,1206,1208,1210, and1212corresponding, respectively, to operating modes for performing abdominal imaging, small parts imaging, cardiac imaging, lung imaging, and ocular imaging. The "Shop Presets" GUI element1214allows a user to download (e.g., through a purchase) one or more configuration profiles for additional operating mode(s). After the additional configuration profiles are downloaded, they may be used to control the ultrasound device to operate in the additional operating mode(s). GUI1202also includes a GUI element1216corresponding to the "Cancel" option, which may allow a user to not select any of the operating modes shown in GUI1202. Additionally, as shown inFIG.12A, GUI1202includes an operating mode indicator1218, which indicates a highlighted operating mode. As the user scrolls through different operating modes (e.g., by swiping along a touch screen, scrolling using a mouse or keyboard, etc.), different operating modes may be highlighted by operating mode indicator1218. A user may select a highlighted operating mode (e.g., by tapping, clicking, etc.). In response, the GUI may provide the user with a visual confirmation of his/her selection. For example, as shown inFIG.12B, after a user selects the cardiac operating mode, the GUI provides a visual confirmation of the user's selection by changing the color of the operating mode indicator.

It should be appreciated that an operating mode indicator need not be implemented through a colored box or other shape surrounding text identifying the mode. For example, in some embodiments, an operating mode indicator may be provided by underlining text, changing text size, changing the font, italicizing text, and/or in any other suitable way. While in some embodiments the operating mode indicator may be visual, in yet other embodiments the operating mode indicator may be provided as an audio indicator (e.g., through playback of recorded or synthesized speech indicating the operating mode). Similarly, a visual confirmation of a selection may be provided to the user in any suitable way and, in some embodiments, an audio confirmation may be provided in addition to or instead of the visual confirmation.
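A console-based stand-in for the menu interaction of acts1002and1004is sketched below. It is deliberately minimal and hypothetical; on an actual system this interaction would be a touchscreen GUI such as GUI1100or GUI1202rendered by the application program.

```python
OPERATING_MODES = ["Abdomen", "Small Parts", "Cardiac", "Lung", "Ocular"]

def choose_mode() -> str:
    """Act 1002: show the modes; act 1004: capture the user's selection."""
    for number, mode in enumerate(OPERATING_MODES, start=1):
        print(f"{number}. {mode}")
    selection = int(input("Select operating mode: ")) - 1
    return OPERATING_MODES[selection]
```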
After the GUI is displayed at act1002, process1000proceeds to act1004, where a user's selection of an operating mode is received (e.g., by computing device904). As discussed, the user may select one of the operating modes through the GUI by using a touchscreen, mouse, keyboard, voice input, and/or any other suitable way. The GUI may provide the user with a visual confirmation of the selection.

Next, process1000proceeds to act1006, where an indication of the selected operating mode is provided to the ultrasound device. For example, an indication of the selected operating mode may be provided by computing device904to ultrasound device902. The indication may be provided in any suitable format, as aspects of the technology are not limited in this respect. In some embodiments, the indication may include at least a portion (e.g., all) of a configuration profile associated with the selected operating mode. For example, the indication may include one or more parameter values for the selected operating mode. In other embodiments, however, the indication may include information identifying the selected operating mode, but not include any of the parameter values for the mode, which parameter values may be stored onboard the ultrasound device.

Next, at act1008, the ultrasound device obtains a configuration profile for the selected operating mode. In some embodiments, the configuration profile may be stored in at least one memory onboard the ultrasound device (e.g., configuration profile memory1302shown inFIG.13). In other embodiments, at least some of the configuration profile (e.g., at least some of the parameter values) may be provided to the ultrasound probe from an external device (e.g., computing device904may transmit to ultrasound probe902at least some or all of the configuration profile for the selected operating mode).

Next, at act1010, the parameter values in the obtained configuration profile may be used to configure the ultrasound device to operate in the selected mode. The parameter values may be used to configure one or more components of the ultrasound device and, to this end, may be loaded into one or more registers, memories, and the like, from which locations they may be utilized by ultrasound probe circuitry during operation in the selected operating mode.

Next, at act1012, the ultrasound device may be operated in the selected operating mode using the parameter values specified in the configuration profile for the selected operating mode. For example, in some embodiments, control circuitry of an ultrasound device (e.g., control circuitry108shown inFIG.13) may control one or more components of the ultrasound probe (e.g., waveform generator, programmable delay mesh circuitry, transmit circuitry, receive circuitry, etc.) using the parameter values specified in the configuration profile.
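A condensed sketch of acts1008through1012follows. The names are hypothetical stand-ins rather than actual firmware interfaces: profiles are shown as plain dictionaries (they could equally be records like the configuration profile sketched earlier), and hardware register programming is stubbed out.

```python
PROFILE_MEMORY = {  # stands in for configuration profile memory1302
    "abdomen": {"tx_frequency_hz": 3.5e6, "bias_voltage_v": 70.0},
    "cardiac": {"tx_frequency_hz": 2.3e6, "bias_voltage_v": 60.0},
}

class ProbeStub:
    """Stand-in for control circuitry; a real device writes hardware registers."""
    def configure(self, profile: dict) -> None:    # act 1010
        self.profile = profile
    def run(self) -> str:                          # act 1012
        return f"imaging at {self.profile['tx_frequency_hz'] / 1e6} MHz"

def select_operating_mode(device, mode_name: str, provided_profile=None) -> str:
    # Act 1008: the profile may live onboard or be pushed by the computing device
    profile = provided_profile or PROFILE_MEMORY[mode_name]
    device.configure(profile)
    return device.run()

print(select_operating_mode(ProbeStub(), "cardiac"))  # imaging at 2.3 MHz
```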
As described herein, data obtained by an ultrasound probe may be processed to generate an ultrasound image. In some embodiments, data obtained by an ultrasound probe may be provided to a computing device (e.g., computing device904) and processed to generate one or more ultrasound images. In turn, the ultrasound image(s) may be presented to a user through a display of the computing device. FIG.14shows an example of a graphical user interface1400configured to show one or more ultrasound images (e.g., a single ultrasound image, a series or movie of ultrasound images) to a user. In the example ofFIG.14, the GUI1400displays ultrasound images in image portion1406. In addition to an ultrasound image, the GUI1400may include other components such as status bar1402, scale1404, and selectable options1408,1410,1412, and1414. In some embodiments, the status bar1402may display information about the state of the ultrasound device such as, for example, the operating frequency and/or a battery life indicator. In some embodiments, the scale1404may show a scale for the image portion1406. The scale1404may correspond to depth, size, or any other suitable parameter for display with the image portion1406. In some embodiments, the image portion1406may be displayed without the scale1404. In some embodiments, the selectable option1408may allow a user to access one or more preset operating modes, selected from one or more operating modes of the ultrasound device. The selectable option1410may allow the user to take a still image of the image portion1406. The selectable option1412may allow the user to record a video of the image portion1406. The selectable option1414may allow a user to access any or all of the operating modes of the ultrasound device. In some embodiments, any or all of the selectable options1408,1410,1412, and1414may be displayed, while in other embodiments none of them may be displayed.

As described above, in some embodiments, an ultrasound probe may be operated in one of multiple operating modes, each of which is associated with a respective configuration profile. A configuration profile for an operating mode may specify one or more parameter values used by the ultrasound probe to function in the operating mode. Tables 2-4 shown below illustrate parameter values in configuration profiles for a plurality of illustrative operating modes. The parameter values for a particular configuration profile are shown in a particular row across all three tables (a single table showing all the parameter values was split into three tables for ease of presentation, with the first two columns repeated in each table to simplify cross-referencing). Accordingly, each row in Tables 2-4 specifies parameter values of a configuration profile for a particular operating mode. For example, parameter values for an operating mode for abdominal imaging may be found in the first row of Table 2, the first row of Table 3, and the first row of Table 4. As another example, parameter values for an operating mode for thyroid imaging may be found in the last row of Table 2, the last row of Table 3, and the last row of Table 4. As may be appreciated from Tables 2-4, while some parameter values change across modes, others may be the same in multiple modes, as not all parameter values for different modes differ from one another. However, one or more parameter values are different for any two given operating modes. It should be appreciated that the values shown in Tables 2-4 are examples of possible parameters. Any range of values suitable for the operating modes is possible; for example, for any of the values listed, alternative values within +/-20% may be used.
Table 2 illustrates parameter values for multiple operating modes, including: (1) parameter values for the “TX Frequency (Hz)” parameter, which indicate transmit center frequency values and, in this example, are specified in Hertz; (2) parameter values for the “TX # cycles” parameter, which may indicate the number of transmit cycles used by the transducer array; (3) parameter values for the “TX Az. Focus (m)” parameter, which may indicate the azimuth focus values and, in this example, are specified in meters; (4) parameter values for the “TX El. Focus (m)” parameter, which may indicate elevation focus values and, in this example, are specified in meters; (5) parameter values for the “TX Az. F#” parameter, which may indicate the F numbers of the transmitter azimuth (and may be obtained by dividing the azimuth focus value by the azimuth aperture value); (6) parameter values for the “TX El. F#” parameter, which may indicate the F numbers of the transmitter elevation (and may be obtained by dividing the elevation focus value by the elevation aperture value); and (7) parameter values for the “TX Az. Aperture (m)” parameter, which may indicate azimuth aperture values and, in this example, are specified in meters. In some embodiments the “TX Az. Aperture” parameter may have a range of 1.8-3.5 cm (e.g., 1.9-3.4 cm or 2.0-3.3 cm). Table 2 also includes a column for “TX El. Aperture (m)”, which is duplicated in Table 3 for purposes of simplicity and described further below in connection with Table 3.

TABLE 2
Illustrative parameter values for configuration profiles associated with different operating modes.

Preset Name | TX Frequency (Hz) | TX # cycles | TX Az. Focus (m) | TX El. Focus (m) | TX Az. F# | TX El. F# | TX Az. Aperture (m) | TX El. Aperture (m)
abdomen | 3500000 | 1 | 0.1 | 0.07 | 4 | 5 | 0.025 | 0.013
abdomen_thi | 1750000 | 1 | 0.1 | 0.07 | 4 | 5 | 0.025 | 0.013
abdomen_vascular | 2800000 | 2 | 0.18 | INF | 2 | INF | 0.028 | 0.013
abdomen_vascular_thi | 1600000 | 1 | 0.08 | INF | 2 | INF | 0.028 | 0.013
cardiac | 2300000 | 2 | 0.14 | INF | 3 | INF | 0.028 | 0.013
cardiac_thi | 1500000 | 2 | 0.06 | INF | 3 | INF | 0.020 | 0.013
carotid | 8100000 | 2 | 0.045 | 0.06 | 4 | 18 | 0.011 | 0.003
carotid_flow | 4000000 | 4 | INF | 0.1 | 1 | 1 | 0.028 | 0.013
interleave_cardiac_flow_bmode | 3000000 | 1 | INF | 0.09 | 2 | 2 | 0.028 | 0.013
interleave_cardiac_flow_color | 2000000 | 4 | INF | 0.1 | 2 | 7.5 | 0.028 | 0.013
interleave_carotid_flow_bmode | 7500000 | 2 | INF | 0.06 | 2 | 9 | 0.028 | 0.007
interleave_carotid_flow_color | 4000000 | 4 | INF | 0.05 | 1 | 1 | 0.028 | 0.013
joint | 7000000 | 1 | INF | 0.03 | 1 | 6.5 | 0.028 | 0.005
joint_power | 4000000 | 4 | INF | 0.01 | 1 | 0.75 | 0.028 | 0.013
m_mode | 4000000 | 1 | INF | 0.03 | 2 | 2 | 0.028 | 0.013
msk | 6200000 | 2 | INF | 0.03 | 2 | 6.5 | 0.028 | 0.005
msk_superficial | 8300000 | 1 | 0.02 | 0.02 | - | - | 0.005 | 0.004
obstetric | 3500000 | 1 | 0.14 | 0.09 | 4 | 5 | 0.028 | 0.013
patch_kidney | 3000000 | 2 | INF | 0.06 | 2 | 2 | 0.028 | 0.013
thyroid | 7500000 | 2 | 0.045 | 0.035 | 4 | 2.5 | 0.011 | 0.013

Table 3 illustrates additional parameter values for the multiple operating modes, including: (1) parameter values for the “TX El. Aperture (m)” parameter, which may indicate elevation aperture values and, in this example, are specified in meters. In some embodiments the TX El. Aperture parameter may have a range of 1.5-2.5 cm (e.g., 1.75-2.25 cm); (2) parameter values for the “Bias Voltage (V)” parameter, which may indicate transducer bias voltage values and, in this example, are specified in Volts; (3) parameter values for the “TX Pk-Pk. Voltage (V)” parameter, which may indicate transmit peak-to-peak voltage values and, in this example, are specified in Volts; and (4) parameter values for the “Bipolar?” parameter, which may indicate polarity values, which in this example are either unipolar or bipolar.
TABLE 3
Illustrative parameter values (for additional parameters) in the configuration profiles of Table 2.

Preset Name | TX Frequency (Hz) | TX El. Aperture (m) | Bias Voltage (V) | TX Pk-Pk. Voltage (V) | Bipolar?
abdomen | 3500000 | 0.013 | 70 | 31 | TRUE
abdomen_thi | 1750000 | 0.013 | 70 | 38 | FALSE
abdomen_vascular | 2800000 | 0.013 | 60 | 40 | TRUE
abdomen_vascular_thi | 1600000 | 0.013 | 60 | 40 | TRUE
cardiac | 2300000 | 0.013 | 60 | 31 | TRUE
cardiac_thi | 1500000 | 0.013 | 60 | 40 | TRUE
carotid | 8100000 | 0.003 | 80 | 20 | TRUE
carotid_flow | 4000000 | 0.013 | 80 | 41 | TRUE
interleave_cardiac_flow_bmode | 3000000 | 0.013 | 48 | 31 | FALSE
interleave_cardiac_flow_color | 2000000 | 0.013 | 48 | 31 | FALSE
interleave_carotid_flow_bmode | 7500000 | 0.007 | 80 | 41 | TRUE
interleave_carotid_flow_color | 4000000 | 0.013 | 80 | 41 | TRUE
joint | 7000000 | 0.005 | 90 | 18 | FALSE
joint_power | 4000000 | 0.013 | 90 | 16 | FALSE
m_mode | 4000000 | 0.013 | 60 | 31 | FALSE
msk | 6200000 | 0.005 | 90 | 18 | FALSE
msk_superficial | 8300000 | 0.004 | 90 | 20 | TRUE
obstetric | 3500000 | 0.013 | 70 | 31 | TRUE
patch_kidney | 3000000 | 0.013 | 60 | 25 | FALSE
thyroid | 7500000 | 0.013 | 80 | 20 | TRUE
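The bias voltages in Table 3 range from 48 V to 90 V across the presets. The following hypothetical sketch (invented logic, not an algorithm given in this disclosure) illustrates how control circuitry might use a profile's bias voltage to place capacitive transducers in non-collapsed or collapsed operation; collapsed-mode operation above a collapse voltage is described in the examples later in this disclosure, and the 30 V threshold used here echoes the "at least 30 Volts" figure mentioned there.

```python
# Hypothetical sketch: using a profile's bias voltage to decide whether
# capacitive transducers run collapsed. The 30 V threshold echoes the
# "at least 30 Volts" collapse voltage mentioned in the examples below;
# a real device would use a measured, device-specific collapse voltage.
COLLAPSE_VOLTAGE_V = 30.0

def bias_state(bias_voltage_v: float) -> str:
    if bias_voltage_v > COLLAPSE_VOLTAGE_V:
        # Membrane pulled into contact with the cavity bottom: collapsed
        # operation, shifting the transducer toward higher frequencies.
        return "collapsed"
    return "non-collapsed"

print(bias_state(70))  # Table 3 "abdomen" bias voltage -> collapsed
print(bias_state(20))  # hypothetical sub-collapse bias -> non-collapsed
```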
Table 4 illustrates additional parameter values for the multiple operating modes, including: (1) parameter values for the “RX Frequency (Hz)” parameter, which may indicate receive center frequency values and, in this example, are specified in Hertz; (2) parameter values for the “ADC Rate (Hz)” parameter, which may indicate ADC clock rate values and, in this example, are specified in Hertz; (3) parameter values for the “Decimation Rate” parameter, which may indicate decimation rate values; (4) parameter values for the “Bandwidth (Hz)” parameter, which may indicate bandwidths of the receiver and, in this example, are specified in Hertz; (5) parameter values for the “Low (Hz)” parameter and for the “High (Hz)” parameter, which respectively indicate the low and high cutoffs of the operating frequency range and, in this example, are specified in Hertz; (6) parameter values for the “RX Depth (m)” parameter, which may be provided in meters; and (7) parameter values for the “RX Duration (us)” parameter, which may indicate receiver duration values and, in this example, are specified in microseconds.

TABLE 4
Illustrative parameter values (for more additional parameters) in the configuration profiles of Table 2.

Preset Name | TX Frequency (Hz) | RX Frequency (Hz) | ADC Rate (Hz) | Decimation Rate | Bandwidth (Hz) | Low (Hz) | High (Hz) | RX Depth (m) | RX Duration (us)
abdomen | 3500000 | 3500000 | 25000000 | 4 | 3125000 | 1937500 | 5062500 | 0.12 | 80.0
abdomen_thi | 1750000 | 3500000 | 25000000 | 4 | 3125000 | 1937500 | 5062500 | 0.12 | 80.0
abdomen_vascular | 2800000 | 2800000 | 25000000 | 6 | 2083333 | 1758333 | 3841667 | 0.15 | 100.0
abdomen_vascular_thi | 1600000 | 3200000 | 25000000 | 6 | 2083333 | 2158333 | 4241667 | 0.15 | 100.0
cardiac | 2300000 | 2300000 | 25000000 | 6 | 2083333 | 1258333 | 3341667 | 0.15 | 100.0
cardiac_thi | 1500000 | 3000000 | 25000000 | 6 | 2083333 | 1958333 | 4041667 | 0.15 | 100.0
carotid | 8100000 | 8100000 | 25000000 | 3 | 4166667 | 6016667 | 10183333 | 0.04 | 26.7
carotid_flow | 4000000 | 4000000 | 25000000 | 8 | 1562500 | 3218750 | 4781250 | 0.04 | 26.7
interleave_cardiac_flow_bmode | 3000000 | 3000000 | 25000000 | 16 | 781250 | 2609375 | 3390625 | 0.035 | 23.3
interleave_cardiac_flow_color | 2000000 | 2000000 | 25000000 | 16 | 781250 | 1609375 | 2390625 | 0.035 | 23.3
interleave_carotid_flow_bmode | 7500000 | 7500000 | 25000000 | 5 | 2500000 | 6250000 | 8750000 | 0.16 | 106.7
interleave_carotid_flow_color | 4000000 | 4000000 | 25000000 | 9 | 1388889 | 3305556 | 4694444 | 0.16 | 106.7
joint | 7000000 | 7000000 | 25000000 | 3 | 4166667 | 4916667 | 9083333 | 0.03 | 20.0
joint_power | 4000000 | 4000000 | 25000000 | 5 | 2500000 | 2750000 | 5250000 | 0.025 | 16.7
m_mode | 4000000 | 4000000 | 25000000 | 8 | 1562500 | 3218750 | 4781250 | 0.15 | 100.0
msk | 6200000 | 6200000 | 25000000 | 3 | 4166667 | 4116667 | 8283333 | 0.04 | 26.7
msk_superficial | 8300000 | 8300000 | 25000000 | 2 | 6250000 | 5175000 | 11425000 | 0.02 | 13.3
obstetric | 3500000 | 3500000 | 25000000 | 4 | 3125000 | 1937500 | 5062500 | 0.15 | 100.0
patch_kidney | 3000000 | 3000000 | 25000000 | 9 | 1388889 | 2305556 | 3694444 | 0.11 | 73.3
thyroid | 7500000 | 7500000 | 25000000 | 4 | 3125000 | 5937500 | 9062500 | 0.04 | 26.7

As previously described, different resolutions may be achieved or provided with the different operating modes, in at least some embodiments. For example, the operating modes reflected in Tables 2-4 may provide axial resolutions ranging between 300 μm and 2,000 μm, including any value within that range, as well as other ranges. The same operating modes may provide lateral resolution at the focus between 200 μm and 5,000 μm, including any value within that range, as well as other ranges. The same operating modes may provide elevational resolution at the focus between 300 μm and 7,000 μm, including any value within that range, as well as other ranges. As a non-limiting example, the first “abdomen” mode may provide an axial resolution of approximately 400 μm, a lateral resolution at focus of approximately 2,000 μm, and an elevational resolution at focus of approximately 2,700 μm. By contrast, the “interleave_cardiac_flow_color” mode may provide an axial resolution of approximately 1,700 μm, a lateral resolution at focus of approximately 900 μm, and an elevational resolution at focus of approximately 7,000 μm. These represent non-limiting examples.

As has been described, ultrasound devices (e.g., probes) according to aspects of the present application may be used in various modes with various associated frequency ranges and depths. Thus, ultrasound devices according to the various aspects herein may be used to generate differing ultrasound beams. To illustrate the point, various non-limiting examples are now described with respect toFIGS.15-17. The ultrasound devices described herein may, in at least some embodiments, generate two or more ultrasound beam types (e.g., beam shapes) typically associated with linear, sector, curvilinear (convex), and mechanically scanned (moved) probes.
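As a generic illustration of how a single flat array can produce such different beam types by changing only its per-element transmit delays, the following sketch uses textbook delay-and-sum geometry; it is not taken from this disclosure, and the speed of sound, element count, and pitch are assumed values (the assumed 128-element, 200 μm-pitch aperture happens to span roughly 2.5 cm, comparable to the azimuth apertures in Table 2). Focusing delays concentrate energy at a point, as for a linear-style scan line, while steering a plane wavefront across a range of angles traces out a sector.

```python
# Generic delay-and-sum transmit delays (textbook geometry, not from the
# patent): focused delays yield a linear-style line; sweeping steered
# plane-wave delays across angles yields a sector.
import math

C = 1540.0      # assumed speed of sound in tissue, m/s
PITCH = 200e-6  # assumed element pitch, m
ELEMENT_X = [(i - 63.5) * PITCH for i in range(128)]  # centered aperture

def focused_delays(focus_depth_m: float) -> list[float]:
    """Delays (s) that focus the transmit at (0, focus_depth_m)."""
    path = [math.hypot(x, focus_depth_m) for x in ELEMENT_X]
    longest = max(path)
    # Elements farthest from the focus fire first (zero delay).
    return [(longest - p) / C for p in path]

def steered_delays(angle_rad: float) -> list[float]:
    """Delays (s) that steer a plane wavefront by angle_rad (one sector line)."""
    raw = [x * math.sin(angle_rad) / C for x in ELEMENT_X]
    base = min(raw)
    return [r - base for r in raw]  # keep all delays non-negative

focused = focused_delays(0.10)  # one linear-style transmit, 10 cm focus
sector = [steered_delays(math.radians(a)) for a in range(-45, 46, 3)]
```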
In at least some embodiments, an ultrasound probe according to aspects of the present application may generate ultrasound beams typically associated with all of linear, sector, curvilinear, and mechanically scanned probes.

FIG.15shows one example of a beam shape for an ultrasound probe according to a non-limiting embodiment of the present application. As illustrated inFIG.15, the ultrasound probe may utilize a linear beam shape1502generated by the transducer array1500. It should be appreciated that the beam shape1502may be based on accumulated azimuth transmit intensities over a spatial region. By acquiring multiple elevational transmit angles and/or focuses and coherently summing them, the azimuthal beam shape can effectively become a narrow slice. The depth of the waist of the beam shape1502may be fixed at a location that is appropriate based on the attenuation of the frequency being used. In some embodiments, a linear beam shape1502may be used at 3-7 MHz, 5-12 MHz, or 7-15 MHz. The linear beam shape1502may provide higher resolution and shallower imaging at increasing frequencies.

FIG.16shows another example of a beam shape for an ultrasound probe according to a non-limiting embodiment. As illustrated inFIG.16, the ultrasound probe may utilize a sector beam shape1602generated by the transducer array1500. It should be appreciated that the beam shape1602may be based on accumulated azimuth transmit intensities over a spatial region. In some embodiments, a sector beam shape1602may be used at 1-3 MHz, 2-5 MHz, or 3.6-10 MHz. These frequency ranges may be used for cardiac, abdominal, pelvic, or thoracic imaging, for example. In some embodiments, the sector beam shape1602may be suitable for deep tissue imaging.

FIG.17shows another example of a beam shape for an ultrasound probe. As illustrated inFIG.17, the ultrasound probe may utilize a 3D beam shape1702generated by the transducer array1500. It should be appreciated that the beam shape1702may be based on accumulated azimuth transmit intensities over a spatial region. In some embodiments, a 3D beam shape1702may be used at 3.5-6.5 MHz or 7.5-11 MHz. In some embodiments the 3D beam shape1702may be a result of electronically scanning/sweeping either a sector or curvilinear profile, without mechanically scanning the probe. In some embodiments, the 3D beam shape1702may be suitable for 3D volume imaging.

According to at least some embodiments of the present application, an ultrasound probe may generate all the beam shapes shown inFIGS.15-17, as well as potentially generating additional beam shapes. For example, transducer array1500may generate all the beam shapes shown inFIGS.15-17. Moreover, as has been described herein, the various modes of operation, and the various associated beam shapes, may be generated with a substantially flat ultrasonic transducer array. Thus, in at least some embodiments, beam shapes typically associated with a curvilinear transducer array may instead be achieved with a substantially flat ultrasonic transducer arrangement.

Having thus described several aspects and embodiments of the technology set forth in the disclosure, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the technology described herein.
For example, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the embodiments described herein. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described. In addition, any combination of two or more features, systems, articles, materials, kits, and/or methods described herein, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure. The above-described embodiments can be implemented in any of numerous ways. One or more aspects and embodiments of the present disclosure involving the performance of processes or methods may utilize program instructions executable by a device (e.g., a computer, a processor, or other device) to perform, or control performance of, the processes or methods. In this respect, various inventive concepts may be embodied as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, one or more floppy discs, compact discs, optical discs, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement one or more of the various embodiments described above. The computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various ones of the aspects described above. In some embodiments, computer readable media may be non-transitory media. The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects as described above. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the present disclosure need not reside on a single computer or processor, but may be distributed in a modular fashion among a number of different computers or processors to implement various aspects of the present disclosure. Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments. Also, data structures may be stored in computer-readable media in any suitable form. 
For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that convey relationship between the fields. However, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationship between data elements.

When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer, as non-limiting examples. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smartphone or any other suitable portable or fixed electronic device.

Also, a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible formats.

Such computers may be interconnected by one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, an intelligent network (IN), or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.

The following non-limiting exemplary embodiments are provided to illustrate inventive aspects of the disclosure.

Example 1 is directed to an ultrasound device, comprising: an ultrasound probe, including a semiconductor die, and a plurality of ultrasonic transducers integrated on the semiconductor die, the plurality of ultrasonic transducers configured to operate in a first mode associated with a first frequency range and a second mode associated with a second frequency range, wherein the first frequency range is at least partially non-overlapping with the second frequency range; and control circuitry configured to: control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the first frequency range, in response to receiving an indication to operate the ultrasound probe in the first mode; and control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the second frequency range, in response to receiving an indication to operate the ultrasound probe in the second mode.

Example 2 is directed to the ultrasound device of example 1, wherein a width of the first frequency range is at least 1 MHz and a width of the second frequency range is at least 1 MHz.
Example 3 is directed to the ultrasound device of example 1, wherein a difference between a first center frequency in the first frequency range and a second center frequency in the second frequency range is at least 1 MHz.

Example 4 is directed to the ultrasound device of example 3, wherein the difference is at least 2 MHz.

Example 5 is directed to the ultrasound device of example 4, wherein the difference is between about 6 MHz and about 9 MHz.

Example 6 is directed to the ultrasound device of example 1, wherein the first frequency range is contained entirely within a range of 1-5 MHz.

Example 7 is directed to the ultrasound device of example 6, wherein the first frequency range is contained entirely within a range of 2-4 MHz.

Example 8 is directed to the ultrasound device of example 1, wherein the second frequency range is contained entirely within a range of 5-9 MHz.

Example 9 is directed to the ultrasound device of example 8, wherein the second frequency range is contained entirely within a range of 6-8 MHz.

Example 10 is directed to the ultrasound device of example 1, wherein the plurality of ultrasonic transducers is further configured to operate in a third mode associated with a third frequency range that is at least partially non-overlapping with the first frequency range and the second frequency range, and wherein the control circuitry is further configured to: control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the third frequency range, in response to receiving an indication to operate the ultrasound probe in the third mode.

Example 11 is directed to the ultrasound device of example 10, wherein the first frequency range is contained entirely within a range of 1-3 MHz, the second frequency range is contained entirely within a range of 3-7 MHz, and the third frequency range is contained entirely within a range of 7-15 MHz.

Example 12 is directed to the ultrasound device of example 1, wherein: when the plurality of ultrasonic transducers are controlled to detect ultrasound signals having frequencies in the first frequency range, ultrasound signals detected by the plurality of ultrasonic transducers are used to form an image of a subject up to a first depth within the subject; and when the plurality of ultrasonic transducers are controlled to detect ultrasound signals having frequencies in the second frequency range, ultrasound signals detected by the plurality of ultrasonic transducers are used to form an image of a subject up to a second depth within the subject, wherein the second depth is smaller than the first depth.

Example 13 is directed to the ultrasound device of example 12, wherein the first depth is contained within a range of up to 8-25 cm from a surface of the subject.

Example 14 is directed to the ultrasound device of example 13, wherein the first depth is contained within a range of up to 15-20 cm from the surface of the subject.

Example 15 is directed to the ultrasound device of example 12, wherein the second depth is contained within a range of up to 3-7 cm from a surface of the subject.
Example 16 is directed to the ultrasound device of example 1, wherein the plurality of ultrasound transducers are capacitive ultrasonic transducers, and wherein the control circuitry is configured to control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the second frequency range at least in part by causing the plurality of ultrasonic transducers to operate in a collapsed mode, in which at least one portion of a membrane of the plurality of ultrasonic transducers is mechanically fixed and at least one portion of the membrane is free to vibrate based on a changing voltage differential between an electrode and the membrane.

Example 17 is directed to the ultrasound device of example 1, wherein the control circuitry is configured to: cause a first voltage to be applied to the plurality of ultrasonic transducers in response to the indication to operate the ultrasound probe in the first frequency range; and cause a second voltage to be applied to the plurality of ultrasonic transducers in response to the indication to operate the ultrasound probe in the second frequency range, wherein the second voltage is higher than the first voltage.

Example 18 is directed to the ultrasound device of example 17, wherein the second voltage is greater than a collapse voltage for the plurality of ultrasonic transducers, the collapse voltage comprising a voltage which causes a membrane of an ultrasonic transducer to make contact with a bottom of a cavity of the ultrasonic transducer.

Example 19 is directed to the ultrasound device of example 18, wherein the collapse voltage is at least 30 Volts.

Example 20 is directed to the ultrasound device of example 1, wherein the plurality of ultrasonic transducers includes multiple ultrasonic transducers at least one of which is configured to generate ultrasound signals in the first frequency range and in the second frequency range.

Example 21 is directed to the ultrasound device of example 1, wherein the plurality of ultrasonic transducers includes a plurality of CMOS ultrasonic transducers.

Example 22 is directed to the ultrasound device of example 21, wherein the plurality of CMOS ultrasonic transducers includes a first CMOS ultrasonic transducer including a cavity formed in a CMOS wafer, with a membrane overlying and sealing the cavity.

Example 23 is directed to the ultrasound device of example 1, wherein the plurality of ultrasonic transducers includes a plurality of micromachined ultrasonic transducers.

Example 24 is directed to the ultrasound device of example 23, wherein the plurality of micromachined ultrasonic transducers includes a plurality of capacitive micromachined ultrasonic transducers.

Example 25 is directed to the ultrasound device of example 23, wherein the plurality of micromachined ultrasonic transducers includes a plurality of piezoelectric ultrasonic transducers.

Example 26 is directed to the ultrasound device of example 1, wherein the ultrasound probe further comprises a handheld device.

Example 27 is directed to the ultrasound device of example 26, wherein the handheld device further comprises a display.

Example 28 is directed to the ultrasound device of example 26, wherein the handheld device further comprises a touchscreen.

Example 29 is directed to the ultrasound device of example 1, wherein the ultrasound probe comprises a patch configured to be affixed to a subject.
Example 30 is directed to a skin-mountable ultrasound patch, comprising: a monolithic ultrasound chip including a semiconductor die, and a plurality of ultrasonic transducers integrated on the semiconductor die, at least one of the plurality of ultrasonic transducers configured to operate in a first mode associated with a first frequency range and a second mode associated with a second frequency range, wherein the first frequency range is at least partially non-overlapping with the second frequency range; and a dressing configured to receive and retain the ultrasound chip, the dressing further configured to couple to a patient's body.

Example 31 is directed to the ultrasound patch of example 30, wherein the monolithic ultrasound chip further comprises a control circuitry configured to control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the first frequency range, in response to receiving an indication to operate the ultrasound probe in the first mode; and to control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the second frequency range, in response to receiving an indication to operate the ultrasound probe in the second mode.

Example 32 is directed to the ultrasound patch of example 31, wherein the control circuitry defines a CMOS circuitry.

Example 33 is directed to the ultrasound patch of example 30, wherein the dressing further comprises an adhesive layer to couple the patch to the patient's body.

Example 34 is directed to the ultrasound patch of example 30, further comprising a housing to receive the monolithic ultrasound chip, the housing having an upper portion and a lower portion, wherein the lower housing portion further comprises an aperture to expose the ultrasonic transducers to the subject's body.

Example 35 is directed to the ultrasound patch of example 30, further comprising a communication platform to communicate ultrasound signals to and from the ultrasound chip.

Example 36 is directed to the ultrasound patch of example 30, further comprising a circuit board to receive the ultrasound chip.

Example 37 is directed to the ultrasound patch of example 30, further comprising a communication platform to communicate with an external communication device.

Example 38 is directed to the ultrasound patch of example 37, wherein the communication platform is selected from the group consisting of Near-Field Communication (NFC), Bluetooth (BT), Bluetooth Low Energy (BLE) and Wi-Fi.

Example 39 is directed to a wearable ultrasound device, comprising: an ultrasound chip including an array of ultrasonic transducers, each ultrasonic transducer defining a capacitive micro-machined ultrasonic transducer (CMUT) operable to transceive signals; and a dressing configured to receive and retain the ultrasound chip, the dressing further configured to couple to a subject's body; wherein the array of ultrasonic transducers further comprises a first plurality of CMUTs configured to operate in a collapse mode and a second plurality of CMUTs configured to operate in a non-collapse mode.
Example 40 is directed to the wearable ultrasound device of example 39, wherein the ultrasound chip further comprises a control circuitry configured to control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the first frequency range, in response to receiving an indication to operate the ultrasound probe in the first mode; and to control the plurality of ultrasonic transducers to generate and/or detect ultrasound signals having frequencies in the second frequency range, in response to receiving an indication to operate the ultrasound probe in the second mode.

Example 41 is directed to the wearable ultrasound device of example 40, wherein the ultrasound chip defines a solid-state device.

Example 42 is directed to the wearable ultrasound device of example 39, wherein the ultrasonic transducer is configured to generate a first frequency band when operated in collapse mode and to generate a second frequency band when operated in non-collapse mode.

Example 43 is directed to the wearable ultrasound device of example 39, wherein the ultrasound chip is configured to switch between collapse and non-collapse modes of operation.

Example 44 is directed to the wearable ultrasound device of example 39, further comprising a communication platform to communicate with an external communication device.

Example 45 is directed to the wearable ultrasound device of example 44, wherein the communication platform is selected from the group consisting of Near-Field Communication (NFC), Bluetooth (BT), Bluetooth Low Energy (BLE) and Wi-Fi.

Example 46 is directed to the wearable ultrasound device of example 45, wherein the communication platform receives imaging instructions from an auxiliary device and transmits one or more ultrasound images to the auxiliary device in response to the received instructions.

Example 47 is directed to the wearable ultrasound device of example 39, wherein the dressing further comprises an opening to accommodate an optical lens adjacent the array of ultrasonic transducers.

According to some aspects of the present application, a system is provided, comprising: a multi-modal ultrasound probe configured to operate in a plurality of operating modes associated with a respective plurality of configuration profiles; and a computing device coupled to the multi-modal ultrasound probe and configured to, in response to receiving input indicating an operating mode selected by a user, cause the multi-modal ultrasound probe to operate in the selected operating mode.

In some embodiments, the plurality of operating modes includes a first operating mode associated with a first configuration profile specifying a first set of parameter values and a second operating mode associated with a second configuration profile specifying a second set of parameter values different from the first set of parameter values.

In some such embodiments, the computing device causes the multi-modal ultrasound probe to operate in a selected operating mode by providing an indication of the selected operating mode to the multi-modal ultrasound probe.
In some such embodiments, the multi-modal ultrasound probe comprises a plurality of ultrasonic transducers and control circuitry configured to: responsive to receiving an indication of the first operating mode from the computing device, obtain a first configuration profile specifying a first set of parameter values associated with the first operating mode; and control, using the first configuration profile, the multi-modal ultrasound probe to operate in the first operating mode, and responsive to receiving an indication of the second operating mode from the computing device, obtain a second configuration profile specifying a second set of parameter values associated with the second operating mode, the second set of parameter values being different from the first set of parameter values; and control, using the second configuration profile, the multi-modal ultrasound probe to operate in the second operating mode.

In some such embodiments, the first set of parameter values specifies a first azimuth aperture value and the second set of parameter values specifies a second azimuth aperture value different from the first azimuth aperture value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first azimuth aperture value and to operate in the second operating mode at least in part by using the second azimuth aperture value.

In some such embodiments, the first set of parameter values specifies a first elevation aperture value and the second set of parameter values specifies a second elevation aperture value different from the first elevation aperture value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first elevation aperture value and to operate in the second operating mode at least in part by using the second elevation aperture value.

In some such embodiments, the first set of parameter values specifies a first azimuth focus value and the second set of parameter values specifies a second azimuth focus value different from the first azimuth focus value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first azimuth focus value and to operate in the second operating mode at least in part by using the second azimuth focus value.

In some such embodiments, the first set of parameter values specifies a first elevation focus value and the second set of parameter values specifies a second elevation focus value different from the first elevation focus value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first elevation focus value and to operate in the second operating mode at least in part by using the second elevation focus value.
In some such embodiments, the first set of parameter values specifies a first bias voltage value for at least one of the plurality of ultrasonic transducers and the second set of parameter values specifies a second bias voltage value for the at least one of the plurality of ultrasonic transducers, the second bias voltage value being different from the first bias voltage value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first bias voltage value and to operate in the second operating mode at least in part by using the second bias voltage value.

In some such embodiments, the first set of parameter values specifies a first transmit peak-to-peak voltage value and the second set of parameter values specifies a second transmit peak-to-peak voltage value different from the first transmit peak-to-peak voltage value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first transmit peak-to-peak voltage value and to operate in the second operating mode at least in part by using the second transmit peak-to-peak voltage value.

In some such embodiments, the first set of parameter values specifies a first transmit center frequency value and the second set of parameter values specifies a second transmit center frequency value different from the first transmit center frequency value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first transmit center frequency value and to operate in the second operating mode at least in part by using the second transmit center frequency value.

In some such embodiments, the first set of parameter values specifies a first receive center frequency value and the second set of parameter values specifies a second receive center frequency value different from the first receive center frequency value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first receive center frequency value and to operate in the second operating mode at least in part by using the second receive center frequency value.

In some such embodiments, the first set of parameter values specifies a first polarity value and the second set of parameter values specifies a second polarity value different from the first polarity value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first polarity value and to operate in the second operating mode at least in part by using the second polarity value.

In some such embodiments, the multi-modal ultrasound probe further comprises an analog-to-digital converter (ADC), the first set of parameter values specifies a first ADC clock rate value and the second set of parameter values specifies a second ADC clock rate value different from the first ADC clock rate value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by operating the ADC at the first ADC clock rate value and to operate in the second operating mode at least in part by operating the ADC at the second ADC clock rate value.
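Where an ADC clock rate appears alongside a decimation rate, as in Table 4 above and the embodiments described here, one common reading, offered only as an assumption since the text does not state it, is that the decimation rate divides the ADC clock down to the retained sample rate:

```python
# Assumed (not stated) relationship between the ADC clock rate and the
# decimation rate parameters: decimation divides the ADC clock down to
# the retained sample rate.
def retained_sample_rate_hz(adc_rate_hz: float, decimation_rate: int) -> float:
    return adc_rate_hz / decimation_rate

# With the Table 4 values: 25 MHz ADC clock, decimation rates of 4 and 16.
print(retained_sample_rate_hz(25_000_000, 4))   # abdomen preset: 6.25 MHz
print(retained_sample_rate_hz(25_000_000, 16))  # cardiac flow presets: 1.5625 MHz
```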
In some such embodiments, the first set of parameter values specifies a first decimation rate value and the second set of parameter values specifies a second decimation rate value different from the first decimation rate value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first decimation rate value and to operate in the second operating mode at least in part by using the second decimation rate value.

In some such embodiments, the first set of parameter values specifies a first receive duration value and the second set of parameter values specifies a second receive duration value different from the first receive duration value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first receive duration value and to operate in the second operating mode at least in part by using the second receive duration value.

In some embodiments, the multi-modal ultrasound probe is a hand-held ultrasound probe. In some embodiments, the computing device is a mobile computing device.

According to some aspects of the present application, a method is provided for controlling operation of a multi-modal ultrasound probe configured to operate in a plurality of operating modes associated with a respective plurality of configuration profiles, the method comprising: receiving, at a computing device, input indicating an operating mode selected by a user; and causing the multi-modal ultrasound probe to operate in the selected operating mode using parameter values specified by a configuration profile associated with the selected operating mode.

According to some aspects of the present application, a system is provided, comprising: an ultrasound device, comprising: a plurality of ultrasonic transducers, and control circuitry; and a computing device having at least one computer hardware processor and at least one memory, the computing device communicatively coupled to a display and to the ultrasound device, the at least one computer hardware processor configured to: present, via the display, a graphical user interface (GUI) showing a plurality of GUI elements representing a respective plurality of operating modes for the ultrasound device, the plurality of operating modes comprising first and second operating modes; responsive to receiving, via the GUI, input indicating selection of either the first operating mode or the second operating mode, provide an indication of the selected operating mode to the ultrasound device, wherein the control circuitry is configured to: responsive to receiving an indication of the first operating mode, obtain a first configuration profile specifying a first set of parameter values associated with the first operating mode; and control, using the first configuration profile, the ultrasound device to operate in the first operating mode, and responsive to receiving an indication of the second operating mode, obtain a second configuration profile specifying a second set of parameter values associated with the second operating mode, the second set of parameter values being different from the first set of parameter values; and control, using the second configuration profile, the ultrasound device to operate in the second operating mode.
In some embodiments, the first set of parameter values specifies a first azimuth aperture value and the second set of parameter values specifies a second azimuth aperture value different from the first azimuth aperture value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first azimuth aperture value and to operate in the second operating mode at least in part by using the second azimuth aperture value.

In some embodiments, the first set of parameter values specifies a first elevation aperture value and the second set of parameter values specifies a second elevation aperture value different from the first elevation aperture value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first elevation aperture value and to operate in the second operating mode at least in part by using the second elevation aperture value.

In some embodiments, the first set of parameter values specifies a first azimuth focus value and the second set of parameter values specifies a second azimuth focus value different from the first azimuth focus value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first azimuth focus value and to operate in the second operating mode at least in part by using the second azimuth focus value.

In some embodiments, the first set of parameter values specifies a first elevation focus value and the second set of parameter values specifies a second elevation focus value different from the first elevation focus value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first elevation focus value and to operate in the second operating mode at least in part by using the second elevation focus value.

In some embodiments, the first set of parameter values specifies a first bias voltage value for at least one of the plurality of ultrasonic transducers and the second set of parameter values specifies a second bias voltage value for the at least one of the plurality of ultrasonic transducers, the second bias voltage value being different from the first bias voltage value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first bias voltage value and to operate in the second operating mode at least in part by using the second bias voltage value.

In some embodiments, the first set of parameter values specifies a first transmit peak-to-peak voltage value and the second set of parameter values specifies a second transmit peak-to-peak voltage value different from the first transmit peak-to-peak voltage value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first transmit peak-to-peak voltage value and to operate in the second operating mode at least in part by using the second transmit peak-to-peak voltage value.
In some embodiments, the first set of parameter values specifies a first transmit center frequency value and the second set of parameter values specifies a second transmit center frequency value different from the first transmit center frequency value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first transmit center frequency value and to operate in the second operating mode at least in part by using the second transmit center frequency value. In some such embodiments, a difference between the first and second center frequency values is at least 1 MHz. In some such embodiments, the difference is at least 2 MHz. In some such embodiments, the difference is between 5 MHz and 10 MHz. In some such embodiments, the first center frequency value is within 1-5 MHz and the second center frequency value is within 5-9 MHz. In some such embodiments, the first center frequency value is within 2-4 MHz and the second center frequency value is within 6-8 MHz. In some such embodiments, the first center frequency value is within 6-8 MHz and the second center frequency value is within 12-15 MHz.

In some embodiments, the first set of parameter values specifies a first receive center frequency value and the second set of parameter values specifies a second receive center frequency value different from the first receive center frequency value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first receive center frequency value and to operate in the second operating mode at least in part by using the second receive center frequency value. In some such embodiments, the first set of parameter values further specifies a first transmit center frequency value that is equal to the first receive center frequency value. In some such embodiments, the first set of parameter values further specifies a first transmit center frequency value that is not equal to the first receive center frequency value. In some such embodiments, the first receive center frequency value is a multiple of the first transmit center frequency value. In some such embodiments, the first receive center frequency value is approximately two times the first transmit center frequency value.

In some embodiments, the first set of parameter values specifies a first polarity value and the second set of parameter values specifies a second polarity value different from the first polarity value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first polarity value and to operate in the second operating mode at least in part by using the second polarity value.

In some embodiments, the ultrasound device further comprises an analog-to-digital converter (ADC), the first set of parameter values specifies a first ADC clock rate value and the second set of parameter values specifies a second ADC clock rate value different from the first ADC clock rate value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by operating the ADC at the first ADC clock rate value and to operate in the second operating mode at least in part by operating the ADC at the second ADC clock rate value.
In some embodiments, the first set of parameter values specifies a first decimation rate value and the second set of parameter values specifies a second decimation rate value different from the first decimation rate value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first decimation rate value and to operate in the second operating mode at least in part by using the second decimation rate value.

In some embodiments, the first set of parameter values specifies a first receive duration value and the second set of parameter values specifies a second receive duration value different from the first receive duration value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first receive duration value and to operate in the second operating mode at least in part by using the second receive duration value.

In some embodiments, the plurality of GUI elements comprises GUI elements representing at least two of: an operating mode for cardiac imaging, an operating mode for abdominal imaging, an operating mode for small parts imaging, an operating mode for lung imaging, an operating mode for ocular imaging, an operating mode for vascular imaging, an operating mode for 3D imaging, an operating mode for shear imaging, or an operating mode for Doppler imaging.

In some embodiments, the display is a touch screen, and the computing device is configured to receive the input indicating the selection via the touch screen. In some embodiments, the control circuitry is configured to obtain the first configuration profile by receiving it from the computing device. In some embodiments, the control circuitry is configured to obtain the first configuration profile by accessing it in a memory of the ultrasound device. In some embodiments, the control circuitry is configured to operate the ultrasound device in the first operating mode and provide, to the computing device, data obtained through operation of the ultrasound device in the first operating mode. In some such embodiments, the at least one computer hardware processor is configured to: generate an ultrasound image from the data; and display the ultrasound image via the display.

In some embodiments, the computing device comprises the display. In some embodiments, the computing device is a mobile device. In some embodiments, the computing device is a smartphone. In some embodiments, the ultrasound device is a handheld ultrasound probe. In some embodiments, the ultrasound device is a wearable ultrasound device.

In some embodiments, the plurality of ultrasonic transducers includes a plurality of metal oxide semiconductor (MOS) ultrasonic transducers. In some embodiments, the plurality of MOS ultrasonic transducers includes a first MOS ultrasonic transducer including a cavity formed in a MOS wafer, with a membrane overlying and sealing the cavity. In some embodiments, the plurality of ultrasonic transducers includes a plurality of micromachined ultrasonic transducers. In some embodiments, the plurality of ultrasonic transducers includes a plurality of capacitive micromachined ultrasonic transducers. In some embodiments, the plurality of ultrasonic transducers includes a plurality of piezoelectric ultrasonic transducers.
In some embodiments, the plurality of ultrasonic transducers comprises between 5000 and 15000 ultrasonic transducers arranged in a two-dimensional arrangement.

According to some aspects of the present application, a method is provided, comprising: receiving, via a graphical user interface, a selection of an operating mode for an ultrasound device configured to operate in a plurality of modes including a first operating mode and a second operating mode; responsive to receiving a selection of the first operating mode, obtaining a first configuration profile specifying a first set of parameter values associated with the first operating mode; and controlling, using the first configuration profile, the ultrasound device to operate in the first operating mode, and responsive to receiving a selection of the second operating mode, obtaining a second configuration profile specifying a second set of parameter values associated with the second operating mode, the second set of parameter values being different from the first set of parameter values; and controlling, using the second configuration profile, the ultrasound device to operate in the second operating mode.

According to some aspects of the present application, a handheld multi-modal ultrasound probe is provided, configured to operate in a plurality of operating modes associated with a respective plurality of configuration profiles, the handheld ultrasound probe comprising: a plurality of ultrasonic transducers; and control circuitry configured to: receive an indication of a selected operating mode; access a configuration profile associated with the selected operating mode; and control, using parameter values specified in the accessed configuration profile, the handheld multi-modal ultrasound probe to operate in the selected operating mode.

According to some aspects of the present application, an ultrasound device is provided, capable of operating in a plurality of operating modes including a first operating mode and a second operating mode, the ultrasound device comprising: a plurality of ultrasonic transducers, and control circuitry configured to: receive an indication of a selected operating mode; responsive to determining that the selected operating mode is the first operating mode, obtain a first configuration profile specifying a first set of parameter values associated with the first operating mode; and control, using the first configuration profile, the ultrasound device to operate in the first operating mode, and responsive to determining that the selected operating mode is the second operating mode, obtain a second configuration profile specifying a second set of parameter values associated with the second operating mode, the second set of parameter values being different from the first set of parameter values; and control, using the second configuration profile, the ultrasound device to operate in the second operating mode.

In some embodiments, the ultrasound device comprises a mechanical control mechanism for selecting an operating mode among the plurality of operating modes. In some embodiments, the ultrasound device comprises a display and the ultrasound device is configured to generate a graphical user interface (GUI) for selecting an operating mode among the plurality of modes and present the generated GUI through the display.
In some embodiments, the first set of parameter values specifies a first azimuth aperture value and the second set of parameter values specifies a second azimuth aperture value different from the first azimuth aperture value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first azimuth aperture value and to operate in the second operating mode at least in part by using the second azimuth aperture value.

In some embodiments, the first set of parameter values specifies a first elevation aperture value and the second set of parameter values specifies a second elevation aperture value different from the first elevation aperture value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first elevation aperture value and to operate in the second operating mode at least in part by using the second elevation aperture value.

In some embodiments, the first set of parameter values specifies a first azimuth focus value and the second set of parameter values specifies a second azimuth focus value different from the first azimuth focus value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first azimuth focus value and to operate in the second operating mode at least in part by using the second azimuth focus value.

In some embodiments, the first set of parameter values specifies a first elevation focus value and the second set of parameter values specifies a second elevation focus value different from the first elevation focus value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first elevation focus value and to operate in the second operating mode at least in part by using the second elevation focus value.

In some embodiments, the first set of parameter values specifies a first bias voltage value for at least one of the plurality of ultrasonic transducers and the second set of parameter values specifies a second bias voltage value for the at least one of the plurality of ultrasonic transducers, the second bias voltage value being different from the first bias voltage value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first bias voltage value and to operate in the second operating mode at least in part by using the second bias voltage value.

In some embodiments, the first set of parameter values specifies a first transmit peak-to-peak voltage value and the second set of parameter values specifies a second transmit peak-to-peak voltage value different from the first transmit peak-to-peak voltage value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first transmit peak-to-peak voltage value and to operate in the second operating mode at least in part by using the second transmit peak-to-peak voltage value.
In some embodiments, the first set of parameter values specifies a first transmit center frequency value and the second set of parameter values specifies a second transmit center frequency value different from the first transmit center frequency value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first transmit center frequency value and to operate in the second operating mode at least in part by using the second transmit center frequency value.

In some embodiments, the first set of parameter values specifies a first receive center frequency value and the second set of parameter values specifies a second receive center frequency value different from the first receive center frequency value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first receive center frequency value and to operate in the second operating mode at least in part by using the second receive center frequency value.

In some embodiments, the first set of parameter values specifies a first polarity value and the second set of parameter values specifies a second polarity value different from the first polarity value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first polarity value and to operate in the second operating mode at least in part by using the second polarity value.

In some embodiments, the ultrasound device comprises an analog-to-digital converter (ADC), the first set of parameter values specifies a first ADC clock rate value and the second set of parameter values specifies a second ADC clock rate value different from the first ADC clock rate value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by operating the ADC at the first ADC clock rate value and to operate in the second operating mode at least in part by operating the ADC at the second ADC clock rate value.

In some embodiments, the first set of parameter values specifies a first decimation rate value and the second set of parameter values specifies a second decimation rate value different from the first decimation rate value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first decimation rate value and to operate in the second operating mode at least in part by using the second decimation rate value.

In some embodiments, the first set of parameter values specifies a first receive duration value and the second set of parameter values specifies a second receive duration value different from the first receive duration value, and the control circuitry is configured to control the plurality of ultrasonic transducers to operate in the first operating mode at least in part by using the first receive duration value and to operate in the second operating mode at least in part by using the second receive duration value.

In some embodiments, the plurality of operating modes comprises an operating mode for cardiac imaging, an operating mode for abdominal imaging, an operating mode for small parts imaging, an operating mode for lung imaging, and an operating mode for ocular imaging.
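The parameter values enumerated above can be grouped into a single profile structure. The following is a minimal sketch only; the field names, units, and example values are illustrative assumptions and are not taken from this disclosure:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ModeParameters:
        azimuth_aperture: float        # azimuth aperture value
        elevation_aperture: float      # elevation aperture value
        azimuth_focus_mm: float        # azimuth focus value
        elevation_focus_mm: float      # elevation focus value
        bias_voltage_v: float          # transducer bias voltage value
        tx_peak_to_peak_v: float       # transmit peak-to-peak voltage value
        tx_center_frequency_hz: float  # transmit center frequency value
        rx_center_frequency_hz: float  # receive center frequency value
        polarity: int                  # polarity value (e.g. +1 or -1)
        adc_clock_rate_hz: float       # ADC clock rate value
        decimation_rate: int           # decimation rate value
        receive_duration_us: float     # receive duration value

    # Two modes differ simply by carrying different parameter values
    # (all numbers below are placeholders for illustration only).
    cardiac = ModeParameters(64, 16, 80.0, 80.0, 15.0, 30.0,
                             2.5e6, 2.5e6, +1, 40e6, 4, 200.0)
    abdominal = ModeParameters(96, 24, 120.0, 120.0, 12.0, 40.0,
                               3.5e6, 3.5e6, +1, 40e6, 2, 260.0)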
In some embodiments, the ultrasound device is a handheld ultrasound probe. In some embodiments, the ultrasound device is a wearable ultrasound device. In some embodiments, the plurality of ultrasonic transducers includes a plurality of metal oxide semiconductor (MOS) ultrasonic transducers. In some embodiments, the plurality of ultrasonic transducers includes a plurality of capacitive micromachined ultrasonic transducers.

According to some aspects of the present application a mobile computing device is provided, communicatively coupled to an ultrasound device, the mobile computing device comprising: at least one computer hardware processor; a display; and at least one non-transitory computer-readable storage medium storing an application program that, when executed by the at least one computer hardware processor, causes the at least one computer hardware processor to: generate a graphical user interface (GUI) having a plurality of GUI elements representing a respective plurality of operating modes for the ultrasound device; present the GUI via the display; receive, via the GUI, user input indicating selection of one of the plurality of operating modes; and provide an indication of the selected operating mode to the ultrasound device.

In some embodiments, the plurality of GUI elements comprises GUI elements representing an operating mode for cardiac imaging, an operating mode for abdominal imaging, an operating mode for small parts imaging, an operating mode for lung imaging, and an operating mode for ocular imaging. In some embodiments, the display is a touchscreen, and the mobile computing device is configured to receive the user input indicating the selection via the touchscreen. In some embodiments, the at least one computer hardware processor is further configured to: receive data obtained by the ultrasound device during operation in the selected operating mode; generate at least one ultrasound image from the data; and display the at least one generated ultrasound image via the display. | 173,213 |
11857369 | DESCRIPTION OF THE PREFERRED EMBODIMENTS The best mode for carrying out the invention is presented in terms of its preferred embodiment, herein depicted within the Figures. 1. Detailed Description of the Figures Referring toFIG.1throughFIG.6, an exemplary embodiment of a system and method for generation and display of ultrasound imaging data according to a preferred embodiment of the present invention is shown incorporating a wearable scanner module, generally noted as10, for conforming to a target imaging area of a patient. As shown in conjunction withFIG.1, the wearable scanner module10is formed into the shape of a brassiere that can be donned in a closely fitted manner about a user's upper torso. Such a configuration allows for the plurality of imaging transducers12to be positioned in a spiral array about a target site of the user's breasts in consistent and repeatable geometry, as well as to provide a minimal size and complexity in order to survey the target site. It should be noted that the use of such a module configuration has been selected as an example of one such design choice that can impart a particular functionality into the current invention. In light of the present teachings, it should subsequently become apparent to a person having ordinary skill in the relevant art that other various module configurations can be equivalently utilized, both for this particular target site (i.e., scanning of breast tissue), as well as for other utilization with other target sites. By way of example, and not as a limitation,FIG.7shows one such proposed alternate configuration of an alternate exemplary embodiment of a system and method for generation and display of ultrasound imaging data according to the present invention in which a scarf shaped module70can be donned in a closely fitted manner about a user's upper torso while still allowing for the plurality of imaging transducers12to be positioned about a target site of the user's breasts in consistent and repeatable geometry. Similarly, it should be seen that alternate module configurations can further be provided for alternate scan target sites, such as, for example, wearable modules adapted to fit about an elbow, knee, or ankle for use in scanning for soft tissue changes in those particular target areas. Utilizing the configurations ofFIG.1orFIG.7for further exemplary enablement, the imaging transducers12are positioned so as to provide sufficient points of reference, which is critical to monitoring changes in biological function, such as in the progression of a disease state such as neoplasia. To this end, the wearable scanner module10,70can be easily and consistently positioned over time about the image target area such as to provide multiple time lapse images of sufficient resolution (e.g., corresponding to a frequency of approximately 10 MHz) that can be transmitted to a user's personal computer, either through portable media or via an extranet or internet network connection. The imaging data from the scanner module can further be compiled, transmitted and displayed to a remote physician's computer to allow for suggested identification of image abnormalities. Software compilation of the imaging data obtained through the scanner module10allows for identification, and possible characterization, of image abnormalities.
Features such as changes in major blood vessels or changes in vascularization can be readily imaged using ultrasound, and may provide sufficient pre-diagnostic information to allow the user to make a determination that anatomical or tissue changes have occurred over time, indicating a concern sufficient to warrant specialized medical diagnosis or treatment. Such pre-diagnostic changes may further be compared with healthy levels such that the communication of such data to a physician may be initiated when changes reach a predetermined threshold. The physician communication may be in the form of an alert generated based upon such identified variability of results in order to obtain specialized medical diagnosis or treatment when the pre-diagnostic information of tissue changes occurring over time reaches an alert threshold. As shown, the plurality of ultrasonic imaging transducers12are aligned in a manner to provide multiple reference points and multiple consistent images over time. The array of imaging transducers12is in electrical communication with a data bus14such as to provide input to an electronics module16. The electronics module16is meant to house a central processing microprocessor sufficient to support obtaining the data signals, summation of the data into readable images, as well as registration of permanent reference features of the body. The electronics module16is connectable to the data bus14through a data input connection18. Similarly, a data output connection20enables electrical communication with external computer processing resources such as a dedicated computer appliance or a general purpose personal computer. As shown, the electronics module16is intended to be small enough to be wearable, yet removable to be portable between scanning functions and reporting functions. As one with ordinary skill in the relevant art should identify, the utilization of such a form factor is not intended to be limiting of the present invention, and the use of other form factors, such as, for example, embedded systems, wireless devices, or other configurations using existing or newly acquired technology may readily be applied to the key functionality of the present invention. 2. Operation of the Preferred Embodiment In operation, the features and benefits of the System and Method for Generation and Display of Ultrasound Imaging Data of the present invention can provide advantages over other existing technologies by including the use of ultrasound as a medical pre-screening system in which the individual user is enabled to obtain, without specialized operational, electrical or medical input or training, high contrast images. These images are provided without ionizing radiation or radioactive agents, and without compression or pain. Such images are referenced automatically to key anatomical features of the particular user, and a plurality of images, obtained over time and referenced to prior images, can be provided in order to provide visual indication of changes to soft tissue related anatomical structures. Such changes may provide an indication of abnormality to a lay user such as to create sufficient curiosity, interest or concern to prompt the user to seek qualified medical review of the situation. It is further a key operational feature of the present invention to provide a simple and convenient operational package specifically adapted for at-home use.
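As a purely hypothetical illustration of the threshold-based physician alert described above, the following Python sketch compares co-registered time-lapse scans against a predetermined threshold; the change metric and the threshold value are assumptions for illustration, not values specified in this disclosure:

    # Hypothetical sketch: compare co-registered time-lapse scans and alert
    # the physician when the change metric reaches a predetermined threshold.
    ALERT_THRESHOLD = 0.15  # illustrative value only

    def change_metric(baseline, current):
        """Mean absolute fractional change between two co-registered scans."""
        return sum(abs(c - b) / max(abs(b), 1e-9)
                   for b, c in zip(baseline, current)) / len(baseline)

    def should_alert_physician(baseline, current):
        """True when the observed change reaches the alert threshold."""
        return change_metric(baseline, current) >= ALERT_THRESHOLD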
Such a system is intended to provide graphical imaging output that may enable the individual users to better understand their results, as well as to electronically convey those results to the user's physician, who may more quickly determine a proper course of action, if any. The foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. The Title, Background, Summary, Brief Description of the Drawings and Abstract of the disclosure are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. It is submitted with the understanding that they will not be used to limit the scope or meaning of the claims. In addition, in the Detailed Description, it can be seen that the description provides illustrative examples and the various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language of the claims and to encompass all legal equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirements of 35 U.S.C. §§ 101, 102, or 103, nor should they be interpreted in such a way. Any unintended embracement of such subject matter is hereby disclaimed. The foregoing descriptions are not intended to be exhaustive nor to limit the invention to the precise forms disclosed and, obviously, many modifications and variations are possible in light of the above teaching. The embodiments are chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and its various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined broadly by the Drawings and Specification appended hereto and by their equivalents. Therefore, the scope of the invention is in no way to be limited by any adverse inference under the rulings of Warner-Jenkinson Company v. Hilton Davis Chemical Co., 520 U.S. 17 (1997) or Festo Corp. v. Shoketsu Kinzoku Kogyo Kabushiki Co., 535 U.S. 722 (2002), or other similar caselaw or subsequent precedent, and no such adverse inference should be made if any future claims are added or amended subsequent to this Patent Application. | 9,772 |
11857370 | LISTING OF REFERENCE NUMERALS USED IN THE DRAWINGS

100—Rectum
102—Prostate
200—Transducer Probe
300—Lesion
400—Left Calibration Button
402—Middle Calibration Button
404—Right Calibration Button
500—MRI Calibration Button
600—Display Device+Input Device
602—Processing Unit
604—Input Device
606—Trans-rectal Side Fire Ultrasound Transducer Probe
608—Data Store
700—Grid
702—Line
704—Point
800—Landmark
802—Identified Lesion

DETAILED DESCRIPTION OF THE NON-LIMITING EMBODIMENTS The following detailed description is merely exemplary and is not intended to limit the described embodiments or the application and uses of the described embodiments. As used, the word "exemplary" or "illustrative" means "serving as an example, instance, or illustration." Any implementation described as "exemplary" or "illustrative" is not necessarily to be construed as preferred or advantageous over other implementations. All of the implementations described below are exemplary implementations provided to enable persons skilled in the art to make or use the embodiments of the disclosure and are not intended to limit the scope of the disclosure. The scope of the invention is defined by the claims. For the description, the terms "upper," "lower," "left," "rear," "right," "front," "vertical," "horizontal," and derivatives thereof shall relate to the examples as oriented in the drawings. There is no intention to be bound by any expressed or implied theory in the preceding Technical Field, Background, Summary or the following detailed description. It is also to be understood that the devices and processes illustrated in the attached drawings, and described in the following specification, are exemplary embodiments (examples), aspects and/or concepts defined in the appended claims. Hence, dimensions and other physical characteristics relating to the embodiments disclosed are not to be considered as limiting, unless the claims expressly state otherwise. It is understood that the phrase "at least one" is equivalent to "a". The aspects (examples, alterations, modifications, options, variations, embodiments and any equivalent thereof) are described with regard to the drawings. It should be understood that the invention is limited to the subject matter provided by the claims, and that the invention is not limited to the particular aspects depicted and described. Referring now toFIG.1A,FIG.1A(SHEET1/27) is a coronal plane representative view of a prostate and rectum. Typically, MRI devices image cross sectional "slices" of the human body on one or more planes.FIG.1A depicts how an existing MRI device might take a single cross sectional image (i.e., transverse cross sectional image as indicated by the axis labelled A-A) of a prostate102and a rectum100. In this example the image is taken along the axis labelled A-A that is at or near the mid-line of the prostate102. Note that inFIG.1Athe axis labelled A-A represents the transverse plane. It will be appreciated that an MRI device may take one or more transverse cross sectional images at different points along the dotted axis Y-Y (i.e., towards the head or towards the feet). These transverse cross-sectional images, when sequenced and combined, would then provide a cross-sectional image of the entire prostate102along the transverse plane. FIG.1B(SHEET2/27) is a transverse plane cross sectional representative view ofFIG.1Aalong the transverse axis marked A-A inFIG.1A(i.e., the transverse plane marked A-A inFIG.1B).
In this figure a single transverse cross sectional representation of a prostate102and rectum100is depicted. It will be appreciated that imaging the entire prostate102will require a sequence of these cross-sectional images, with each of these cross-sectional images taken along the dotted axis marked Y-Y inFIG.1Aand shown as Y in this figure. FIG.2A(SHEET3/27) is a coronal plane representative view of a prostate102and rectum100.FIG.2Adepicts how an existing MRI device might take a single sagittal cross sectional image (i.e., an image along the sagittal plane as shown by the axis labelled B-B) of a prostate102and a rectum100at a point along the dotted axis labelled S-S. In this example the image is taken along the axis labelled B-B which is at or near the sagittal mid-line of the prostate102. It will be appreciated that an MRI device may take one or more sagittal cross sectional images at different points along the dotted axis S-S (i.e., towards the left and/or the right). These sagittal cross-sectional images, when sequenced, would then provide a cross-sectional image of the entire prostate along the sagittal plane. FIG.2B(SHEET4/27) is a sagittal plane cross sectional representative view ofFIG.2Aalong the sagittal plane marked by the axis B-B inFIG.2Aand the plane B-B inFIG.2B. This figure illustrates the orientation of an image taken along the sagittal plane (as shown by the axis labelled B-B inFIG.2A) compared to a cross-sectional image taken along the transverse plane as depicted inFIG.1B. It will be appreciated that imaging the entire prostate102will require a sequence of these cross-sectional images, with each of these cross-sectional images taken along the dotted axis marked S-S inFIG.2A(and shown as S inFIG.2B). FIG.2C(SHEET5/27) is a transverse plane view of the prostate and rectum ofFIG.2A. The axes marked as BRight, BLeft, BMid, Bx, and By represent cross-sections along the sagittal plane. This figure depicts how an existing MRI device might take a sagittal cross sectional image of a prostate and a rectum along the axis marked B Mid-B Mid in this figure (i.e., the same axis marked B-B inFIG.2Aand the plane marked B-B inFIG.2B). FIG.2D(SHEET6/27) is a second transverse plane view of the prostate and rectum ofFIG.2A. A trans-rectal side-fire transducer probe200is also depicted to illustrate how a trans-rectal side-fire ultrasound transducer probe200might image the prostate102. As is depicted inFIG.1A-FIG.2C, existing MRI imaging devices are typically configured to take transverse, coronal, or sagittal cross sectional images that are perpendicular relative to the other two planes. For example, a sagittal MRI cross-sectional image would be perpendicular to both the transverse and coronal planes. When scanning the entirety of a prostate102, then, a series of sagittal MRI cross-sectional images (each being perpendicular to both the transverse and coronal planes) would be used to build a representation of the entire prostate102. That is, a sagittal cross-sectional representation of the entire prostate102can be constructed by effectively "stacking" (or "sandwiching") the individual sagittal cross-sectional images. A side-fire ultrasound imaging device (as might be used in the rectum), in contrast and as depicted inFIG.2D, is configured to capture cross-sectional images along a path defined by an arc (when in a transverse plane view).
That is, the cross-sectional images captured by an ultrasound imaging device would generally be at oblique angles relative to the coronal and/or sagittal planes. When viewed in a transverse plane view, the cross-sectional images would appear to be "fan-like". The skilled person would understand that the "fan-like" images are in a cylindrical coordinate system. It will be understood that, depending on the position of the ultrasound imaging device in the rectum, a cross-section image that is parallel to the coronal plane or the sagittal plane may also be captured. For example, an ultrasound image captured at the axis marked B-B inFIG.2A(and BMid-BMid inFIG.2D) would result in an ultrasound cross-sectional image that would be parallel to the sagittal plane. Other ultrasound images captured in this series, however, would be at oblique angles relative to the coronal and/or sagittal plane (e.g., an image at the axes marked Bx-Bx, By-By, etc). For instance, one or more ultrasound scans of the prostate102and part of the rectum can be taken as the transducer probe200is rolled in the rectum. In this example, the ultrasound scans may be taken at the axes marked by (and points in between) B-Left, B-x, B-Mid, B-y, and B-Right. It should be noted that in this example the plane marked by B-Mid would correspond to a sagittal MRI image of a prostate taken at the same sagittal plane (i.e., B-Mid). In some embodiments the transducer probe200may be configured to provide a continuous data feed to the processing unit so that the plane being viewed will be updated in real time or near real time as the transducer probe200is rolled in the rectum. It will be appreciated that in the example depicted inFIG.2Dan ultrasound cross-sectional image that is parallel with the transverse plane would be difficult, if not impossible, to capture. This is because it would be physically difficult to position the side-fire ultrasound transducer probe200in the rectum in a way that would allow for an image to be captured parallel to the transverse plane. In the examples depicted inFIG.2CandFIG.2D, the MRI image of the patient's prostate at the axis marked B-Mid corresponds to the ultrasound image of the patient's prostate at the axis B-Mid. These two images can be used as a basis for fusing the sequence of MRI images that capture the entire prostate (as would be captured as shown byFIG.2CB-Right to B-Left) with the sequence of ultrasound images that capture the entire prostate (as would be captured as shown byFIG.2DB-Right to B-Left). In this example, a processing unit602may be configured to resample the sequence of the MRI images of the patient's prostate (that capture the entire prostate) so that they correspond to the sequence of ultrasound images of the patient's prostate (that capture the entire prostate as depicted inFIG.2D). For instance, in an embodiment the MRI image B-Right may, by way of a resampling, be "mapped" (or registered) to the ultrasound image B-Right so that the combined view would provide an MRI and ultrasound image at the plane marked by the axis B-Right. The other MRI images (e.g., Bx, By, B-Left) would be likewise mapped (or registered), via a resampling, to the ultrasound images corresponding to Bx, By, and B-Left. Note that in this embodiment, no resampling is required for the MRI image at the plane marked by the axis B-Mid since the ultrasound image would be on the same plane (i.e., B-Mid).
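As a rough illustration of the resampling just described (and of the grid-and-fan-slice mapping elaborated below), the following Python sketch samples a 2D MRI slice at points along one "fan" line using bilinear interpolation; nearest-neighbor and bi-cubic are alternatives, as noted below. The geometry conventions (probe origin location, pixel coordinates, sample points lying inside the image) are assumptions for illustration only:

    import math

    def bilinear(img, x, y):
        """Bilinearly interpolate img (a list of rows) at fractional pixel
        (x, y); sample points are assumed to lie inside the image."""
        x0, y0 = int(math.floor(x)), int(math.floor(y))
        x1, y1 = min(x0 + 1, len(img[0]) - 1), min(y0 + 1, len(img) - 1)
        fx, fy = x - x0, y - y0
        top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
        bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
        return top * (1 - fy) + bot * fy

    def fan_line_samples(img, origin_xy, angle_rad, radii_px):
        """Sample pixel values along one fan line radiating from the probe
        origin at the given angle; one call per fan slice sample line."""
        ox, oy = origin_xy
        return [bilinear(img, ox + r * math.cos(angle_rad),
                              oy + r * math.sin(angle_rad)) for r in radii_px]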
In another embodiment MRI images of the patient's prostate may be resampled and fused with the ultrasound image of the patient's prostate "on-the-fly" once the dimensions of the prostate are known. This is useful in embodiments where the ultrasound imaging device is configured to provide a stream of image data (such as a "video" stream). In an example, a user of an ultrasound imaging device would, while performing an initial ultrasound scan of the prostate, identify the rightmost edge, the leftmost edge, and the midline of the prostate. Once this information has been entered into the ultrasound imaging device the MRI image data corresponding to the prostate is resampled so that it maps to a corresponding ultrasound image on the display device of the ultrasound imaging device. In some embodiments it may be necessary to interpolate some of the MRI imaging data so that it will match a corresponding ultrasound image at a particular angle. This may be necessary when the gaps between sequential sagittal MRI images are greater than a set amount. For example, in the case where the MRI imaging data is very tightly sampled (e.g., 0.1 mm), interpolation of the MRI imaging data may not be required. In other examples where the MRI imaging data is not as tightly sampled (e.g., around 1.5 mm), interpolation of the MRI imaging data may be necessary so that the MRI imaging data will match the corresponding ultrasound image at a particular angle. An example of how MRI images may be mapped (or registered) to a corresponding ultrasound "fan" image is provided inFIG.2E(SHEET7/27). In this example the "grid"700represents an MRI transverse slice similar to the one shown inFIG.1AandFIG.1B. The shaded boxes in the grid700represent the prostate102as it might appear in a digital MRI image. The lines702radiating from a point704represent "fan" slices and the dots706represent the sample points for determining the pixel values of the MRI transverse slice. During the resampling process, one or more resampling algorithms can be used. These include, but are not limited to, nearest neighbor, bi-linear interpolation, and/or bi-cubic. After resampling, pixels from the original sampled pixels of the transverse MRI slice will be "mapped" (or registered) to one or more "fan" slices. In some examples, a sampled pixel of a transverse MRI slice may be mapped (or registered) to many corresponding "fan" slices. Once the MRI data has been merged with the ultrasound imaging data, the data from both images can be displayed on the ultrasound display device simultaneously. For instance, in some embodiments the MRI image corresponding to the ultrasound image can be displayed side-by-side on the ultrasound imaging device. This would allow a user of the device to compare the MRI image to the corresponding ultrasound image. This will provide a more complete view of the area of the prostate being examined, including any lesions. These enhanced prostate images would allow surgeons and/or urologists to perform procedures on the prostate102(such as biopsies of lesions, etc.) while live-imaging the prostate. This would not be possible in an MRI device, and the ultrasound image alone would not provide sufficient information for the surgeon/urologist to perform the procedure. In an embodiment, the lesion is modelled as a sphere. By way of example,FIG.3AtoFIG.3Fdepict how MRI imaging fused with ultrasound imaging in the current disclosure might be used to detect a lesion300on a prostate102.
In this exampleFIG.3A(SHEET8/27) is a coronal plane representative view of a prostate and rectum, the prostate having a lesion.FIG.3B(SHEET9/27) is a transverse plane view of the prostate and rectum ofFIG.3A. InFIG.3AandFIG.3B, the prostate102has a lesion300. In this example, different representative views of a single MRI image of a part of the prostate102(including a part of the lesion300) are shown, the image being taken along the transverse (or axial) axis inFIG.3Aand the transverse (or axial) plane marked by Tx-Tx inFIG.3B. FIG.3C(SHEET10/27) is a coronal plane representative view of the prostate and rectum ofFIG.3A.FIG.3D(SHEET11/27) is a sagittal plane representative view of the prostate and rectum ofFIG.3A.FIG.3CandFIG.3Drepresent different views of a single MRI image of a part of the prostate102(including a part of the lesion300) that is taken along the sagittal plane marked by Ay-Ay. FIG.3E(SHEET12/27) is a coronal plane representative view of the prostate and rectum ofFIG.3A.FIG.3F(SHEET13/27) is a transverse plane representative view of the prostate and rectum ofFIG.3A.FIG.3EandFIG.3Frepresent different views of a single ultrasound image of a part of a prostate102(including part of the lesion300) that is taken along the plane marked Bz inFIG.3E(i.e., a plane that is oblique to the coronal plane) and the axis marked Bz inFIG.3F. In contrast to the MRI device, a surgeon or urologist can perform a procedure while simultaneously using the ultrasound imaging device. However, the ultrasound imaging lacks the resolution and fidelity of the MRI images. This makes positively identifying structures such as lesions difficult, at least when compared to MRI images. Fusing the MRI image data with the ultrasound image feed, then, provides the necessary details for a urologist or surgeon to identify lesions in a prostate while also allowing the urologist or surgeon to perform procedures on the prostate. A skilled person would understand that MRI data uses a Cartesian coordinate system. A skilled person would understand that scale is part of MRI data, and that there is a voxel to millimeters (mm) scale. This voxel to mm scale allows for the determination of the size of the prostate from the MRI data. In an embodiment, all or a part of the prostate boundary (i.e., an alignment mark) is identified and labelled in the MRI data. For example, in some embodiments the MRI data is marked using DICOM annotation tags to identify lines, points, and regions of interest. These lines, points, and regions of interest are used to identify structures (such as lesions) and landmarks800that include, but are not limited to, the border between the rectal wall and prostate in the midline frame of the sagittal series of MRI images. It will be appreciated that any anatomical landmarks800that can be consistently visualized between MRI and ultrasound can be marked and used. In some embodiments the landmark800is scaled so that the size of the prostate can be derived from the length of the landmark800. A skilled person would understand that ultrasound data also has a voxel to millimeters scale. In an embodiment, the systems and methods may be usable with computed tomography scans, and any imaging modality that provides 3D information. For example, the 3D imaging information may be stored in the Digital Imaging and Communications in Medicine (DICOM) format.
In another embodiment, the systems and methods may be usable with recorded ultrasound and live ultrasound fusion. The use of recorded ultrasound imaging data of a patient's prostate will allow for comparison of the patient's prostate over a period of time. For example, a recording of ultrasound data made presently may be fused or visually inspected in relation to live ultrasound imaging done a year after the recording. The live ultrasound imaging may also be recorded and used in the future. FIG.4(SHEET14/27) is a system diagram of an embodiment of a system. In this embodiment the system includes a display device600for displaying data from the processing unit602. Data from the processing unit602may include, but is not limited to, images and video (e.g. ultrasound scan images/video, and/or MRI images), and UI components. In some embodiments the display device600may be responsive to touch. In the embodiments where the display device600is responsive to touch this "touchscreen" can also be used, at least in part, as an input device. In this embodiment the processing unit602is configured to accept input from one or more input devices604; retrieve, store, and process data from the data store608; display data to the display device600; and control, operate, and send and receive data from a trans-rectal side-fire ultrasonic transducer probe606. In some embodiments the processing unit602is a personal computer having (at least) a motherboard, a memory, a processing unit, a video processing unit (e.g. internal or external video card), a mass data storage device (e.g. hard disk drive, solid state disk drive), an external data storage device (e.g. a digital video disk player/recorder, a Blu-ray disk player/recorder), a power supply, a network connection device (e.g. Ethernet card and port, a WiFi card and antenna), peripheral connection devices and connectors (e.g. USB/USB2/USB3/USB3.1 connectors, Thunderbolt connectors, parallel ports, serial ports, etc.), and any other components associated with a desktop, laptop, or enterprise-class computing device. In this embodiment the system may have an input device604. This input device is configured to accept input from a user of the system. Examples can include, but are not limited to, keyboards, mice, touchpads, touchscreens, trackballs, and the like. It will be appreciated that, in embodiments where the display device600includes input functionality (e.g., a touchscreen), the separate input device604may supplement the input device of the display device600, or in some embodiments may not be required. In this embodiment the system includes a trans-rectal side-fire ultrasonic transducer probe. Since side-fire ultrasonic transducer probes are largely constrained to moving in two directions (roll and in/out), fewer tracking components are necessary when compared to an end-fire ultrasonic transducer probe (which has up to 6 degrees of freedom in terms of position and orientation). In this embodiment the side-fire ultrasonic transducer probe includes an Inertial Monitoring Unit (IMU) that tracks the roll, pitch, and yaw angle of the side-fire ultrasonic transducer probe. In an embodiment, only the roll angle of the side-fire ultrasound probe is used for alignment and tracking. It will be appreciated that other types of transducer probes (such as end-fire) could be used. Using a transducer probe other than a side-fire transducer probe, however, may require more complex spatial monitoring devices.
In this embodiment, the MRI image and/or report data may be loaded on the processing unit602via physical media (e.g. CDs, DVDs, Blu-Ray discs, USB drives, etc.), over a computer network, or via Picture Archiving and Communications Systems (PACS). This MRI image and/or report data can then be used by the processing unit602in the merge step, described below. Examples of MRI image and/or report data include, but are not limited to, reports following the PI-RADS (TRADEMARK) guidelines or other generally accepted MRI reporting formats. It will be appreciated that the components of the system may be connected via any known communication protocol or connection means. For example, the display device600may be connected to the processing unit602via an HDMI, VGA, DisplayPort, or DVI connection, or wirelessly (via infrared, WiFi, or RF communications). The input device604may be connected to the processing unit602via (for example) USB, PS/2, serial port, Thunderbolt, or wirelessly (via infrared, WiFi, or RF communications). Similarly, the ultrasonic transducer probe606may be connected to the processing unit602via (for example) USB, PS/2, serial port, Thunderbolt, wirelessly (via infrared, WiFi, or RF communications), or a high-bandwidth connection protocol. It will also be appreciated that the system may be contained within a portable enclosure rated for use in a clinical setting such as a hospital or medical office. The portable enclosure is configured to house the components so that the system can be moved from one location to another without having to relocate or reconfigure parts of the system. In some embodiments portions of the system may be implemented in a cloud computing environment. For instance, in some embodiments, the processing unit and/or data store may be partially or fully implemented in a cloud computing environment. Any remaining parts of the system that cannot easily be implemented in a cloud environment (e.g., the ultrasound probe, display, input, etc.) may then be configured within a portable enclosure. Referring again toFIG.4(SHEET14/27), in another embodiment a system for visually assisting an operator of an ultrasound system is provided. The system includes a data store608for storing first imaging data of a first prostate using a first coordinate system, the first imaging data marked with a landmark800for identifying the first prostate. The system further includes an ultrasound transducer606for collecting: live ultrasound image data of a second prostate, and positional information from the ultrasound transducer, including positional information corresponding to an alignment point of the second prostate. The system includes a processing unit602for: receiving positional information from the ultrasound transducer corresponding to the alignment point of the second prostate; and transforming the first imaging data of the first prostate from the first coordinate system to a cylindrical coordinate system. The system also includes a display device600for displaying both the transformed image and the ultrasound image data corresponding to the positional information of the ultrasound transducer. The system may also include an input device604for receiving the first imaging data of the first prostate using the first coordinate system, the first imaging data marked with the landmark800for identifying the first prostate. Referring again toFIG.4(SHEET14/27), in yet another embodiment a system for visually assisting an operator of an ultrasound system is provided.
This embodiment includes a data store608for storing 3D model prostate imaging data, the 3D model prostate imaging data being in a cylindrical coordinate space. The system further includes an ultrasound transducer606for collecting: live ultrasound image data of a second prostate; and positional information from the ultrasound transducer, including positional information corresponding to an alignment point of the second prostate. A processing unit602is included for: receiving positional information from the ultrasound transducer corresponding to the alignment point of the second prostate; and transforming the 3D model prostate imaging data based on the received positional information corresponding to the alignment point of the second prostate. A display device600is included for displaying both the transformed image and the ultrasound image data corresponding to the positional information of the ultrasound transducer. In some embodiments the system may further include an input device604for receiving a region of interest for the 3D model prostate. FIG.5(SHEET15/27) is a flow chart depicting an embodiment workflow. In this workflow a user first selects, on an input device of an ultrasound imaging device, one or more regions of interest to investigate in a prostate. These regions of interest may include, but are not limited to, the zones in the zone classification system (i.e., the 39 zones). In some instances, the urologist or surgeon may consult an MRI report when selecting a region of interest to investigate. MRI reports can include, but are not limited to, reports following the PI-RADS (TRADEMARK) guidelines. The urologist or surgeon may also simply select regions to investigate. FIG.6A(SHEET16/27) is an embodiment partial user interface (UI) for the workflow ofFIG.5. This partial UI displays the zones of a prostate in a selectable table format. The urologist or surgeon (or the assistant) may select the regions by using the input device of the ultrasound imaging device.FIG.6B(SHEET16/27) is an alternate embodiment partial UI for the workflow ofFIG.5. Instead of the selectable table format, the alternate partial UI ofFIG.6Bdisplays the zones of the prostate as an image, with the respective zones of the prostate mapped on the images. Again, the urologist or surgeon (or the assistant) may select the regions by using the input device of the ultrasound imaging device. FIG.7(SHEET17/27) is an embodiment partial UI for the workflow ofFIG.5. Once the regions of interest have been selected the urologist or surgeon (or an assistant) performs an overview scan of the prostate using a side-fire trans-rectal ultrasound transducer probe. While performing the overview scan, the urologist or surgeon (or an assistant) marks (via the input device of the ultrasound imaging device) the left edge of the prostate, the right edge of the prostate, and the mid-line of the prostate as the ultrasound scan reaches the respective left edge, right edge, and mid-line of the prostate. In this example UI, the urologist or surgeon (or an assistant) would click on the left calibration button400once the left edge of the prostate is displayed on the display device of the ultrasound imaging device. The urologist or surgeon (or an assistant) would click on the middle calibration button402once the mid-line of the prostate is displayed on the display device of the ultrasound imaging device.
Finally, the urologist or surgeon (or an assistant) would click on the right calibration button404once the right edge of the prostate is displayed on the display device of the ultrasound imaging device. Once the alignment information has been entered into the ultrasound imaging device, the ultrasound imaging device transforms a pre-rendered 3D representation of the prostate so that its dimensions and characteristics are similar to those of the actual scanned prostate. In this example the 3D representation of the prostate is stretched/shrunk, or scaled, to better align with the size of the actual prostate. In this embodiment the 3D representation of the prostate is pre-sliced so as to speed up the transformation process. That is, since the "roll arc" of a side-fire ultrasonic transducer probe in a rectum is known, the 3D representation of the prostate can be mapped to specific roll/zone angles of the ultrasonic transducer probe prior to knowing the actual size of the prostate being investigated. These "pre-slices" can then be transformed (stretched/shrunk, or scaled) as required. In an embodiment, the 3D representation of the prostate is built as a 3D mesh model. Utilities for building 3D mesh models include, but are not limited to, Computer Aided Design software, BLENDER (TRADEMARK), UNITY (TRADEMARK), etc. Once a 3D representation of the prostate has been built, the mesh is "sliced" into "fan" representations that correspond, at least in part, to the ultrasound images that would be captured using the device (such as, for example, the "fan" slices as described inFIG.2D). Once the remapping and transformation is complete the urologist or the surgeon (or the assistant) can use the ultrasound imaging device to scan the regions of interest. Examples of how the 3D representation of the prostate (including zone information) is displayed simultaneously with the ultrasound image are provided inFIG.8AandFIG.8B.FIG.8A(SHEET18/27) is an embodiment partial UI for the workflow ofFIG.5.FIG.8Ashows the representation of the prostate being displayed in an overlay format.FIG.8B(SHEET18/27) is an alternate embodiment partial UI for the workflow ofFIG.5.FIG.8Bshows the representation of the prostate being displayed in a side-by-side format. In the example UIs depicted inFIG.8AandFIG.8B, as the ultrasound transducer probe is rolled in the rectum the corresponding zone in the prostate is displayed in the representation of the prostate on the left side of the screen. As the urologist or the surgeon (or the assistant) scans different areas of the prostate, the corresponding zone will be highlighted in the representations of the prostate. In an embodiment, the zones selected by the user will be highlighted as the transducer probe is rolled/rotated in the rectum. The image slice shown is determined according to the following function:

I = M/2 × (θa/α + 1)

θa = (θ − θm)/(θr − θm) × α [when θ − θm is positive]
θa = (θ − θm)/(θm − θl) × α [when θ − θm is negative]

Where:
I — image index (0 to M in the fan image series)
θ — the probe rotation angle
θa — the aligned probe rotation angle
θm — the probe rotation angle at mid-line
θl — the probe rotation angle at the leftmost (ccw) edge of the prostate (patient right)
θr — the probe rotation angle at the rightmost (cw) edge of the prostate (patient left)
M — number of images minus 1 (an even number)
α — the fan half angle (the fan spans −α to α)

FIG.9(SHEET19/27) is a flow chart depicting an alternate embodiment workflow.
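Before turning to the workflow ofFIG.9, the slice-index function above can be transcribed directly into Python as a sketch; all angles are assumed to share one unit (e.g., degrees):

    def aligned_angle(theta, theta_m, theta_l, theta_r, alpha):
        """Map the probe rotation angle to the aligned angle in [-alpha, +alpha]."""
        if theta - theta_m >= 0:
            return (theta - theta_m) / (theta_r - theta_m) * alpha
        return (theta - theta_m) / (theta_m - theta_l) * alpha

    def image_index(theta, theta_m, theta_l, theta_r, alpha, M):
        """Index (0..M) of the fan image slice to display for the current roll."""
        theta_a = aligned_angle(theta, theta_m, theta_l, theta_r, alpha)
        return round(M / 2 * (theta_a / alpha + 1))

For example, with illustrative values θl = −60, θm = 0, θr = +50 degrees, α = 80 and M = 200, a probe roll of +25 degrees gives θa = +40 and image index 150.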
In this example previously captured MRI image and/or report data is loaded into the processing unit602of the system so that it may be remapped and used in the simultaneous display of MRI image and/or report data and ultrasound image data. FIG.10A(SHEET20/27) depicts example MRI images and/or report data that might be loaded into the processing unit602. This data may include an identified landmark800and identified lesions802. The MRI images and/or report should contain a landmark800"marking" that identifies a structural component in the prostate region with which the system can calibrate and/or orient the ultrasound images. It will be appreciated that the landmark800could be any clearly defined body structure that would be identifiable on both an MRI image and an ultrasound image. This can include, but is not limited to, a rectal wall, an edge of a prostate, a midline of a prostate, etc. Referring again toFIG.9, in this embodiment the system is configured to accept at least sagittal MRI images of the prostate. In other embodiments, transverse or coronal MRI images of the prostate may also be accepted in place of, or in addition to, the sagittal MRI images. FIG.10B(SHEET21/27) is an embodiment partial UI for the workflow ofFIG.9. Once the MRI image and/or report data has been loaded into the processing unit602, the urologist, surgeon, or an assistant rolls the ultrasound transducer so that the mid-line of the prostate is in view and then selects the mid-line calibration button on the system. In this example, as the urologist or surgeon (or an assistant) performs an overview scan of the prostate using a side-fire trans-rectal ultrasound transducer probe, the urologist or surgeon (or an assistant) inputs the one or more alignment markers as these markers are displayed on the display device600. In this example UI, the urologist or surgeon (or an assistant) would click on the MRI Calibration Button500once the mid-line of the prostate is displayed on the display device of the ultrasound imaging device. Once the mid-line of the prostate is known, the processing unit "re-slices" the MRI image data so that the MRI image data corresponds to the ultrasound image (or video) data. In the case where the MRI images and/or report data consist of sagittal MRI image data, the processing unit602is configured to remap the sagittal MRI image data to "fan-shaped" images that correspond to the ultrasound imaging data being captured by the system. In this embodiment the processing unit602uses the landmark800information in the MRI images and/or report data, in addition to the mid-line calibration information, to orient and calibrate the transformation. In another embodiment, the transforming (re-slicing) of the MRI image data to ultrasound image (or video) data may be completed on another computing system before it is used, improving performance by reducing processing time. In an embodiment the MRI sagittal slices will be transformed/remapped by resampling the voxels (3D pixels) in fan planes arranged by rotating around the line annotation axis (drawn by the radiologist) at regular angular intervals (i.e. 2α/M). This results in a series of fan images. The processing unit602may also be configured to calculate lesion angles. In an embodiment, the MRI lesion coordinates will be transformed/remapped by placing each lesion in the nearest fan slice sample point. Depending on the size of the lesion, the lesion may span across multiple fan slices.
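The nearest-fan-slice placement of lesions just described can be sketched as follows; the coordinate convention (rotation axis at the origin, mid-line fan slice along +z) is an assumption for illustration and is not specified in this disclosure:

    import math

    def lesion_fan_index(y, z, alpha, M):
        """Nearest fan-slice index (0..M) for a lesion at (y, z) in a plane
        perpendicular to the probe's rotation axis, with the mid-line fan
        slice along +z; alpha is the fan half angle in radians."""
        theta = math.atan2(y, z)                 # signed angle from the mid-line plane
        theta = max(-alpha, min(alpha, theta))   # clamp into the fan span
        return round(M / 2 * (theta / alpha + 1))

A lesion of non-trivial extent would be placed by applying this mapping to each of its boundary points, which is consistent with the observation that a lesion may span multiple fan slices.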
A skilled person would understand that the angles between fan slices are constant, but that the physical spacing between fan slices goes from narrow to wider with distance from the rotation axis. FIG.10C(SHEET22/27) is an embodiment partial UI for the workflow ofFIG.9.FIG.10D(SHEET23/27) is an embodiment partial UI for the workflow ofFIG.9. Once the remapping is complete the urologist or the surgeon (or the assistant) can use the ultrasound imaging device to scan the area of interest, and the corresponding MRI data will be displayed simultaneously.FIG.10CandFIG.10Dprovide two different examples of how the MRI image and ultrasound image might be simultaneously displayed.FIG.10Cshows the corresponding re-sliced MRI image displayed in a side-by-side format.FIG.10Dshows the corresponding re-sliced MRI image displayed in an overlay format. In an embodiment the MRI image that will be displayed is determined by the following function:

I = M/2 × (θa/α + 1)

θa = θ − θm

Where:
I — image index (0 to M in the fan image series)
θ — the probe rotation angle
θa — the aligned probe rotation angle
θm — the probe rotation angle at mid-line
M — number of images minus 1 (an even number)
α — the fan half angle (the fan spans −α to α)

In the example UIs depicted inFIG.10CandFIG.10D, as the ultrasound transducer probe is rolled in the rectum the corresponding re-sliced MRI image is displayed in the representation of the prostate on the left side of the screen. As the urologist or the surgeon (or the assistant) scans different areas of the prostate, the corresponding re-sliced MRI image will be updated. In the embodiment where lesions are also tracked, lesion information is also displayed on the re-sliced MRI image. The lesion information may also be highlighted, outlined, etc. for improved visibility. A urologist or surgeon can then compare the re-sliced MRI image with the ultrasound image when performing the ultrasound scan. This is especially useful in biopsy scenarios: the urologist or surgeon can determine whether the region being examined using ultrasound corresponds to the lesion information being displayed on the MRI image. The urologist or surgeon can then guide a biopsy probe or needle to the area of interest and take a sample. Referring now toFIG.11(SHEET24/27), an alternate embodiment of Workflow A (workflow A2) is provided. In this embodiment the system accepts, as input, the zones of interest, the model fan image slices (that were previously rendered), alignment data corresponding to the actual left, mid-line, and right of the prostate, and the current rotation angle of the ultrasonic transducer probe. Once the alignment information has been acquired, the Model Fan Image Slices can be remapped and/or transformed so that the representation of the prostate is similar to the actual prostate. The processing unit is then configured to determine the zones to highlight (the zones corresponding to the zones of interest) and which model fan image slice to display based on the probe rotation angle. The 3D model of the prostate does not have an absolute scale. Therefore the left, mid, and right alignment scales the 3D model to the size of the prostate being imaged by the ultrasound probe. Referring now toFIG.12(SHEET25/27), an alternate embodiment of Workflow B (workflow B2) is provided. In this embodiment the system accepts, as input, MRI Sagittal Image Slices, MRI Line Annotation Coordinates, and MRI Lesion Region of Interest (ROI) Sagittal Coordinates.
Once this information has been received, the system transforms/remaps the Sagittal Image Slices using the MRI Line Annotation Coordinates as guides. The result of this transformation/remapping is the MRI Fan Image Slices. Similarly, the system transforms/remaps the MRI Lesion ROI Sagittal Coordinates using the MRI Line Annotation Coordinates as guides. The result of this transformation/remapping is the MRI Lesion ROI Fan Coordinates, which map, at least in part, the MRI Lesion ROI on the MRI Fan Image Slices. In another embodiment, any MRI data set can be used as an input. A skilled person would know how to transform an MRI data set into various planar views or slices. Once the alignment to the prostate mid-line has been input into the system (in this example, by a user of the system), the Probe Rotation Angle determines, at least in part, which Fan Slice and/or Lesion ROI to display. This information is then displayed on the screen of the device so that a user/urologist/surgeon may refer to it as the procedure is performed. It will be understood that as the frequency of the ultrasound is increased, the resolution of the ultrasound image (and its associated data) will be increased. For example, in some embodiments it may be advantageous to use an ultrasound probe capable of micro-ultrasound or high resolution ultrasound (e.g., a 29 MHz ultrasound probe) to obtain ultrasound imaging data. The higher resolution may provide more detail that assists the operator in performing cognitive fusion. Referring now toFIG.13(SHEET26/27), an example method for visually assisting an operator of an ultrasound system is provided. The method includes receiving first imaging data of a first prostate using a first coordinate system. In this embodiment the first imaging data is previously captured MRI data. The first imaging data is marked with a landmark800for identifying the first prostate. In some examples the landmark800is scaled so that an approximate size of the prostate can be determined from the landmark800. The first imaging data is then transformed from the first coordinate system to a cylindrical coordinate system. As was discussed, various algorithms for resampling data from one coordinate space to a cylindrical coordinate space are known (e.g., nearest neighbor, bi-linear interpolation, and/or bi-cubic). As ultrasound image data is being collected from a patient, a live ultrasound image of a second prostate as received from an ultrasound transducer is displayed. Furthermore, as the ultrasound image data is collected, positional information from the ultrasound transducer corresponding to an alignment point of the prostate is received. For example, the positional information can be obtained from roll sensors in the ultrasound transducer. The transformed image from the transformed first imaging data of the first prostate corresponding to the alignment point using the landmark800is then displayed in such a way that the transformed image and the live ultrasound image are displayed simultaneously, for example on a display. In another embodiment, a visual assistance interface is provided. The visual assistance interface may supplement or replace the displaying of the transformed image. The visual assistance interface may be a list of regions of interest or target landmarks and the corresponding roll angles (for guiding the movement of the ultrasound transducer by the operator) to show or target the region of interest or target.
For example, a text box may show that a first region of interest is at −40 degrees (roll angle) and a second region of interest is at +30 degrees (roll angle). Another embodiment of the visual assistance interface may be an angle meter for showing the current roll angle (or positional information) of the ultrasound transducer. The visual assistance interface for showing the roll angle may be a text box showing the current roll angle (or positional information) or a graphical element such as an analog instrument gauge showing the roll angle (or positional information). In another embodiment, the visual assistance interface is shown along with the displayed transformed image. In another embodiment, the visual assistance interface is shown along with the displayed generated image. As the ultrasound transducer606is moved, new positional information from the ultrasound transducer606is sent. Once this new positional information from the ultrasound transducer is received, the transformed image and the live ultrasound image corresponding to the new positional information of the ultrasound transducer are displayed simultaneously. In another embodiment, the first imaging data is also marked with a region of interest in the prostate. During the transformation this region of interest is also transformed so that the transformed first image data also includes the region of interest information. As the ultrasound transducer606transmits new positional information, a determination is made whether the new positional information corresponds to the region of interest in the transformed first image data. If the region of interest is in the transformed image data corresponding to the new positional information, then a visual indicator of the region of interest is displayed on the transformed image. In another embodiment, the ultrasound transducer provides positional information including roll, pitch, and yaw from the IMU. The roll, pitch and yaw information are used to track how the ultrasound probe is being moved in 3D space. For example, the pitch and yaw positional information tracks how the cylinder or "fan-shape" model of the ultrasound images (or image data) is being moved in 3D space. The roll, pitch, and yaw positional information allows for more accurate tracking and modelling of the movement of the ultrasound transducer. This may allow for more accurate tracking between the live ultrasound image data and the first imaging data (e.g. the recorded ultrasound data or MRI scan), or between the live ultrasound image data and the 3D model anatomical region (e.g. 3D model prostate). In another embodiment the first imaging data is recorded ultrasound imaging data of the prostate, and the first coordinate system is a cylindrical coordinate system. It will be appreciated that the first prostate (as captured in previous image data) and the second prostate (as captured by live ultrasound) are the same prostate; that is, the prostate belongs to the same patient even though the imaging data of the first prostate and the second prostate may be separated by time. In some embodiments the time between when the first imaging data and the second imaging data are captured may be within hours, days, or weeks. In other embodiments the separation of time is more significant (e.g., months, years). It will be appreciated that longer separations of time may be useful for long-term monitoring of the prostate.
In contrast, shorter separations of time may be more useful for biopsies and/or diagnosis purposes. In some embodiments the first imaging data is magnetic resonance imaging (MRI) data and the first coordinate system is a Cartesian coordinate system. Other imaging data formats and coordinate systems can be used without departing from the scope of this disclosure. For instance, in another embodiment the first imaging data is ultrasound data and the first coordinate system is a cylindrical coordinate system. In some embodiments the landmark 800 is a line along a border between a rectal wall and the first prostate in a midline frame of a sagittal series of image frames of the first imaging data. The landmark 800 can also identify or provide information regarding the approximate size and orientation of the prostate. In an embodiment the positional information is a roll angle of the ultrasound transducer. This roll angle information can be collected, for example, by a roll sensor incorporated in the ultrasound transducer. In an embodiment the positional information is a roll angle of about 0 degrees and the alignment point is a mid-line of the second prostate. In another embodiment the positional information is a roll angle from about +80 degrees to about −80 degrees. In an embodiment the ultrasound probe is a side-fire ultrasound probe. In an embodiment the transformed image and the live ultrasound image are displayed side-by-side. In another embodiment the transformed image and the corresponding ultrasound image are displayed overlaid.

Referring now to FIG. 14 (SHEET 27/27), in another embodiment a method for visually assisting an operator of an ultrasound system is provided. The method includes generating imaging data for a 3D model prostate that is in a cylindrical coordinate space. A live ultrasound image of a prostate as received from an ultrasound transducer is then displayed. Positional information from the ultrasound transducer corresponding to alignment points of the prostate is also received. Once the positional information corresponding to the alignment points is received, the imaging data of the 3D model prostate is transformed. This transformation can include, but is not limited to, stretching, shrinking, and/or adjusting the 3D model of the prostate so that it approximately corresponds to the prostate. The generated image from the generated imaging data of the 3D model prostate corresponding to the positional information of the ultrasound transducer is then displayed. In this embodiment the generated image and the live ultrasound image are displayed simultaneously for visually assisting the operator of the ultrasound system. In another embodiment a region of interest for the 3D model prostate is received. As the ultrasound transducer 606 transmits new positional information, a determination is made whether the new positional information corresponds to the region of interest in the transformed 3D image data. If the region of interest is in the transformed image data corresponding to the new positional information, then a visual indicator of the region of interest is displayed on the transformed image. A region of interest for the 3D model prostate can be received in a variety of ways. This can include, but is not limited to, providing a graphical user interface for the selection of a region of interest of the 3D model prostate by the operator. In an embodiment, an input device is provided that allows an operator to input the region of interest.
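The transformation described above, in its simplest scaling form, can be pictured with a small sketch: a generic model defined over a nominal angular span is re-centered on the observed mid-line and stretched so that its edges land on the roll angles at which the operator marked the left and right edges. The model representation (theta, r, z triples in cylindrical coordinates) and the nominal span are assumptions made for illustration.

```python
# Sketch: fit a generic cylindrical-coordinate prostate model to a patient using
# the roll angles observed at the left edge, mid-line, and right edge.
import numpy as np

def fit_model_to_alignment(model_pts_cyl, left_deg, mid_deg, right_deg,
                           model_left_deg=-60.0, model_right_deg=+60.0):
    """model_pts_cyl: (N, 3) array of (theta_deg, r, z) model points.

    Re-centers the model on the observed mid-line and scales its angular
    extent so the model edges land on the observed edge angles.
    """
    pts = np.array(model_pts_cyl, float)
    theta = pts[:, 0]
    # Scale factor mapping the model's angular width onto the patient's.
    scale = (right_deg - left_deg) / (model_right_deg - model_left_deg)
    pts[:, 0] = mid_deg + theta * scale
    return pts

# Example: a model spanning -60..+60 deg fitted to edges seen at -45 and +55 deg.
model = np.array([[-60.0, 20.0, 0.0], [0.0, 25.0, 0.0], [60.0, 20.0, 0.0]])
print(fit_model_to_alignment(model, left_deg=-45.0, mid_deg=5.0, right_deg=55.0))
```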
It will be appreciated that the 3D model of the prostate can be subdivided into various zones and/or regions. The number of zones and/or regions can depend, for example, on the type of MRI reporting (e.g., PI-RADS). For example, in some embodiments the 3D model of the prostate has 39 regions of interest. In some embodiments the positional information is a roll angle of the ultrasound transducer. In some embodiments the alignment points are the positional information of the ultrasound transducer corresponding to a left edge of the prostate, a mid-line of the prostate, and a right edge of the prostate. In some embodiments the transforming is a scaling transformation of the image data.

The following clauses are offered as further description of the examples of the apparatus. Any one or more of the following clauses may be combinable with any other one or more of the following clauses and/or with any subsection or a portion or portions of any other clause and/or combination and permutation of clauses. Any one of the following clauses may stand on its own merit without having to be combined with any other clause or any portion of any other clause, etc.

CLAUSE 1: A method for visually assisting an operator of an ultrasound system, comprising: receiving a first imaging data of a first anatomical region using a first coordinate system, the first imaging data marked with a landmark for identifying the first anatomical region; transforming the first imaging data of the first anatomical region from the first coordinate system to a cylindrical coordinate system; displaying a live ultrasound image of a second anatomical region as received from an ultrasound transducer; receiving positional information from the ultrasound transducer corresponding to an alignment point of the second anatomical region; and displaying a transformed image from the transformed first imaging data of the first anatomical region corresponding to the alignment point using the landmark.

CLAUSE 2: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the transformed image and the live ultrasound image are displayed simultaneously.

CLAUSE 3: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph further comprising the steps of receiving new positional information from the ultrasound transducer; and displaying both the transformed image and the live ultrasound image corresponding to the new positional information of the ultrasound transducer.

CLAUSE 4: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph further comprising the steps of: receiving the first imaging data, the first imaging data being marked with a region of interest in or on the first anatomical region; determining if the region of interest is visible in the transformed image corresponding to the positional information received; and once determining that the region of interest is visible, then showing a visual indicator of the region of interest on the transformed image.

CLAUSE 5: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the first imaging data is recorded ultrasound imaging data of the first anatomical region, and the first coordinate system is a cylindrical coordinate system.
CLAUSE 6: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the first anatomical region and the second anatomical region are the same anatomical region.

CLAUSE 7: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the first imaging data is magnetic resonance imaging (MRI) data and the first coordinate system is a Cartesian coordinate system.

CLAUSE 8: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the landmark is a line along a border between a rectal wall and the first anatomical region in a midline frame of a sagittal series of image frames of the first imaging data.

CLAUSE 9: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the landmark identifies the approximate size and orientation of the anatomical region.

CLAUSE 10: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the positional information is a roll angle of the ultrasound transducer.

CLAUSE 11: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the positional information is a roll angle of about 0 degrees and the alignment point is a mid-line of the second anatomical region.

CLAUSE 12: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the positional information is a roll angle from about +80 degrees to about −80 degrees.

CLAUSE 13: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the ultrasound probe is a side-fire ultrasound probe.

CLAUSE 14: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the transformed image and the live ultrasound image are displayed side-by-side.

CLAUSE 15: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the transformed image and the corresponding ultrasound image are displayed overlaid.

CLAUSE 16: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the first anatomical region and the second anatomical region are the same anatomical region of a patient.

CLAUSE 17: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the anatomical region is a prostate.

CLAUSE 18: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the anatomical region is an organ, organ system, tissue, thyroid, rectum, or urinary tract.
CLAUSE 19: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph further comprising a method for visually assisting an operator of an ultrasound system, comprising: generating imaging data for a 3D model anatomical region, the imaging data in a cylindrical coordinate space; displaying a live ultrasound image of an anatomical region as received from an ultrasound transducer; receiving positional information from the ultrasound transducer corresponding to alignment points of the anatomical region; transforming the imaging data of the 3D model anatomical region based on the received positional information corresponding to the alignment points; and displaying a generated image from the generated imaging data of the 3D model anatomical region corresponding to the positional information of the ultrasound transducer; wherein the generated image and the live ultrasound image are displayed simultaneously for visually assisting the operator of the ultrasound system. CLAUSE 20: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph further comprising receiving a region of interest for the 3D model anatomical region; determining if the region of interest is visible in the generated image corresponding to the positional information received; and once determining that the region of interest is visible, then showing a visual indicator of the region of interest on the generated image. CLAUSE 21: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph further comprising providing a graphical user interface for the selection of a region of interest of the 3D model anatomical region by the operator. CLAUSE 22: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the 3D model of the anatomical region has 39 regions of interest. CLAUSE 23: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the positional information is a roll angle of the ultrasound transducer. CLAUSE 24: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the alignment points are the positional information of the ultrasound transducer corresponding to a left edge of the anatomical region, a mid-line of the anatomical region, and a right edge of the anatomical region. CLAUSE 25: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the transforming is a scaling transformation of the image data. CLAUSE 26: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the positional information is a roll angle from about +80 degrees to about −80 degrees. CLAUSE 27: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the ultrasound probe is a side-fire ultrasound probe. CLAUSE 28: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the transformed image and the live ultrasound image are displayed side-by-side. CLAUSE 29: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the transformed image and the corresponding ultrasound image are displayed overlaid. CLAUSE 30: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the first anatomical region and the second anatomical region are the same anatomical region of a patient. 
CLAUSE 31: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the anatomical region is a prostate.

CLAUSE 32: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph wherein the anatomical region is an organ, organ system, tissue, thyroid, rectum, or urinary tract.

CLAUSE 33: A system for visually assisting an operator of an ultrasound system of any of the clauses, or any portion of any clause, mentioned in this paragraph comprising a data store for storing a first imaging data of a first anatomical region using a first coordinate system, the first imaging data marked with a landmark for identifying the first anatomical region; an ultrasound probe 606 for collecting: live ultrasound image data of a second anatomical region, and positional information from the ultrasound transducer, including positional information corresponding to an alignment point of the second anatomical region; a processing unit 602 for: receiving positional information from the ultrasound transducer corresponding to the alignment point of the second anatomical region, and transforming the first imaging data of the first anatomical region from the first coordinate system to a cylindrical coordinate system; and a display device 600 for displaying both the transformed image and the ultrasound image data corresponding to the positional information of the ultrasound transducer.

CLAUSE 34: A system of any of the clauses, or any portion of any clause, mentioned in this paragraph further comprising an input device 600 for receiving a first imaging data of a first anatomical region using a first coordinate system.

CLAUSE 35: A system of any of the clauses, or any portion of any clause, mentioned in this paragraph comprising a system for visually assisting an operator of an ultrasound system comprising: a data store for storing a 3D model anatomical region imaging data, the 3D model anatomical region imaging data in a cylindrical coordinate space; an ultrasound probe 606 for collecting: live ultrasound image data of a second anatomical region, and positional information from the ultrasound transducer, including positional information corresponding to an alignment point of the second anatomical region; a processing unit 602 for: receiving positional information from the ultrasound transducer corresponding to the alignment point of the second anatomical region, and transforming the 3D model anatomical region imaging data based on the received positional information corresponding to the alignment point of the second anatomical region; and a display device 600 for displaying both the transformed image and the ultrasound image data corresponding to the positional information of the ultrasound transducer.

CLAUSE 36: A system of any of the clauses, or any portion of any clause, mentioned in this paragraph further comprising an input device 600 for receiving a region of interest for the 3D model anatomical region.

CLAUSE 37:
A method for visually assisting an operator of an ultrasound system, comprising: receiving a first imaging data of a first anatomical region using a first coordinate system, the first imaging data marked with a landmark for identifying the first anatomical region; transforming the first imaging data of the first anatomical region from the first coordinate system to a cylindrical coordinate system; displaying a live ultrasound image of a second anatomical region as received from an ultrasound transducer; receiving positional information from the ultrasound transducer corresponding to an alignment point of the second anatomical region; and displaying a visual assistance interface; wherein the visual assistance interface and the live ultrasound image are displayed simultaneously.

CLAUSE 38: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph, further comprising: displaying a transformed image from the transformed first imaging data of the first anatomical region corresponding to the alignment point using the landmark; wherein the transformed image and/or the visual assistance interface, and the live ultrasound image are displayed simultaneously.

CLAUSE 39: A method for visually assisting an operator of an ultrasound system, comprising: generating imaging data for a 3D model anatomical region, the imaging data in a cylindrical coordinate space; displaying a live ultrasound image of an anatomical region as received from an ultrasound transducer; receiving positional information from the ultrasound transducer corresponding to alignment points of the anatomical region; transforming the imaging data of the 3D model anatomical region based on the received positional information corresponding to the alignment points; and displaying a visual assistance interface; wherein the visual assistance interface and the live ultrasound image are displayed simultaneously.

CLAUSE 40: A method of any of the clauses, or any portion of any clause, mentioned in this paragraph, further comprising: displaying a generated image from the generated imaging data of the 3D model anatomical region corresponding to the positional information of the ultrasound transducer; wherein the generated image and/or the visual assistance interface, and the live ultrasound image are displayed simultaneously.

CLAUSE 41: A system for visually assisting an operator of an ultrasound system comprising: a data store for storing a first imaging data of a first anatomical region using a first coordinate system, the first imaging data marked with a landmark for identifying the first anatomical region; an ultrasound probe 606 for collecting: live ultrasound image data of a second anatomical region, and positional information from the ultrasound transducer, including positional information corresponding to an alignment point of the second anatomical region; a processing unit 602 for: receiving positional information from the ultrasound transducer corresponding to the alignment point of the second anatomical region, and transforming the first imaging data of the first anatomical region from the first coordinate system to a cylindrical coordinate system; and a display device 600 for displaying both a visual assistance interface and the ultrasound image data corresponding to the positional information of the ultrasound transducer.
CLAUSE 42: A system for visually assisting an operator of an ultrasound system comprising: a data store for storing a 3D model anatomical region imaging data, the 3D model anatomical region imaging data in a cylindrical coordinate space; an ultrasound probe 606 for collecting: live ultrasound image data of a second anatomical region, and positional information from the ultrasound transducer, including positional information corresponding to an alignment point of the second anatomical region; a processing unit 602 for: receiving positional information from the ultrasound transducer corresponding to the alignment point of the second anatomical region, and transforming the 3D model anatomical region imaging data based on the received positional information corresponding to the alignment point of the second anatomical region; and a display device 600 for displaying both a visual assistance interface and the ultrasound image data corresponding to the positional information of the ultrasound transducer.

CLAUSE 43: A system of any of the clauses, or any portion of any clause, mentioned in this paragraph, wherein the display device 600 is for displaying a visual assistance interface and/or the transformed image, and the ultrasound image data corresponding to the positional information of the ultrasound transducer.

This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to make and use the invention. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. It may be appreciated that the assemblies and modules described above may be connected with each other as required to perform desired functions and tasks, it being within the scope of persons of skill in the art to make such combinations and permutations without having to describe each and every one in explicit terms. There is no particular assembly or component that may be superior to any of the equivalents available to the person skilled in the art. There is no particular mode of practicing the disclosed subject matter that is superior to others, so long as the functions may be performed. It is believed that all the crucial aspects of the disclosed subject matter have been provided in this document. It is understood that the scope of the present invention is limited to the scope provided by the independent claim(s), and it is also understood that the scope of the present invention is not limited to: (i) the dependent claims, (ii) the detailed description of the non-limiting embodiments, (iii) the summary, (iv) the abstract, and/or (v) the description provided outside of this document (that is, outside of the instant application as filed, as prosecuted, and/or as granted). It is understood, for this document, that the phrase "includes" is equivalent to the word "comprising." The foregoing has outlined the non-limiting embodiments (examples). The description is made for particular non-limiting embodiments (examples). It is understood that the non-limiting embodiments are merely illustrative as examples. | 67,853 |
11857371 | DETAILED DESCRIPTION Hereinafter, the terms used in the specification will be briefly defined, and the embodiments will be described in detail. The terms used in this specification are those general terms currently widely used in the art in consideration of functions regarding the present invention, but the terms may vary according to the intention of those of ordinary skill in the art, precedents, or new technology in the art. Also, some terms may be arbitrarily selected by the applicant, and in this case, the meaning of the selected terms will be described in detail in the detailed description of the present specification. Thus, the terms used in the specification should be understood not as simple names but based on the meaning of the terms and the overall description of the invention. When a part "includes" or "comprises" an element, unless there is a particular description contrary thereto, the part can further include other elements, not excluding the other elements. In addition, terms such as " . . . unit", " . . . module", or the like refer to units that perform at least one function or operation, and the units may be implemented as hardware or software or as a combination of hardware and software. Throughout the specification, an "ultrasound image" refers to an image of an object, which is obtained using ultrasound waves. The object may refer to a part of a human body. For example, the object may include organs, such as the liver, the heart, the brain, a breast, and the abdomen, or a fetus. In the present specification, the term "user" may refer to a medical professional, such as a doctor, a nurse, a medical laboratory technologist, a medical imaging technologist, or a sonographer, but the user is not limited thereto. Throughout the specification, the term "measuring device" may refer to a measuring application that receives a user input for configuring a position of a measuring point and provides measurement information with respect to an object in an ultrasound image based on the configured position of the measuring point, by using an image indicating a measuring point in an ultrasound image as a medium. The term "measuring device image" refers to an image that indicates a measuring point and serves as an image medium of a graphic user interface for receiving the user input configuring the position of the measuring point. For example, when the ultrasound apparatus receives a user input selecting a measuring device for measuring a distance, the ultrasound apparatus may display on the ultrasound image the measuring device image indicating two measuring points in the ultrasound image. Also, the ultrasound apparatus may receive a user input configuring the position of the measuring point via the measuring device image. When receiving the user input configuring the position of the measuring point, the ultrasound apparatus may measure a distance based on the configured position of the measuring point. Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In this regard, the present embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Also, parts in the drawings unrelated to the detailed description are omitted to ensure clarity of the inventive concept. Like reference numerals refer to like elements throughout.

FIG. 1 is a diagram illustrating an ultrasound apparatus 1000 according to exemplary embodiments.
The ultrasound apparatus 1000 may include a display unit 141, an input unit 103, and a controller (not shown). The input unit 103 may further include a predetermined screen, or a display panel or a touch screen 106 for visually providing information to a user, in addition to an input device for a user to input data, commands or requests to the ultrasound apparatus 1000. Also, the display unit 141 may operate as a touch screen receiving a user's touch input. Referring to FIG. 1, the display unit 141 of the ultrasound apparatus 1000 may display an ultrasound image 130 on the touch screen 106. Also, the ultrasound apparatus 1000 may display on the ultrasound image 130 a measuring device image 110 for measuring an object included in the ultrasound image 130. The ultrasound apparatus 1000 may determine one from among a plurality of measuring devices, based on a user input that is input through the input unit 103. For example, the ultrasound apparatus 1000 may determine one from among the plurality of measuring devices, when receiving the user input selecting the measuring device. When the ultrasound apparatus 1000 determines one from among the plurality of measuring devices, the ultrasound apparatus 1000 may display a measuring device image 110 corresponding to the measuring device, on the ultrasound image 130. For example, when the ultrasound apparatus 1000 receives a user input selecting an icon 105 indicating a length measuring device for measuring a length of a specific part of an organ or a specific bone, the ultrasound apparatus 1000 may display a length measuring device image 110 corresponding to the length measuring device, on the ultrasound image 130. The measuring device image 110 may include a measuring point which indicates a point on the ultrasound image that is to be measured. Also, the measuring device image 110 may include an adjusting portion for receiving a user input adjusting a position of the measuring point. In this case, the measuring point may be disposed apart from the adjusting portion. Accordingly, the user may precisely configure the measuring point without covering the measuring point on the ultrasound image with a finger. Also, the ultrasound apparatus 1000 may receive a touch input changing a position of the adjusting portion. Also, the ultrasound apparatus 1000 may adjust a position of at least one of the plurality of measuring points, based on the changed position of the adjusting portion, and may obtain a measurement value, based on a position of the plurality of measuring points including the at least one measuring point, the position of which is adjusted. Also, the ultrasound apparatus 1000 may display the obtained measurement value. When the ultrasound apparatus 1000 receives a touch input with respect to the adjusting portion, the ultrasound apparatus 1000 may change a position of the adjusting portion and the measuring point, in the ultrasound image, by changing at least one of a position and a shape of the measuring device image 110. For example, when receiving the touch input with respect to the adjusting portion, the ultrasound apparatus 1000 may determine the position of the measuring point based on the position of the adjusting portion. Also, when the position of the measuring point is determined, the ultrasound apparatus 1000 may display the measuring point by shifting the measuring point in the measuring device image. In this case, the ultrasound apparatus 1000 may shift the measuring point by changing at least one of the shape and the position of the measuring device image 110.
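As a simple illustration of the distance measurement that follows from two configured measuring points, the sketch below converts a pixel-space separation into millimeters. The pixel spacing would come from the system's imaging geometry; here it is an assumed parameter.

```python
# Sketch: distance between two measuring points given in screen pixels.
import math

def measure_distance_mm(p1_px, p2_px, mm_per_px):
    """Euclidean distance between two measuring points, in millimeters."""
    dx = (p2_px[0] - p1_px[0]) * mm_per_px
    dy = (p2_px[1] - p1_px[1]) * mm_per_px
    return math.hypot(dx, dy)

# Example: caliper points 120 px apart at an assumed 0.2 mm/px -> 24.0 mm.
print(measure_distance_mm((100, 200), (220, 200), mm_per_px=0.2))
```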
Also, when the position of the measuring point is determined, the ultrasound apparatus 1000 may calculate a measurement value with respect to a measurement item corresponding to a measuring device that is selected based on the determined position of the measuring point. The measurement item may include information associated with a length, an angle, an area, and a volume. The shape of a measuring device image for measuring a distance may be that of a physical vernier caliper or a physical pincette. Also, the shape of a measuring device image for measuring an area may be that of physical scissors. Also, the shape of a measuring device image for measuring an outline may be that of a physical pen. Accordingly, the user may intuitively recognize the interface method with respect to the measuring device.

FIG. 2 is a flowchart of a method of obtaining a measurement value via the ultrasound apparatus 1000, according to an embodiment. The ultrasound apparatus 1000 may display a measuring device image including a plurality of measuring points, which indicate points on an ultrasound image that are to be measured, and an adjusting portion for adjusting the plurality of measuring points, on the ultrasound image, in operation S210. The ultrasound apparatus 1000 may receive a user input selecting one from among a plurality of measuring devices. The measuring device may refer to a measuring application which receives a user input configuring a position of the measuring point, and provides a measurement value with respect to an object in an ultrasound image based on the position of the measuring point, by using an image indicating a point on an ultrasound image that is to be measured as a medium. The ultrasound image may be at least one selected from a B (brightness) mode image indicating a magnitude of an ultrasound echo signal reflected from an object as a brightness, a C (color) mode image indicating a speed of a moving object as a color by using a Doppler effect, a D (Doppler) mode image indicating an image of the moving object as a spectrum by using the Doppler effect, an M (motion) mode image indicating a motion of an object according to time in a predetermined position, and an E (elastic) mode image indicating, as an image, a difference in a reaction between when compression is and is not applied to an object. However, the ultrasound image is not limited thereto. Also, the ultrasound image may be a two-dimensional image, a three-dimensional image, or a four-dimensional image. The ultrasound apparatus 1000 may obtain the ultrasound image by photographing an object. Also, the ultrasound apparatus 1000 may receive the ultrasound image from an external device. The ultrasound apparatus 1000 may determine one from among a plurality of measuring devices, based on a user input selecting one from among the plurality of measuring devices for measuring the object in the ultrasound image. For example, the ultrasound apparatus 1000 may provide a measuring device selection menu for selecting one from among the plurality of measuring devices. The ultrasound apparatus 1000 may display the measuring device selection menu together with the ultrasound image on one screen. Also, the ultrasound apparatus 1000 may display the measuring device selection menu on a separate screen that is different from a touch screen on which the ultrasound image is displayed. Also, the ultrasound apparatus 1000 may determine one from among the plurality of measuring devices, based on a user input selecting one from among a plurality of measurement items.
The measurement item may include a length, a width, or an angle, but it is not limited thereto. When the ultrasound apparatus 1000 receives the user input selecting one measurement item, the ultrasound apparatus 1000 may determine a measuring device that is predetermined in correspondence to the selected measurement item. Also, the ultrasound apparatus 1000 may determine one from among the plurality of measuring devices, based on a pattern of a user input. For example, when the ultrasound apparatus 1000 receives a user input touching two points on a touch screen and dragging the two points in opposite directions, the ultrasound apparatus 1000 may determine an oval measuring device as the measuring device. Also, when the ultrasound apparatus 1000 receives a user input touching two points on the touch screen and rotating one point, the ultrasound apparatus 1000 may determine an angle measuring device as the measuring device. The ultrasound apparatus 1000 may display on the ultrasound image a measuring device image corresponding to the selected measuring device. The measuring device image indicates a point that is to be measured in the ultrasound image, and may refer to an image medium of a graphic user interface for receiving a user input configuring a position of the measuring point. The measuring device image may be pre-stored in correspondence to the measuring device. When the ultrasound apparatus 1000 determines one from among the plurality of measuring devices, the ultrasound apparatus 1000 may display a measuring device image corresponding to the determined measuring device, on the ultrasound image. For example, when a vernier caliper measuring device is determined, the ultrasound apparatus 1000 may display a vernier caliper measuring device image on the ultrasound image. Also, when a pincette measuring device is determined, the ultrasound apparatus 1000 may display a pincette measuring device image on the ultrasound image. Also, when a scissors measuring device is determined, the ultrasound apparatus 1000 may display a scissors measuring device image on the ultrasound image. The measuring device image may include a plurality of measuring points which indicate points on the ultrasound image that are to be measured. The points on the ultrasound image that are to be measured may be points in the ultrasound image which serve as references for measurement. The measuring points on the ultrasound image may be configured by the user via the measuring device image. A position of the plurality of measuring points in the measuring device image may be pre-determined in correspondence to the measuring device image. For example, in the measuring device image having the shape of scissors, the plurality of measuring points may be at both edges of the scissors. Also, the measuring device image may include an adjusting portion for adjusting the position of the plurality of measuring points. Also, a position of the adjusting portion in the measuring device image may be pre-determined in correspondence to the measuring device image. For example, the adjusting portion in the measuring device image having the shape of the scissors may be a handle portion of the scissors. The adjusting portion may be disposed not to overlap the plurality of measuring points in the measuring device image. For example, the adjusting portion may be apart from the plurality of measuring points in the measuring device image by a predetermined distance.
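The gesture-based device selection described above could be prototyped along the following lines: two fingers dragged in roughly opposite directions select the oval measuring device, while one finger pivoting about a stationary finger selects the angle measuring device. The thresholds and the gesture representation are illustrative guesses, not values from the disclosure.

```python
# Sketch: classify a two-finger gesture into a measuring device selection.
import numpy as np

def classify_gesture(start1, end1, start2, end2):
    v1 = np.subtract(end1, start1).astype(float)
    v2 = np.subtract(end2, start2).astype(float)
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if n1 > 5 and n2 > 5:                      # both fingers moved
        cos = v1 @ v2 / (n1 * n2)
        if cos < -0.7:                         # roughly opposite directions
            return "oval"
    if (n1 > 5) != (n2 > 5):                   # exactly one finger moved
        pivot = start2 if n1 > 5 else start1
        a = np.subtract(start1 if n1 > 5 else start2, pivot).astype(float)
        b = np.subtract(end1 if n1 > 5 else end2, pivot).astype(float)
        # A large angle swept about the stationary finger reads as rotation.
        ang = np.degrees(np.arccos(np.clip(
            a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1, 1)))
        if ang > 15:
            return "angle"
    return None

# Example: two touches dragged apart horizontally -> "oval".
print(classify_gesture((100, 100), (60, 100), (200, 100), (240, 100)))
```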
Also, the ultrasound apparatus 1000 may display the measuring device image such that the adjusting portion may be distinguished from other portions of the measuring device image. For example, the ultrasound apparatus 1000 may display the adjusting portion in a different color from other portions of the measuring device image. The ultrasound apparatus 1000 may display the measuring device image half-transparently so that a portion of the ultrasound image, which overlaps the measuring device image, is not covered by the measuring device image. The ultrasound apparatus 1000 may receive a touch input changing a position of the adjusting portion of the measuring device image on the ultrasound image, in operation S220. The ultrasound apparatus 1000 may receive a touch and drag input with respect to the adjusting portion in the measuring device image. For example, the ultrasound apparatus 1000 may receive the input of touching the adjusting portion via a finger or an electronic pen, and dragging the finger or the electronic pen to another position in a screen, while maintaining the state of touching. When the ultrasound apparatus 1000 receives the touch and drag input, the ultrasound apparatus 1000 may move the adjusting portion along the drag. In this case, the ultrasound apparatus 1000 may move the adjusting portion in the measuring device image by changing at least one of a position and a shape of the measuring device image. In operation S230, the ultrasound apparatus 1000 may adjust a position of at least one of the plurality of measuring points based on the changed position of the adjusting portion, and may obtain a measurement value based on a position of the plurality of measuring points including the at least one measuring point, the position of which is adjusted. When the position of the adjusting portion is changed in the ultrasound image, the ultrasound apparatus 1000 may determine the position of the measuring point based on the position of the adjusting portion. For example, the ultrasound apparatus 1000 may determine a point that is apart from a central point of the adjusting portion, by a pre-determined distance, as the measuring point. Also, the ultrasound apparatus 1000 may determine a point that is apart from a pre-determined adjusting point in the adjusting portion, by a pre-determined distance, along a direction of a straight line connecting the adjusting point and at least one reference point in the ultrasound image, as the measuring point. In this case, the ultrasound apparatus 1000 may adjust a position of at least one of the plurality of measuring points by changing at least one of a position and a shape of the measuring device image. For example, the ultrasound apparatus 1000 may adjust the position of the at least one of the plurality of measuring points by adjusting a length of the measuring device image. Also, for example, the ultrasound apparatus 1000 may adjust the position of the at least one of the plurality of measuring points by rotating the measuring device image. Also, when the measuring device image is formed of two partial images crossing each other, based on a reference point, the ultrasound apparatus 1000 may adjust the position of the at least one of the plurality of measuring points by rotating the two partial images based on the reference point. The ultrasound apparatus 1000 may obtain a measurement value with respect to a measurement item corresponding to a selected measuring device, based on the position of the plurality of measuring points.
When the position of the at least one of the plurality of measuring points is adjusted, the ultrasound apparatus 1000 may obtain the measurement value with respect to the measurement item corresponding to the selected measuring device, based on the position of the plurality of measuring points in the ultrasound image. For example, the ultrasound apparatus 1000 may measure a distance between two measuring points on the ultrasound image, based on a position of the two measuring points on the measuring device image. The ultrasound apparatus 1000 may generate a circle having a straight line connecting the two measuring points as a diameter, based on the position of the measuring points, and may calculate at least one of the diameter, a circumferential length, and an area of the generated circle. In this case, the ultrasound apparatus 1000 may convert a scale on the ultrasound image into a scale on a real object, in order to calculate a length, an area, and a volume. Also, the ultrasound apparatus 1000 may configure an interest area on the ultrasound image based on the position of the plurality of measuring points and obtain a measurement value with respect to the configured interest area. For example, the ultrasound apparatus 1000 may configure a gate on the ultrasound image based on a position of two measuring points and measure a blood flow speed of an area indicated by the gate. The ultrasound apparatus 1000 may display the obtained measurement value in operation S240. The ultrasound apparatus 1000 may display the obtained measurement value on the measuring device image. Also, the ultrasound apparatus 1000 may receive a touch input ending a touch and drag with respect to the adjusting portion. When receiving the touch input ending the touch and drag with respect to the adjusting portion, the ultrasound apparatus 1000 may display a button image for storing a measurement value, corresponding to the ultrasound image, on the ultrasound image. Also, when receiving the touch input ending the touch and drag, the ultrasound apparatus 1000 may delete the measuring device image on the touch screen and may display a button image for re-adjusting the position of the plurality of measuring points, on the ultrasound image.

FIG. 3 is a view for describing a method of providing a measuring device selection menu 340 via the ultrasound apparatus 1000, according to an embodiment. Referring to FIG. 3, the ultrasound apparatus 1000 may display the measuring device selection menu 340. The measuring device selection menu 340 may include icons 341 through 346 for selecting a measuring device. Also, the measuring device selection menu 340 may include information indicating shapes of an area measured by the measuring device. For example, the ultrasound apparatus 1000 may display a word 'line' indicating a distance, and an image indicating the distance, together with an icon for selecting a length measuring device. Also, the measuring device selection menu 340 may include information indicating a measurement item measured by the measuring device. For example, the ultrasound apparatus 1000 may display a word 'angle' indicating an angle measured by the measuring device, and an angle image, together with an icon indicating an angle measuring device. Also, when the ultrasound apparatus 1000 receives a user input selecting one from among the plurality of measuring devices, the ultrasound apparatus 1000 may display a measuring device image corresponding to the selected measuring device, on the ultrasound image.
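The circle measurements mentioned above follow directly from the two measuring points once the on-screen scale is converted to real units. A short sketch, with an assumed millimeters-per-pixel factor standing in for the scale conversion:

```python
# Sketch: circle metrics from two measuring points that define a diameter.
import math

def circle_metrics(p1_px, p2_px, mm_per_px):
    d_mm = math.dist(p1_px, p2_px) * mm_per_px   # diameter in real units
    return {"diameter_mm": d_mm,
            "circumference_mm": math.pi * d_mm,
            "area_mm2": math.pi * (d_mm / 2) ** 2}

# Example: points 60 px apart at an assumed 0.25 mm/px -> 15 mm diameter.
print(circle_metrics((100, 100), (160, 100), mm_per_px=0.25))
```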
For example, when receiving a user input selecting the icon 341 for selecting a vernier caliper measuring device, the ultrasound apparatus 1000 may display a vernier caliper measuring device image corresponding to the vernier caliper measuring device, on the ultrasound image. FIG. 3 illustrates a case in which the measuring device selection menu 340 includes the vernier caliper measuring device icon 341 corresponding to the vernier caliper measuring device for measuring a distance between a plurality of measuring points, a pincette measuring device icon 342 corresponding to a pincette measuring device, a scissors measuring device icon 343 corresponding to a scissors measuring device for measuring a measurement item with respect to an oval, a pen measuring device icon 345 corresponding to a pen measuring device for measuring an item with respect to a trace generated based on a user's touch trace, and an angle measuring device icon 346 corresponding to an angle measuring device. However, the measuring device may have other various shapes, and the measuring device selection menu 340 may include other measuring device icons corresponding to the measuring devices having various shapes.

FIG. 4 is a flowchart of a method of providing a measuring device via the ultrasound apparatus 1000, according to an embodiment. The ultrasound apparatus 1000 may receive an input of selecting a vernier caliper measuring device, in operation S410. The ultrasound apparatus 1000 may display a vernier caliper measuring device image, on an ultrasound image, in operation S420. The ultrasound apparatus 1000 may display the vernier caliper measuring device image corresponding to the vernier caliper measuring device, on the ultrasound image. The vernier caliper measuring device image may have a physical shape of a vernier caliper. The vernier caliper measuring device image may include two measuring points which indicate positions of the two points in the ultrasound image that are to be measured. Also, the vernier caliper measuring device image may include an adjusting portion for adjusting the position of the measuring points. The ultrasound apparatus 1000 may receive a touch input changing a position of the adjusting portion in the vernier caliper measuring device image, on the ultrasound image, in operation S430. When the ultrasound apparatus 1000 receives a touch and drag input with respect to the adjusting portion, the ultrasound apparatus 1000 may move the adjusting portion along the dragged area. In this case, the ultrasound apparatus 1000 may move the adjusting portion by changing at least one of a shape and a position of the adjusting portion. The ultrasound apparatus 1000 may determine a position of two measuring points that are apart from the adjusting portion by a pre-determined distance, based on the changed position of the adjusting portion, in operation S440. When the at least one of the shape and the position of the adjusting portion is changed, the ultrasound apparatus 1000 may determine a position of a plurality of measuring points, based on the position of the adjusting portion. For example, the ultrasound apparatus 1000 may determine a position of a measuring point based on a position of an adjusting point. The adjusting point may be a point in the measuring device image which becomes a reference point for determining the position of the measuring point. The position of the adjusting point may be a fixed point in the adjusting portion. For example, the position of the adjusting point may be a central point of the adjusting portion.
The ultrasound apparatus 1000 may determine the position of the adjusting point based on the position of the adjusting portion. When the ultrasound apparatus 1000 determines the position of the adjusting point, the ultrasound apparatus 1000 may determine a position of a point that is apart from a first adjusting point by a pre-determined distance and a position of a point that is apart from a second adjusting point by a pre-determined distance, the points apart from the first and second adjusting points being from among a plurality of points on a straight line connecting the first and second adjusting points, and then may determine points that are respectively apart from the determined positions by a pre-determined distance, along a direction perpendicular to the straight line, as the measuring point. The ultrasound apparatus 1000 may adjust a position of at least one of two measuring points in the vernier caliper measuring device image, in operation S450. The ultrasound apparatus 1000 may adjust the position of the at least one of the two measuring points, by changing at least one of a position and a shape of the vernier caliper measuring device image. For example, the ultrasound apparatus 1000 may adjust the position of the at least one of the two measuring points, by rotating the vernier caliper measuring device image. Also, the ultrasound apparatus 1000 may adjust the position of the at least one of the two measuring points, by lengthening the vernier caliper measuring device image. The ultrasound apparatus 1000 may calculate a distance between the two measuring points based on the determined position of the two measuring points, in operation S460. The ultrasound apparatus 1000 may display the calculated distance, in operation S470.

FIG. 5A is a view for describing a method of providing a distance measuring function via a vernier caliper measuring device, via the ultrasound apparatus 1000, according to an embodiment. Referring to FIG. 5A, when the vernier caliper measuring device is selected, the ultrasound apparatus 1000 may determine a measuring point and an adjusting portion on an ultrasound image. For example, when receiving a user input selecting the icon 341 indicating the vernier caliper measuring device in the measuring device selection menu 340 illustrated in FIG. 3, the ultrasound apparatus 1000 may obtain a position of two measuring points 511 and 513 and the adjusting portion 530 on the ultrasound image. The adjusting portion 530 may include a first adjusting portion 531 and a second adjusting portion 533. A user may configure the measuring points 511 and 513 at a part that is to be measured in an object in the ultrasound image. For example, when the object is a fetus, and the measuring part is a nuchal translucency (NT) of the fetus, the user may locate the measuring points 511 and 513 at two end points of the NT. In this case, the user may move the first and second adjusting portions 531 and 533 to locate the measuring points 511 and 513 at the measuring part that is to be measured. In detail, the measuring points 511 and 513 may be points on the ultrasound image which serve as references for measurement. Also, the first and second adjusting portions 531 and 533 may be areas on the ultrasound image that receive a user's touch input for changing the location of the measuring points 511 and 513. Default locations of the two measuring points 511 and 513 and the adjusting portion 530 may be pre-determined in correspondence to the vernier caliper measuring device.
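The geometric construction of the measuring points from the adjusting points, as described above, reduces to stepping a fixed distance along the line joining the two adjusting points and then a fixed distance perpendicular to it. A sketch with illustrative offset values (the actual pre-determined distances are not specified in the disclosure):

```python
# Sketch: derive the two measuring points from the two adjusting points.
import numpy as np

def measuring_points(adj1, adj2, along_offset=10.0, perp_offset=30.0):
    """Step along the line joining the adjusting points, then perpendicular
    to it, to place the two measuring points. Offsets are assumed values."""
    a1, a2 = np.asarray(adj1, float), np.asarray(adj2, float)
    axis = a2 - a1
    axis /= np.linalg.norm(axis)                 # unit vector along the line
    normal = np.array([-axis[1], axis[0]])       # perpendicular direction in 2D
    m1 = a1 + along_offset * axis + perp_offset * normal
    m2 = a2 - along_offset * axis + perp_offset * normal
    return m1, m2

# Example: adjusting points 160 px apart; the measuring points sit offset
# from them, and their separation gives the measured distance.
m1, m2 = measuring_points((100, 300), (260, 300))
print(m1, m2, np.linalg.norm(m2 - m1))
```

Because the measuring points are derived rigidly from the adjusting points, dragging an adjusting portion moves the corresponding measuring point without the finger ever covering it, which is the point of the offset.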
The vernier caliper measuring device may be a device for calculating a distance between the two measuring points 511 and 513. The ultrasound apparatus 1000 may calculate the distance between the two measuring points 511 and 513 based on the obtained location of the two measuring points 511 and 513. When the ultrasound apparatus 1000 receives a touch and drag input with respect to the adjusting portion 530, the ultrasound apparatus 1000 may move the adjusting portion 530 along the dragged area. In this case, the ultrasound apparatus 1000 may move the adjusting portion 530 by changing at least one of a shape and a position of the adjusting portion 530. The first adjusting portion 531 may be configured to move up and down. Also, the second adjusting portion 533 may be configured to move up and down or to rotate based on a first adjusting point 521. Also, both of the adjusting portions 531 and 533 may simultaneously move in a parallel direction. Other areas of the adjusting portion 530, except for the first adjusting portion 531 and the second adjusting portion 533, may be areas for the two measuring points 511 and 513 and the adjusting portion 530 to simultaneously move in the parallel direction. When the adjusting portion 530 moves, the ultrasound apparatus 1000 may determine a position of the measuring points 511 and 513 based on the position of the adjusting portion 530. For example, the ultrasound apparatus 1000 may determine a position of adjusting points 521 and 523 based on a position of the first adjusting portion 531 and the second adjusting portion 533. The adjusting points 521 and 523 may be points in the ultrasound image which become reference points for determining the position of the measuring points 511 and 513. The positions of the adjusting points 521 and 523 may be the central points of the first adjusting portion 531 and the second adjusting portion 533, respectively. Also, when the position of the first adjusting portion 531 and the second adjusting portion 533 changes in the ultrasound image, the ultrasound apparatus 1000 may determine a position of a point 526 that is apart from the first adjusting point 521 by a pre-determined distance and a position of a point 527 that is apart from the second adjusting point 523 by a pre-determined distance, the points 526 and 527 apart from the first and second adjusting points 521 and 523 being from among a plurality of points on a straight line 525 connecting the first and second adjusting points 521 and 523, and then may determine the points 511 and 513 that are respectively apart from the determined positions 526 and 527 by a pre-determined distance, along a direction perpendicular to the straight line 525, as the measuring point. Accordingly, the measuring points 511 and 513 may be located at positions that are apart from the adjusting portions 531 and 533 by the pre-determined distance. As illustrated in FIG. 5A, since the adjusting portion 530 is spaced apart from the measuring points 511 and 513, the measuring points 511 and 513 may be precisely configured without being covered by a finger, when the user configures the measuring points 511 and 513 by touching an arbitrary point in the adjusting portion 530 on a touch screen. Also, when the position of the first adjusting portion 531 and the second adjusting portion 533 is changed in the ultrasound image, the ultrasound apparatus 1000 may store the changed position of the first adjusting portion 531 and the second adjusting portion 533 and the calculated position of the measuring points 511 and 513.
FIG. 5B is a view for describing a method of displaying a vernier caliper measuring device image 510 corresponding to a vernier caliper measuring device via the ultrasound apparatus 1000, according to an embodiment. Referring to FIG. 5B, when the vernier caliper measuring device is selected, the ultrasound apparatus 1000 may display the vernier caliper measuring device image 510 corresponding to the vernier caliper measuring device. The vernier caliper measuring device image 510 may include two measuring points 511 and 513 which indicate two points in an ultrasound image that are to be measured. Also, the vernier caliper measuring device image 510 may include an adjusting portion 555 indicating an adjusting area. The adjusting portion 555 may include a first adjusting portion 557 indicating a position of a first adjusting area and a second adjusting portion 559 indicating a position of a second adjusting area. The measuring points are located apart from the adjusting portion 555 by a pre-determined distance, and thus, the adjusting portion 555 indicating the adjusting area may be displayed at a location that is apart from the measuring points by the pre-determined distance. The vernier caliper measuring device image 510 may have a physical shape of a vernier caliper. For example, the adjusting portion 555 may correspond to an area of a bar shape. Also, the two measuring points 511 and 513 may be located to be apart from the adjusting portion 555 by a pre-determined distance. Also, when the adjusting portion 555 and the measuring points 511 and 513 are changed by a user's touch input, the ultrasound apparatus 1000 may change at least one of a position and a shape of the vernier caliper measuring device image 510 such that the adjusting portion 555 in the vernier caliper measuring device image 510 is located on the adjusting area and the two measuring points 511 and 513 indicate the measuring points. Accordingly, a user may recognize a measuring point configured on the ultrasound image and a position of the adjusting area 530, from the vernier caliper measuring device image 510 displayed on the ultrasound image. Also, the ultrasound apparatus 1000 may receive a touch input with respect to the adjusting area 530, by receiving a touch input moving the adjusting portion 555 in the vernier caliper measuring device image 510. For example, the position of the first adjusting area may be moved along an area touched by the user. The ultrasound apparatus 1000 may receive a user's touch input moving the position of the first adjusting portion 557 by displaying the first adjusting portion 557 on the moved first adjusting area.

FIG. 5C is a view for describing a method of indicating a measuring point by changing a position or a shape of the vernier caliper measuring device image 510, according to a user input, via the ultrasound apparatus 1000, according to an embodiment. Referring to FIG. 5C, the ultrasound apparatus 1000 may receive a touch input changing the position of the second adjusting portion 559. For example, the ultrasound apparatus 1000 may receive a touch input touching and dragging the second adjusting portion 559. When receiving the touch input touching and dragging the second adjusting portion 559, the ultrasound apparatus 1000 may adjust a position of two measuring points 511 and 513 in the ultrasound image by changing at least one of a position and a shape of the vernier caliper measuring device image 510.
For example, the ultrasound apparatus1000may receive the touch input rotating the second adjusting portion559based on the first adjusting point521that is the center of the first adjusting portion557. When receiving the touch input rotating the second adjusting portion559, the ultrasound apparatus1000may determine a position of the second adjusting point523based on the position of the second adjusting portion559. When the position of the second adjusting point523is determined, the ultrasound apparatus1000may determine the position of the two measuring points511and513, based on the first adjusting point521and the determined position of the second adjusting point523. Also, when receiving the touch input rotating the second adjusting portion559, the ultrasound apparatus1000may rotate the vernier caliper measuring device image510. When the vernier caliper measuring device image510is rotated, a point in the ultrasound image that is indicated by the measuring points511and513in the vernier caliper measuring device image510may be a point that is to be measured. Also, for example, the ultrasound apparatus1000may receive a touch input reducing or increasing a length of the vernier caliper measuring device image510. For example, the ultrasound apparatus1000may receive a touch input moving the second adjusting portion559in a lengthwise direction of the vernier caliper measuring device image510. When receiving the touch input moving the second adjusting portion559in the lengthwise direction of the vernier caliper measuring device image510, the ultrasound apparatus1000may determine the position of the second adjusting point523, based on the position of the second adjusting portion559. When the position of the second adjusting point523is determined, the ultrasound apparatus1000may determine the position of the second measuring point513, based on the first adjusting point521and the determined position of the second adjusting point523. Also, a distance between the first measuring point511and the second measuring point513may be calculated based on the first measuring point511and the determined position of the second measuring point513. Also, when receiving the touch input moving the second adjusting portion559in the lengthwise direction of the vernier caliper measuring device image510, the length of the vernier caliper measuring device image510may be increased or decreased. When the length of the vernier caliper measuring device image510is increased or decreased, the point in the ultrasound image that is indicated by the measuring points511and513in the vernier caliper measuring device image510may be a point that is to be measured. FIG.5Dis a view for describing a method of configuring a measuring point by changing a position of the vernier caliper measuring device image510, according to a user input, via the ultrasound apparatus1000, according to another embodiment. Referring toFIG.5D, the ultrasound apparatus1000may receive a touch input moving the entire vernier caliper measuring device image510. For example, the ultrasound apparatus1000may receive a user input touching other areas of the entire adjusting portion except for the first adjusting portion and the second adjusting portion. 
When the ultrasound apparatus1000receives the user input touching other areas of the entire adjusting portion except for the first adjusting portion and the second adjusting portion, the ultrasound apparatus1000may display an image560indicating that the entire vernier caliper measuring device image510is selected, on the vernier caliper measuring device image510. Also, when the ultrasound apparatus1000receives a touch input moving the entire vernier caliper measuring device image510, the ultrasound apparatus1000may move the entire vernier caliper measuring device image510. When the entire vernier caliper measuring device image510is moved, a point in the ultrasound image that is indicated by a measuring point in the vernier caliper measuring device image510may be a point that is to be measured. Also, although it is not illustrated inFIG.5D, the left and right sides of the vernier caliper measuring device image510may be swapped. For example, when the ultrasound apparatus1000receives a user input double-clicking the vernier caliper measuring device image510, the ultrasound apparatus1000may display the vernier caliper measuring device image510with its left and right sides swapped. FIG.5Eis a view for describing a method of indicating a measuring point by changing a position or a shape of the vernier caliper measuring device image510, according to a user input, via the ultrasound apparatus1000, according to another embodiment. Referring toFIG.5E, the ultrasound apparatus1000may receive a touch input changing a position of the first adjusting portion557. When receiving the input changing the position of the first adjusting portion557, the ultrasound apparatus1000may adjust a position of the first measuring point511in an ultrasound image by changing at least one of the position and the shape of the vernier caliper measuring device image510. Also, the ultrasound apparatus1000may receive a touch input increasing or decreasing a length of the vernier caliper measuring device image510. For example, the ultrasound apparatus1000may receive a touch input moving the first adjusting portion557in a lengthwise direction of the vernier caliper measuring device image510. When receiving the touch input moving the first adjusting portion557in the lengthwise direction of the vernier caliper measuring device image510, the ultrasound apparatus1000may determine the position of the first adjusting point521, based on the position of the first adjusting portion557. When the position of the first adjusting point521is determined, the ultrasound apparatus1000may determine the position of the first measuring point511, based on the changed first adjusting point521and the second adjusting point523. Also, a distance between the first measuring point511and the second measuring point513may be calculated based on the determined position of the first measuring point511and the second measuring point513. Also, when receiving the touch input moving the first adjusting portion557in the lengthwise direction of the vernier caliper measuring device image510, the ultrasound apparatus1000may change the position of the first adjusting portion557by increasing or decreasing the length of the vernier caliper measuring device image510. When the length of the vernier caliper measuring device image510is increased or decreased, the point in the ultrasound image that is indicated by the measuring points511and513in the vernier caliper measuring device image510may be a point that is to be measured.
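The rotation and stretch behaviors described with reference toFIGS.5C through5E amount to rigid-body updates about the first adjusting point521. Below is a minimal sketch of such an update, assuming 2-D screen coordinates; the drag-to-angle mapping and the sample coordinates are hypothetical.

```python
import math

def rotate_about(point, center, angle_rad):
    """Rotate 'point' about 'center' by angle_rad (counterclockwise)."""
    x, y = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

# Dragging the second adjusting portion around the first adjusting point
# rotates the whole caliper image; each measuring point follows rigidly,
# so the measured distance between the points is unchanged by rotation.
first_adjusting_point = (100.0, 300.0)
second_adjusting_point = (400.0, 300.0)
measuring_points = [(120.0, 340.0), (380.0, 340.0)]

angle = math.radians(15)  # in a real UI, derived from the drag gesture
second_adjusting_point = rotate_about(second_adjusting_point,
                                      first_adjusting_point, angle)
measuring_points = [rotate_about(m, first_adjusting_point, angle)
                    for m in measuring_points]
```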
FIG.6is a view for describing a method of storing a calculated measurement value via the ultrasound apparatus1000, according to an embodiment. Referring toFIG.6, the ultrasound apparatus1000may display a button image for storing the calculated measurement value, on an ultrasound image. For example, when the ultrasound apparatus1000receives a user input ending a touch and drag input with respect to an adjusting portion in a measuring device image, the ultrasound apparatus1000may display the button image for storing the calculated measurement value. When receiving the input touching the button image for storing the calculated measurement value, the ultrasound apparatus1000may store the calculated measurement value in correspondence to identification information of the ultrasound image. Also, when the user selects a measuring part and a measuring item620before selecting a measuring device, the ultrasound apparatus1000may display or store the calculated measurement value as a measurement value corresponding to the pre-selected measurement part and measurement item, when receiving the user input touching the icon for storing the measurement value. Also, the ultrasound apparatus1000may store not only the measurement value but also a position of a measuring point and an adjusting area. FIG.7Ais a view for describing a method of providing a distance measuring function via a pincette measuring device, via the ultrasound apparatus1000, according to an embodiment. Referring toFIG.7A, the ultrasound apparatus1000may display a pincette measuring device image710on an ultrasound image. When the ultrasound apparatus1000receives a user input selecting the pincette measuring device icon342in the measuring device selection menu340, the ultrasound apparatus1000may display the pincette measuring device image710on the ultrasound image. The pincette measuring device image710may have a physical shape of a pincette. Also, the pincette measuring device image710may be formed of two images. The pincette measuring device image710may include two measuring points751and753which indicate two points in the ultrasound image that are to be measured. Also, the pincette measuring device image710may include two adjusting portions761and763for adjusting a position of the two measuring points751and753. The position of the measuring points751and753and the adjusting portions761and763may be pre-determined in the pincette measuring device image710. For example, the measuring points751and753in the pincette measuring device image710may be a pincer portion of the pincette. Also, the adjusting portions761and763in the pincette measuring device image710may be a handle portion of the pincette. The ultrasound apparatus1000may display the pincette measuring device image710such that the two measuring points751and753in the pincette measuring device image710indicate the two points in the ultrasound image that are to be measured, and the two adjusting portions761and763in the pincette measuring device image710are located in the adjusting area receiving a touch input of a user. Accordingly, the ultrasound apparatus1000may receive the user's touch input via the adjusting portions761and763. When receiving a user's touch input moving the adjusting portions761and763on the ultrasound image, the ultrasound apparatus1000may determine the position of the two measuring points on the ultrasound image, based on the changed position of the adjusting portions761and763. 
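The pincette construction just introduced keeps each measuring point at a fixed offset from its handle, so the segment between the tips stays parallel to the segment between the handles, matching the behavior elaborated in the paragraphs that follow. A minimal sketch; the coordinates and the offset vector are illustrative assumptions.

```python
import math

def pincette_measuring_points(handle1, handle2, tip_offset=(0.0, -80.0)):
    """Each pincer tip (measuring point) sits at the same fixed vector
    offset from its handle (adjusting portion)."""
    m1 = (handle1[0] + tip_offset[0], handle1[1] + tip_offset[1])
    m2 = (handle2[0] + tip_offset[0], handle2[1] + tip_offset[1])
    return m1, m2

# With a common offset, the measured distance between the tips equals the
# distance between the handles, and the tip segment stays parallel to them.
m1, m2 = pincette_measuring_points((200, 400), (260, 400))
print(math.dist(m1, m2))  # 60.0
```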
For example, the ultrasound apparatus1000may determine a point that is apart from the adjusting portions761and763by a pre-determined distance as the two measuring points. In this case, the ultrasound apparatus1000may determine the two measuring points such that a straight line connecting the two measuring points is parallel to the adjusting portions761and763. Also, when receiving the user's touch input moving the adjusting portions761and763on the ultrasound image, the ultrasound apparatus1000may change at least one of a position and a shape of the pincette measuring device image710such that the measuring points751and753in the pincette measuring device image710indicate the points to be measured. For example, when the ultrasound apparatus1000receives a touch input rotating the adjusting portions761and763in the same direction, the ultrasound apparatus1000may rotate the pincette measuring device image710based on a pre-determined point in the pincette measuring device image710. Also, when receiving the touch input moving the adjusting portions761and763, the ultrasound apparatus1000may move the entire pincette measuring device image710to another position in the ultrasound image. Also, when receiving a user input selecting the first adjusting portion761and the second adjusting portion763in the pincette measuring device image710and dragging the first and second adjusting portions761and763in opposite directions, or in a direction in which the first and second adjusting portions761and763approach each other, the ultrasound apparatus1000may perform a parallel displacement of the two images respectively corresponding to the two pincers, in opposite directions or in a direction in which the two images approach each other. Also, the ultrasound apparatus1000may adjust a length of the pincette measuring device image710based on a pre-determined point in the pincette measuring device image710. FIG.7Bis a view for describing a method of providing a pincette measuring device via the ultrasound apparatus1000, according to another embodiment. Referring toFIG.7B, the ultrasound apparatus1000may display a measurement value obtained via the pincette measuring device image710. When the ultrasound apparatus1000receives a user's touch input moving the adjusting portions761and763on an ultrasound image, the ultrasound apparatus1000may determine a position of measuring points711and713and may calculate a distance between the measuring points711and713based on the determined position. Also, the ultrasound apparatus1000may display information770of the calculated distance on the ultrasound image. Also, the ultrasound apparatus1000may display an image760indicating a position and a range of a measured area, on the ultrasound image. Also, the ultrasound apparatus1000may configure the measured area as an interest area and may display interest information with respect to the configured interest area. For example, the ultrasound apparatus1000may configure the measured area as a sample volume. Also, the ultrasound apparatus1000may display information of a blood flow at a part indicated by the configured sample volume as a spectrum. Accordingly, the user may adjust a length of the sample volume via the pincette measuring device image710. FIG.8is a flowchart of a method of providing a measuring function via a scissors measuring device, via the ultrasound apparatus1000, according to an embodiment. The ultrasound apparatus1000may receive a user input selecting the scissors measuring device, in operation S810.
The ultrasound apparatus1000may display a scissors measuring device image on an ultrasound image, in operation S820. The ultrasound apparatus1000may display the scissors measuring device image corresponding to the scissors measuring device on the ultrasound image. The scissors measuring device image may have a physical shape of the scissors. The scissors measuring device image may include two measuring points, which indicate two points in the ultrasound image that are to be measured. The measuring points may be a point on the ultrasound image, which becomes a reference point for measurement. The plurality of measuring points in the scissors measuring device image having the physical shape of the scissors may be end points at both edges of the scissors. Also, the scissors measuring device image may include an adjusting portion indicating an adjusting area. Also, the adjusting area may be an area on the ultrasound image for receiving a user's touch input for changing a position of the measuring points. Also, in the scissors measuring device image having a shape of the scissors, the adjusting portion may be a handle portion of the scissors. Also, the adjusting portion and the measuring point may be disposed to be apart from each other in the measuring device image. For example, the adjusting portion may be disposed not to overlap the measuring point in the measuring device image. Also, for example, the adjusting portion may be disposed in an area that is apart from the measuring point by a pre-determined distance, in the measuring device image. The ultrasound apparatus1000may receive a touch input changing a position of the adjusting portion of the scissors measuring device image, on the ultrasound image, in operation S830. When receiving a touch and drag input with respect to the adjusting portion, the ultrasound apparatus1000may move the adjusting portion along the dragged area. In this case, the ultrasound apparatus1000may move the adjusting portion by changing at least one of a shape and a position of the adjusting portion. The ultrasound apparatus1000may determine a position of two measuring points that are apart from the adjusting portion by a pre-determined distance, based on the changed position of the adjusting portion, in operation S840. When the at least one of the shape and the position of the adjusting portion is changed, the ultrasound apparatus1000may determine a position of the plurality of measuring points, based on the position of the adjusting portion. For example, the ultrasound apparatus1000may determine a position of an adjusting point based on the position of the adjusting portion. The adjusting point may be a point in the measuring device image, which is a reference point for determining the position of the measuring point. The adjusting point may be a fixed point in the adjusting portion. When the position of the adjusting point is determined, the ultrasound apparatus1000may determine a point that is apart from the adjusting point by a pre-determined distance along a direction of a straight line connecting the adjusting point and at least one reference point determined on the ultrasound image, as the measuring point. The ultrasound apparatus1000may adjust the position of the adjusting portion and two measuring points by changing at least one of a position and a shape of the scissors measuring device image. 
For example, when an area indicating a handle of the scissors is moved, the ultrasound apparatus1000may change the scissors measuring device image to a shape in which both edges of the scissors are closed or to a shape in which both edges of the scissors are open, such that ends of the both edges of the scissors indicate the points in the ultrasound image that are to be measured. The ultrasound apparatus1000may determine a circle having a segment connecting the two measuring points as a diameter, based on the determined position of the two measuring points, in operation S850. The ultrasound apparatus1000may calculate and display a measurement value with respect to the circle, in operation S860. The ultrasound apparatus1000may calculate at least one of a diameter, a circumferential length, and an area of the circle. FIG.9Ais a view for describing a method of providing a measuring function via a scissors measuring device, via the ultrasound apparatus1000, according to an embodiment. Referring toFIG.9A, the ultrasound apparatus1000may display a scissors measuring device image on an ultrasound image. When the ultrasound apparatus1000receives a user input selecting the icon343indicating the scissors measuring device, the ultrasound apparatus1000may display the scissors measuring device image corresponding to the scissors measuring device, on the ultrasound image. FIG.9Bis a view for describing a method of obtaining a measurement value with respect to a circle via the scissors measuring device, via the ultrasound apparatus1000, according to an embodiment. When the ultrasound apparatus1000receives a user input selecting the scissors measuring device, the ultrasound apparatus1000may obtain a position of two measuring points911and913and two adjusting areas961and963, corresponding to the scissors measuring device, on the ultrasound image. The measuring points may be points on the ultrasound image that serve as reference points for measurement. Also, the adjusting areas may be areas on the ultrasound image that receive a user's touch input for changing the position of the measuring points. A default location of the two measuring points911and913and the two adjusting areas961and963may be pre-determined in correspondence to the scissors measuring device. The scissors measuring device may be a device for measuring a diameter, a circumferential length, and an area of a circle area953having a straight line951connecting the two measuring points911and913as the diameter. The ultrasound apparatus1000may determine the circle area953based on the obtained position of the two measuring points911and913. Also, the ultrasound apparatus1000may calculate the diameter, the circumferential length, and the area of the circle area953based on the obtained position of the measuring points911and913. When receiving a touch and drag input with respect to the adjusting areas961and963, the ultrasound apparatus1000may move the adjusting areas961and963along the dragged area. In this case, the ultrasound apparatus1000may move the adjusting areas961and963by changing the position of the adjusting areas961and963. Also, when the position of the adjusting areas is changed by the user's input, the ultrasound apparatus1000may determine the position of the measuring points911and913, based on the position of the adjusting areas961and963. For example, the ultrasound apparatus1000may determine a position of two adjusting points921and923, based on the position of the first adjusting area961and the second adjusting area963.
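Operations S850 and S860 reduce to elementary circle geometry once the two measuring points are fixed. A minimal sketch, assuming 2-D coordinates in consistent units; the helper name is hypothetical.

```python
import math

def circle_metrics(m1, m2):
    """Circle whose diameter is the segment connecting the two measuring
    points: returns its center, diameter, circumferential length, and area."""
    diameter = math.dist(m1, m2)
    radius = diameter / 2
    center = ((m1[0] + m2[0]) / 2, (m1[1] + m2[1]) / 2)
    return {
        "center": center,
        "diameter": diameter,
        "circumference": math.pi * diameter,
        "area": math.pi * radius ** 2,
    }

print(circle_metrics((100, 100), (160, 180)))  # diameter 100.0
```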
The position of the two adjusting points921and923may be pre-determined in the adjusting areas961and963. The two adjusting points921and923may include the first adjusting point921in the first adjusting area961and the second adjusting point923in the second adjusting area963. The ultrasound apparatus1000may determine a cross point925of straight lines connecting the two adjusting points921and923and the pre-stored two measuring points911and913. The cross point925may be a reference point for a movement of the first adjusting area961and the second adjusting area963. When the ultrasound apparatus1000receives a user input rotating the first adjusting area961or the second adjusting area963based on the cross point925, the ultrasound apparatus1000may rotate the first adjusting point921and the second adjusting point923based on the cross point925. When the second adjusting point923rotates based on the cross point925, the ultrasound apparatus1000may determine, based on the changed position of the second adjusting point923, a point that is apart from the second adjusting point923by a pre-determined distance, along a direction of the straight line connecting the second adjusting point923and the cross point925, as the first measuring point911. Also, when the first adjusting point921rotates based on the cross point925, the ultrasound apparatus1000may determine, based on the changed position of the first adjusting point921, a point that is apart from the first adjusting point921by a pre-determined distance, along a direction of the straight line connecting the first adjusting point921and the cross point925, as the second measuring point913. Also, when the position of the first adjusting area961and the second adjusting area963in the ultrasound image is changed, the ultrasound apparatus1000may store the changed position of the first adjusting area961and the second adjusting area963, and the determined position of the measuring points911and913. The first adjusting area961and the second adjusting area963may be configured to be rotatable based on the cross point925. Also, the touch input with respect to the first adjusting area961and the second adjusting area963may be simultaneously received. Accordingly, the ultrasound apparatus1000may simultaneously change the two measuring points911and913, by receiving two touch inputs. Also, when the ultrasound apparatus1000receives a long touch input with respect to the first adjusting area961and the second adjusting area963, and a touch input moving the first adjusting area961and the second adjusting area963, the ultrasound apparatus1000may simultaneously move the first adjusting area961, the second adjusting area963, and the cross point925, in a parallel direction. Also, when the ultrasound apparatus1000receives a long touch input with respect to the first adjusting area961and the second adjusting area963, and a touch input rotating the first adjusting area961and the second adjusting area963, the ultrasound apparatus1000may simultaneously rotate the first adjusting area961, the second adjusting area963, and the cross point925. Also, when the ultrasound apparatus1000receives a touch input with respect to the cross point925, and a touch input moving the cross point925, the ultrasound apparatus1000may move the position of the measuring points911and913based on the adjusting points921and923and the moved cross point925.
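The handle-pivot-tip relation described above, and elaborated in the example that follows, places each blade tip on the ray from the opposite handle through the cross point, so rotating a handle about the pivot swings the opposite tip, and moving the pivot spreads or narrows the tips. A minimal sketch with a hypothetical function name and an illustrative blade length.

```python
import math

def scissors_measuring_point(adjusting_point, cross_point, blade_length=180.0):
    """Blade tip (measuring point) on the ray from the handle (adjusting
    point) through the pivot (cross point), a fixed distance from the handle."""
    ax, ay = adjusting_point
    cx, cy = cross_point
    d = math.hypot(cx - ax, cy - ay)
    if d == 0:
        raise ValueError("handle and pivot coincide")
    ux, uy = (cx - ax) / d, (cy - ay) / d
    return (ax + ux * blade_length, ay + uy * blade_length)

tip = scissors_measuring_point((300, 500), (300, 400))
print(tip)  # (300.0, 320.0): beyond the pivot, opposite the handle
```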
For example, when receiving the touch input moving the cross point925such that the cross point925gets nearer to the adjusting points921and923, the ultrasound apparatus1000may determine the position of the measuring points911and913based on the adjusting points921and923and the moved cross point925. In this case, the measuring points911and913may move apart from each other. Also, when receiving the touch input moving the cross point925such that the cross point925gets farther from the adjusting points921and923, the ultrasound apparatus1000may determine the position of the measuring points911and913based on the adjusting points921and923and the moved cross point925. In this case, the measuring points911and913may move closer to each other. FIG.9Cis a view for describing a method of displaying a measuring device image910corresponding to a scissors measuring device, via the ultrasound apparatus1000, according to an embodiment. Referring toFIG.9C, when the scissors measuring device is selected, the ultrasound apparatus1000may display the measuring device image910corresponding to the scissors measuring device. The measuring device image910corresponding to the scissors measuring device may include two measuring points911and913which indicate points in an ultrasound image that are to be measured. Also, the measuring device image910corresponding to the scissors measuring device may include two adjusting portions991and993which indicate adjusting areas. Also, the measuring device image910may display an image953indicating a circle having the segment connecting the two measuring points911and913as a diameter. The measuring device image910corresponding to the scissors measuring device may have a physical shape of the scissors. Also, a position of the two measuring points911and913and the two adjusting portions991and993may be pre-determined in the scissors measuring device image910. For example, the two measuring points911and913may be ends of both edges of the scissors of the scissors measuring device image910. Also, the two adjusting portions991and993may be a handle portion of the scissors of the scissors measuring device image910. The ultrasound apparatus1000may display the scissors measuring device image910such that the two measuring points911and913in the scissors measuring device image910indicate two points in the ultrasound image that are to be measured, and a cross point of both edges of the scissors is located at the cross point925ofFIG.9B. Accordingly, the measuring device image910may have a shape in which two partial images cross each other based on the cross point. Also, the ultrasound apparatus1000may receive a touch input with respect to adjusting areas, by receiving a touch input moving the adjusting portions991and993in the measuring device image910. For example, the ultrasound apparatus1000may move a position of the first adjusting area along the touched area. Also, the ultrasound apparatus1000may receive a user's touch input moving a position of the first adjusting portion991, by displaying the first adjusting portion991on the moved position of the first adjusting area. Accordingly, the user may recognize the position of the measuring points and the adjusting areas configured on the ultrasound image, from the measuring device image910displayed on the ultrasound image. FIG.9Dis a view for describing a method of indicating a measuring point by changing a position or a shape of the measuring device image910, according to an input of a user, via the ultrasound apparatus1000, according to an embodiment.
Referring toFIG.9D, the ultrasound apparatus1000may receive a touch input changing the position of the adjusting portions991and993. When receiving the touch input changing the position of the adjusting portions991and993, the ultrasound apparatus1000may adjust the position of the two measuring points911and913in the ultrasound image, by changing at least one of a position and a shape of the measuring device image910. For example, when the ultrasound apparatus1000receives a touch input rotating the adjusting portions991and993such that the adjusting portions991and993move away from each other in opposite directions or move toward each other, the ultrasound apparatus1000may determine the position of the two measuring points911and913based on the changed position of the adjusting portions991and993and the position of the cross point925. Also, when receiving the touch input rotating the adjusting portions991and993such that the adjusting portions991and993move away from each other in opposite directions or move toward each other, the ultrasound apparatus1000may rotate the two partial images based on the cross point925. For example, the ultrasound apparatus1000may rotate both edges of the scissors in the scissors measuring device image910, based on the cross point925. In this case, the speed and angle of the rotation may be determined based on a speed and a distance at which the adjusting portions991and993move away from each other. When the two partial images rotate, the point in the ultrasound image that is indicated by the measuring points911and913in the measuring device image910may be a point that is to be measured. Also, when the position of the measuring points911and913is determined, the ultrasound apparatus1000may determine a circle having a straight line formed by the two measuring points911and913as a diameter. Also, the ultrasound apparatus1000may display the image953indicating a circle at the determined position of the circle. Also, the ultrasound apparatus1000may calculate at least one of a length of a diameter, a circumferential length, and an area of the determined circle. Also, when the ultrasound apparatus1000receives two touch inputs with respect to the adjusting portions991and993, the ultrasound apparatus1000may simultaneously rotate the two partial images based on the cross point925. Also, when the ultrasound apparatus1000receives a long touch input with respect to the first adjusting portion991and the second adjusting portion993and a touch input moving the first adjusting portion991and the second adjusting portion993, the ultrasound apparatus1000may move the entire scissors measuring device image910. Also, when the ultrasound apparatus1000receives a long touch input with respect to the first adjusting portion991and the second adjusting portion993and a touch input rotating the first adjusting portion991and the second adjusting portion993, the ultrasound apparatus1000may rotate the entire scissors measuring device image910. FIG.10Ais a view for describing a method of providing a measuring function in a direct change mode, via the ultrasound apparatus1000, according to another embodiment. Referring toFIG.10A, the ultrasound apparatus1000may display a button image1030for entering into the direct change mode in which a configured measuring area may be changed without using a measuring device image.
When a user's touch input with respect to the measuring device image is ended, the ultrasound apparatus1000may display the button image1030for entering into the direct change mode. In this case, the ultrasound apparatus1000may also display a button image for storing a measurement value. Also, when receiving a user input double touching an area of the ultrasound image, in which the measuring device image is not displayed, the ultrasound apparatus1000may display the button image1030for entering into the direct change mode. When receiving the user input touching the button image1030for entering into the direct change mode, the ultrasound apparatus1000may delete the measuring device image and display the image1010indicating a configured measuring area. Also, when the ultrasound apparatus1000enters into the direct change mode, the ultrasound apparatus1000may display an image indicating that the direct change mode has been entered. For example, the ultrasound apparatus1000may change a size or a color of the image1010indicating the measuring area. Also, the ultrasound apparatus1000may display a handle image, indicating the adjusting portion, on the image1010indicating the measuring area. FIG.10Bis a view for describing a method of changing a configured measuring area, in a direct change mode, via the ultrasound apparatus1000, according to an embodiment. Referring toFIG.10B, when the ultrasound apparatus1000receives a touch input with respect to the image1010indicating the measuring area, the ultrasound apparatus1000may change the configured measuring area. Also, the ultrasound apparatus1000may calculate a diameter, a circumferential length, and an area of a circle based on the changed measuring area. Also, when the ultrasound apparatus1000receives a user input selecting a portion of the image1010indicating the measuring area and moving a position of the selected portion of the image1010in an ultrasound image, the ultrasound apparatus1000may change a position or a shape of the image1010indicating the measuring area, based on the position of the moved portion. Also, the ultrasound apparatus1000may store the changed position of the measuring area and a measuring point. Also, when receiving a user input selecting a scissors measuring device, the ultrasound apparatus1000may display a measuring device image, based on the stored position of the measuring point. FIG.11Ais a view for describing a method of providing a measuring function via a pen measuring device, via the ultrasound apparatus1000, according to an embodiment. Referring toFIG.11A, when a pen measuring device icon345is selected, the ultrasound apparatus1000may display a pen measuring device image1110corresponding to the pen measuring device. The pen measuring device image1110may include a measuring point1121indicating a trace point1111in an ultrasound image and an adjusting portion1123receiving a user's touch input. The adjusting portion1123may be displayed in an adjusting area of the ultrasound image that determines a position of the trace point1111. Also, the ultrasound apparatus1000may determine a point that is apart from the adjusting area by a pre-determined distance as the trace point1111. The pen measuring device image1110corresponding to the pen measuring device may have a physical shape of a pen. Also, a position of the adjusting portion1123and the measuring point1121in the pen measuring device image1110may be pre-determined.
For example, the adjusting portion1123may be a handle portion in the pen measuring device image1110. Also, the measuring point1121may be a tip portion in the pen measuring device image1110. FIG.11Bis a view for describing a method of providing a trace function via the pen measuring device image1110, via the ultrasound apparatus1000, according to another embodiment. Referring toFIG.11B, when receiving a touch input moving the pen measuring device image1110, the ultrasound apparatus1000may determine a segment connecting the trace points1111. When receiving a touch input moving a position of the adjusting portion1123of the pen measuring device image1110in the ultrasound image, the ultrasound apparatus1000may move the pen measuring device image1110along the area touched by the user. Also, the ultrasound apparatus1000may determine a position of the trace point1111, based on the position of the adjusting portion1123. Also, when receiving the touch input moving the position of the adjusting portion1123in the ultrasound image, the ultrasound apparatus1000may display a line image1120indicating the determined trace points1111on the ultrasound image. Also, when receiving the touch input moving the position of the adjusting portion1123in the ultrasound image, the ultrasound apparatus1000may renew the line1120connecting the trace points1111. When the line1120connecting the trace points1111is renewed, the ultrasound apparatus1000may calculate a length of the renewed line1120. Also, the ultrasound apparatus1000may calculate a displacement between a trace start point and a trace end point. Also, when the line1120connecting the trace points1111is a looped curve, the ultrasound apparatus1000may calculate a minor axis, a major axis, a circumferential length, and an area of the looped curve. Although it is not illustrated inFIG.11B, the ultrasound apparatus1000may adjust a size of the pen measuring device image1110. For example, when the ultrasound apparatus1000receives a long touch input with respect to the adjusting portion1123and a touch input dragging the adjusting portion1123in a lengthwise direction of the pen measuring device image1110or in the direction opposite thereto, the ultrasound apparatus1000may adjust the length of the pen measuring device image1110. For example, when receiving the touch input extending the pen measuring device image1110in the lengthwise direction of the pen measuring device image1110, the ultrasound apparatus1000may enlarge the size of the pen measuring device image1110by extending the pen measuring device image1110along the dragged area. Also, for example, when receiving the touch input shrinking the pen measuring device image1110in the direction opposite to the lengthwise direction of the pen measuring device image1110, the ultrasound apparatus1000may decrease the size of the pen measuring device image1110along the dragged area. For example, the ultrasound apparatus1000may change the length of the pen measuring device image1110from 10 cm to 5 cm. FIG.11Cis a view for describing a method of providing a trace function via the ultrasound apparatus1000, according to another embodiment. Referring toFIG.11C, the ultrasound apparatus1000may display a button image1130for renewing the configured line1120. For example, when the touch input with respect to the pen measuring device image1110is ended, the ultrasound apparatus1000may delete the pen measuring device image1110and display the button image1130for renewing the configured line1120.
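The trace computations described above reduce to standard polyline formulas: the length of the renewed line, the displacement between the trace start and end points, and, for a looped curve, the enclosed area. A minimal sketch assuming the trace is kept as an ordered list of 2-D points; the shoelace formula here stands in for whichever area routine the apparatus actually uses.

```python
import math

def trace_measurements(points):
    """points: ordered trace points forming the line 1120."""
    length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    displacement = math.dist(points[0], points[-1])
    # Shoelace formula: area enclosed when the trace is treated as closed.
    area = abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2)
                   in zip(points, points[1:] + points[:1]))) / 2
    return length, displacement, area

print(trace_measurements([(0, 0), (4, 0), (4, 3), (0, 3)]))  # (11.0, 3.0, 12.0)
```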
In this case, when the pen measuring device image1110is deleted upon the end of the touch input, the ultrasound apparatus1000may not delete the image indicating the configured line1120. Also, when the touch input with respect to the pen measuring device image1110is ended, the ultrasound apparatus1000may display a button image1140for storing information with respect to the configured line1120. The information with respect to the configured line1120may include the position of the trace points1111forming the configured line1120and a measurement value with respect to the line1120. When receiving the touch input selecting the button image1130for renewing the configured line1120, the ultrasound apparatus1000may display the pen measuring device image1110on the ultrasound image such that the measuring point of the pen measuring device image1110indicates a start point or an end point of the configured line1120. Also, when receiving the touch input moving the position of the adjusting portion in the ultrasound image, the ultrasound apparatus1000may renew the configured line1120. FIG.12Ais a view for describing a method of providing an angle measuring function via the ultrasound apparatus1000, according to an embodiment. Referring toFIG.12A, when an angle measuring device is selected, the ultrasound apparatus1000may display an angle measuring device image1210corresponding to the angle measuring device. The angle measuring device image1210may include two straight lines1221and1223forming an angle. Also, the angle measuring device image1210may include three adjusting portions1211,1213, and1215. The three adjusting portions1211,1213, and1215may be located at the vertex formed by the two straight lines1221and1223and at the end points of the two straight lines1221and1223. The ultrasound apparatus1000may measure the angle formed by the two straight lines1221and1223. The ultrasound apparatus1000may measure the angle formed by the two straight lines1221and1223based on a position of the three adjusting portions1211,1213, and1215. For example, the ultrasound apparatus1000may measure the angle formed by the two straight lines1221and1223based on a position of the central points of the three adjusting portions1211,1213, and1215. FIG.12Bis a view for describing a method of providing an angle measuring function via the ultrasound apparatus1000, according to an embodiment. Referring toFIG.12B, the ultrasound apparatus1000may change a position and a shape of the angle measuring device image1210, when receiving a touch input moving the three adjusting portions1211,1213, and1215, and may determine the angle formed by the two straight lines1221and1223. When receiving a touch input with respect to the adjusting portion1215located at the vertex formed by the two straight lines1221and1223, the ultrasound apparatus1000may move the entire angle measuring device image1210. Also, when receiving a touch input with respect to the first adjusting portion1211on the first straight line1221, the ultrasound apparatus1000may rotate the first straight line1221based on the vertex. Also, when receiving a touch input with respect to the second adjusting portion1213on the second straight line1223, the ultrasound apparatus1000may rotate the second straight line1223based on the vertex. When the position and the shape of the angle measuring device image1210are changed, the ultrasound apparatus1000may measure the angle formed by the two straight lines1221and1223, based on the position of the three adjusting portions1211,1213, and1215in the angle measuring device image1210.
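Both angle-measuring variants come down to plane geometry. With a shared vertex (FIGS.12A and12B) the angle follows directly from the directions toward the two end-point adjusting portions; in the variant described next with reference toFIG.13, the vertex must first be found as the cross point of the extended lines. A minimal sketch; the function names are hypothetical, and parallel lines are treated as having no cross point.

```python
import math

def angle_at_vertex(vertex, end1, end2):
    """Angle (degrees) formed at 'vertex' by lines toward end1 and end2."""
    a1 = math.atan2(end1[1] - vertex[1], end1[0] - vertex[0])
    a2 = math.atan2(end2[1] - vertex[1], end2[0] - vertex[0])
    angle = abs(math.degrees(a1 - a2)) % 360
    return min(angle, 360 - angle)

def extended_cross_point(p1, p2, p3, p4):
    """Cross point of the (extended) lines p1-p2 and p3-p4; None if parallel."""
    d = (p1[0] - p2[0]) * (p3[1] - p4[1]) - (p1[1] - p2[1]) * (p3[0] - p4[0])
    if d == 0:
        return None
    t = ((p1[0] - p3[0]) * (p3[1] - p4[1])
         - (p1[1] - p3[1]) * (p3[0] - p4[0])) / d
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))

print(angle_at_vertex((0, 0), (10, 0), (0, 10)))             # 90.0
print(extended_cross_point((0, 0), (1, 1), (0, 2), (2, 0)))  # (1.0, 1.0)
```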
FIG.13is a view for describing a method of providing an angle measuring function via the ultrasound apparatus1000, according to another embodiment. Referring toFIG.13, when an angle measuring device is selected, the ultrasound apparatus1000may display an angle measuring device image1310corresponding to the angle measuring device. The angle measuring device image1310may include two straight lines1321and1323forming an angle. Also, the angle measuring device image1310may include four adjusting portions1311,1313,1331, and1333. The four adjusting portions1311,1313,1331, and1333may be located at the end points of the two straight lines1321and1323and at the middle points of the two straight lines1321and1323. The ultrasound apparatus1000may measure the angle formed by the two straight lines1321and1323. For example, the ultrasound apparatus1000may determine a cross point at which the two straight lines1321and1323meet each other when the two straight lines1321and1323extend, based on a position of the end points of the two straight lines1321and1323. When the cross point is determined, the ultrasound apparatus1000may calculate the angle formed by the two straight lines1321and1323at the cross point. When the ultrasound apparatus1000receives a touch input with respect to the adjusting portion1331located at the middle point of the first straight line1321, the ultrasound apparatus1000may move the entire first straight line1321in a parallel direction. Also, when the ultrasound apparatus1000receives a touch input with respect to the adjusting portion1333located at the middle point of the second straight line1323, the ultrasound apparatus1000may move the entire second straight line1323in a parallel direction. Also, when the ultrasound apparatus1000receives a touch input with respect to the adjusting portion1311located at an end point of the first straight line1321, the ultrasound apparatus1000may rotate the first straight line1321based on the adjusting portion1331located at the middle point of the first straight line1321. Also, when the ultrasound apparatus1000receives a touch input with respect to the adjusting portion1313located at an end point of the second straight line1323, the ultrasound apparatus1000may rotate the second straight line1323based on the adjusting portion1333located at the middle point of the second straight line1323. When a position of the first straight line1321and the second straight line1323is changed, the ultrasound apparatus1000may determine a cross point of the first straight line1321and the second straight line1323or a cross point of extension lines of the first straight line1321and the second straight line1323. Also, the ultrasound apparatus1000may calculate the angle formed by the first straight line1321and the second straight line1323based on the determined cross point. FIG.14is a block diagram of the ultrasound apparatus1000. Referring toFIG.14, the ultrasound apparatus1000may include a display unit1100, a user input unit1200, and a control unit1300. However, not all of the illustrated components are essential. The ultrasound apparatus1000may be realized by more or fewer components than the illustrated components. Hereinafter, the illustrated components will be described. The display unit1100may display an ultrasound image and an image for a user interface.
The display unit1100may display, on the ultrasound image, a measuring device image including a plurality of measuring points indicating points in the ultrasound image that are to be measured, and adjusting portions for adjusting the plurality of measuring points. The measuring points may be disposed apart from the adjusting portions. When a position of the adjusting portions or of the measuring points of the measuring device image is changed in the ultrasound image, the display unit1100may update the measuring device image on a screen. The user input unit1200may receive a touch input changing the position of the adjusting portion. Also, the user input unit1200may receive a touch and drag input with respect to the adjusting portion. The control unit1300may adjust a position of at least one of the plurality of measuring points based on the changed position of the adjusting portion and may obtain a measurement value based on a position of the plurality of measuring points including the at least one measuring point, the position of which is changed. Also, the control unit1300may adjust the position of the at least one of the plurality of measuring points by changing at least one of a position and a shape of the measuring device image. For example, the control unit1300may adjust the position of the at least one of the plurality of measuring points by adjusting a length of the measuring device image, when the touch input changing the position of the adjusting portion is received. For example, the control unit1300may adjust the position of the at least one of the plurality of measuring points by rotating the measuring device image, when the touch input changing the position of the adjusting portion is received. For example, when the measuring device image includes two partial images crossing each other based on a reference point, the control unit1300may adjust the position of the at least one of the plurality of measuring points by rotating the two partial images based on the reference point. Also, the control unit1300may generate a circle based on the position of the plurality of measuring points, and may calculate at least one of a diameter, a circumferential length, and an area of the generated circle. Also, the display unit1100may display the obtained measurement value on the measuring device image. Also, the display unit1100may display the measuring device image semi-transparently so that an area of the ultrasound image which overlaps the measuring device image is not covered by the measuring device image. The display unit1100may display a button image for storing the measurement value, corresponding to the ultrasound image, on the ultrasound image. Also, the display unit1100may delete the measuring device image and display a button image for re-adjusting the position of the plurality of measuring points, on the ultrasound image. FIG.15is a block diagram of the ultrasound apparatus1000, according to another embodiment. Referring toFIG.15, the ultrasound apparatus1000may further include a probe20, an ultrasound transceiver100, an image processor200, a communication unit300, and a memory400, in addition to the display unit1100, the user input unit1200, and the control unit1300. The probe20, the ultrasound transceiver100, the image processor200, the communication unit300, the memory400, the display unit1100, the user input unit1200, and the control unit1300may be connected with one another via a bus700. The ultrasound apparatus1000may be a cart type apparatus or a portable type apparatus.
Examples of portable ultrasound diagnosis apparatuses may include, but are not limited to, a picture archiving and communication system (PACS) viewer, a smartphone, a laptop computer, a personal digital assistant (PDA), and a tablet PC. The probe20transmits ultrasound waves to an object10in response to a driving signal applied by the ultrasound transceiver100and receives echo signals reflected by the object10. The probe20includes a plurality of transducers, and the plurality of transducers oscillate in response to electric signals and generate acoustic energy, that is, ultrasound waves. Furthermore, the probe20may be connected to the main body of the ultrasound apparatus1000by wire or wirelessly, and the ultrasound apparatus1000may include a plurality of probes20according to embodiments. A transmitter110supplies a driving signal to the probe20. The transmitter110includes a pulse generator112, a transmission delaying unit114, and a pulser116. The pulse generator112generates pulses for forming transmission ultrasound waves based on a predetermined pulse repetition frequency (PRF), and the transmission delaying unit114delays the pulses by delay times necessary for determining transmission directionality. The pulses which have been delayed correspond to a plurality of piezoelectric vibrators included in the probe20, respectively. The pulser116applies a driving signal (or a driving pulse) to the probe20based on timing corresponding to each of the pulses which have been delayed. A receiver120generates ultrasound data by processing echo signals received from the probe20. The receiver120may include an amplifier122, an analog-to-digital converter (ADC)124, a reception delaying unit126, and a summing unit128. The amplifier122amplifies echo signals in each channel, and the ADC124performs analog-to-digital conversion with respect to the amplified echo signals. The reception delaying unit126delays digital echo signals output by the ADC124by delay times necessary for determining reception directionality, and the summing unit128generates ultrasound data by summing the echo signals processed by the reception delaying unit126. The image processor200generates an ultrasound image by scan-converting ultrasound data generated by the ultrasound transceiver100and displays the ultrasound image. The ultrasound image may be not only a grayscale ultrasound image obtained by scanning an object in an amplitude (A) mode, a brightness (B) mode, and a motion (M) mode, but also a Doppler image indicating a motion of the object. The Doppler image may be a blood flow Doppler image showing flow of blood (also referred to as a color Doppler image), a tissue Doppler image showing a movement of tissue, or a spectral Doppler image showing a moving speed of an object as a waveform. A B mode processor212extracts B mode components from ultrasound data and processes the B mode components. An image generator220may generate an ultrasound image indicating signal intensities as brightness based on the extracted B mode components. Similarly, a Doppler processor214may extract Doppler components from ultrasound data, and the image generator220may generate a Doppler image indicating a movement of an object as colors or waveforms based on the extracted Doppler components. According to an embodiment, the image generator220may generate a three-dimensional (3D) ultrasound image via volume-rendering with respect to volume data and may also generate an elasticity image by imaging deformation of the object10due to pressure.
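The receive chain just described (amplify, digitize, apply per-channel reception delays, then sum) is the classic delay-and-sum structure. Below is a deliberately simplified sketch using integer-sample delays; a practical receiver such as the one described would use finely quantized or interpolated delays and per-channel weighting.

```python
import numpy as np

def delay_and_sum(channel_data, delays_samples):
    """channel_data: (channels, samples) array of digitized echo signals.
    delays_samples: per-channel reception delays, in whole samples.
    Each channel is shifted by its delay and the results are summed,
    mirroring the reception delaying unit and the summing unit."""
    n_ch, n_s = channel_data.shape
    out = np.zeros(n_s)
    for ch in range(n_ch):
        d = int(delays_samples[ch])
        out[d:] += channel_data[ch, : n_s - d]
    return out

# Toy example: three channels observe the same pulse at different times.
rf = np.zeros((3, 16))
for ch, arrival in enumerate([5, 3, 4]):
    rf[ch, arrival] = 1.0
aligned = delay_and_sum(rf, delays_samples=[0, 2, 1])  # align to latest arrival
print(aligned.argmax(), aligned.max())                 # 5 3.0
```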
In addition to the images described above, the image generator220may display various pieces of additional information in an ultrasound image by using text and graphics. In addition, the generated ultrasound image may be stored in the memory400. In addition, the ultrasound apparatus1000may include two or more displays1100according to embodiments. The communication module300is connected to a network30by wire or wirelessly to communicate with an external device or a server. The communication module300may exchange data with a hospital server or another medical apparatus in a hospital, which is connected thereto via a PACS. Furthermore, the communication module300may perform data communication according to the digital imaging and communications in medicine (DICOM) standard. The communication module300may transmit or receive data related to diagnosis of an object, e.g., an ultrasound image, ultrasound data, and Doppler data of the object, via the network30and may also transmit or receive medical images captured by another medical apparatus, e.g., a computed tomography (CT) apparatus, a magnetic resonance imaging (MRI) apparatus, or an X-ray apparatus. Furthermore, the communication module300may receive information about a diagnosis history or medical treatment schedule of a patient from a server and utilize the received information to diagnose the patient. Furthermore, the communication module300may perform data communication not only with a server or a medical apparatus in a hospital, but also with a portable terminal of a medical doctor or patient. The communication module300is connected to the network30by wire or wirelessly to exchange data with a server32, a medical apparatus34, or a portable terminal36. The communication module300may include one or more components for communication with external devices. For example, the communication module300may include a local area communication module310, a wired communication module320, and a mobile communication module330. The local area communication module310refers to a module for local area communication within a predetermined distance. Examples of local area communication techniques according to an embodiment may include, but are not limited to, wireless LAN, Wi-Fi, Bluetooth, ZigBee, Wi-Fi Direct (WFD), ultra wideband (UWB), infrared data association (IrDA), Bluetooth low energy (BLE), and near field communication (NFC). The wired communication module320refers to a module for communication using electric signals or optical signals. Examples of wired communication techniques according to an embodiment may include communication via a twisted pair cable, a coaxial cable, an optical fiber cable, and an Ethernet cable. The mobile communication module330transmits or receives wireless signals to or from at least one selected from a base station, an external terminal, and a server on a mobile communication network. The wireless signals may be voice call signals, video call signals, or various types of data for transmission and reception of text/multimedia messages. The memory400stores various data processed by the ultrasound apparatus1000. For example, the memory400may store medical data related to diagnosis of an object, such as ultrasound data and an ultrasound image that are input or output, and may also store algorithms or programs which are to be executed in the ultrasound apparatus1000. The memory400may be any of various storage media, e.g., a flash memory, a hard disk drive, EEPROM, etc.
Furthermore, the ultrasound apparatus1000may utilize web storage or a cloud server that performs the storage function of the memory400online. The user input unit1200may further include various other input means including an electrocardiogram measuring module, a respiration measuring module, a voice recognition sensor, a gesture recognition sensor, a fingerprint recognition sensor, an iris recognition sensor, a depth sensor, a distance sensor, etc. All or some of the probe20, the ultrasound transceiver100, the image processor200, the communication module300, the memory400, the user input unit1200, and the controller1300may be implemented as software modules. However, embodiments of the present invention are not limited thereto, and some of the components stated above may be implemented as hardware modules. Furthermore, at least one selected from the ultrasound transceiver100, the image processor200, and the communication module300may be included in the controller1300. However, embodiments of the present invention are not limited thereto. The method of the present invention may be implemented as computer instructions which may be executed by various computer means, and recorded on a computer-readable recording medium. The computer-readable recording medium may include program commands, data files, data structures, or a combination thereof. The program commands recorded on the computer-readable recording medium may be specially designed and constructed for the inventive concept or may be known to and usable by one of ordinary skill in a field of computer software. Examples of the computer-readable medium include storage media such as magnetic media (e.g., hard discs, floppy discs, or magnetic tapes), optical media (e.g., compact disc-read only memories (CD-ROMs), or digital versatile discs (DVDs)), magneto-optical media (e.g., floptical discs), and hardware devices that are specially configured to store and carry out program commands (e.g., ROMs, RAMs, or flash memories). Examples of the program commands include a high-level language code that may be executed by a computer using an interpreter as well as a machine language code made by a compiler. It should be understood that the exemplary embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each exemplary embodiment should typically be considered as available for other similar features or aspects in other exemplary embodiments. While one or more exemplary embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope as defined by the following claims.
11857372 | DETAILED DESCRIPTION Embodiments of the present disclosure are described in detail with reference to the drawings, in which like reference numerals designate identical or corresponding elements in each of the several views. FIG.1shows an exemplary ultrasound system100including an ultrasound device102configured to obtain an ultrasound image of a target anatomical view of a subject101. As shown, the ultrasound system100includes an ultrasound device102that is communicatively coupled to the processing device104by a communication link112. The processing device104may be configured to receive ultrasound data from the ultrasound device102and use the received ultrasound data to generate an ultrasound image110on a display (which may be touch-sensitive) of the processing device104. The ultrasound device102may be configured to generate ultrasound data. The ultrasound device102may be configured to generate ultrasound data by, for example, emitting acoustic waves into the subject101and detecting the reflected acoustic waves. The detected reflected acoustic waves may be analyzed to identify various properties of the tissues through which the acoustic wave traveled, such as a density of the tissue. The ultrasound device102may be implemented in any of a variety of ways. For example, the ultrasound device102may be implemented as a handheld device (as shown inFIGS.1and2) or as a patch that is coupled to a patient using, for example, an adhesive or a strap. The patch may be configured to wirelessly transmit data collected by the patch to one or more external devices for further processing. In other embodiments, the single ultrasound probe may be embodied in a pill that may be swallowed by a patient. The pill may be configured to transmit, wirelessly, data collected by the ultrasound probe within the pill to one or more external devices for further processing. The ultrasound device102may transmit ultrasound data to the processing device104using the communication link112. The communication link112may be a wired or wireless communication link. In some embodiments, the communication link112may be implemented as a cable such as a Universal Serial Bus (USB) cable or a Lightning cable. In these embodiments, the cable may also be used to transfer power from the processing device104to the ultrasound device102. In other embodiments, the communication link112may be a wireless communication link such as a BLUETOOTH, WiFi, or ZIGBEE wireless communication link. The processing device104may include one or more processing elements such as a processor142ofFIG.4configured to, for example, process ultrasound data received from the ultrasound device102. Additionally, the processing device104may include one or more storage elements such as memory144, which may be a non-transitory computer readable medium configured to, for example, store instructions that may be executed by the processor142and/or store all or any portion of the ultrasound data received from the ultrasound device102. It should be appreciated that the processing device104may be implemented in any of a variety of ways. For example, the processing device104may be implemented as a mobile device (e.g., a mobile smartphone, a tablet, or a laptop, etc.) with an integrated display screen108as shown inFIG.1. In other examples, the processing device104may be implemented as a stationary device such as a desktop computer.
FIG.2illustrates an exemplary handheld ultrasound probe103, in accordance with certain embodiments described herein, which may be used as the ultrasound device102. The handheld ultrasound probe103may implement any of the ultrasound devices described herein. The handheld ultrasound probe103may have a suitable dimension and weight. For example, the ultrasound probe103may have a cable for wired communication with a processing device, and may have a length L of about 100 mm-300 mm (e.g., 175 mm) and a weight of about 200 grams-500 grams (e.g., 312 g). In another example, the ultrasound probe103may be capable of communicating with a processing device wirelessly. As such, the handheld ultrasound probe103may have a length of about 140 mm and a weight of about 265 g. It is appreciated that other dimensions and weights may be possible. Further description of ultrasound devices and systems may be found in U.S. Pat. No. 9,521,991, the content of which is incorporated by reference herein in its entirety; and U.S. Pat. No. 11,311,274, the content of which is incorporated by reference herein in its entirety. FIG.3is a block diagram of an example of the ultrasound device102in accordance with some embodiments of the technology described herein. The illustrated ultrasound device102may include one or more ultrasonic transducer arrangements (e.g., arrays)122, transmit (TX) circuitry124, receive (RX) circuitry126, a timing and control circuit128, a signal conditioning/processing circuit130, and/or a power management circuit138. The one or more ultrasonic transducer arrays122may take on any of numerous forms, and aspects of the present technology do not necessarily require the use of any particular type or arrangement of ultrasonic transducer cells or ultrasonic transducer elements. For example, multiple ultrasonic transducer elements in the ultrasonic transducer array122may be arranged in one dimension or two dimensions. Although the term “array” is used in this description, it should be appreciated that in some embodiments the ultrasonic transducer elements may be organized in a non-array fashion. In various embodiments, each of the ultrasonic transducer elements in the array122may, for example, include one or more capacitive micromachined ultrasonic transducers (CMUTs), or one or more piezoelectric micromachined ultrasonic transducers (PMUTs). In a non-limiting example, the ultrasonic transducer array122may include between approximately 6,000-10,000 (e.g., 8,960) active CMUTs on the chip, forming an array of hundreds of CMUTs by tens of CMUTs (e.g., 140×64). The CMUT element pitch may be between 147-250 um, such as 208 um, and thus result in a total dimension of between 10-50 mm by 10-50 mm (e.g., 29.12 mm×13.312 mm). In some embodiments, the TX circuitry124may, for example, generate pulses that drive the individual elements of, or one or more groups of elements within, the ultrasonic transducer array(s)122so as to generate acoustic signals to be used for imaging. The RX circuitry126, on the other hand, may receive and process electronic signals generated by the individual elements of the ultrasonic transducer array(s)122when acoustic signals impinge upon such elements. With further reference toFIG.3, in some embodiments, the timing and control circuit128may be, for example, responsible for generating all timing and control signals that are used to synchronize and coordinate the operation of the other components of the ultrasound device102.
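As a quick arithmetic check on the example array figures just quoted, a minimal Python sketch (only the numbers come from the text above; the variable names are illustrative):

```python
# Quick check of the example CMUT array figures quoted above.
rows, cols = 140, 64            # example element counts from the text
pitch_um = 208                  # example element pitch, micrometers

n_elements = rows * cols                # 8,960 active CMUTs
width_mm = rows * pitch_um / 1000       # 140 * 0.208 mm = 29.12 mm
height_mm = cols * pitch_um / 1000      # 64 * 0.208 mm = 13.312 mm

print(n_elements, width_mm, height_mm)  # 8960 29.12 13.312
```

The quoted pitch, element count, and overall aperture dimensions are thus mutually consistent.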
In the example shown, the timing and control circuit128is driven by a single clock signal CLK supplied to an input port136. The clock signal CLK may be, for example, a high-frequency clock used to drive one or more of the on-chip circuit components. In some embodiments, the clock signal CLK may, for example, be a 1.5625 GHz or 2.5 GHz clock used to drive a high-speed serial output device (not shown) in the signal conditioning/processing circuit130, or a 20 MHz or 40 MHz clock used to drive other digital components on the die132, and the timing and control circuit128may divide or multiply the clock signal CLK, as necessary, to drive other components on the die132. In other embodiments, two or more clocks of different frequencies (such as those referenced above) may be separately supplied to the timing and control circuit128from an off-chip source. In some embodiments, the output range of the same (or a single) transducer unit in an ultrasound device may be anywhere in a range of 1-12 MHz (including the entire frequency range from 1-12 MHz), making it a universal solution, in which there is no need to change the ultrasound heads or units for different operating ranges or to image at different depths within a patient. That is, the transmit and/or receive frequency of the transducers of the ultrasonic transducer array may be selected to be any frequency or range of frequencies within the range of 1 MHz-12 MHz. The ultrasound device102described herein may thus be used for a broad range of medical imaging tasks including, but not limited to, imaging a patient's liver, kidney, heart, bladder, thyroid, carotid artery, and lower extremity veins, and performing central line placement. Multiple conventional ultrasound probes would have to be used to perform all these imaging tasks. By contrast, a single universal ultrasound device102may be used to perform all these tasks by operating, for each task, at a frequency range appropriate for the task, as shown in the examples of Table 1 together with corresponding depths at which the subject may be imaged.
TABLE 1. Illustrative depths and frequencies at which an ultrasound device implemented in accordance with embodiments described herein may image a subject.
Organ | Frequencies | Depth (up to)
Liver/Right Kidney | 2-5 MHz | 15-20 cm
Cardiac (adult) | 1-5 MHz | 20 cm
Bladder | 2-5 MHz; 3-6 MHz | 10-15 cm; 5-10 cm
Lower extremity venous | 4-7 MHz | 4-6 cm
Thyroid | 7-12 MHz | 4 cm
Carotid | 5-10 MHz | 4 cm
Central Line Placement | 5-10 MHz | 4 cm
The power management circuit138may be, for example, responsible for converting one or more input voltages VIN from an off-chip source into voltages needed to carry out operation of the chip, and for otherwise managing power consumption within the ultrasound device102. In some embodiments, for example, a single voltage (e.g., 12V, 80V, 100V, 120V, etc.) may be supplied to the chip and the power management circuit138may step that voltage up or down, as necessary, using a charge pump circuit or via some other DC-to-DC voltage conversion mechanism. In other embodiments, multiple different voltages may be supplied separately to the power management circuit138for processing and/or distribution to the other on-chip components. In the embodiment shown, all of the illustrated elements are formed on a single semiconductor die132. It should be appreciated, however, that in alternative embodiments one or more of the illustrated elements may instead be located off-chip, in a separate semiconductor die, or in a separate device.
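The task-to-frequency mapping of Table 1 amounts to a simple lookup on top of the universal 1-12 MHz transducer. A minimal Python sketch of such a selection step follows; the data comes from Table 1, but the dictionary structure and names are hypothetical, not part of the patent's disclosure (the Bladder row, which lists two band/depth pairs, keeps only the first for brevity):

```python
# Hypothetical per-task lookup built from Table 1; the device itself spans
# the full 1-12 MHz range, so each task merely selects a sub-band and depth.
TABLE_1 = {
    "liver/right kidney":     {"freq_mhz": (2, 5),  "depth_cm": 20},
    "cardiac (adult)":        {"freq_mhz": (1, 5),  "depth_cm": 20},
    "bladder":                {"freq_mhz": (2, 5),  "depth_cm": 15},  # also 3-6 MHz / 5-10 cm
    "lower extremity venous": {"freq_mhz": (4, 7),  "depth_cm": 6},
    "thyroid":                {"freq_mhz": (7, 12), "depth_cm": 4},
    "carotid":                {"freq_mhz": (5, 10), "depth_cm": 4},
    "central line placement": {"freq_mhz": (5, 10), "depth_cm": 4},
}

def select_band(task: str) -> dict:
    """Return the frequency band and maximum imaging depth for a task."""
    return TABLE_1[task.lower()]

print(select_band("Thyroid"))   # {'freq_mhz': (7, 12), 'depth_cm': 4}
```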
As an alternative to such off-chip placement, one or more of these components may be implemented in a DSP chip, a field programmable gate array (FPGA) in a separate chip, or a separate application-specific integrated circuit (ASIC) chip. Additionally and/or alternatively, one or more of the components in the beamformer may be implemented in the semiconductor die132, whereas other components in the beamformer may be implemented in an external processing device in hardware or software, where the external processing device is capable of communicating with the ultrasound device102. In addition, although the illustrated example shows both TX circuitry124and RX circuitry126, in alternative embodiments only TX circuitry or only RX circuitry may be employed. For example, such embodiments may be employed in a circumstance where one or more transmission-only devices are used to transmit acoustic signals and one or more reception-only devices are used to receive acoustic signals that have been transmitted through or reflected off a subject being ultrasonically imaged. It should be appreciated that communication between one or more of the illustrated components may be performed in any of numerous ways. In some embodiments, for example, one or more high-speed busses (not shown), such as that employed by a unified Northbridge, may be used to allow high-speed intra-chip communication or communication with one or more off-chip components. In some embodiments, the ultrasonic transducer elements of the ultrasonic transducer array122may be formed on the same chip as the electronics of the TX circuitry124and/or RX circuitry126. The ultrasonic transducer arrays122, TX circuitry124, and RX circuitry126may be, in some embodiments, integrated in a single ultrasound probe. In some embodiments, the single ultrasound probe may be a hand-held probe including, but not limited to, the hand-held probes described below with reference toFIG.4. A CMUT may include, for example, a cavity formed in a CMOS wafer, with a membrane overlying the cavity, and in some embodiments sealing the cavity. Electrodes may be provided to create an ultrasonic transducer cell from the covered cavity structure. The CMOS wafer may include integrated circuitry to which the ultrasonic transducer cell may be connected. The ultrasonic transducer cell and CMOS wafer may be monolithically integrated, thus forming an integrated ultrasonic transducer cell and integrated circuit on a single substrate (the CMOS wafer). In the example shown, one or more output ports134may output a high-speed serial data stream generated by one or more components of the signal conditioning/processing circuit130. Such data streams may be, for example, generated by one or more USB 3.0 modules, and/or one or more 10 GB, 40 GB, or 100 GB Ethernet modules, integrated on the die132. It is appreciated that other communication protocols may be used for the output ports134. In some embodiments, the signal stream produced on output port134can be provided to a computer, tablet, or smartphone for the generation and/or display of two-dimensional, three-dimensional, and/or tomographic images. In some embodiments, the signal provided at the output port134may be ultrasound data provided by the one or more beamformer components or auto-correlation approximation circuitry, where the ultrasound data may be used by the computer (external to the ultrasound device) for displaying the ultrasound images.
In embodiments in which image formation capabilities are incorporated in the signal conditioning/processing circuit130, even relatively low-power devices, such as smartphones or tablets which have only a limited amount of processing power and memory available for application execution, can display images using only a serial data stream from the output port134. As noted above, the use of on-chip analog-to-digital conversion and a high-speed serial data link to offload a digital data stream is one of the features that helps facilitate an “ultrasound on a chip” solution according to some embodiments of the technology described herein. The ultrasound probe103such as that shown inFIG.2may be used in various imaging and/or treatment (e.g., HIFU) applications, and the particular examples described herein should not be viewed as limiting. In one illustrative implementation, for example, an imaging device including an N×M planar or substantially planar array of CMUT elements may itself be used to acquire an ultrasound image of a subject (e.g., a person's abdomen) by energizing some or all of the elements in the ultrasonic transducer array(s)122(either together or individually) during one or more transmit phases, and receiving and processing signals generated by some or all of the elements in the ultrasonic transducer array(s)122during one or more receive phases, such that during each receive phase the CMUT elements sense acoustic signals reflected by the subject. In other implementations, some of the elements in the ultrasonic transducer array(s)122may be used only to transmit acoustic signals and other elements in the same ultrasonic transducer array(s)122may be simultaneously used only to receive acoustic signals. Moreover, in some implementations, a single imaging device may include a P×Q array of individual devices, or a P×Q array of individual N×M planar arrays of CMUT elements, which components can be operated in parallel, sequentially, or according to some other timing scheme so as to allow data to be accumulated from a larger number of CMUT elements than can be embodied in a single ultrasound device102or on a single die132. FIG.4illustrates a schematic block diagram of the ultrasound system100which may implement various aspects of the technology described herein. In some embodiments, ultrasound system100may include the ultrasound device102, the processing device104, a communication network147, and one or more servers149. The ultrasound device102may be configured to generate ultrasound data that may be employed to generate an ultrasound image. The ultrasound device102may be constructed in any of a variety of ways. In some embodiments, the ultrasound device102includes a transmitter that transmits a signal to a transmit beamformer which in turn drives transducer elements within a transducer array to emit pulsed ultrasound signals into a structure, such as a patient. The pulsed ultrasound signals may be back-scattered from structures in the body, such as blood cells or muscular tissue, to produce echoes that return to the transducer elements. These echoes may then be converted into electrical signals by the transducer elements and the electrical signals are received by a receiver. The electrical signals representing the received echoes are sent to a receive beamformer that outputs ultrasound data. In some embodiments, the ultrasound device102may include an ultrasound circuitry105(e.g., transducer arrays122, signal conditioning/processing circuit130, etc.) 
that may be configured to generate the ultrasound data. For example, the ultrasound device102may include semiconductor die132for implementing the various techniques described herein. Reference is now made to the processing device104. In some embodiments, the processing device104may be communicatively coupled to the ultrasound device102(wirelessly or in a wired fashion, e.g., by a detachable cord or cable) to implement at least a portion of the process for approximating the auto-correlation of ultrasound signals. For example, one or more beamformer components may be implemented on the processing device104. In some embodiments, the processing device104may include one or more processors142, which may include specially-programmed and/or special-purpose hardware such as the ASIC chip. The processor142may include one or more graphics processing units (GPUs) and/or one or more tensor processing units (TPUs). TPUs may be ASICs specifically designed for machine learning (e.g., deep learning). The TPUs may be employed to, for example, accelerate the inference phase of a neural network. In some embodiments, the processing device104may be configured to process the ultrasound data received from the ultrasound device102to generate ultrasound images for display on the display screen140. The processing may be performed by, for example, the processor(s)142. The processor(s)142may also be adapted to control the acquisition of ultrasound data with the ultrasound device102. The ultrasound data may be processed in real-time during a scanning session as the echo signals are received. In some embodiments, the displayed ultrasound image may be updated at a rate of at least 5 Hz, at least 10 Hz, at least 20 Hz, at a rate between 5 and 60 Hz, or at a rate of more than 20 Hz. For example, ultrasound data may be acquired even as images are being generated based on previously acquired data and while a live ultrasound image is being displayed. As additional ultrasound data is acquired, additional frames or images generated from more-recently acquired ultrasound data are sequentially displayed. Additionally, or alternatively, the ultrasound data may be stored temporarily in a buffer during a scanning session and processed in less than real-time. In some embodiments, the processing device104may be configured to perform various ultrasound operations using the processor(s)142(e.g., one or more computer hardware processors) and one or more articles of manufacture that include non-transitory computer-readable storage media such as the memory144. The processor(s)142may control writing data to and reading data from the memory144in any suitable manner. To perform certain of the processes described herein, the processor(s)142may execute one or more processor-executable instructions stored in one or more non-transitory computer-readable storage media (e.g., the memory144). The camera148may be configured to detect light (e.g., visible light) to form an image. The camera148may be on the same face of the processing device104as the display screen140. The display screen140may be configured to display images and/or videos, and may be, for example, a liquid crystal display (LCD), a plasma display, and/or an organic light emitting diode (OLED) display on the processing device104.
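As an aside, the real-time acquisition-and-display behavior described above can be sketched as a simple loop. This is a hypothetical illustration only, not the patent's implementation; acquire_frame, make_image, and display are stand-ins for the actual acquisition, image-formation, and rendering steps:

```python
from collections import deque

buffer = deque(maxlen=256)              # temporary raw-data buffer

def scanning_session(acquire_frame, make_image, display, n_frames=100):
    """Display a live image from the newest data; buffer raw data too."""
    for _ in range(n_frames):
        raw = acquire_frame()           # new ultrasound data arrives
        buffer.append(raw)              # retained for less-than-real-time processing
        display(make_image(raw))        # live image updated sequentially
```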
The input device146may include one or more devices capable of receiving input from a user and transmitting the input to the processor(s)142. For example, the input device146may include a keyboard, a mouse, a microphone, and/or touch-enabled sensors on the display screen140. The display screen140, the input device146, the camera148, and/or other input/output interfaces (e.g., a speaker) may be communicatively coupled to the processor(s)142and/or under the control of the processor142. It should be appreciated that the processing device104may be implemented in any of a variety of ways. For example, the processing device104may be implemented as a handheld device such as a mobile smartphone or a tablet. Thereby, a user of the ultrasound device102may be able to operate the ultrasound device102with one hand and hold the processing device104with the other hand. In other examples, the processing device104may be implemented as a portable device that is not a handheld device, such as a laptop. In yet other examples, the processing device104may be implemented as a stationary device such as a desktop computer. The processing device104may be connected to the network147over a wired connection (e.g., via an Ethernet cable) and/or a wireless connection (e.g., over a WiFi network). The processing device104may thereby communicate with (e.g., transmit data to or receive data from) the one or more servers149over the network147. For example, a party may provide from the server149to the processing device104processor-executable instructions for storing in one or more non-transitory computer-readable storage media (e.g., the memory144) which, when executed, may cause the processing device104to perform ultrasound processes.FIG.4should be understood to be non-limiting. For example, the ultrasound system100may include fewer or more components than shown and the processing device104and ultrasound device102may include fewer or more components than shown. In some embodiments, the processing device104may be part of the ultrasound device102. FIGS.5-16illustrate graphical user interfaces (GUIs) that may be generated by a processing device104(e.g., a smartphone or a tablet) in operative communication with the ultrasound device102and displayed by the display screen108of the processing device104, in accordance with certain embodiments described herein. Methods of using the GUI200, and particularly the preset filter option204, to select presets are shown inFIG.17, which is described in connection with the GUI200ofFIGS.5-16. The methods may be embodied as software instructions executed by the processing device104in operative communication with the ultrasound device102. The processing device104processes ultrasound data received from the ultrasound device102based on the user-selected presets. With reference toFIGS.5-16, the processing device104is configured to receive from a user a selection of a first preset152afrom a preset menu152using a GUI200displayed on the processing device104at step300ofFIG.17. In the illustrated example ofFIG.5, the user has selected the Cardiac (also known as Cardiac Standard) preset. The GUI200may be displayed over a majority of the display screen108. In embodiments, the GUI200may include a preset menu152listing the presets. It should be noted that not all of the presets are shown inFIG.5since the presets may be arranged in a scrollable list. In embodiments, the presets may be arranged in any other suitable manner (e.g., table of buttons, etc.).
The GUI200may be activated by the user swiping from an edge (e.g., bottom) of the display screen108toward the center. Once activated, the user then selects one of the presets. As described above, the presets may be optimized for imaging a particular type of anatomy and/or for imaging in a particular clinical application, and may also be optimized for human or veterinary imaging. In embodiments, different versions of ultrasound devices102may include corresponding menus of presets, such that a human version of the ultrasound devices102may list human presets and a veterinary version of the ultrasound devices102may list veterinary presets. At step302, the processing device104controls ultrasound imaging operation based on the selected first preset (in the illustrated example, the Cardiac Standard preset). Controlling ultrasound imaging operation may include the processing device controlling ultrasound imaging operation of the ultrasound device and the processing device controlling its own ultrasound imaging operation based on the first preset. A preset may include values for ultrasound imaging parameters that control ultrasound imaging operations such as transmit, analog processing, digital pre-processing and beamforming, coherent post-processing, and incoherent post-processing. Because some of these ultrasound imaging operations may be performed by the ultrasound device and some may be performed by the processing device, a preset's parameter values may control ultrasound imaging operation of the ultrasound device and the processing device. In other words, the processing device may use a preset to control ultrasound imaging operation of the ultrasound device and its own ultrasound imaging operation. Following are further examples of ultrasound imaging aspects that may be controlled by a preset's parameter values. It should be appreciated that some presets may have values related to more or fewer operations.
Transmit: waveform, voltage, aperture, apodization, focal depth, transmit spacing, transmit span.
Analog processing: amplification, averaging, analog time-gain compensation (TGC), analog to digital conversion.
Digital pre-processing and beamforming: demodulation, digital filtering (e.g., cascaded integrator-comb (CIC) filtering), microbeamforming.
Coherent processing: receive beamforming, transmit beamforming, digital filtering (e.g., finite impulse response (FIR) filtering).
Incoherent processing: envelope detection, frequency compounding, log compression, spatial filters, gain compensations, scan conversion, gain and dynamic range, image processing.
The processing device104receives selection of the first preset and then transmits commands to the ultrasound device102to configure it with parameter values of the first preset. The ultrasound device102may use these parameter values when performing ultrasound imaging operations, such as transmit, analog processing, digital pre-processing and beamforming, and coherent processing operations. The processing device104thereby controls ultrasound imaging operation of the ultrasound device102based on the first preset. Generally, the ultrasound device102uses the first preset to collect and process ultrasound data and transmit the ultrasound data back to the processing device104. The processing device104itself may also perform ultrasound imaging operations, such as incoherent processing operations, and may use parameter values of the first preset in such operations.
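To make this division of labor concrete, the following minimal Python sketch groups a preset's parameter values by the operation stages listed above and sends each group to the device that performs that stage. The class, field names, and configure() method are hypothetical stand-ins, not names from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Preset:
    """Illustrative preset: parameter values grouped by operation stage."""
    name: str
    transmit: dict = field(default_factory=dict)         # waveform, voltage, aperture, ...
    analog: dict = field(default_factory=dict)           # amplification, analog TGC, ...
    pre_beamforming: dict = field(default_factory=dict)  # demodulation, CIC filtering, ...
    coherent: dict = field(default_factory=dict)         # rx/tx beamforming, FIR filtering
    incoherent: dict = field(default_factory=dict)       # envelope detection, log compression, ...

def apply_preset(preset, ultrasound_device, processing_device):
    # Stages performed on the probe are sent to it as configuration commands;
    # incoherent post-processing parameters stay on the processing device.
    ultrasound_device.configure({**preset.transmit, **preset.analog,
                                 **preset.pre_beamforming, **preset.coherent})
    processing_device.configure(preset.incoherent)
```

Under this sketch, selecting a preset amounts to one call to apply_preset(), which configures the probe for the transmit-through-coherent stages while the incoherent post-processing values remain local.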
Thus, the processing device104may control its own ultrasound imaging operation based on the first preset. When ultrasound images have been generated, the processing device104displays the most recent ultrasound image110in real time on the display screen108of the processing device104as shown inFIGS.6-16based on the selected preset152. In embodiments in which the processing device104does not perform any ultrasound imaging operation, but merely displays final ultrasound images generated by the ultrasound device102, the processing device104may not itself use a preset. With reference toFIG.6, the GUI200is shown after selection of the Cardiac (which may also be referred to as Cardiac Standard) preset152afrom the preset menu152inFIG.5. The GUI200shows the ultrasound image110collected in real time, as well as an imaging depth indicator208and a top preset indicator210. The top preset indicator210along with other device status indicators are shown in a fourth region155above the first region151. The top preset indicator210indicates that the current preset used for generating the ultrasound images110being shown is Cardiac Standard. The default imaging depth may be based on the selected preset, e.g., the Cardiac Standard preset, and may extend to a default depth for the selected preset, e.g., 16 cm. The imaging depth may be displayed by the imaging depth indicator208in a third region154on a side of the display screen108along the side of the first region151, without obscuring the ultrasound image110. The user may modify the imaging depth, for example by swiping in a vertical direction across the display screen of the processing device104.FIG.7shows the GUI200after the user has changed the imaging depth from the default depth of 16 cm to a new depth of 20 cm. The GUI200ofFIG.7also shows a preset filter option204and a time-gain compensation (TGC) option206, which may appear in the GUI200after selection (e.g., tapping) by the user on a particular portion (e.g., a second region153) of the GUI200or generally in a vicinity of the location of the preset filter option204. The ultrasound image110may be shown in a first region151of the display screen108, and the preset filter option204and the TGC option206may be shown in the second region153, which may be below the first region151, without obscuring the ultrasound image110. A preset family includes related presets that are grouped together. Certain presets within a family may be optimized for imaging the same anatomy or the same anatomical region or the same type of anatomy, but may differ in certain ways. For example, one preset family may have a standard preset and a deep preset, both optimized for imaging the same anatomy. As a specific example, a preset family may include an abdomen preset and an abdomen deep preset, where both presets are optimized for imaging the abdomen, but the abdomen preset is optimized for standard patients and the abdomen deep preset is optimized for technically challenging patients, such as those with high BMI or those with highly attenuating livers, as in hepatitis. As another example, one preset family may have a harmonics preset and a fundamentals preset, both optimized for imaging the same anatomy. As a specific example, a preset family may include a cardiac:harmonics preset and a cardiac:fundamentals preset, where both presets are optimized for imaging the heart, but one preset uses harmonic frequencies and one preset uses fundamental frequencies. 
As another example, one preset family may have an OB 1/GYN preset and an OB 2/3 preset, where both presets are optimized for obstetric applications, but one preset is optimized for use in the first month of pregnancy and the other preset is optimized for use in the second and third months of pregnancy. Example preset families for human ultrasound imaging include:
Cardiac: Cardiac Standard, Cardiac Coherence, and Cardiac Deep;
Abdomen: Abdomen, Abdomen Deep, Aorta & Gallbladder;
MSK: MSK, MSK-Soft Tissue, Small Organ;
OB/GYN: OB 1/GYN, OB 2/3;
Vascular: Vascular: Access, Vascular: Carotid, Others;
Cardiac: Cardiac Harmonics, Cardiac Fundamentals;
Abdomen: Abdomen Harmonics, Abdomen Fundamentals; and
Lung: Lung: Artifacts, Lung: Consolidation, Lung: Tissue.
Example preset families for veterinary ultrasound imaging include:
Cardiac Harmonics: Cardiac Harmonics, Cardiac Standard;
Cardiac Deep Harmonics: Cardiac Deep, Cardiac Deep Harmonics;
Abdomen: Abdomen, Abdomen Deep;
MSK (Musculoskeletal): MSK, and in some embodiments Small Organ;
Vascular: Vascular: Access, Vascular: Carotid, Others;
Cardiac: Cardiac Harmonics, Cardiac Fundamentals; and
Abdomen: Abdomen Harmonics, Abdomen Fundamentals.
It should be appreciated that a preset family need not include every preset in a particular group above; a preset family may include a subset of two or more of the presets listed in a particular group above. For example, a preset family for cardiac imaging may just include Cardiac Standard and Cardiac Deep presets. For further description of the Cardiac Coherence preset see U.S. patent application Ser. No. 17/525,791 titled “METHODS AND SYSTEMS FOR COHERENCE IMAGING IN OBTAINING ULTRASOUND IMAGES,” filed Nov. 12, 2021, the entire disclosure of which is incorporated by reference herein in its entirety. When the preset selected from the preset menu152ofFIG.5is part of a family of related presets, the preset filter option204may be displayed in the GUI200. (As described above, in some embodiments the preset filter option204may be hidden until the user taps a particular portion of the GUI200). The preset filter option204allows the user to select from a plurality of (e.g., two, three, or more) presets within the family of the preset originally selected from the preset menu152ofFIG.5. In some embodiments, repeated activation of the preset filter option204may cycle through the presets within the preset family. In the illustrated example, the user has selected the Cardiac Standard preset152afrom the preset menu152. This preset is part of the preset family that includes Cardiac Standard, Cardiac Coherence, and Cardiac Deep. Repeated selection of the preset filter option204may cause the preset to cycle from Cardiac Standard to Cardiac Coherence, from Cardiac Coherence to Cardiac Deep, from Cardiac Deep to Cardiac Standard, etc. It should be appreciated that the number of presets which may be selected using the preset filter option204may be smaller than the number of presets which may be selected from the preset menu in the GUI200ofFIG.5. In some embodiments, a subset of the presets displayed as options by the preset menu152inFIG.5may not be in preset families. For example, the bladder preset may not be part of a preset family. Upon receiving from the user a selection of such a preset from the preset menu152ofFIG.5, the processing device104would not display the preset filter option204. (As described above, in some embodiments the preset filter option204may be hidden until the user taps a particular portion of the GUI200.
In such embodiments, when the user selects from the preset menu152a preset that is not in a family, the preset filter option204may not appear even when the user taps the particular portion of the GUI200.) In some embodiments, a subset of (i.e., not all) available presets may be displayed in the preset menu152ofFIG.5. For example, the Cardiac Coherence preset may not be displayed in the preset menu152ofFIG.5. To select the Cardiac Coherence preset, the user may select the Cardiac Standard preset from the preset menu152ofFIG.5and then use the preset filter option204to cycle to the Cardiac Coherence preset. In some embodiments, more than one preset within a family may be displayed in the preset menu152ofFIG.5. For example, both the Abdomen preset and the Aorta & Gallbladder preset may be part of a preset family, and both presets may be displayed in the preset menu152ofFIG.5. To select, for example, the Aorta & Gallbladder preset, the user could select the Aorta & Gallbladder preset from the preset menu152ofFIG.5, or the user could select the Abdomen preset from the preset menu152ofFIG.5and then use the preset filter option204to select the Aorta & Gallbladder preset. At step304, the processing device104receives from the user an activation of the preset filter option204displayed by the processing device104, thereby selecting a second preset (in the illustrated example ofFIG.8, the Cardiac Coherence preset) within the same preset family as the first preset. In other words, activation of the preset filter option204is the manner in which the user selects the second preset. Thus, the preset is changed from the first preset to the second preset. At step306, the processing device104controls ultrasound imaging operation based on the selected second preset, in the same manner as described above with reference to the first preset at step302.FIG.8shows the GUI200after selection of the preset filter option204from the GUI200inFIG.7. Selection of the preset filter option204switches the current preset used for generating the ultrasound images110from Cardiac Standard to Cardiac Coherence. This GUI includes a side preset indicator212indicating that the new preset is Cardiac Coherence. The top preset indicator210also indicates that the current preset is Cardiac Coherence. After activating the preset filter option204, the side preset indicator212may disappear after a preset period of time, e.g., 5 seconds, to avoid cluttering of the GUI200, as shown inFIG.9. It should be noted that the imaging depth remains the same as it was prior to selection of the preset filter option204, namely 20 cm, even though the default imaging depth for the Cardiac Coherence preset is 16 cm. In other words, imaging depth persists even when different presets are selected by the user using the preset filter option204. This allows the user to more easily compare ultrasound images110generated using a previously selected preset versus ultrasound images110generated using a currently selected preset. The method by which imaging depth is used by the processing device104when presets are selected by a user using the GUI200, and particularly the preset filter option204, is shown inFIG.18. The method ofFIG.18may be an embodiment of the method ofFIG.17with the addition of certain features. At step400, the processing device104receives from a user a selection of a first preset from the preset menu152displayed by the processing device104. Step400may be the same as step300.
For example, inFIG.5the user selects the Cardiac Standard preset (the first preset). At step402, the processing device104controls ultrasound imaging operation based on the first preset and uses a default imaging depth associated with the first preset in the ultrasound imaging operation. Step402may be the same as step302, with the additional feature that a default imaging depth associated with the first preset is used. For example, inFIG.6, the default imaging depth of the first preset is 16 cm. In some embodiments, imaging depth may be a parameter used just by the processing device104in its ultrasound imaging operations (i.e., processing ultrasound data, generating ultrasound images, and/or displaying ultrasound images). In some embodiments, imaging depth may be a parameter used just by the ultrasound device102in its ultrasound imaging operations (i.e., collecting ultrasound data, processing ultrasound data, and/or generating ultrasound images). In some embodiments, imaging depth may be a parameter used by both the processing device104and the ultrasound device102. In embodiments in which the ultrasound device102uses the imaging depth parameter, the processing device104may transmit an indication of this parameter to the ultrasound device102. At step404, the processing device104receives from the user a selection of a first imaging depth, and at step406, the processing device104uses the first imaging depth in the ultrasound imaging operation. For example, inFIG.7, the user has selected a new imaging depth (i.e., the first imaging depth) of 20 cm, and this imaging depth is used for continued imaging with the Cardiac Standard preset (i.e., the first preset). The first imaging depth is different from the default imaging depth associated with the first preset. At step408, the processing device104receives from the user an activation of the preset filter option204thereby selecting a second preset within a same preset family as the first preset. Step408may be the same as step304. For example, inFIG.8, the user has activated the preset filter option204to select the Cardiac Coherence preset (i.e., the second preset). At step410, the processing device104controls the ultrasound imaging operation based on the second preset and uses the first imaging depth in the ultrasound imaging operation. Further description of using an imaging depth may be found with reference to step402. The first imaging depth may be different than the default imaging depth associated with the second preset. For example, inFIG.8, the imaging depth remains 20 cm (i.e., the first imaging depth), which is the imaging depth used during use of the Cardiac Standard preset just prior to the switch to the Cardiac Coherence preset (i.e., at step406), even though the default imaging depth for the Cardiac Coherence preset is 16 cm. The processing device104may automatically set the imaging depth at the first imaging depth, rather than the default imaging depth associated with the second preset, without any user input to do so between step408and step410. While imaging depth may persist even when different presets are selected using the preset filter option204, other parameters such as time-gain compensation (TGC) may not persist. TGC is used to adjust gain in an ultrasound image as a function of depth. In the ultrasound signal, a signal that arrives at a deeper region of the subject and returns is weaker. Therefore, an ultrasound image of a deep region may be relatively dark and unclear.
The ultrasound system can compensate for this by modulating the relative gain for signals arriving from different regions (i.e., signals that arrive at different times). The method by which TGC is used by the processing device104when presets are selected by a user using the GUI200, and particularly the preset filter option204, is shown inFIG.19. The method ofFIG.19may be an embodiment of the method ofFIG.17with the addition of certain features. At step502, the processing device104controls ultrasound imaging operation based on a first preset and uses a default TGC setting associated with the first preset in ultrasound imaging operation. (As referred to herein, “TGC setting” may refer to a collection of multiple settings for different depth regions.) Step502may be the same as step302, with the additional feature that a default TGC setting associated with the first preset is used. In some embodiments, a TGC setting may be a parameter used just by the processing device104in its ultrasound imaging operations (i.e., processing ultrasound data, generating ultrasound images, and/or displaying ultrasound images). In some embodiments, a TGC setting may be a parameter used just by the ultrasound device102in its ultrasound imaging operations (i.e., collecting ultrasound data, processing ultrasound data, and/or generating ultrasound images). In some embodiments, a TGC setting may be a parameter used by both the processing device104and the ultrasound device102. In embodiments in which the ultrasound device102uses the TGC setting, the processing device104may transmit an indication of this setting to the ultrasound device102. As shown inFIG.10, upon selecting the TGC option206, TGC settings214are displayed, which include a plurality of sliders214a-c(e.g., near, mid, far) for adjusting multiple TGC parameters. Each of the sliders214a-cmay be individually adjusted. TGC is used to adjust the gain of the ultrasound image110as a function of depth. In embodiments, the TGC settings214may include any suitable number of sliders or any other GUI element, e.g., a text box, a drop-down menu, etc. In the example ofFIG.10, the default TGC setting for the Cardiac Coherence preset (i.e., the first preset) is 50%/50%/50% for Near/Mid/Far. At step504, the processing device104receives from the user a selection of a first TGC setting, and at step506, the processing device104uses the first TGC setting in the ultrasound imaging operation. For example,FIG.11shows the GUI200after modification of the TGC settings214by the user, and in particular by moving the near slider214a. The default TGC setting associated with the first preset may be different than the first TGC setting. Here, the user has modified the settings from the default of 50%/50%/50% to 80%/50%/50% (i.e., the first TGC setting) for Near/Mid/Far. At step508, the processing device104receives from the user an activation of the preset filter option204thereby selecting a second preset within a same preset family as the first preset. Step508may be the same as step304. For example, inFIG.12, the user has activated the preset filter option204to select the Cardiac Deep preset (i.e., the second preset). After selecting the preset filter option204, the side preset indicator212may disappear after a preset period of time, e.g., 5 seconds, to avoid cluttering of the GUI200, as shown inFIG.13. The processing device104may save the most-recently used TGC setting for the first preset for later use.
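Before continuing the example, the effect of a Near/Mid/Far TGC setting like the sliders214a-ccan be sketched numerically. In this minimal Python illustration the image is split into three depth bands and each band is scaled by its slider value; the mapping of slider percentages to gain (50% as unity) is an assumption made for illustration, not something the patent specifies:

```python
import numpy as np

def apply_tgc(image: np.ndarray, near: float, mid: float, far: float) -> np.ndarray:
    """Scale image rows by depth band using three slider values in [0, 1]."""
    rows = image.shape[0]                      # row index increases with depth
    bands = np.array_split(np.arange(rows), 3) # near / mid / far depth bands
    gains = [near, mid, far]
    out = image.astype(float).copy()
    for idx, g in zip(bands, gains):
        out[idx] *= 2.0 * g                    # assumed mapping: 0.5 -> unity gain
    return out

img = np.ones((9, 4))
tgc_img = apply_tgc(img, near=0.8, mid=0.5, far=0.5)  # boost the near field, as inFIG.11
```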
In the illustrated example, the processing device104may save the TGC setting 80%/50%/50% and associate it with the Cardiac Coherence preset for later use. This saving may occur continuously whenever a new TGC setting is selected (e.g., at steps504and506) or only when the preset filter option204is activated (e.g., at step508). In other words, the saving may occur prior to or upon activation of the preset filter option204. At step510, the processing device104controls the ultrasound imaging operation based on the second preset and uses a default TGC setting associated with the second preset in the ultrasound imaging operation. Further description of using a TGC setting may be found with reference to step502. The default TGC setting associated with the second preset may be different than the first TGC setting. In the example ofFIG.14, although the TGC settings were most recently modified to 80%/50%/50% while in the Cardiac Coherence preset, the settings have reverted to the default settings for Cardiac Deep, i.e., 50%/50%/50% for Near/Mid/Far, following the switching of the preset from Cardiac Coherence to Cardiac Deep. The processing device104may automatically set the TGC setting to be the default TGC setting associated with the second preset without any user input to do so between step508and step510. At step512, the processing device104receives from the user a selection of a second TGC setting, and at step514, the processing device104uses the second TGC setting in the ultrasound imaging operation. For example,FIG.15shows the GUI200after modification of the TGC settings214by the user. The default TGC setting associated with the second preset may be different than the second TGC setting. The second TGC setting may also be different than the first TGC setting. Here, the user has modified the settings from the default of 50%/50%/50% to 62%/50%/24% (i.e., the second TGC setting) for Near/Mid/Far. At step516, the processing device104receives from the user an activation of the preset filter option204thereby selecting the first preset. For example, inFIG.16, the user has activated the preset filter option204to cycle back to the Cardiac Coherence preset. In the illustrated example, in which the preset family may include the Cardiac Standard, Cardiac Coherence, and Cardiac Deep presets, the user may select the preset filter option204twice to switch from Cardiac Deep to Cardiac Standard and then from Cardiac Standard to Cardiac Coherence due to the cyclical behavior of the preset filter option204. At step518, the processing device104controls the ultrasound imaging operation based on the first preset and uses the first TGC setting in the ultrasound imaging operation. As noted above, the processing device104may save certain settings, such as TGC settings, that were selected when a specific preset was selected. Thus, once the user activates the preset filter option204to cycle back to the Cardiac Coherence preset (i.e., the first preset), the processing device104automatically retrieves the saved TGC settings that were most recently used in the Cardiac Coherence preset (at step506), namely 80%/50%/50% (the first TGC setting), as shown inFIG.16. The TGC settings used are those that were most recently set while in the Cardiac Coherence preset, even though the user has, since then, switched to other presets and modified the TGC settings while in those presets.
For example, the user most recently switched the TGC setting to 62%/50%/24% when in the Cardiac Deep preset, yet the processing device104instead automatically applies the TGC setting 80%/50%/50% most recently used when in the Cardiac Coherence preset. The processing device104may automatically set the TGC setting to be the first TGC setting without any user input to do so between step516and step518. Thus, it should be appreciated that when the user has most recently selected a TGC setting different than the default TGC setting associated with a particular preset, the processing device104may save this TGC setting, and when the user cycles back to that preset, the processing device104may immediately use the saved user-selected TGC setting rather than the default TGC setting for that preset. The method by which both imaging depth and TGC are used by the processing device104when presets are selected by a user using the GUI200, and particularly the preset filter option204, is shown inFIG.20. The method ofFIG.20may be an embodiment of the method ofFIG.17with the addition of certain features. The method ofFIG.20is a combination of the methods ofFIG.18andFIG.19. At step600, the processing device104receives from a user a selection of a first preset from the preset menu152displayed by the processing device104. At step602, the processing device104controls ultrasound imaging operation based on a first preset and uses a default TGC setting associated with the first preset in ultrasound imaging operation and a default imaging depth. Step602may be the same as step502or302, with the additional feature that a default imaging depth associated with the first preset is used. At step604, the processing device104receives from the user a selection of a first TGC setting and imaging depth, and at step606, the processing device104uses the first TGC setting and the imaging depth in the ultrasound imaging operation. At step608, the processing device104receives from the user an activation of the preset filter option204thereby selecting a second preset within a same preset family as the first preset. At step610, the processing device104controls the ultrasound imaging operation based on the second preset and uses a default TGC setting associated with the second preset in the ultrasound imaging operation while still using the first (i.e., default) imaging depth. Further description of using a TGC setting may be found with reference to step602. At step612, the processing device104receives from the user a selection of a second TGC setting, and at step614, the processing device104uses the second TGC setting in the ultrasound imaging operation. At step616, the processing device104receives from the user an activation of the preset filter option204thereby selecting the first preset. At step618, the processing device104controls the ultrasound imaging operation based on the first preset and uses the first TGC setting and the default imaging depth in the ultrasound imaging operation. Thus, the imaging depth is retained while cycling between different presets within a preset family unless changed by the user. Additionally, TGC settings may not be retained when cycling from one preset to another within a preset family. Rather, the most recently used TGC setting for a particular preset may be used. In some embodiments, a gain setting may operate in a manner similar to that described above for TGC.
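Pulling together the behaviors of FIGS. 17-20 as described, the persistence rules can be summarized in a minimal Python sketch: the filter option cycles within the family, the user-selected imaging depth carries across switches, and TGC reverts to the target preset's most recently used setting (or its default). Class and variable names are hypothetical, and the defaults are the illustrative values used in the text:

```python
FAMILY = ["Cardiac Standard", "Cardiac Coherence", "Cardiac Deep"]
DEFAULT_TGC = {p: (50, 50, 50) for p in FAMILY}      # Near/Mid/Far, percent
DEFAULT_DEPTH_CM = {p: 16 for p in FAMILY}           # illustrative defaults

class PresetState:
    def __init__(self, preset):
        self.preset = preset
        self.depth_cm = DEFAULT_DEPTH_CM[preset]
        self.saved_tgc = {}                          # last-used TGC per preset
        self.tgc = DEFAULT_TGC[preset]

    def set_tgc(self, setting):
        self.tgc = setting
        self.saved_tgc[self.preset] = setting        # remembered for this preset

    def set_depth(self, depth_cm):
        self.depth_cm = depth_cm                     # anatomy-dependent: persists

    def activate_filter_option(self):
        # Cycle to the next preset in the family, wrapping around. Depth is
        # kept; TGC reverts to this preset's saved setting or its default.
        i = FAMILY.index(self.preset)
        self.preset = FAMILY[(i + 1) % len(FAMILY)]
        self.tgc = self.saved_tgc.get(self.preset, DEFAULT_TGC[self.preset])

s = PresetState("Cardiac Coherence")
s.set_depth(20)                       # user picks 20 cm
s.set_tgc((80, 50, 50))               # first TGC setting
s.activate_filter_option()            # -> Cardiac Deep: depth 20, TGC (50, 50, 50)
s.set_tgc((62, 50, 24))               # second TGC setting
s.activate_filter_option()            # -> Cardiac Standard
s.activate_filter_option()            # -> Cardiac Coherence: TGC back to (80, 50, 50)
print(s.preset, s.depth_cm, s.tgc)
```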
TGC and gain may not persist when a preset is switched using the preset filter option204because gain and TGC may both vary the ultrasound images and may be preset-dependent parameters, while the imaging depth may be an anatomy-dependent parameter, so the anatomy at issue should remain at the same depth regardless of the preset. Various aspects of the present disclosure may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing; the present disclosure is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments. It will be appreciated that the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims. Unless specifically recited in a claim, steps or components of claims should not be implied or imported from the specification or any other claims as to any particular order, number, position, size, shape, angle, or material. The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).
The terms “approximately” and “about” may be used to mean within ±20% of a target value in some embodiments, within ±10% of a target value in some embodiments, within ±5% of a target value in some embodiments, and within ±2% of a target value in yet other embodiments. The terms “approximately” and “about” may include the target value. Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. | 56,054 |
11857373 | DETAILED DESCRIPTION Described herein is an imaging modality, referred to herein as vibro-acoustography (VA), which uses ultrasound-based technology to identify materials. VA is a non-invasive imaging modality that uses the viscoelastic (i.e., mechanical) properties of targets to distinguish various material types within a region or volume of interest. The approach described herein relates directly to the mechanical properties of tissue through quantitative mathematical modeling. This allows for absolute quantitative measurement of tissue properties using the VA technology. 1. System Configuration Referring toFIG.1throughFIG.8, in one embodiment a VA system10according to the presented technology includes a focused confocal transducer12, which possesses a piezoelectric element and a compact hydrophone14for detection. Two pulse generators30aand30bsupply two electrical sinusoidal waves, f1and f2(where f2=f1+Δf), to the transducer12. Waves f1and f2are applied to the inner portion50and outer portion54of the piezoelectric element12(seeFIG.2) to emit two distinct acoustic waves f1and f2as waves20into the tissue16at the focal plane of the transducer12. The two waves interfere at the focal plane within tissue or material16to vibrate the tissue16, generating a third acoustic wave22in the kHz region of the spectrum. This energy transformation is accomplished as the target tissue18absorbs the energy and emits its own unique vibration at the difference frequency (Δf), as well as its harmonics, which is then recorded by the nearby compact hydrophone14. The signal received by the hydrophone14is then filtered and amplified with a Low Noise Amplifier (LNA)40. The filtered signal42is then passed through a lock-in amplifier46, and processed by a signal processor72(e.g., computer hardware processing circuit78and related software instructions74, seeFIG.8) for absolute characterization. Detecting the acoustic responses not only generates contrast sufficient for image formation, but the acquired data also enables quantitative characterization of material properties without reliance on reflection/attenuation of acoustic waves. In a preferred embodiment, system10generates two unmodulated continuous wave (CW) ultrasonic beams (f1 and f2) at slightly different frequencies in the low-MHz range to impose a low frequency kHz stress field (or beat frequency)22. Each beam is generated by a coherent function generator (30aand30b) and power amplifiers24aand24b. The two amplified frequencies are then fed into a confocal transducer12with f1 coupled to the inner transducer ring50and f2 coupled to the outer ring54. This produces two converging beams that overlap at the target of interest18. Depending on the viscoelastic properties of the target18, the generated radiation force will cause a portion of the tissue of interest, or the entire tissue, to vibrate at the difference frequency Δf=|f1−f2|. Further, the combination of viscoelastic properties and tissue volumes results in mechanical non-linearities that describe the harmonic generation behavior. The result is a variation in the acoustic yield of the harmonics of Δf. This can be expressed as an infinite sum over integer multiples of the fundamental: n1*(Δf) + n2*(2*Δf) + n3*(3*Δf) + n4*(4*Δf) + . . . (Eq. 1). The presence and relative strengths of these harmonics form a unique tissue type identifier. The acoustic harmonic emission of the tissue is detected by a hydrophone14located near the illuminated tissue.
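The difference-frequency mechanism is easy to verify numerically. In the following minimal Python sketch, two CW tones are passed through a square-law nonlinearity (a crude stand-in for the nonlinear tissue response; the actual mechanism involves radiation force and viscoelasticity), and the spectrum of the result shows a component at Δf = |f1 − f2|. The frequencies are scaled far below the instrument's low-MHz range purely so the example runs quickly:

```python
import numpy as np

fs = 1_000_000                          # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)          # 10 ms window
f1, f2 = 100_000, 102_000               # Hz, so delta_f = 2 kHz
p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

response = p ** 2                       # square-law nonlinearity
spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(response.size, 1 / fs)

band = (freqs > 500) & (freqs < 10_000) # search the low-kHz band, skipping DC
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant low-frequency component: {peak:.0f} Hz")  # ~2000 Hz
```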
In a preferred embodiment shown inFIG.2, the hydrophone14is disposed within a hole56at the center of the inner transducer element50. While hole56is shown in the center of the inner transducer, it is appreciated that it may be positioned at any location within the device. In one configuration, the outer diameter Doof the outer transducer54is 45.04 mm, with an inner diameter DIof 30.86 mm. The diameter DEof the inner transducer is sized smaller than DI(e.g. 28.56 mm) such that a gap52is formed between inner50and outer54transducers. Center hole56is sized (e.g. 2.7 mm) to receive hydrophone14. FIG.3shows a perspective view of the transducer12and housing55configured to concentrically receive transducer12. An optical post58couples to the back of housing55. FIG.4AandFIG.4Bshow variations of the confocal curved element12ofFIG.1.FIG.4Ashows a transducer12awith a solid inner transducer50disposed within outer transducer54.FIG.4Bshows a transducer12bwith a center hole56in inner transducer50, the hole56sized to receive hydrophone14. FIG.5shows a plot of a simulation comparing beam profiles of the curved elements12aand12bofFIG.4AandFIG.4B, respectively. The z-axis envelope beam profile comparison shows an ROC of 60 mm. The inner transducer50had a frequency of 3.2 MHz and the outer transducer54had a frequency of 3.16 MHz for both cases. FIG.6shows an alternative embodiment of a curved transducer12chaving concave surface60and a plurality or array of transducer elements62, which may be used as a phase-delayed array. The array of transducer elements62allows for electrical beam steering (as opposed to mechanical beam steering), which allows for faster, near-instantaneous imaging. A hydrophone (not shown) may also be embedded within the array. FIG.7AthroughFIG.7Dshow plots comparing beam profiles of the curved elements ofFIG.4AandFIG.4Busing a simulation for a compact in vivo VA system with COMSOL.FIG.7AandFIG.7Bshow individual beam profiles of the inner and outer transducer, respectively, of the confocal transducer12bwith center hole ofFIG.4B.FIG.7CandFIG.7Dshow individual beam profiles of the inner and outer transducer, respectively, of the confocal transducer12awithout center hole ofFIG.4A. It was found that placement of a center hole56shifts the focal spot by 0.7 mm and decreases the relative pressure by 1.0%, while the overall beam pattern remains unperturbed. FIG.8shows a high-level schematic diagram of a VA system70with processing components. System70comprises a computing device72configured for executing application software74that is stored in memory76via a processor78. Application software74may be configured for controlling transducer12and hardware44for generating the waves f1and f2(20) into tissue16. Software74also may be configured to receive the output signal of hydrophone14(from waves22), and process the signal to generate output image48. Hardware44further comprises a pair of splitters26aand26bthat split the signals from signal generators30aand30b. Part of each signal is sent to a mixer28, the output of which is fed to lock-in amplifier46as a reference signal for lock-in detection; a band pass filter is used in conjunction with the output from mixer28to remove the transmit frequencies (f1 and f2) from the LNA-processed data42received from hydrophone14. The application software74acts as a phase-sensitive spectrometer to detect the output signal.
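Since the hydrophone signal of interest sits at Δf (and its harmonics) amid strong MHz drive components and noise, the lock-in stage is central. Below is a minimal digital lock-in sketch in Python, not the system's actual implementation: the 40 kHz reference, 1 MHz sample rate, and noise level are illustrative assumptions. The signal is demodulated against a complex reference at the target frequency and averaged, recovering amplitude and phase even when the component is buried in noise.

```python
import numpy as np

def lock_in(sig, fs, f_ref):
    """Digital lock-in: demodulate sig at f_ref and average.
    Returns (amplitude, phase) of the f_ref component."""
    t = np.arange(len(sig)) / fs
    z = np.mean(sig * np.exp(-2j*np.pi*f_ref*t))   # complex demodulation + low-pass (mean)
    return 2*np.abs(z), np.angle(z)

fs, df = 1e6, 40e3                                 # illustrative sample rate and delta-f
t = np.arange(0, 5e-3, 1/fs)
sig = 0.2*np.sin(2*np.pi*df*t) + np.random.normal(0, 1.0, t.size)
amp, ph = lock_in(sig, fs, df)
print(f"recovered amplitude ~ {amp:.2f} (true 0.2)")
```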
In a preferred embodiment, the images are generated by raster scanning the beam20throughout the field of view of tissue16through mechanical scanning or beam steering means (not shown), and processing the scanned data with application software74to generate an image map48of the mechanical (e.g. viscoelastic) response of the target18to the acoustic radiation force20. Application software74may be configured so that pixel values are computed as the power at a particular harmonic or an algebraic combination of powers at multiple harmonics. In one embodiment, the application software74is configured to use acoustic as well as mechanical properties of the target tissue, such as elasticity and viscosity, which are not limited by the boundaries of the generated acoustic waves, and can provide absolute quantitative measurements of the target tissue. Application software74may further include a mathematical model based on the geometry, mechanical properties, and acoustic properties of the tissue in the phase and amplitude measurement to extract quantitative information from the target. 2. Experiment and Results a. Sample Preparation Tissue samples were procured from patients undergoing resection of squamous cell carcinomas (SCC). Resected samples included the tumor and all surrounding epithelial and mesenchymal structures isolated by the margin selection of the surgeon. Imaging was performed ex-vivo prior to tissue processing by the surgical pathologists. All imaging was performed within two hours of resection to ensure sample integrity representative of in-vivo conditions. Ex-vivo tissues were sectioned and imaged using the acoustic imaging system10shown inFIG.1. Photographs were taken of the specimens after being placed in the imaging system10. The images of the specimens within the imaging system were then co-registered to the acoustic images via a non-reflective rigid affine transform using MATLAB 2013a (MathWorks, MA). The acoustic images were generated using false coloring. b. Results Multiple image sets at n1*(Δf) and n2*(2*Δf) have been obtained of freshly resected head and neck squamous cell carcinoma that demonstrate significant contrast between tumor and normal tissue. FIG.9Aillustrates a visible image of ex-vivo human parotid gland, andFIG.9Billustrates the processed image of the parotid gland using the vibro-acoustography systems of the present description. The processed vibro-image ofFIG.9Bshows the ability of the system to successfully distinguish between types of tissue (e.g. tumor tissue80, healthy tissue82, and fat tissue84) within the sample based on the tissues' varying acoustic response to the low-frequency stress field. FIG.10Aillustrates a visible image of ex-vivo human scalp, andFIG.10Billustrates the processed image of the scalp using the vibro-acoustography system10of the present description.FIG.11Aillustrates a visible image of ex-vivo human lip, andFIG.11Billustrates the processed image of the human lip using the vibro-acoustography system10of the present description.FIG.12Aillustrates a visible image of ex-vivo human mandible, andFIG.12Billustrates the processed image of the mandible using the vibro-acoustography system10of the present description. c. Frequency Characterization Imaging results were acquired on custom fabricated tissue phantoms to further assess the ability of the vibro-acoustography system10to characterize tissue stiffness.
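The pixel computation described at the start of this section (power at a particular harmonic, or an algebraic combination of powers at multiple harmonics) can be sketched as follows. The weighting, the 48 kHz Δf, and the random stand-in traces are assumptions for illustration, not values from the description.

```python
import numpy as np

def harmonic_power_db(sig, fs, f0):
    """Power (dB, relative) of sig in the FFT bin nearest f0."""
    spec = np.abs(np.fft.rfft(sig))**2 / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1/fs)
    return 10*np.log10(spec[np.argmin(np.abs(freqs - f0))])

def pixel_value(sig, fs, df):
    # Hypothetical algebraic combination: fundamental plus half-weighted 2nd harmonic.
    return harmonic_power_db(sig, fs, df) + 0.5*harmonic_power_db(sig, fs, 2*df)

fs, df = 1e6, 48e3
traces = np.random.randn(32, 32, 4096)      # stand-in hydrophone trace per raster position
image = np.array([[pixel_value(traces[r, c], fs, df)
                   for c in range(32)] for r in range(32)])
print(image.shape)                           # (32, 32) image map
```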
The elasticity of the tissue can be extracted from the vibro-acoustographic signal spectrum of a volume of tissue interrogated with the multi-frequency harmonic acoustography imaging system. Knowledge of the interrogation volume, elasticity parameters, and mechanical modeling enables the quantitative assessment of tissue elasticity. As explained above, the VA system utilizes the inherent mechanical properties of biological materials to produce an image that corresponds to the response of the biological material to a low frequency ultrasonic acoustic wave in the low kHz range. Recent research has begun to show that biological materials differ in mechanical properties, particularly Young's Modulus (E) and Viscosity (η). Thus, different difference frequencies (Δf) must be explored to image various tissue mimicking phantoms (TMPs), as well as to obtain the mean power (dBm) of the emitted waves from each TMP. Three types of TMPs (gelatin, agarose and Poly-Vinyl Alcohol (PVA)) were produced for VA imaging. Each phantom was imaged at 28 kHz, 38 kHz, and 48 kHz in order to identify a correspondence between each type of TMP and a specific Δf that produces the highest signal to noise ratio (SNR) while still maintaining effective contrast. All the TMPs were created by mixing powdered extracts with deionized water in a beaker in a water bath (˜90° F.) to allow for cross linking of the polymers. The solutions were then left out for two hours to solidify prior to imaging. The TMPs all showed the highest mean power at the difference frequency of 48 kHz. FIG.13A,FIG.13B, andFIG.13Care plots illustrating signal intensity vs. tissue stiffness for sample target materials comprising gelatin, agar and PVA, respectively. Stiffness was measured as a function of the concentration of respective materials in the fabricated tissue. As seen inFIG.13AthroughFIG.13C, the signal intensity decreases as the stiffness increases among the same tissue types. FIG.14shows a plot of the mean power (dBm) of the emitted acoustic waves from a 3% by weight concentrated Agarose TMP at different Δf values, using the VA system10of the present description. Table 1 shows results of the relationship between imaging frequency and signal intensity. The mean power (dBm) for a 3% concentration of an Agarose TMP was used, showing that 48 kHz produced the highest mean power for that particular concentration of Agarose TMP. Moreover, as shown in Table 1, each type of TMP had the greatest mean intensity at this same Δf (thus providing the highest image resolution and signal intensity), with an increase in mean power (dBm) as the Δf increases. This phenomenon is due to the increased vibrations of the tissue, leading to a higher signal because of the higher Δf acoustic wave. It is appreciated that even higher Δf values may be used to distinguish the optimal difference frequency for each type of TMP, or target tissue where appropriate. The above results have demonstrated the capability of the VA system of the present description for target distinction and evaluation of phantoms and ex vivo surgical resection specimens. The system is preferably implemented as a compact VA system for in vivo intra-operative applications. The confocal transducer orientations described herein provide both versatile and reliable detection schemes. The system may be miniaturized by combining the transmitter and detector into one structure, with the center hole approximately the diameter of a needle hydrophone.
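For reference, the mean-power figures quoted above and in Table 1 follow the usual dBm convention. A small sketch of that conversion (a standard electrical-power calculation, not code from the description; the 50 Ω load is an assumed measurement impedance):

```python
import numpy as np

def mean_power_dbm(v, r_load=50.0):
    """Mean electrical power of a voltage trace v (volts) into r_load (ohms), in dBm."""
    p_watts = np.mean(np.asarray(v)**2) / r_load
    return 10*np.log10(p_watts / 1e-3)

# Example: a 0.1 V-amplitude sinusoid into 50 ohms is about -10 dBm.
t = np.linspace(0, 1e-3, 10000)
print(round(mean_power_dbm(0.1*np.sin(2*np.pi*48e3*t)), 1))
```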
Additionally, the low operational frequency and high intrinsic dynamic range of the system of the present description allow processing to be performed by low power, integrated electronics, thus enabling implementation as a portable, hand-held device. Embodiments of the present technology may be described herein with reference to flowchart illustrations of methods and systems according to embodiments of the technology, and/or procedures, algorithms, steps, operations, formulae, or other computational depictions, which may also be implemented as computer program products. In this regard, each block or step of a flowchart, and combinations of blocks (and/or steps) in a flowchart, as well as any procedure, algorithm, step, operation, formula, or computational depiction can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions embodied in computer-readable program code. As will be appreciated, any such computer program instructions may be executed by one or more computer processors, including without limitation a general purpose computer or special purpose computer, or other programmable processing apparatus to produce a machine, such that the computer program instructions which execute on the computer processor(s) or other programmable processing apparatus create means for implementing the function(s) specified. Accordingly, blocks of the flowcharts, and procedures, algorithms, steps, operations, formulae, or computational depictions described herein support combinations of means for performing the specified function(s), combinations of steps for performing the specified function(s), and computer program instructions, such as embodied in computer-readable program code logic means, for performing the specified function(s). It will also be understood that each block of the flowchart illustrations, as well as any procedures, algorithms, steps, operations, formulae, or computational depictions and combinations thereof described herein, can be implemented by special purpose hardware-based computer systems which perform the specified function(s) or step(s), or combinations of special purpose hardware and computer-readable program code. Furthermore, these computer program instructions, such as embodied in computer-readable program code, may also be stored in one or more computer-readable memory or memory devices that can direct a computer processor or other programmable processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory or memory devices produce an article of manufacture including instruction means which implement the function specified in the block(s) of the flowchart(s). The computer program instructions may also be executed by a computer processor or other programmable processing apparatus to cause a series of operational steps to be performed on the computer processor or other programmable processing apparatus to produce a computer-implemented process such that the instructions which execute on the computer processor or other programmable processing apparatus provide steps for implementing the functions specified in the block(s) of the flowchart(s), procedure(s), algorithm(s), step(s), operation(s), formula(e), or computational depiction(s).
It will further be appreciated that the terms “programming” or “program executable” as used herein refer to one or more instructions that can be executed by one or more computer processors to perform one or more functions as described herein. The instructions can be embodied in software, in firmware, or in a combination of software and firmware. The instructions can be stored local to the device in non-transitory media, or can be stored remotely such as on a server, or all or a portion of the instructions can be stored locally and remotely. Instructions stored remotely can be downloaded (pushed) to the device by user initiation, or automatically based on one or more factors. It will further be appreciated that, as used herein, the terms processor, hardware processor, computer processor, central processing unit (CPU), and computer are used synonymously to denote a device capable of executing the instructions and communicating with input/output interfaces and/or peripheral devices, and that the terms processor, hardware processor, computer processor, CPU, and computer are intended to encompass single or multiple devices, single core and multicore devices, and variations thereof. From the description herein, it will be appreciated that the present disclosure encompasses multiple embodiments which include, but are not limited to, the following:1. A method for performing multi-frequency harmonic acoustography for target identification and border detection within a target tissue, the method comprising: providing a focused confocal transducer having at least one piezoelectric element; focusing first and second ultrasonic waves at first and second frequencies from the transducer on the target tissue; wherein the first and second waves interfere at a focal plane within the target tissue such that the target tissue absorbs energy from the first and second waves and vibrates to emit a third acoustic wave within the target tissue; detecting the third acoustic wave with a hydrophone; and analyzing the third acoustic wave to evaluate one or more mechanical properties of the target tissue.2. The method of any preceding or following embodiment, wherein the confocal transducer comprises a hydrophone positioned centrally within the piezoelectric element.3. The method of any preceding or following embodiment: wherein the confocal transducer comprises a first inner transducer disposed concentrically within a second outer transducer; wherein the first inner transducer emits the first wave at the first frequency and the second outer transducer emits the second wave at the second frequency; and wherein the hydrophone is positioned concentrically within a hole at the center of the first inner transducer.4. The method of any preceding or following embodiment, wherein the confocal transducer comprises a curved array of transducers configured to electronically scan the target tissue, the hydrophone disposed within the array.5. The method of any preceding or following embodiment, wherein the one or more mechanical properties of the target tissue comprise a boundary between malignant and normal tissue within the target tissue.6. The method of any preceding or following embodiment: wherein evaluating one or more mechanical properties comprises analyzing harmonics generated by non-linear properties of the target tissue; and wherein the one or more mechanical properties of the target tissue are selected from the group consisting of: tissue type, size, location with adjacent tissue or physiologic or disease state.7.
The method of any preceding or following embodiment, wherein a spectral envelope comprising relative strengths of higher harmonics is used to create imaging contrast and unique identifying information associated with the target tissue.8. The method of any preceding or following embodiment, wherein evaluating one or more mechanical properties of the target tissue comprises quantitative characterization of one or more material properties of the target tissue without reliance on reflection or attenuation of acoustic waves.9. The method of any preceding or following embodiment: wherein the first and second waves cause the target tissue to vibrate at a difference frequency Δf=|f1−f2|, where f1 is the frequency of the first wave and f2 is the frequency of the second wave; and wherein detecting the third acoustic wave further comprises identifying variation of an acoustic yield of one or more harmonics of Δf.10. The method of any preceding or following embodiment, wherein the acoustic yield of one or more harmonics of Δf is calculated as an infinite sum of integer multiples of the fundamental: n1*(Δf)+n2*(2*Δf)+n3*(3*Δf)+n4*(4*Δf)+ . . . .11. A system for performing multi-frequency harmonic acoustography for target identification and border detection, the system comprising: a focused confocal transducer having at least one piezoelectric element; a hydrophone; and a signal processing circuit; the signal processing circuit comprising a computer hardware processor and a non-transitory memory storing instructions executable by the computer hardware processor which, when executed, perform steps comprising: sending signals to the transducer to emit first and second ultrasonic waves at first and second frequencies from the transducer into a target tissue; wherein the first and second waves interfere at a focal plane within the target tissue such that the target tissue absorbs energy from the first and second waves and vibrates to emit a third acoustic wave within the target tissue; acquiring the third acoustic wave with a hydrophone; and analyzing the third acoustic wave to evaluate one or more mechanical properties of the target tissue.12. The system of any preceding or following embodiment, wherein the confocal transducer comprises a hydrophone positioned centrally within the piezoelectric element.13. The system of any preceding or following embodiment: wherein the confocal transducer comprises a first inner transducer disposed concentrically within a second outer transducer; wherein the first inner transducer emits the first wave at the first frequency and the second outer transducer emits the second wave at the second frequency; and wherein the hydrophone is positioned concentrically within a hole at the center of the first inner transducer.14. The system of any preceding or following embodiment, wherein the confocal transducer comprises a curved array of transducers configured to electronically scan the target tissue, the hydrophone disposed within the array.15. The system of any preceding or following embodiment, wherein the one or more mechanical properties of the target tissue comprise a boundary between malignant and normal tissue within the target tissue.16.
The system of any preceding or following embodiment: wherein evaluating one or more mechanical properties comprises analyzing harmonics generated by non-linear properties of the target tissue; and wherein the one or more mechanical properties of the target tissue are selected from the group consisting of: tissue type, size, location with adjacent tissue or physiologic or disease state.17. The system of any preceding or following embodiment, wherein a spectral envelope comprising relative strengths of higher harmonics is used to create imaging contrast and unique identifying information associated with the target tissue.18. The system of any preceding or following embodiment, wherein evaluating one or more mechanical properties of the target tissue comprises quantitative characterization of one or more material properties of the target tissue without reliance on reflection or attenuation of acoustic waves.19. The system of any preceding or following embodiment: wherein the first and second waves cause the target tissue to vibrate at a difference frequency Δf=|f1−f2|, where f1 is the frequency of the first wave and f2 is the frequency of the second wave; and wherein detecting the third acoustic wave further comprises identifying variation of an acoustic yield of one or more harmonics of Δf.20. The system of any preceding or following embodiment, wherein the acoustic yield of one or more harmonics of Δf is calculated as an infinite sum of integer multiples of the fundamental: n1*(Δf)+n2*(2*Δf)+n3*(3*Δf)+n4*(4*Δf)+ . . . .21. The system of any preceding or following embodiment, further comprising: one or more pulse generators and one or more amplifiers for generating the first and second waves as unmodulated continuous wave (CW) ultrasonic beams at the first and second frequencies in the low MHz range; wherein the confocal transducer comprises a first inner transducer disposed concentrically within a second outer transducer; wherein the first and second waves are input into the confocal transducer with the first wave coupled to the inner transducer and the second wave coupled to the outer transducer to create two converging beams that overlap at the target tissue.22. The system of any preceding or following embodiment, further comprising a band pass filter used to remove the first and second waves from the acquired signal.23. A method for performing multi-frequency harmonic acoustography for target identification and border detection, the method comprising: providing a focused confocal transducer having a piezoelectric element and a hydrophone positioned centrally in the piezoelectric element; focusing ultrasonic waves at first and second frequencies from the transducer on a target of interest; wherein the two waves interfere at a focal plane within the target to generate a third acoustic wave and wherein the target absorbs energy and emits its own unique vibration at the difference frequency (Δf) as well as its harmonics; recording the unique vibration with the hydrophone; and ascertaining mechanical properties of the target through detection and analysis of the third acoustic wave.24. The method of any preceding or following embodiment, wherein analysis of the third acoustic wave includes analysis of the harmonics.25. The method of any preceding or following embodiment, wherein the mechanical properties of the target are selected from the group consisting of convolution of tissue type, size, and adjacent tissue and unique to the physiologic or disease state of the tissue of interest.26.
A system for performing multi-frequency harmonic acoustography for target identification and border detection, the system comprising: a focused confocal transducer having a piezoelectric element and a hydrophone positioned centrally in the piezoelectric element; a signal processing circuit; the signal processing circuit comprising a computer hardware processor and a non-transitory memory storing instructions executable by the computer hardware processor which, when executed, perform steps comprising: causing the transducer to emit ultrasonic waves at first and second frequencies from the transducer on a target of interest; wherein the two waves interfere at a focal plane within the target to generate a third acoustic wave and wherein the target absorbs energy and emits its own unique vibration at the difference frequency (Δf) as well as its harmonics; recording the unique vibration with the hydrophone; and ascertaining mechanical properties of the target through detection and analysis of the third acoustic wave.27. The system of any preceding or following embodiment, wherein analysis of the third acoustic wave includes analysis of the harmonics.28. The system of any preceding or following embodiment, wherein the mechanical properties of the target are selected from the group consisting of convolution of tissue type, size, and adjacent tissue and unique to the physiologic or disease state of the tissue of interest.29. A method for performing multi-frequency harmonic acoustography for target identification and border detection, the method comprising: providing a focused confocal transducer having a piezoelectric element and a hydrophone positioned centrally in the piezoelectric element; focusing ultrasonic waves at first and second frequencies from the transducer on a target of interest; wherein the two waves interfere at a focal plane within the target to generate a third acoustic wave and wherein the target absorbs energy and emits its own unique vibration at the difference frequency (Δf) as well as its harmonics; recording the unique vibration with the hydrophone; and ascertaining mechanical properties of the target through detection and analysis of the third acoustic wave.30. The method of any preceding or following embodiment, wherein analysis of the third acoustic wave includes analysis of the harmonics.31. The method of any preceding or following embodiment, wherein the mechanical properties of the target are selected from the group consisting of convolution of tissue type, size, and adjacent tissue and unique to the physiologic or disease state of the tissue of interest.32.
A system for performing multi-frequency harmonic acoustography for target identification and border detection, the system comprising: a focused confocal transducer having a piezoelectric element and a hydrophone positioned centrally in the piezoelectric element; a signal processing circuit; the signal processing circuit comprising a computer hardware processor and a non-transitory memory storing instructions executable by the computer hardware processor which, when executed, perform steps comprising: causing the transducer to emit ultrasonic waves at first and second frequencies from the transducer on a target of interest; wherein the two waves interfere at a focal plane within the target to generate a third acoustic wave and wherein the target absorbs energy and emits its own unique vibration at the difference frequency (Δf) as well as its harmonics; recording the unique vibration with the hydrophone; and ascertaining mechanical properties of the target through detection and analysis of the third acoustic wave.33. The system of any preceding or following embodiment, wherein analysis of the third acoustic wave includes analysis of the harmonics.34. The system of any preceding or following embodiment, wherein the mechanical properties of the target are selected from the group consisting of convolution of tissue type, size, and adjacent tissue and unique to the physiologic or disease state of the tissue of interest. As used herein, the singular terms “a,” “an,” and “the” may include plural referents unless the context clearly dictates otherwise. Reference to an object in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” As used herein, the term “set” refers to a collection of one or more objects. Thus, for example, a set of objects can include a single object or multiple objects. As used herein, the terms “substantially” and “about” are used to describe and account for small variations. When used in conjunction with an event or circumstance, the terms can refer to instances in which the event or circumstance occurs precisely as well as instances in which the event or circumstance occurs to a close approximation. When used in conjunction with a numerical value, the terms can refer to a range of variation of less than or equal to ±10% of that numerical value, such as less than or equal to ±5%, less than or equal to ±4%, less than or equal to ±3%, less than or equal to ±2%, less than or equal to ±1%, less than or equal to ±0.5%, less than or equal to ±0.1%, or less than or equal to ±0.05%. For example, “substantially” aligned can refer to a range of angular variation of less than or equal to ±10°, such as less than or equal to ±5°, less than or equal to ±4°, less than or equal to ±3°, less than or equal to ±2°, less than or equal to ±1°, less than or equal to ±0.5°, less than or equal to ±0.1°, or less than or equal to ±0.05°. Additionally, amounts, ratios, and other numerical values may sometimes be presented herein in a range format. It is to be understood that such range format is used for convenience and brevity and should be understood flexibly to include numerical values explicitly specified as limits of a range, but also to include all individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range is explicitly specified.
For example, a ratio in the range of about 1 to about 200 should be understood to include the explicitly recited limits of about 1 and about 200, but also to include individual ratios such as about 2, about 3, and about 4, and sub-ranges such as about 10 to about 50, about 20 to about 100, and so forth. Although the description herein contains many details, these should not be construed as limiting the scope of the disclosure but as merely providing illustrations of some of the presently preferred embodiments. Therefore, it will be appreciated that the scope of the disclosure fully encompasses other embodiments which may become obvious to those skilled in the art. All structural and functional equivalents to the elements of the disclosed embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. No claim element herein is to be construed as a “means plus function” element unless the element is expressly recited using the phrase “means for”. No claim element herein is to be construed as a “step plus function” element unless the element is expressly recited using the phrase “step for”.

TABLE 1
Relationship Between Imaging Frequency And Signal Intensity

              Power at 28 kHz   Power at 38 kHz   Power at 48 kHz
Gelatin 15%        −8.05             −5.88             −5.34
Agar 3%            −5.24             −2.99             −1.32
PVA 17%           −19.37            −16.65            −16.84 | 35,132
11857374 | DETAILED DESCRIPTION OF THE INVENTION In order to illustrate the principles of the present invention, an image registration system for registering a live stream of ultrasound images from an intra cardiac echography, i.e. ICE, beamforming ultrasound probe with an X-ray image is described. It is however to be appreciated that the invention also finds application with beamforming ultrasound probes in general. For example the beamforming ultrasound probe may alternatively be a cardiac probe in general, a 2D or a 3D ICE probe, a transesophageal echocardiogram, TEE, probe, a transthoracic echocardiogram, TTE, probe, or an intra vascular ultrasound, IVUS, probe. Moreover, in the described image registration system, reference is made to a static X-ray image. It is also to be appreciated that the invention may also be used with a live stream of X-ray images, i.e. fluoroscopy X-ray images. Moreover an image registration system is described in which the medical device that is represented in the X-ray image is an esophageal temperature probe. Whilst particular reference is made to an esophageal temperature probe it is also to be appreciated that the medical device is not limited to this example and that the medical device may alternatively be for example a catheter, an ablation catheter, a biopsy device, a guidewire, a probe, an endoscope, a robot, a filter device, a balloon device, a stent, a mitral clip, a left atrial appendage closure device, an aortic valve, a pacemaker, an intravenous line, a drainage line, a surgical tool, a tissue sealing device, or a tissue cutting device. The above examples should not be construed as limiting the scope of the invention. FIG.1illustrates an arrangement110that includes an image registration system111in accordance with some aspects of the invention. Image registration system111may be used to register live stream of ultrasound images112that are received from beamforming ultrasound probe113, with X-ray image114that is received from X-ray imaging system118. Beamforming ultrasound probe113inFIG.1is an imaging probe that generates a plurality of ultrasound beams that together define its field of view120. Field of view120may include, i.e. overlap with, region of interest121which may for example be a portion of the anatomy. In-use, beamforming ultrasound probe113may therefore be used to generate an ultrasound image that corresponds to its field of view120and which therefore includes region of interest121. In a specific example, beamforming ultrasound probe113may be a 2D or a 3D ICE probe that is used to image a region of interest121such as a portion of the heart. During an ultrasound imaging procedure such as brightness-mode, i.e. B-mode, imaging, a one or two-dimensional transducer array within beamforming ultrasound probe113transmits a plurality of ultrasound beams that define its field of view120. The beams are transmitted in a sequence, or a “frame”. The frame is repeated multiple times in order to generate a live, i.e. substantially real-time, stream of ultrasound images112. Each beam is generated using known beamforming techniques by applying various delays to the ultrasound signals transmitted by the elements of the transducer array. After each beam is transmitted, beamforming ultrasound probe113switches to a receive mode and detects reflected ultrasound signals. One or more associated processors determine the acoustic density along each beam by applying various delays to the ultrasound signals detected by the transducer array.
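The delay-and-sum principle just described can be sketched compactly. The following Python fragment is an illustrative simplification, not the probe's firmware: it assumes a common transmit origin at the array so that a simple round-trip delay applies, and a nominal speed of sound of 1540 m/s. It focuses the received traces at one point by summing each element's sample at its computed delay.

```python
import numpy as np

def delay_and_sum(rf, fs, element_x, focus, c=1540.0):
    """Receive-focus rf data at one point by delay-and-sum.

    rf:         (n_elements, n_samples) array of received traces
    fs:         sampling rate [Hz]
    element_x:  (n_elements,) element positions along the array [m]
    focus:      (x, z) focal point [m]
    """
    out = 0.0
    for i, x in enumerate(element_x):
        d = np.hypot(focus[0] - x, focus[1])   # element-to-focus distance
        k = int(round(2 * d / c * fs))         # round-trip delay in samples
        if k < rf.shape[1]:
            out += rf[i, k]                    # align and sum across the aperture
    return out

# Example: 64-element array, 40 MHz sampling, focus 30 mm deep on axis
rf = np.random.randn(64, 4096)                 # stand-in channel data
xs = (np.arange(64) - 31.5) * 0.3e-3           # assumed 0.3 mm element pitch
print(delay_and_sum(rf, 40e6, xs, (0.0, 30e-3)))
```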
An ultrasound image corresponding to the acoustic density of field of view120is determined by the one or more associated processors by combining the responses for all the beams for each frame. The one or more associated processors are not illustrated inFIG.1, however they may be located in beamforming ultrasound probe113, or in a separate ultrasound console not illustrated inFIG.1, or alternatively their functionality may be distributed between the one or more processors in beamforming ultrasound probe113, in the separate ultrasound console, or the one or more processors117in image registration system111. X-ray imaging system118inFIG.1has a field of view119and may in general be any X-ray imaging system. In one example X-ray imaging system118is a C-arm X-ray imaging system, and in another example X-ray imaging system118is a computed tomography, i.e. CT, imaging system. X-ray image114may therefore be a planar or projection image, or a tomographic image. Moreover, the X-ray image may be a fluoroscopic X-ray image, i.e. a live X-ray image. In-use, X-ray imaging system118inFIG.1generates X-ray image114that includes a representation of medical device116. Medical device116is located within field of view119of X-ray imaging system118at the time when X-ray image114is generated. Consequently at least a portion of medical device116is visible in X-ray image114. Medical device116may for example be an esophageal temperature probe. An esophageal temperature probe advantageously remains relatively static during a medical procedure and can therefore provide a relatively fixed reference position. Moreover, medical device116inFIG.1includes an ultrasound transducer115. Ultrasound transducer115may be an ultrasound emitter or a detector, or indeed be capable of both emitting and detecting ultrasound signals; i.e. a transponder. In-use, ultrasound transducer115is within field of view120of beamforming ultrasound probe113such that either ultrasound transducer115is sensitive to at least one of the ultrasound beams transmitted by beamforming ultrasound probe113, or such that beamforming ultrasound probe113is sensitive, within at least one of its ultrasound beams, to ultrasound signals emitted by ultrasound transducer115. In a preferred configuration ultrasound transducer115is a detector that is sensitive to the ultrasound signals emitted by beamforming ultrasound probe113. Moreover, ultrasound transducer115disposed on medical device116has a predetermined spatial relationship with medical device116. In one example ultrasound transducer115may be arranged at a known position or orientation or rotation with respect to a position on medical device116. In one example medical device116may include a distal end and ultrasound transducer115may be arranged at a known distance from the distal end. In another example medical device116may include a body having an axis, and ultrasound transducer115may be arranged at a known rotational angle with respect to the axis of the body. In another example multiple ultrasound transducers115may be disposed on medical device116. The multiple ultrasound transducers may for example be arranged in a known pattern with respect to e.g. the distal end of medical device116. Various types of ultrasound transducer are contemplated for use as ultrasound transducer115inFIG.1. These include piezoelectric, piezoresistive and capacitive transducers. More specifically, micro electro mechanical system, i.e. MEMS, or capacitive micro-machined ultrasound transducers, i.e.
CMUT-type ultrasound transducers may also be used. Suitable piezoelectric materials include polymer piezoelectric materials such as polyvinylidene fluoride, a PVDF co-polymer such as polyvinylidene fluoride trifluoroethylene, or a PVDF ter-polymer such as P(VDF-TrFE-CTFE). Polymer piezoelectric materials offer high flexibility and thus may be conformally attached to surfaces having non-flat topography. Preferably a non-imaging ultrasound transducer is used, i.e. an ultrasound transducer that has inadequate pixels or is otherwise not controlled so as to generate an image, however it is also contemplated that an imaging transducer may be used. In other words, ultrasound transducer115may form part of an imaging array of an imaging medical device116. InFIG.1, connections between image registration system111and X-ray imaging system118, ultrasound transducer115, and beamforming ultrasound probe113are illustrated as being wired connections. The wired connections facilitate the reception or transmission of data such as signals, and/or control signals between the various units. It is also contemplated to replace one or more of these wired connections with wireless connections. Moreover, X-ray image114may be received from a memory rather than, as illustrated, being received directly from X-ray imaging system118. As mentioned above, image registration system111inFIG.1includes at least one processor117. The at least one processor117is configured to: a) receive live stream of ultrasound images112; b) receive X-ray image114that includes the representation of medical device116; c) identify, from received X-ray image114, a position of medical device116; d) receive transmitted and detected signals corresponding to the ultrasound signals transmitted between beamforming ultrasound probe113and ultrasound transducer115disposed on medical device116; e) determine, based on the received signals, a location of ultrasound transducer115respective beamforming ultrasound probe113by i) selecting an ultrasound beam of beamforming ultrasound probe113corresponding to the maximum detected signal and ii) calculating, for the selected ultrasound beam, a range between beamforming ultrasound probe113and ultrasound transducer115based on a time of flight of said transmitted ultrasound signals; and f) register each ultrasound image from live stream112with X-ray image114based on the identified position of medical device116. The registration includes determining an offset from said identified position that is based on i) the predetermined spatial relationship of ultrasound transducer115respective medical device116and ii) the determined location of ultrasound transducer115respective beamforming ultrasound probe113. Consequently, the registration of live stream of ultrasound images112with X-ray image114in step f) is determined using two offsets from the position of medical device116in X-ray image114. The first offset is defined by the known spatial relationship between ultrasound transducer115and medical device116. The second offset is defined by the location of ultrasound transducer115with respect to beamforming ultrasound probe113. Depending on the actual positions, the offset may be for example a translation, a rotation or a combination of a rotation and a translation. Many suitable techniques for performing such rotations and translations in the registration are known from the medical image registration field.
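One compact way to express the two-offset registration is as a chain of rigid homogeneous transforms. The sketch below is a minimal illustration under assumed conventions, not the system's actual implementation: all frame names, poses, and the identity rotations are hypothetical, and T_a_b here maps coordinates expressed in frame b into frame a.

```python
import numpy as np

def rigid(rot, trans):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = rot, trans
    return T

# Hypothetical inputs (all values illustrative, units mm):
T_xray_device   = rigid(np.eye(3), [10.0, 5.0, 0.0])   # device pose identified in the X-ray image
T_device_sensor = rigid(np.eye(3), [0.0, 0.0, 30.0])   # offset i): known sensor position on the device
T_probe_sensor  = rigid(np.eye(3), [0.0, 20.0, 40.0])  # offset ii): sensor located respective the probe

# Chain the offsets to map ultrasound (probe) coordinates into X-ray coordinates
T_xray_probe = T_xray_device @ T_device_sensor @ np.linalg.inv(T_probe_sensor)

p_us = np.array([0.0, 0.0, 50.0, 1.0])   # a point in the ultrasound image frame
print(T_xray_probe @ p_us)                # the same point in X-ray coordinates
```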
Known registration techniques from the medical imaging field, such as those described in Hill, D. L. G., et al., “Medical image registration”, Phys. Med. Biol. 46 (2001) R1–R45, may be used. In particular, rigid body transformations that incorporate offsets i) and ii) may be used in the registration. In respect of step c) above, i.e. identifying, from received X-ray image114, a position of the medical device116, various methods are contemplated. In a preferred implementation a representation of the medical device is registered with the X-ray image using known medical image registration techniques. One suitable technique is implemented in the Philips Echonavigator system. This involves adjusting the position of a model of the medical device in two- or three-dimensional image space until its position best matches that in the X-ray image. The matching position may for example be determined using a least squares or minimum energy calculation. The image registration may for example compute gradients in the X-ray image and/or apply an edge-detection algorithm to the X-ray image in order to determine an outline of medical device116. Suitable known edge detection methods include frequency domain filtering and Fourier transformations. In respect of steps d) and e) above, i.e. determining a location of ultrasound transducer115respective beamforming ultrasound probe113based on ultrasound signals transmitted between the beamforming ultrasound probe113and the ultrasound transducer115disposed on the medical device, a technique may be implemented that is similar to that described in document WO 2011/138698A1 and in the publication “A New Sensor Technology for 2D Ultrasound-Guided Needle Tracking”, Huanxiang Lu et al, MICCAI 2014, Part II, LNCS 8674, pp. 389–396, 2014. With reference toFIG.1, in this technique, ultrasound transducer115is preferably a detector that is sensitive to the imaging ultrasound signals that are emitted by beamforming ultrasound probe113. As described above, in a conventional ultrasound imaging mode, for example a brightness mode, i.e. B-mode, the one or two-dimensional transducer array of beamforming ultrasound probe113transmits a sequence of beams within its field of view120in order to generate each ultrasound image. Each beam is generated by applying various delays to the ultrasound signals transmitted by the elements of the transducer array. After each beam is transmitted, beamforming ultrasound probe113switches to a receive mode and determines the acoustic density along the beam by applying various delays to the ultrasound signals detected by the transducer array. An ultrasound image corresponding to the acoustic density of field of view120is determined by combining the responses for all the beams for each frame. When ultrasound transducer115is within field of view120of beamforming ultrasound probe113, each emitted ultrasound beam will be detected with a signal strength that depends in part on the lateral displacement of the transducer from that beam. The closest beam to detector115generates the maximum detected signal strength, and this beam identifies the angle or bearing between beamforming ultrasound probe113and ultrasound transducer115. The actual beam in which detector115is located is determined by image registration system111using knowledge of the timing of the transmission of the beam within each imaging frame.
Furthermore, the range between beamforming ultrasound probe113and ultrasound transducer115is calculated by image registration system111from the time of flight of that beam, i.e. the time between its transmission by the transducer array of beamforming ultrasound probe113and its detection by ultrasound transducer115. Since the angle and range between beamforming ultrasound probe113and ultrasound transducer115are known, this provides the location of the ultrasound transducer115respective beamforming ultrasound probe113. This technique may be used in a similar manner with other ultrasound imaging modes. In one variation of the above technique, instead of using the ultrasound imaging beams of beamforming ultrasound probe113to determine the relative position of ultrasound transducer115, dedicated tracking beams may likewise be transmitted and detected in the same manner to determine the relative position of ultrasound transducer115. These tracking beams may be interleaved temporally between the imaging beams; for example they may be transmitted within an image frame or between image frames. Advantageously these tracking beams do not necessarily have to spatially coincide with, or have the same spatial resolution as, the imaging beams. In another variation of the above technique, instead of ultrasound transducer115being a detector, an ultrasound emitter may be used. In this variation the ultrasound emitter may periodically emit an ultrasound pulse or signature that, following a correction for the time of emission of the pulse, appears in the live stream of ultrasound images generated by beamforming ultrasound probe113as a bright spot at the position of ultrasound emitter115. In another variation of the above technique ultrasound transducer115may operate as a transponder that issues an ultrasound pulse or signature upon detection of an ultrasound signal from beamforming ultrasound probe113. Again, following a correction for the time of emission of the pulse, the position of emitter115appears in the live stream of ultrasound images as a bright spot. Thus, in each of these described variations, a location of ultrasound transducer115respective the beamforming ultrasound probe113is determined based on ultrasound signals transmitted between beamforming ultrasound probe113and an ultrasound transducer115disposed on a medical device116. Moreover, the location of ultrasound transducer115respective beamforming ultrasound probe113is determined in step e) by i) selecting an ultrasound beam of the beamforming ultrasound probe corresponding to the maximum detected signal and ii) calculating, for the selected ultrasound beam, a range between the beamforming ultrasound probe and the ultrasound transducer based on a time of flight of said transmitted ultrasound signals. It is noted that when ultrasound transducer115operates as an emitter, steps i) and ii) are inherently performed by the processor of the beamforming ultrasound probe when it provides the position of the emitter as a bright spot in the live stream of ultrasound images. Finally, in respect of step f), each ultrasound image from live stream112is registered with X-ray image114based on the identified position of medical device116. The registration includes determining an offset from said identified position that is based on i) the predetermined spatial relationship of ultrasound transducer115respective medical device116and ii) the determined location of ultrasound transducer115respective the beamforming ultrasound probe113.
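Steps i) and ii) amount to a maximum-amplitude beam search followed by a time-of-flight range estimate. A minimal sketch follows, with illustrative assumptions throughout: a fan of known beam steering angles, a nominal speed of sound of 1540 m/s, and a one-way flight time measured from beam emission to detection at the sensor.

```python
import numpy as np

C = 1540.0   # assumed speed of sound in tissue [m/s]

def locate_sensor(amplitudes, beam_angles_rad, tof_s):
    """Locate the sensor respective the probe from per-beam detected amplitudes.

    amplitudes:      detected signal strength on the sensor, one value per beam
    beam_angles_rad: steering angle of each beam (probe frame)
    tof_s:           one-way time of flight for the maximum-amplitude beam [s]
    Returns (x, z) of the sensor in the probe frame [m].
    """
    k = int(np.argmax(amplitudes))        # step i): beam with maximum detected signal
    theta = beam_angles_rad[k]            # bearing to the sensor
    r = C * tof_s                         # step ii): range from time of flight
    return r*np.sin(theta), r*np.cos(theta)

# Example: 65-beam fan over +/-30 degrees, sensor hit hardest by beam 40, 26 us flight time
angles = np.deg2rad(np.linspace(-30, 30, 65))
amps = np.exp(-0.5*((np.arange(65) - 40)/1.5)**2)   # stand-in amplitude profile
print(locate_sensor(amps, angles, 26e-6))            # about 40 mm range along beam 40
```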
In one implementation X-ray image114includes an X-ray image coordinate system and live stream of ultrasound images112includes an ultrasound image coordinate system that is fixed with respect to the beamforming ultrasound probe113. The at least one processor117may be configured to register each ultrasound image from the live stream112with the X-ray image114based on the identified position of medical device116by mapping the ultrasound image coordinate system to the X-ray image coordinate system. The ultrasound image coordinate system and/or the X-ray image coordinate system may for example be a polar, Cartesian, cylindrical or spherical coordinate system. Various mapping techniques are contemplated for this and are well known from the medical image registration field. In one implementation the at least one processor117of image registration system111inFIG.1is configured to display each ultrasound image of the live stream112consecutively as an overlay on the X-ray image114. In other words a fused image comprising the current or most-recent ultrasound image of the live stream112may be displayed. In order to improve the accuracy of the registration performed in step f), the accuracy of determining the location of ultrasound transducer115in step e) may optionally be improved by including one or more additional ultrasound transducers on medical device116. Thereto,FIG.2illustrates a first embodiment of an ICE catheter130with an esophageal temperature probe131within its field of view120. ICE catheter130corresponds to beamforming ultrasound probe113inFIG.1and esophageal temperature probe131corresponds to medical device116inFIG.1. InFIG.2, ICE catheter130has field of view120that includes, i.e. overlaps with, region of interest121. ICE catheter130is therefore arranged to generate a live stream of images112that includes region of interest121. Esophageal temperature probe131is included in, i.e. overlapped by, field of view120. Multiple ultrasound transducers115nare disposed on esophageal temperature probe131and configured to detect ultrasound beams transmitted by ICE catheter130. The locations of each of the multiple ultrasound transducers115nare determined respective ICE catheter130in the same manner as described above for the single ultrasound transducer inFIG.1. The additional location information provided by multiple ultrasound transducers115nimproves the accuracy of determining the location of esophageal temperature probe131respective ICE catheter130. For example, the multiple ultrasound transducers115nprovide redundancy such that if one of the transducers is outside field of view120, one of the others may be within the field of view. Moreover, the multiple positions of ultrasound transducers115nprovide additional rotation and/or orientation information about medical device116that may result in a more accurate registration in step f). In order to even further improve the accuracy of the registration performed in step f) above, the identification of the position of medical device116in X-ray image114in step c) may optionally be improved. Thereto the position of the ultrasound sensor, as identified in the X-ray image, may be used to provide an additional reference point on medical device116to improve the identification of the position of medical device116. An outline of the ultrasound sensor may for example be represented in the X-ray image owing to its characteristic shape or its characteristic X-ray absorption.
Alternatively, one or more X-ray fiducials may be included on medical device116. These fiducials are consequently represented in X-ray image114and may be used in the same manner to identify the position of medical device116in X-ray image114. Thereto,FIG.3illustrates a second embodiment of an ICE catheter130with an esophageal temperature probe131within its field of view120. Esophageal temperature probe131inFIG.3includes multiple X-ray fiducials132n. An X-ray fiducial has a characteristic X-ray Hounsfield unit absorption that allows it to be identified as such in the X-ray image, thereby allowing its distinction from typical anatomical image features such as tissue and bone. Clearly, multiple X-ray fiducials132nmay alternatively be used in combination with a single ultrasound transducer115. Preferably, X-ray fiducials132nand/or ultrasound transducers115nare arranged in a known configuration, or pattern, respective esophageal temperature probe131in order to simplify their identification, and consequently this known pattern is used in identifying the position of esophageal temperature probe131in X-ray image114. Again, a model that includes the known pattern in relation to esophageal temperature probe131may be used as described above in relation to medical device116to determine the position of esophageal temperature probe131. In one embodiment, X-ray image114inFIG.1is a historic X-ray image that is generated earlier in time than each ultrasound image of live stream112. In this embodiment, the at least one processor117receives an updated X-ray image that is more recent than the historic X-ray image and replaces the historic X-ray image with the updated X-ray image. The most recent X-ray image may thereby indicate recent anatomical changes, or indicate a different position within the anatomy. The latter may be useful when performing medical procedures in a new location in the anatomy. In another embodiment, live stream of ultrasound images112inFIG.1includes a current ultrasound image, i.e. a most recent, or a substantially real-time, ultrasound image. In this embodiment image registration system111is arranged to receive user input indicative of a position in the current ultrasound image, and provides, in the registered X-ray image, a marker corresponding to the position in the current ultrasound image. User input may be received from a user input device such as a switch, a keyboard, a mouse or a joystick. User input may for example be received from a switch or other control associated with beamforming ultrasound probe113or associated with medical device116. A user may for example move a mouse cursor to a position in the current ultrasound image and click the mouse, and this position is then indicated in the current ultrasound image by the marker. Various shapes and/or colors may be used for the marker, such as a circle or a cross and so forth. The marker may be displayed for a predetermined period of time, or indefinitely thereafter, to act as a record. In one example the user input is received from beamforming ultrasound probe113. The marker may for example indicate a position at which a subsequent medical procedure is planned to take place. A surgeon may for example use this facility as a navigational aid, e.g. to mark a ventricle in the heart, or as a reminder to return to that position later during a medical procedure to ablate tissue.
In another example, medical device116inFIG.1is an esophageal temperature probe and the at least one processor117receives, from the esophageal temperature probe, a temperature signal indicative of a temperature at a position on the esophageal temperature probe. Moreover the at least one processor117is configured to indicate, at a corresponding position in the registered X-ray image114, the received temperature. In so doing, the registered, or fused, image that is generated by image registration system111may display a record of the temperature that was recorded at the designated position. The temperature may be a historic or a live, i.e. substantially real-time, temperature. The temperature may thereby serve as a warning to a surgeon of the risk of damaging healthy tissue during a cardiac ablation procedure. User input may alternatively be received from a separate interventional device not illustrated inFIG.1such as a cardiac ablation catheter. In one example user input is received from a cardiac ablation catheter and corresponds to the activation of the catheter. The marker that is provided in the registered or fused X-ray image corresponds to the position of the activation and serves as an indication to an operator of each ablation point in the anatomy. In summary, the marker may therefore indicate a position of interest in the anatomy. In another embodiment a computer program product is disclosed for use with image registration system111inFIG.1. Thereto,FIG.4illustrates various method steps that may be carried out by the computer program product. The computer program product includes instructions which, when executed on at least one processor117of image registration system111, cause processor(s)117to carry out the method steps of: a) receiving live stream of ultrasound images112; b) receiving X-ray image114that includes the representation of medical device116; c) identifying, from received X-ray image114, a position of medical device116; d) receiving transmitted and detected signals corresponding to the ultrasound signals transmitted between beamforming ultrasound probe113and ultrasound transducer115disposed on medical device116; e) determining, based on the received signals, a location of ultrasound transducer115respective beamforming ultrasound probe113by i) selecting an ultrasound beam of beamforming ultrasound probe113corresponding to the maximum detected signal and ii) calculating, for the selected ultrasound beam, a range between beamforming ultrasound probe113and ultrasound transducer115based on a time of flight of said transmitted ultrasound signals; and f) registering each ultrasound image from live stream112with X-ray image114based on the identified position of medical device116, wherein said registration includes determining an offset from said identified position that is based on i) the predetermined spatial relationship of ultrasound transducer115respective medical device116and ii) the determined location of ultrasound transducer115respective beamforming ultrasound probe113. The computer program product may further include instructions to perform additional method steps described herein in relation to image registration system111. The computer program product may be provided by dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions can be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which can be shared.
Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and can implicitly include, without limitation, digital signal processor (DSP) hardware, read only memory (ROM) for storing software, random access memory (RAM), non-volatile storage, etc. Furthermore, embodiments of the present invention can take the form of a computer program product accessible from a computer-usable or computer-readable storage medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable storage medium can be any apparatus that may include, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or apparatus or device, or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), Blu-Ray™ and DVD. In summary, an image registration system for registering a live stream of ultrasound images of a beamforming ultrasound probe with an X-ray image has been described. The image registration system identifies, from the X-ray image, the position of a medical device represented in the X-ray image; and determines, based on ultrasound signals transmitted between the beamforming ultrasound probe and an ultrasound transducer disposed on the medical device, a location of the ultrasound transducer respective the beamforming ultrasound probe. Each ultrasound image from the live stream is registered with the X-ray image based on the identified position of the medical device. The registration includes determining an offset from said identified position that is based on i) a predetermined spatial relationship of the ultrasound transducer respective the medical device and ii) the determined location of the ultrasound transducer respective the beamforming ultrasound probe. Various embodiments and options have been described in relation to the system, and it is noted that the various embodiments may be combined to achieve further advantageous effects. | 30,408 |
11857375 | DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, an embodiment according to the technology of the present disclosure will be described in detail with reference to the drawings. First, the configuration of a medical imaging system10according to this embodiment will be described with reference toFIG.1. As illustrated inFIG.1, the medical imaging system10comprises a medical imaging apparatus12, a console14, and an image interpretation support apparatus16. The medical imaging apparatus12and the console14are connected so as to be able to communicate with each other, and the console14and the image interpretation support apparatus16are connected so as to be able to communicate with each other. The medical imaging apparatus12and the console14are operated by a radiographer, such as a radiology technician, and the image interpretation support apparatus16is operated by an image interpreter, such as a doctor. Next, the configuration of the medical imaging apparatus12according to this embodiment will be described with reference toFIGS.2to4. The medical imaging apparatus12has the functions of a mammography apparatus that irradiates the breast of a subject as an object with radiation R (for example, X-rays) to capture a radiographic image of the breast, and the functions of an ultrasonography apparatus that transmits ultrasonic waves to the breast, receives ultrasonic echoes reflected in the breast, and captures an ultrasound image. That is, the medical imaging apparatus12can capture two types of medical images having different imaging principles, that is, a radiographic image and an ultrasound image. The medical imaging apparatus12may be an apparatus that captures the image of the breast of the subject not only in a state in which the subject stands up (standing state) but also in a state in which the subject sits on, for example, a chair (including a wheelchair) (sitting state). As illustrated inFIG.2, the medical imaging apparatus12comprises a control unit20, a storage unit22, an interface (I/F) unit24, an operation unit26, an ultrasound probe27, and a probe moving mechanism28. Further, the medical imaging apparatus12comprises a radiation detector30, a compression plate driving unit32, a compression force detection sensor33, a compression plate34, a radiation emitting unit36, and a radiation source driving unit37. The control unit20, the storage unit22, the I/F unit24, the operation unit26, the ultrasound probe27, and the probe moving mechanism28are connected to each other through a bus39such that they can transmit and receive various kinds of information. Further, the radiation detector30, the compression plate driving unit32, the compression force detection sensor33, the radiation emitting unit36, and the radiation source driving unit37are connected to each other through the bus39such that they can transmit and receive various kinds of information. The control unit20controls the overall operation of the medical imaging apparatus12under the control of the console14. The control unit20includes a central processing unit (CPU)20A, a read only memory (ROM)20B, and a random access memory (RAM)20C. For example, various programs including an imaging processing program21which is executed by the CPU20A and performs control related to the capture of a medical image are stored in the ROM20B in advance. The RAM20C temporarily stores various kinds of data. The radiation detector30detects the radiation R transmitted through the breast which is an object. 
As illustrated inFIG.3, the radiation detector30is provided in an imaging table40. In the medical imaging apparatus12according to this embodiment, in a case in which imaging is performed, the breast of the subject is positioned on an imaging surface40A of the imaging table40by a radiographer. For example, the imaging surface40A with which the breast of the subject comes into contact is made of carbon in terms of the transmission and intensity of the radiation R. The radiation detector30detects the radiation R transmitted through the breast of the subject and the imaging table40, generates a radiographic image on the basis of the detected radiation R, and outputs image data indicating the generated radiographic image. The type of the radiation detector30according to this embodiment is not particularly limited. For example, the radiation detector30may be an indirect-conversion-type radiation detector that converts the radiation R into light and converts the converted light into charge or a direct-conversion-type radiation detector that directly converts the radiation R into charge. For example, the image data indicating the radiographic image captured by the radiation detector30and various other kinds of information are stored in the storage unit22. Examples of the storage unit22include a hard disk drive (HDD) and a solid state drive (SSD). The I/F unit24transmits and receives various kinds of information to and from the console14using wireless communication or wired communication. The image data indicating the radiographic image captured by the radiation detector30in the medical imaging apparatus12is transmitted to the console14through the I/F unit24. The operation unit26is provided as a plurality of switches in, for example, the imaging table40of the medical imaging apparatus12. In addition, the operation unit26may be provided as a touch panel switch or may be provided as a foot switch that is operated by the user's feet. The radiation emitting unit36comprises a radiation source36R. As illustrated inFIG.3, the radiation emitting unit36is provided in an arm portion42together with the imaging table40and a compression unit46. The medical imaging apparatus12according to this embodiment comprises the arm portion42, a base44, and a shaft portion45. The arm portion42is supported by the base44so as to be movable in the up-down direction (Z-axis direction). The shaft portion45connects the arm portion42to the base44. The radiation source driving unit37can relatively rotate the arm portion42with respect to the base44, using the shaft portion45as a rotation axis. As illustrated inFIGS.3and4, the compression plate driving unit32, the compression force detection sensor33, and the compression plate34are provided in the compression unit46. The compression unit46and the arm portion42can be relatively rotated with respect to the base44separately, using the shaft portion45as a rotation axis. In this embodiment, gears (not illustrated) are provided in each of the shaft portion45, the arm portion42, and the compression unit46. Each gear is switched between an engaged state and a disengaged state to connect each of the arm portion42and the compression unit46to the shaft portion45. One or both of the arm portion42and the compression unit46connected to the shaft portion45are rotated integrally with the shaft portion45. 
The compression plate34according to this embodiment is a plate-shaped compression member that compresses and fixes the breast and is moved in the up-down direction (Z-axis direction) by the compression plate driving unit32to compress the breast of the subject against the imaging table40. Hereinafter, for the movement direction of the compression plate34, the direction in which the breast is compressed, that is, the direction in which the compression plate34becomes closer to the imaging surface40A is referred to as a “compression direction” and the direction in which the compression of the breast is released, that is, the direction in which the compression plate34becomes closer to the radiation emitting unit36is referred to as a “decompression direction”. It is preferable that the compression plate34is transparent in order to check positioning or a compressed state in the compression of the breast. In addition, the compression plate34is made of a material having high transmittance for the radiation R. As illustrated inFIG.4, the compression unit46comprises the compression plate driving unit32including a motor31and a ball screw38and the compression force detection sensor33. The compression force detection sensor33detects the compression force of the compression plate34against the entire breast. In the example illustrated inFIG.4, the compression force detection sensor33detects the compression force on the basis of load applied to the motor31as a driving source of the compression plate34. The compression plate34is supported by the ball screw38and the motor31is driven to slide the compression plate34between the imaging table40and the radiation source36R. The compression force detection sensor33according to this embodiment is a strain gauge, such as a load cell. The compression force detection sensor33detects reaction force to the compression force of the compression plate34to detect the compression force of the compression plate34against the breast. A method for detecting the compression force is not limited to the configuration illustrated inFIG.4. For example, the compression force detection sensor33may be a semiconductor pressure sensor or a capacitive pressure sensor. Further, for example, the compression force detection sensor33may be provided in the compression plate34. As illustrated inFIG.3, the ultrasound probe27and the probe moving mechanism28are provided in the compression unit46. The ultrasound probe27is moved along an upper surface (a surface opposite to the surface on which the breast of the subject is placed) of the compression plate34by the probe moving mechanism28and scans the breast with ultrasonic waves to acquire an ultrasound image of the breast. The ultrasound probe27comprises a plurality of ultrasound transducers (not illustrated) that are arranged one-dimensionally or two-dimensionally. Each of the ultrasound transducers included in the ultrasound probe27transmits ultrasonic waves on the basis of a driving signal applied, receives ultrasonic echoes, and outputs a received signal. Each of the ultrasound transducers included in the ultrasound probe27is, for example, a transducer in which electrodes are formed at both ends of a piezoelectric material (piezoelectric body), such as a piezoelectric ceramic typified by lead (Pb) zirconate titanate (PZT) or a polymeric piezoelectric element typified by polyvinylidene difluoride (PVDF). 
In a case in which a pulsed or continuous wave driving signal is transmitted to apply a voltage to the electrodes of the transducer, the piezoelectric body is expanded and contracted. Pulsed or continuous ultrasonic waves are generated from each transducer by the expansion and contraction and the generated ultrasonic waves are combined to form an ultrasound beam. In addition, each transducer receives the propagated ultrasonic waves and is then expanded and contracted to generate an electric signal. The generated electric signal is output as a received ultrasound signal and is input to the console14. In a case in which ultrasonography is performed, the ultrasound probe27is moved along the upper surface of the compression plate34in a state in which an acoustic matching member, such as echo jelly, is applied onto the upper surface of the compression plate34. In the medical imaging apparatus12according to this embodiment, the control unit20can direct the probe moving mechanism28to move the ultrasound probe27, thereby automatically capturing an ultrasound image. FIG.5illustrates an aspect of cranio-caudal (CC) imaging that is simple imaging in which the radiation source36R is disposed on a normal line passing through the center of the detection surface of the radiation detector30so as to face the detection surface and emits the radiation R, the breast is vertically sandwiched and compressed, and an image of the breast is captured. In contrast,FIG.6illustrates an aspect of medio-lateral oblique (MLO) imaging which is simple imaging and in which the breast is obliquely sandwiched and compressed and an image of the breast is captured.FIGS.5and6illustrate examples in which the right breast is the object. However, the CC imaging and the MLO imaging are similarly performed for the left breast. In a case in which a radiographic image is captured, the control unit20according to this embodiment controls the radiation emitting unit36, the radiation detector30, and the compression plate driving unit32. The control unit20directs the compression plate driving unit32to move the compression plate34on the basis of the detection result of the compression force detection sensor33, thereby compressing the breast against the imaging table40. The control unit20adjusts imaging conditions, such as a tube voltage and a tube current, and directs the radiation source36R of the radiation emitting unit36to emit the radiation R. The control unit20directs the radiation detector30to detect the radiation R transmitted through the breast, thereby capturing a radiographic image. Further, in a case in which an ultrasound image is captured, the control unit20controls the ultrasound probe27and the probe moving mechanism28in a state in which the breast is compressed by the compression plate34. The control unit20checks the position of the ultrasound probe27on the basis of the detection result of a sensor (not illustrated) that detects the position of the ultrasound probe27and directs the probe moving mechanism28to move the ultrasound probe27. The control unit20directs the ultrasound probe27to transmit and receive ultrasonic waves while moving the ultrasound probe27using the probe moving mechanism28, thereby capturing an ultrasound image. The portion (for example, the radiation emitting unit36and the radiation detector30) controlled by the control unit20in a case in which a radiographic image is captured is an example of a first imaging apparatus according to the technology of the present disclosure. 
In addition, the portion (for example, the ultrasound probe27and the probe moving mechanism28) controlled by the control unit20in a case in which an ultrasound image is captured is an example of a second imaging apparatus according to the technology of the present disclosure. Next, the hardware configuration of the console14according to this embodiment will be described with reference toFIG.7. The console14inputs an imaging order and various kinds of information acquired from, for example, a radiology information system (RIS) through a network and commands input by the user through, for example, an operation unit56to the medical imaging apparatus12. As illustrated inFIG.7, the console14comprises a control unit50, a storage unit52, an I/F unit54, the operation unit56, and a display unit58. The control unit50, the storage unit52, the I/F unit54, the operation unit56, and the display unit58are connected to each other through a bus59such that they can transmit and receive various kinds of information. Examples of the console14include information processing apparatuses such as a personal computer and a server computer. The control unit50controls the overall operation of the console14. The control unit50comprises a CPU50A, a ROM50B, and a RAM50C. Various programs including a control processing program51executed by the CPU50A are stored in the ROM50B in advance. The RAM50C temporarily stores various kinds of data. The storage unit52stores, for example, image data indicating the medical image captured by the medical imaging apparatus12and various other kinds of information. Examples of the storage unit52include an HDD and an SSD. The operation unit56is used by the user to input, for example, commands related to the capture of a medical image and various kinds of information. Therefore, the operation unit56according to this embodiment includes an irradiation command button that is pressed by the user to command the emission of the radiation R. The operation unit56is not particularly limited. Examples of the operation unit56include various switches, a touch panel, a touch pen, and a mouse. The display unit58displays various kinds of information. The operation unit56and the display unit58may be integrated into a touch panel display. The I/F unit54transmits and receives various kinds of information to and from the medical imaging apparatus12and the image interpretation support apparatus16using wireless communication or wired communication. Next, the hardware configuration of the image interpretation support apparatus16according to this embodiment will be described with reference toFIG.8. As illustrated inFIG.8, the image interpretation support apparatus16comprises a control unit70, a storage unit72, an I/F unit74, an operation unit76, and a display unit78. The control unit70, the storage unit72, the I/F unit74, the operation unit76, and the display unit78are connected to each other through a bus79such that they can transmit and receive various kinds of information. Examples of the image interpretation support apparatus16include information processing apparatuses such as a personal computer and a server computer. The control unit70controls the overall operation of the image interpretation support apparatus16. The control unit70includes a CPU70A, a ROM70B, and a RAM70C. Various programs including an analysis processing program71executed by the CPU70A are stored in the ROM70B in advance. The RAM70C temporarily stores various kinds of data. 
The storage unit72stores image data indicating the medical image transmitted from the console14and other various kinds of information. Examples of the storage unit72include an HDD and an SSD. The operation unit76includes, for example, a mouse and a keyboard and is used for the operation of the user. The display unit78displays various kinds of information. The operation unit76and the display unit78may be integrated into a touch panel display. The I/F unit74transmits and receives various kinds of information to and from the console14using wireless communication or wired communication. Next, the functional configuration of the medical imaging apparatus12according to this embodiment will be described with reference toFIG.9. As illustrated inFIG.9, the medical imaging apparatus12comprises an imaging control unit80, a compression control unit82, an image acquisition unit84, an image output unit86, and a position acquisition unit88. The CPU20A executes the imaging processing program21to function as the imaging control unit80, the compression control unit82, the image acquisition unit84, the image output unit86, and the position acquisition unit88. The imaging control unit80controls the radiation emitting unit36and the radiation detector30such that a radiographic image is captured in a state in which the breast as an object is fixed by the compression plate34. Hereinafter, the radiographic image captured by this control is referred to as a “first medical image”. In addition, the imaging control unit80controls the ultrasound probe27and the probe moving mechanism28such that an ultrasound image is captured in a state in which the fixation of the breast as an object is maintained after the first medical image is captured. Hereinafter, the ultrasound image captured by this control is referred to as a “second medical image”. In addition, the state in which the fixation of the breast is maintained includes a case in which the breast is fixed with the same fixing force and a case in which the fixing force is changed without releasing the fixation of the breast. In this embodiment, the imaging control unit80performs control to capture the second medical image in a state in which the force of fixing the breast is different from that in a case in which the first medical image is captured. Further, after the second medical image is captured, the imaging control unit80controls the ultrasound probe27and the probe moving mechanism28on the basis of positional information indicating the position of a region of interest acquired by the position acquisition unit88which will be described below such that an ultrasound image having the region of interest as a main object is captured. Hereinafter, the ultrasound image captured by this control is referred to as a “region-of-interest image”. At this time, the imaging control unit80performs control such that the region-of-interest image is captured while directing the ultrasound probe27to generate ultrasonic waves toward the region of interest at a plurality of different angles. That is, in this embodiment, the second medical image and the region-of-interest image are captured by the same imaging principle under different imaging conditions. Here, the different imaging conditions mean that the incident angle of ultrasonic waves on the breast and the imaging range are different. 
The second medical image according to this embodiment is a group of a plurality of images of the entire region of the breast captured while the ultrasound probe27is moved over the entire region of the breast. In contrast, the region-of-interest image according to this embodiment is an ultrasound image having the region of interest as the main object. The region-of-interest image is not a group of a plurality of images of the entire region of the breast, but is a group of a plurality of images obtained by narrowing the imaging position to the position of the region of interest in the entire region of the breast. The group of the plurality of images forming the region-of-interest image includes images captured at the same position at different angles. In the capture of the first medical image under the control of the imaging control unit80, the compression control unit82performs control to set the compression force of the compression plate34against the breast as a first force before the first medical image is captured. Further, in the capture of the second medical image under the control of the imaging control unit80, the compression control unit82performs control to set the compression force of the compression plate34against the breast as a second force less than the first force until the second medical image is captured after the first medical image is captured. Furthermore, in the capture of the region-of-interest image under the control of the imaging control unit80, the compression control unit82maintains the compression force of the compression plate34against the breast at the second force in a case in which the second medical image is captured. In addition, the compression control unit82sets the second force such that the amount of change in the thickness of the breast, in a case in which the compressed state is changed from a state in which the breast is compressed with the first force to a state in which the breast is compressed with the second force, is equal to or less than a predetermined amount of change. An example of the predetermined amount of change is the upper limit at which the change in the compression force does not change the overlap of the mammary gland tissues, that is, the development of the mammary gland tissues, or changes the overlap only within an allowable range. In a case in which the region of interest has been detected by an analysis unit92which will be described below, the compression control unit82performs control to release the compressed state of the breast after the region-of-interest image is captured. In addition, in a case in which the region of interest has not been detected by the analysis unit92which will be described below, the compression control unit82performs control to release the compressed state of the breast without capturing the region-of-interest image. The image acquisition unit84acquires the first medical image, the second medical image, and the region-of-interest image captured under the control of the imaging control unit80. The image acquisition unit84is an example of an acquisition unit according to the technology of the present disclosure. The image output unit86outputs the first medical image, the second medical image, and the region-of-interest image acquired by the image acquisition unit84to the console14.
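The thickness-change constraint on the second force can be made concrete with a small sketch. The compliance model (thickness_at_n), the step size, and the 2 mm default are illustrative assumptions; the text above specifies only the constraint the second force must satisfy, not how it is searched.

def select_second_force(first_force_n, thickness_at_n,
                        max_thickness_change_mm=2.0, step_n=5.0):
    """Hypothetical search for the largest reduction of the compression force
    whose predicted breast-thickness change stays within a permitted amount,
    so that the mammary-gland overlap seen in the first medical image is
    preserved for the second (ultrasound) image.
    thickness_at_n: callable mapping a compression force (N) to a predicted
    breast thickness (mm), e.g. from a calibrated compliance model."""
    t_first = thickness_at_n(first_force_n)
    force = first_force_n
    while force - step_n > 0:
        candidate = force - step_n
        if abs(thickness_at_n(candidate) - t_first) > max_thickness_change_mm:
            break  # relaxing further would change the tissue development too much
        force = candidate
    return force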
The console14transmits the first medical image, the second medical image, and the region-of-interest image input from the medical imaging apparatus12to the image interpretation support apparatus16. The position acquisition unit88acquires positional information indicating the position of the region of interest transmitted from the console14which will be described below or information indicating that the region of interest has not been detected. Next, the functional configuration of the image interpretation support apparatus16according to this embodiment will be described with reference toFIG.10. As illustrated inFIG.10, the image interpretation support apparatus16comprises a receiving unit90, the analysis unit92, and a position output unit94. The CPU70A executes the analysis processing program71to function as the receiving unit90, the analysis unit92, and the position output unit94. The receiving unit90receives the first medical image, the second medical image, and the region-of-interest image transmitted from the console14. The analysis unit92analyzes the first medical image received by the receiving unit90using a known technique, such as computer-aided diagnosis (CAD) to detect the region of interest. The region of interest referred to here is a partial region of the first medical image and means a region including a lesion. In addition, the region of interest is not limited to a region that is definitely diagnosed as a lesion and may be a region in which the possibility of a lesion is recognized. In a case in which there are no lesions in the first medical image, the analysis unit92does not detect the region of interest. The region of interest is not limited to the region including a lesion and may be, for example, a mammary gland region. In this embodiment, a period for which the second medical image is captured (hereinafter, referred to as an “imaging period”) and a period for which the analysis unit92analyzes the first medical image (hereinafter, referred to as an “analysis period”) at least partially overlap each other. In addition, how the imaging period and the analysis period overlap each other is not particularly limited as long as the imaging period and the analysis period at least partially overlap each other. For example, as illustrated inFIG.11A, the analysis of the first medical image may start after the capture of the second medical image starts and the analysis of the first medical image may end before the capture of the second medical image ends. In addition, for example, as illustrated inFIG.11B, the analysis of the first medical image may start before the capture of the second medical image starts and the analysis of the first medical image may end before the capture of the second medical image ends. Further, for example, as illustrated inFIG.11C, the analysis of the first medical image may start after the capture of the second medical image starts and the analysis of the first medical image may end after the capture of the second medical image ends. Furthermore, for example, as illustrated inFIG.11D, the analysis of the first medical image may start before the capture of the second medical image starts and the analysis of the first medical image may end after the capture of the second medical image ends. Hereinafter, the case illustrated inFIG.11Awill be described as an example. 
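The FIG. 11A timing can be pictured with a small concurrency sketch. The probe and cad objects and their methods are hypothetical stand-ins; the point is only that the analysis period nests inside the imaging period, so the positional information is ready before the ultrasound sweep finishes.

import threading

def run_overlapped(probe, cad, first_image):
    # Overlap the analysis period with the imaging period (the FIG. 11A case):
    # CAD analysis of the first medical image runs while the second medical
    # image is still being captured.
    result = {}
    capture = threading.Thread(
        target=lambda: result.update(second_image=probe.scan_entire_breast()))
    analyze = threading.Thread(
        target=lambda: result.update(roi=cad.detect_region_of_interest(first_image)))
    capture.start()   # imaging period begins
    analyze.start()   # analysis period begins after capture has started
    analyze.join()    # analysis period typically ends first (FIG. 11A)
    capture.join()    # imaging period ends
    return result     # 'roi' is available by the time the sweep completes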
In a case in which the analysis unit92detects the region of interest, the position output unit94outputs positional information indicating the position of the region of interest to the console14. An example of the positional information indicating the position of the region of interest is coordinate information of the region of interest in the first medical image. The console14transmits the positional information indicating the position of the region of interest which has been input from the image interpretation support apparatus16to the medical imaging apparatus12. The position output unit94is an example of an output unit according to the technology of the present disclosure. In a case in which the analysis unit92has not detected the region of interest, the position output unit94outputs information indicating that the region of interest has not been detected (hereinafter, referred to as “non-detection information”) to the console14. The console14transmits the non-detection information input from the image interpretation support apparatus16to the medical imaging apparatus12. Next, the operation of the medical imaging system10according to this embodiment will be described with reference toFIGS.12and13. In a case in which the CPU20A of the medical imaging apparatus12executes the imaging processing program21, an imaging process illustrated inFIG.12is performed. The CPU70A of the image interpretation support apparatus16executes the analysis processing program71to perform an analysis process illustrated inFIG.13. The imaging process illustrated inFIG.12and the analysis process illustrated inFIG.13may be performed for one or both of the CC imaging and the MLO imaging. In Step S10ofFIG.12, the compression control unit82performs control to set the compression force of the compression plate34against the breast as the first force and to compress the breast with the compression plate34as described above. In Step S12, the imaging control unit80controls the radiation emitting unit36and the radiation detector30such that a process of capturing the first medical image of the breast as an object is started. In Step S14, the image acquisition unit84acquires the first medical image obtained by completing the imaging started by the process in Step S12. In Step S16, the image output unit86outputs the first medical image acquired by the process in Step S14to the console14. The console14transmits the first medical image input from the medical imaging apparatus12to the image interpretation support apparatus16by the process in Step S16. The first medical image transmitted by the console14is received by the image interpretation support apparatus16in Step S40which will be described below. In Step S18, the compression control unit82performs control to set the compression force of the compression plate34against the breast as the second force less than the first force as described above. In Step S20, the imaging control unit80controls the ultrasound probe27and the probe moving mechanism28such that a process of capturing the second medical image of the breast as an object is started. In Step S22, the position acquisition unit88acquires the positional information indicating the position of the region of interest or the non-detection information transmitted from the console14. In Step S24, the image acquisition unit84acquires the second medical image obtained by completing the imaging started by the process in Step S20.
In Step S26, the image output unit86outputs the second medical image acquired by the process in Step S24to the console14. The console14transmits the second medical image input from the medical imaging apparatus12by the process in Step S26to the image interpretation support apparatus16. The second medical image transmitted by the console14is received by the image interpretation support apparatus16in Step S48or Step S54which will be described below. In Step S28, the position acquisition unit88determines whether or not the information acquired in Step S22is the positional information indicating the position of the region of interest. In a case in which the information acquired in Step S22is the non-detection information, the determination result in Step S28is “No” and the process proceeds to Step S36. In a case in which the determination result in Step S28is “Yes”, the process proceeds to Step S30. In Step S30, the imaging control unit80controls the ultrasound probe27and the probe moving mechanism28on the basis of the positional information acquired by the process in Step S22to start the process of capturing the region-of-interest image, as described above. In Step S32, the image acquisition unit84acquires the region-of-interest image obtained by completing the imaging started by the process in Step S30. In Step S34, the image output unit86outputs the region-of-interest image acquired by the process in Step S32to the console14. The console14transmits the region-of-interest image input from the medical imaging apparatus12by the process in Step S34to the image interpretation support apparatus16. The region-of-interest image transmitted by the console14is received by the image interpretation support apparatus16in Step S50which will be described below. In Step S36, the compression control unit82performs control to release the compressed state of the breast. In a case in which the process in Step S36ends, the imaging process ends. In Step S40illustrated inFIG.13, the receiving unit90receives the first medical image transmitted from the console14. In Step S42, the analysis unit92performs the process of analyzing the first medical image received by the process in Step S40to detect the region of interest as described above. In Step S44, the analysis unit92determines whether or not the region of interest has been detected in Step S42. In a case in which the determination result is “No”, the process proceeds to Step S52. In a case in which the determination result is “Yes”, the process proceeds to Step S46. In Step S46, the position output unit94outputs positional information indicating the position of the region of interest detected by the process in Step S42to the console14. The console14transmits the positional information input from the image interpretation support apparatus16by the process in Step S46to the medical imaging apparatus12. The positional information transmitted by the console14is acquired by the medical imaging apparatus12in Step S22. In Step S48, the receiving unit90receives the second medical image transmitted from the console14. In Step S50, the receiving unit90receives the region-of-interest image transmitted from the console14. In Step S52, the position output unit94outputs the non-detection information to the console14. The console14transmits the non-detection information input from the image interpretation support apparatus16by the process in Step S52to the medical imaging apparatus12, and the non-detection information is acquired by the medical imaging apparatus12in Step S22. In Step S54, the receiving unit90receives the second medical image transmitted from the console14as in Step S48.
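Steps S10 through S36 can be summarized in a compact control-flow sketch. The apparatus and console objects, their method names, and the info structure below are hypothetical stand-ins; the text above does not define a programming interface, so this is an illustrative sketch only.

def imaging_process(apparatus, console):
    # S10: compress the breast with the first force.
    apparatus.compress(force=apparatus.first_force)
    # S12-S14: capture and acquire the first medical image (radiograph).
    first_image = apparatus.capture_radiograph()
    # S16: output to the console, which forwards it for CAD analysis.
    console.send(first_image)
    # S18: reduce the compression force to the second force.
    apparatus.compress(force=apparatus.second_force)
    # S20: start capturing the second medical image (ultrasound sweep).
    apparatus.start_ultrasound_sweep()
    # S22: positional or non-detection information arrives during the sweep.
    info = console.receive_position_or_non_detection()
    # S24-S26: acquire the completed second medical image and output it.
    console.send(apparatus.finish_ultrasound_sweep())
    # S28-S34: capture the region-of-interest image only if a position came back.
    if info.is_position:
        console.send(apparatus.capture_roi_ultrasound(info.position))
    # S36: release the compressed state of the breast.
    apparatus.release_compression()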
In a case in which the process in Step S50ends, the analysis process ends. In a case in which the process in Step S54ends, the analysis process ends. The first medical image, the second medical image, and the region-of-interest image are used for image interpretation by the user. As described above, according to this embodiment, for example, as illustrated inFIG.14, the second medical image is captured after the first medical image is captured. In addition, the region of interest is detected from the first medical image for a period that at least partially overlaps the period for which the second medical image is captured. That is, it is possible to capture the region-of-interest image without waiting for a long time after the second medical image is captured. Therefore, it is possible to shorten the time from the start of the capture of an image of a patient to the interpretation of the image by the user. As a result, it is possible to reduce a burden on the patient. In the above-described embodiment, the case in which the region-of-interest image is the ultrasound image captured by the same imaging principle as the second medical image has been described. However, the invention is not limited thereto. The region-of-interest image may be a radiographic image captured by the same imaging principle as the first medical image. In this case, the imaging control unit80controls the radiation emitting unit36and the radiation detector30on the basis of the positional information indicating the position of the region of interest acquired by the position acquisition unit88such that a radiographic image having the region of interest as the main object is captured after the second medical image is captured. In some cases, this imaging is referred to as spot imaging. At this time, for example, the imaging control unit80performs control to capture a region-of-interest image using a larger radiation dose than that in a case in which the first medical image is captured. In this case, the region-of-interest image is a radiographic image having the region of interest as the main object and is an image obtained by narrowing the imaging range to a region having the region of interest as the center in the imaging range of the first medical image. In the above-described embodiment, the case in which the first medical image is a radiographic image and the second medical image is an ultrasound image has been described. However, the invention is not limited thereto. For example, the first medical image may be an ultrasound image and the second medical image may be a radiographic image. Further, in the above-described embodiment, the case in which the region-of-interest image is captured using the ultrasound probe27provided in the compression unit46has been described. However, the invention is not limited thereto. For example, the region-of-interest image may be captured using the hand-held ultrasound probe27. In this embodiment, the console14displays the positional information indicating the position of the region of interest output from the image interpretation support apparatus16on the display unit58. The user takes a region-of-interest image using the hand-held ultrasound probe27on the basis of the positional information displayed on the display unit58. Further, in the above-described embodiment, the imaging control unit80may perform control to capture the region-of-interest image while changing the compressed state of the breast. 
In this case, for example, the compression control unit82performs control to increase or decrease the compression force of the compression plate34against the breast. In this case, for example, the image interpretation support apparatus16derives hardness information of the region of interest on the basis of a difference in the amount of distortion corresponding to a difference in the force of compressing the breast. The user can make a more accurate diagnosis by using the hardness information of the region of interest, which provides a different viewpoint, in addition to the radiographic image. Further, in the above-described embodiment, the case in which the mammography apparatus captures the radiographic image of the breast as an object has been described. However, the invention is not limited thereto. For example, a magnetic resonance imaging (MRI) apparatus may capture the image of the breast as an object. In this embodiment, for example, the images are captured in a state in which the breast is fixed in a hole of the imaging table. Further, in the medical imaging apparatus12according to the above-described embodiment, the ultrasound probe27scans the upper surface of the compression plate34to capture an ultrasound image from the side of the radiation source36R. However, the medical imaging apparatus12may be an apparatus that captures an ultrasound image from an opposite side, that is, the side of the imaging table40. In addition, each functional unit of the medical imaging apparatus12and each functional unit of the image interpretation support apparatus16according to the above-described embodiment may be provided in one apparatus. Further, at least one of these functional units may be provided in an apparatus different from the apparatuses of the medical imaging system10implemented in the above-described embodiment. In this case, for example, the receiving unit90, the analysis unit92, and the position output unit94are provided in the console14. In addition, in the above-described embodiment, for example, the following various processors can be used as a hardware structure of processing units performing various processes, such as each functional unit of the medical imaging apparatus12and each functional unit of the image interpretation support apparatus16. The various processors include, for example, a CPU which is a general-purpose processor executing software (program) to function as various processing units, a programmable logic device (PLD), such as a field programmable gate array (FPGA), which is a processor whose circuit configuration can be changed after manufacture, and a dedicated electric circuit, such as an application-specific integrated circuit (ASIC), which is a processor having a dedicated circuit configuration designed to perform a specific process. One processing unit may be configured by one of the various processors or a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs or a combination of a CPU and an FPGA). Further, a plurality of processing units may be configured by one processor. A first example of the configuration in which a plurality of processing units are configured by one processor is an aspect in which one processor is configured by a combination of one or more CPUs and software and functions as a plurality of processing units. A representative example of this aspect is a client computer or a server computer.
A second example of the configuration is an aspect in which a processor that implements the functions of the entire system including a plurality of processing units using one integrated circuit (IC) chip is used. A representative example of this aspect is a system-on-chip (SoC). In this manner, various processing units are configured using one or more of the various processors as the hardware structure. Furthermore, specifically, an electric circuit (circuitry) obtained by combining circuit elements, such as semiconductor elements, can be used as the hardware structure of the various processors. In the above-described embodiment, the aspect in which the imaging processing program21is stored (installed) in the ROM20B in advance has been described. However, the invention is not limited thereto. The imaging processing program21may be recorded on a recording medium, such as a compact disk read only memory (CD-ROM), a digital versatile disk read only memory (DVD-ROM), or a universal serial bus (USB) memory, and then provided. In addition, the imaging processing program21may be downloaded from an external apparatus through the network. In the above-described embodiment, the analysis processing program71is stored (installed) in the ROM70B in advance. However, the present invention is not limited thereto. The analysis processing program71may be recorded on a recording medium, such as a CD-ROM, a DVD-ROM, or a USB memory, and then provided. The analysis processing program71may be downloaded from an external apparatus through the network. EXPLANATION OF REFERENCES 10: medical imaging system; 12: medical imaging apparatus; 14: console; 16: image interpretation support apparatus; 20, 50, 70: control unit; 20A, 50A, 70A: CPU; 20B, 50B, 70B: ROM; 20C, 50C, 70C: RAM; 21: imaging processing program; 22, 52, 72: storage unit; 24, 54, 74: I/F unit; 26, 56, 76: operation unit; 27: ultrasound probe; 28: probe moving mechanism; 30: radiation detector; 31: motor; 32: compression plate driving unit; 33: compression force detection sensor; 34: compression plate; 36: radiation emitting unit; 36R: radiation source; 37: radiation source driving unit; 38: ball screw; 39, 59, 79: bus; 40: imaging table; 40A: imaging surface; 42: arm portion; 44: base; 45: shaft portion; 46: compression unit; 51: control processing program; 58, 78: display unit; 71: analysis processing program; 80: imaging control unit; 82: compression control unit; 84: image acquisition unit; 86: image output unit; 88: position acquisition unit; 90: receiving unit; 92: analysis unit; 94: position output unit; R: radiation | 43,896 |
11857376 | DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure relates. For example, it is fully contemplated that the features, components, and/or steps described with respect to one embodiment may be combined with the features, components, and/or steps described with respect to other embodiments of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately. FIG.1is a diagrammatic schematic view of an ultrasound system100according to some embodiments of the present disclosure. The ultrasound system100may be used to carry out intravascular ultrasound imaging of a lumen of a patient. The system100may include an ultrasound device110, a patient interface module (PIM)150, an ultrasound processing system160, and/or a monitor170. The ultrasound device110is structurally arranged (e.g., sized and/or shaped) to be positioned within anatomy102of a patient. The ultrasound device110obtains ultrasound imaging data from within the anatomy102. The ultrasound processing system160can control the acquisition of ultrasound imaging data and may be used to generate an image of the anatomy102(using the ultrasound imaging data received via the PIM150) that is displayed on the monitor170. In some embodiments, the system100and/or the PIM150can include features similar to those described in U.S. Patent Application No. 62/574,455, titled “DIGITAL ROTATIONAL PATIENT INTERFACE MODULE,” filed Oct. 19, 2017, U.S. Patent Application No. 62/574,687, titled “INTRALUMINAL DEVICE REUSE PREVENTION WITH PATIENT INTERFACE MODULE AND ASSOCIATED DEVICES, SYSTEMS, AND METHODS,” filed Oct. 19, 2017, U.S. Patent Application No. 62/574,835, titled “INTRALUMINAL MEDICAL SYSTEM WITH OVERLOADED CONNECTORS,” filed Oct. 20, 2017, and U.S. Patent Application No. 62/574,610, titled “HANDHELD MEDICAL INTERFACE FOR INTRALUMINAL DEVICE AND ASSOCIATED DEVICES, SYSTEMS, AND METHODS,” filed Oct. 19, 2017, each of which is incorporated by reference in its entirety. Generally, the ultrasound device110can be a catheter, a guide catheter, or a guide wire. The ultrasound device110includes a flexible elongate member116. As used herein, “elongate member” or “flexible elongate member” includes at least any thin, long, flexible structure structurally arranged (e.g., sized and/or shaped) to be positioned within a lumen104of the anatomy102. For example, a distal portion114of the flexible elongate member116is positioned within the lumen104, while a proximal portion112of the flexible elongate member116is positioned outside of the body of the patient. The flexible elongate member116can include a longitudinal axis LA. In some instances, the longitudinal axis LA can be a central longitudinal axis of the flexible elongate member116. In some embodiments, the flexible elongate member116can include one or more polymer/plastic layers formed of various grades of nylon, Pebax, polymer composites, polyimides, and/or Teflon.
In some embodiments, the flexible elongate member116can include one or more layers of braided metallic and/or polymer strands. The braided layer(s) can be tightly or loosely braided in any suitable configuration, including any suitable pick count (picks per inch). In some embodiments, the flexible elongate member116can include one or more metallic and/or polymer coils. All or a portion of the flexible elongate member116may have any suitable geometric cross-sectional profile (e.g., circular, oval, rectangular, square, elliptical, etc.) or non-geometric cross-sectional profile. For example, the flexible elongate member116can have a generally cylindrical profile with a circular cross-sectional profile that defines an outer diameter of the flexible elongate member116. For example, the outer diameter of the flexible elongate member116can be any suitable value for positioning within the anatomy102, including between approximately 1 Fr (0.33 mm) and approximately 15 Fr (5 mm), including values such as 3.5 Fr (1.17 mm), 5 Fr (1.67 mm), 7 Fr (2.33 mm), 8.2 Fr (2.73 mm), 9 Fr (3 mm), and/or other suitable values both larger and smaller. The ultrasound device110may or may not include one or more lumens extending along all or a portion of the length of the flexible elongate member116. The lumen of the ultrasound device110can be structurally arranged (e.g., sized and/or shaped) to receive and/or guide one or more other diagnostic and/or therapeutic instruments. If the ultrasound device110includes lumen(s), the lumen(s) may be centered or offset with respect to the cross-sectional profile of the device110. In the illustrated embodiment, the ultrasound device110is a catheter and includes a lumen at the distal portion114of the flexible elongate member116. A guide wire140extends through the lumen of the ultrasound device110between an entry/exit port142and an exit/entry port at a distal end118of the flexible elongate member116. Generally, the guide wire140is a thin, long, flexible structure that is structurally arranged (e.g., sized and/or shaped) to be disposed within the lumen104of the anatomy102. During a diagnostic and/or therapeutic procedure, a medical professional typically first inserts the guide wire140into the lumen104of the anatomy102and moves the guide wire140to a desired location within the anatomy102, such as adjacent to an occlusion106. The guide wire140facilitates introduction and positioning of one or more other diagnostic and/or therapeutic instruments, including the ultrasound device110, at the desired location within the anatomy102. For example, the ultrasound device110moves through the lumen104of the anatomy102along the guide wire140. In some embodiments, the lumen of the ultrasound device110can extend along the entire length of the flexible elongate member116. In the illustrated embodiment, the exit/entry port142is positioned proximally of components130of the ultrasound device110. In some embodiments, the exit/entry port142, the exit/entry port at the distal end118, and/or the lumen of the ultrasound device110is positioned distally of the components130. In some embodiments, the ultrasound device110is not used with a guide wire, and the exit/entry port142can be omitted from the ultrasound device110. The anatomy102may represent any fluid-filled or surrounded structures, both natural and man-made. For example, the anatomy102can be within the body of a patient. Fluid can flow through the lumen104of the anatomy102.
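For reference, the French-gauge figures quoted above follow the standard conversion (not stated explicitly in the text), in which the gauge divided by three gives the diameter in millimetres:

\[
d_{\text{mm}} = \frac{\text{Fr}}{3}, \qquad \text{e.g. } 3.5\,\text{Fr} = 1.17\,\text{mm}, \quad 7\,\text{Fr} = 2.33\,\text{mm}, \quad 15\,\text{Fr} = 5\,\text{mm}.
\]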
In some instances, the ultrasound device110can be referenced as an intraluminal device. The anatomy102can be a vessel, such as a blood vessel, in which blood flows through the lumen104. In some instances, the ultrasound device110can be referenced as an intravascular device. In various embodiments, the blood vessel is an artery or a vein of a patient's vascular system, including cardiac vasculature, peripheral vasculature, neural vasculature, renal vasculature, and/or any other suitable anatomy/lumen inside the body. The anatomy102can be tortuous in some instances. For example, the device110may be used to examine any number of anatomical locations and tissue types, including without limitation, organs including the liver, heart, kidneys, gall bladder, pancreas, lungs, esophagus; ducts; intestines; nervous system structures including the brain, dural sac, spinal cord and peripheral nerves; the urinary tract; as well as valves within the blood, chambers or other parts of the heart, and/or other systems of the body. In addition to natural structures, the device110may be used to examine man-made structures such as, but without limitation, heart valves, stents, shunts, filters and other devices. The occlusion106of the anatomy102is generally representative of any blockage or other structural arrangement that results in a restriction to the flow of fluid through the lumen104, for example, in a manner that is deleterious to the health of the patient. For example, the occlusion106narrows the lumen104such that the cross-sectional area of the lumen104and/or the available space for fluid to flow through the lumen104is decreased. Where the anatomy102is a blood vessel, the occlusion106may be a result of plaque buildup, including without limitation plaque components such as fibrous, fibro-lipidic (fibro fatty), necrotic core, calcified (dense calcium), blood, fresh thrombus, and/or mature thrombus. In some instances, the occlusion106can be referenced as thrombus, a stenosis, and/or a lesion. Generally, the composition of the occlusion106will depend on the type of anatomy being evaluated. Healthier portions of the anatomy102may have a uniform or symmetrical profile (e.g., a cylindrical profile with a circular cross-sectional profile). The occlusion106may not have a uniform or symmetrical profile. Accordingly, diseased portions of the anatomy102, with the occlusion106, will have a non-symmetric and/or otherwise irregular profile. While the anatomy102is illustrated inFIG.1as having a single occlusion106, it is understood that the devices, systems, and methods described herein have similar application for anatomy having multiple occlusions. The ultrasound device110may include ultrasound imaging components130disposed at the distal portion114of the flexible elongate member116. The ultrasound imaging components130may be configured to emit ultrasonic energy into the anatomy102while the device110is positioned within the lumen104. In some embodiments, the components130may be provided in various numbers and configurations. For example, some of the components130may be configured to transmit ultrasound pulses while others may be configured to receive ultrasound echoes. The components130may be configured to emit different frequencies of ultrasonic energy into the anatomy102depending on the type of tissue being imaged and the type of imaging being used. In some embodiments, the components130include ultrasound transducer(s).
For example, the components130can be configured to generate and emit ultrasound energy into the anatomy102in response to being activated by an electrical signal. In some embodiments, the components130include a single ultrasound transducer. In some embodiments, the components130include an ultrasound transducer array including more than one ultrasound transducer. For example, an ultrasound transducer array can include any suitable number of individual transducers between 2 transducers and 1000 transducers, including values such as 2 transducers, 4 transducers, 36 transducers, 64 transducers, 128 transducers, 500 transducers, 812 transducers, and/or other values both larger and smaller. The ultrasound transducer array including components130can have any suitable configuration, such as a phased array, including a planar array, a curved array, a circumferential array, an annular array, etc. For example, the ultrasound transducer array including components130can be a one-dimensional array or a two-dimensional array in some instances. In some instances, the ultrasound imaging components130may be part of a rotational ultrasound device as described in U.S. Patent Application No. 62/574,455, titled “DIGITAL ROTATIONAL PATIENT INTERFACE MODULE,” filed Oct. 19, 2017. In some embodiments, the active area of the ultrasound imaging components130can include one or more transducer materials and/or one or more segments of ultrasound elements (e.g., one or more rows, one or more columns, and/or one or more orientations) that can be uniformly or independently controlled and activated. The active area of the components130can be patterned or structured in various basic or complex geometries. The components130can be disposed in a side-looking orientation (e.g., ultrasonic energy emitted perpendicular and/or orthogonal to the longitudinal axis LA) and/or a forward-looking orientation (e.g., ultrasonic energy emitted parallel to and/or along the longitudinal axis LA). In some instances, the components130are structurally arranged to emit and/or receive ultrasonic energy at an oblique angle relative to the longitudinal axis LA, in a proximal or distal direction. In some embodiments, ultrasonic energy emission can be electronically steered by selective triggering of ultrasound imaging components130in an array. The ultrasound transducer(s) of the components130can be a piezoelectric micromachined ultrasound transducer (PMUT), capacitive micromachined ultrasonic transducer (CMUT), single crystal, lead zirconate titanate (PZT), PZT composite, other suitable transducer type, and/or combinations thereof. Depending on the transducer material, the manufacturing process for ultrasound transducer(s) can include dicing, kerfing, grinding, sputtering, wafer technologies (e.g., SMA, sacrificial layer deposition), other suitable processes, and/or combinations thereof. In some embodiments, the components130are configured to obtain ultrasound imaging data associated with the anatomy102, such as the occlusion106. The ultrasound imaging data obtained by the ultrasound imaging components130can be used by a medical professional to diagnose the patient, including evaluating the occlusion106of the anatomy102. For imaging, the components130can be configured to both emit ultrasonic energy into the lumen104and/or the anatomy102, and to receive reflected ultrasound echoes representative of fluid and/or tissue of lumen104and/or the anatomy102.
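As a concrete illustration of the electronic steering by selective triggering mentioned above, the sketch below computes per-element firing delays for a linear array using the standard geometric delay law tau_n = n * pitch * sin(theta) / c. The element count, pitch, and tissue sound speed are assumed illustrative values, not parameters of the disclosed device.

```python
import numpy as np

def steering_delays(n_elements: int, pitch_m: float, angle_deg: float,
                    c: float = 1540.0) -> np.ndarray:
    """Per-element firing delays (s) that steer a linear array to angle_deg.

    Delays follow tau_n = n * pitch * sin(theta) / c, shifted so the
    earliest-firing element has zero delay. c is an assumed speed of
    sound in soft tissue (~1540 m/s).
    """
    n = np.arange(n_elements)
    tau = n * pitch_m * np.sin(np.radians(angle_deg)) / c
    return tau - tau.min()

# Example: 64 elements at 110 um pitch steered 20 degrees off-axis.
print(steering_delays(64, 110e-6, 20.0)[:4])
```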
As described herein, components130can include an ultrasound imaging element, such as an ultrasound transducer and/or an ultrasound transducer array. For example, the components130generate and emit ultrasound energy into the anatomy102in response to transmission of an electrical signal to the components130. For imaging, the components130may generate and transmit an electrical signal representative of the received reflected ultrasound echoes from the anatomy102(e.g., to the PIM150and/or processing system160). Based on the IVUS imaging data obtained by the ultrasound imaging components130, the IVUS imaging system160assembles a two-dimensional image of the vessel cross-section from a sequence of several hundred of these ultrasound pulse/echo acquisition sequences occurring during a single revolution of the ultrasound imaging components130. In various embodiments, the ultrasound imaging component130can obtain imaging data associated with intravascular ultrasound (IVUS) imaging, forward looking intravascular ultrasound (FL-IVUS) imaging, intravascular photoacoustic (IVPA) imaging, intracardiac echocardiography (ICE), transesophageal echocardiography (TEE), and/or other suitable imaging modalities. In some embodiments, the device110can include an imaging component of any suitable imaging modality, such as optical imaging, optical coherence tomography (OCT), etc. In some embodiments, the device110can include any suitable sensing component, including a pressure sensor, a flow sensor, a temperature sensor, an optical fiber, a reflector, a mirror, a prism, an ablation element, a radio frequency (RF) electrode, a conductor, and/or combinations thereof. The imaging and/or sensing components can be implemented in the device110in lieu of or in addition to the ultrasound component130. For diagnosis and/or imaging, the center frequency of the components130can be between 2 MHz and 75 MHz, for example, including values such as 2 MHz, 5 MHz, 10 MHz, 20 MHz, 40 MHz, 45 MHz, 60 MHz, 70 MHz, 75 MHz, and/or other suitable values both larger and smaller. For example, lower frequencies (e.g., between 2 MHz and 10 MHz) can advantageously penetrate further into the anatomy102, such that more of the anatomy102is visible in the ultrasound images. Higher frequencies (e.g., 50 MHz, 75 MHz) can be better suited to generate more detailed ultrasound images of the anatomy102and/or fluid within the lumen104. In some embodiments, the frequency of the ultrasound imaging components130is tunable. For imaging, in some instances, the components130can be tuned to receive wavelengths associated with the center frequency and/or one or more harmonics of the center frequency. In some instances, the frequency of the emitted ultrasonic energy can be modified by the voltage of the applied electrical signal and/or the application of a biasing voltage to the ultrasound imaging components130. In some embodiments, the ultrasound imaging components130are positioned at the distal portion of the flexible elongate member116. The ultrasound imaging components130can include one or more electrical conductors extending along the length of the flexible elongate member116. The electrical conductor(s) are in communication with the ultrasound imaging components130at the distal portion114, and with an interface156at the proximal portion112. The electrical conductors carry electrical signals between the ultrasound processing system160and the ultrasound imaging components130.
For example, activation and/or control signals can be transmitted from the processing system160to the ultrasound imaging components130via the electrical conductors. Electrical signals representative of the reflected ultrasound echoes can be transmitted from the ultrasound imaging components130to the processing system160via the electrical conductors. In some embodiments, the same electrical conductors can be used for communication between the processing system160and the ultrasound imaging components130. The ultrasound device110includes an interface156at the proximal portion112of the flexible elongate member116. In some embodiments, the interface156can include a handle. For example, the handle can include one or more actuation mechanisms to control movement of the device110, such as deflection of the distal portion114. In some embodiments, the interface156can include a telescoping mechanism that allows for pullback of the device110through the lumen. In some embodiments, the interface156can include a rotation mechanism to rotate one or more components of the device110(e.g., the flexible elongate member116and the ultrasound imaging components130). In some embodiments, the interface156includes a user interface component (e.g., one or more buttons, a switch, etc.) for a medical professional to selectively activate the ultrasound imaging components130. In other embodiments, a user interface component of the PIM150, the processing system160and/or the monitor170allows a medical professional to selectively activate the ultrasound imaging components130. A conduit including, e.g., electrical conductors, extends between the interface156and the connector108. The connector108can be configured to mechanically and/or electrically couple the device110to the PIM150. The ultrasound processing system160, the PIM150, and/or the intravascular ultrasound device110(e.g., the interface156, the ultrasound imaging components130, etc.) can include one or more controllers. The controllers can be integrated circuits, such as application specific integrated circuits (ASIC), in some embodiments. The controllers can be configured to select the particular transducer element(s) to be used for transmit and/or receive, to provide the transmit trigger signals to activate the transmitter circuitry to generate an electrical pulse to excite the selected transducer element(s), and/or to accept amplified echo signals received from the selected transducer element(s) via amplifiers of controllers. Multiple ASIC configurations with various numbers of master circuits and slave circuits can be used to create a single ultrasound wave or multi-firing ultrasound wave device. In some embodiments, the PIM150performs preliminary processing of the ultrasound echo data prior to relaying the data to the console or processing system160. In examples of such embodiments, the PIM150performs amplification, filtering, and/or aggregation of the data. In an embodiment, the PIM150also supplies high- and low-voltage DC power to support operation of the device110including circuitry associated with the ultrasound transducers130. In some embodiments, the PIM150is powered by a wireless charging system. The wireless charging system may include an inductive charging system. For example, the PIM150may include a battery that is charged with a Qi inductive charging base. This may allow for a portable PIM150. Furthermore, an internal battery of the PIM150may produce low enough current that electrical isolation components are not required.
In other embodiments, the PIM150includes a charging cable in addition to the internal battery. In this case, the PIM150may include an isolation device as, in various surgical settings, patient safety requirements mandate physical and electrical isolation of the patient from one or more high voltage components. The ultrasound processing system160receives imaging data (e.g., electrical signals representative of the ultrasound echo data) from the ultrasound imaging components130by way of the PIM150. The processing system160can include a processing circuit, such as a processor and/or memory. The ultrasound processing system160processes the data to reconstruct an image of the anatomy. The processing system160outputs image data such that an image of the anatomy102, such as a cross-sectional IVUS image of a vessel, is displayed on the monitor170. The processing system160and/or the monitor170can include one or more user interface elements (e.g., touchscreen, keyboard, mouse, virtual buttons on a graphical user interface, physical buttons, etc.) to allow a medical professional to control the device110, including one or more parameters of the ultrasound imaging components130. In some embodiments, imaging data is transmitted from the PIM150to the ultrasound processing system160wirelessly. For example, the PIM150may be configured to transmit imaging data with a wireless Ethernet protocol. The PIM150and ultrasound processing system160may include a wireless transmitter and receiver, such as a wireless router. In some embodiments, the wireless router is included in the ultrasound processing system160and not the PIM150. The PIM150may also include a function to cease all transmissions, wired and wireless, when the PIM150is charging. FIG.2is a diagrammatic schematic view of a PIM150. In some embodiments, the PIM150is communicatively disposed between the ultrasound device110and the processing system160. The PIM150may be used to transmit commands and signals to the ultrasound device110, as well as to receive, process, and transmit ultrasound echo signals from the ultrasound device110. In some embodiments, these ultrasound echo signals are transmitted along a differential signal path in the PIM150, and are digitized and formatted for Ethernet transmission to the ultrasound processing system160. The PIM150may include an outer housing304. The housing304may be suitable for use in a sterile environment (e.g., water resistant) and may be sized to be suitable for use on an operating table. In some embodiments, the housing304includes internal sections to house various components. For example, the housing304may include particular housing sections to contain the power system340, the signal chain350, and the controller310and associated components. The controller310of the PIM150may be configured to transmit signals to other elements of the PIM150as well as to external devices, such as the ultrasound device110, processing system160, and monitor170. In some embodiments, the controller310is a field-programmable gate array (FPGA). In other embodiments, the controller310is a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein with reference to the controller310as shown inFIG.2above. The controller310may be connected to a memory318. In some embodiments, the memory318is a random access memory (RAM).
In other embodiments, the memory318is a cache memory (e.g., a cache memory of the controller310), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and non-volatile memory, or a combination of different types of memory. In some embodiments, the memory318may include a non-transitory computer-readable medium. The memory318may store instructions. The instructions may include instructions that, when executed by a processor, cause the processor to perform operations described herein with reference to the controller310in connection with embodiments of the present disclosure. The controller310may be connected to a catheter motor326, EEPROM324, transmitter322, and a time gain compensation (TGC) control320. In some embodiments, the catheter motor326is configured to move the ultrasound device110within a lumen. The catheter motor326may include a rotational component for rotating a portion of the ultrasound device110. The catheter motor326may also include a motor for moving the ultrasound device110along lumens within the body of the patient. The transmitter322may be any type of transmission device for sending signals to the ultrasound device110. In some embodiments, the controller310is configured to control the ultrasound device110by sending signals through the transmitter322. In this way, the controller310may be configured to drive the transmission of ultrasound signals by the ultrasound device110. The direction of transmission and signal strength of the ultrasound signals may be controlled by the controller310. The transmitter322may be connected to a transmit/receive (T/R) switch328. In some embodiments, the T/R switch328may be configured to change between transmit and receive modes. For example, the controller310may send a signal to the ultrasound device110while the T/R switch328is in transmit mode. Data (such as ultrasound echo signals) may be transmitted back from the ultrasound device110to the PIM150. This data may be stored by the EEPROM324. When the ultrasound echo signals are transmitted back from the ultrasound device110to the PIM150, the T/R switch328may be set to receive mode to receive and direct the ultrasound echo signals along the correct signal route. The ultrasound echo signals may be received by the PIM150and directed along a differential signal route. In some embodiments, the differential signal route may include a signal chain350including one or more elements352,354,356,358,360,362. The differential signal route may help to cancel common mode noise, and in particular “white noise/flicker” which can occur in existing image processing systems. The differential signal route and associated signal chain350may result in a more noise free signal and improved image quality. The signal chain350may provide filtering and programmable gain functions. In some embodiments, the TGC control320applies a time varying gain that adjusts for signal loss as the distance between the ultrasound imaging components and the tissue reflecting the echoes increases. The gain is typically reduced for near reflections and gradually increased for distant reflections. The amount of gain over distance may be controlled, for example by the controller310of the PIM150. In some embodiments, the TGC control may be configured to control the signal amplification of the received ultrasound echo signals.
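A minimal numeric sketch of the time gain compensation just described follows: the applied gain grows with echo depth to offset round-trip tissue attenuation, which is commonly approximated as ~0.5 dB/cm/MHz (one way) in soft tissue. Both that coefficient and the linear gain law are illustrative assumptions, not the gain law of the disclosed controller310; the comparison at two frequencies also illustrates why lower frequencies penetrate further, as noted earlier.

```python
import numpy as np

def tgc_gain_db(depth_mm, f_mhz, alpha_db_per_cm_mhz=0.5):
    """Gain (dB) needed to offset round-trip attenuation at a given depth.

    Round-trip loss is modeled as alpha * frequency * (2 * depth), with
    depth converted from mm to cm; the TGC stage applies the inverse of
    this loss as gain.
    """
    round_trip_cm = 2.0 * np.asarray(depth_mm, dtype=float) / 10.0
    return alpha_db_per_cm_mhz * f_mhz * round_trip_cm

depths_mm = np.linspace(0.0, 10.0, 6)
print(tgc_gain_db(depths_mm, 20.0))  # ramps from 0 dB to ~20 dB at 20 MHz
print(tgc_gain_db(depths_mm, 40.0))  # doubles at 40 MHz: higher f, less penetration
```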
The TGC control320may also be configured to set the receive path for the ultrasound echo signals along the signal chain350. The signal chain350may include bandpass filters352,360and amplifiers354,356,358,362. For example, the ultrasound echo signals from the ultrasound device110may be passed in order through a first bandpass filter352, a first fixed amplifier354, a variable gain amplifier356, a first buffer amplifier358, a second bandpass filter360, and a second buffer amplifier362. In some embodiments, the bandpass filters352,360allow signals between 20 and 40 MHz. In other embodiments, the bandpass filters allow other ranges of signals, such as 10 to 50 MHz, 5 to 60 MHz, and other ranges. After the signals are passed through the signal chain350, they may be transmitted to an analog to digital converter (ADC)330. The ADC330may digitize the ultrasound echo signals for processing by the controller310. The signals may then be prepared for transmission to the ultrasound processing system160. In some embodiments, the signals from the ultrasound device110may be transmitted by a wireless connection to the ultrasound processing system160. For example, the signals may be transmitted by a wireless transmitter312. The wireless transmitter312may be any type of transmitter that is configured to send wireless signals. The wireless transmitter312may include an antenna disposed on an outer surface of the housing304or within the housing304. Generally, the wireless transmitter312may be configured to use any current or future developed wireless protocol(s). For example, the wireless transmitter may be configured for wireless protocols including WiFi, Bluetooth, LTE, Z-wave, Zigbee, WirelessHD, WiGig, etc. In some embodiments, the wireless transmitter312is configured to transmit wireless Ethernet signals. In this case, the signals from the ultrasound device110(which have been digitized by the ADC330) are transmitted to an Ethernet physical layer (PHY)316. The Ethernet PHY may be configured to convert the signals from the ultrasound device110for an Ethernet connection. The converted signals may then be passed to a wireless transmitter312and transmitted to the ultrasound processing system160. In some embodiments, the wireless transmitter312also includes a wired Ethernet connection. In this case, the wireless transmitter312may include one or more Ethernet cables as well as associated ports. In other embodiments, the PIM150may be configured to transmit data from the ultrasound device110to the ultrasound processing system160via other protocols, such as USB (and in particular, USB 3.0). In this case, the PIM150may include a wireless USB transmitter, and the signals from the ultrasound device110may be configured for use with USB. The PIM150may include a pullback motor332which may be used to pull the ultrasound device110through a lumen to collect imaging data. The pullback motor332may be configured to pull the ultrasound device110at a constant speed. The pullback motor332may be connected to an external pullback sled334. The PIM150may include a wireless power system340. In some embodiments, this wireless power system may include an inductive charging system. The inductive charging system may include an inductive coupling system, such as a Qi inductive charging system. This power system340may include an inductive charging base342, a battery344within the PIM150, and a power distributor346that is configured to provide power to the various components of the PIM150.
In some embodiments, the inductive charging base342is wireless and the battery344is rechargeable. The PIM150may be placed on the inductive charging base342to recharge its battery344. Due to the separation between the PIM150and the inductive charging base342, isolation components may not be required within the PIM150, for example, if the PIM150has a minimum creepage distance of 4 mm of isolation from the ultrasound device110or the inductive charging base342. In other embodiments, the minimum creepage distance may be 2 mm, 6 mm, 8 mm, or other distances. In other embodiments, the wireless power system is a resonant inductive coupling system, a capacitive coupling system, or a magnetodynamic coupling system. In some embodiments, the PIM150is configured to receive power from microwaves, lasers, and/or light waves. In some embodiments, the PIM150is configured to stop all transmissions while it is connected to the charging base342. This may help to prevent high levels of current from being passed between the PIM150and other components (such as the ultrasound device110) which could cause harm to a patient. This function to stop transmissions during charging may reduce the number of isolation components that are required within the PIM150. In some embodiments, this function is provided by a connector348of the PIM150. The connector348may be disposed on an external portion of the housing304, and may be configured to stop transmissions from the PIM150while the PIM150is connected to the charging base342. The connector348may include mechanical features such as pins, flanges, projections, extensions, or other devices to effectively couple the PIM150to the inductive charging base342and provide a signal to the PIM150that it is connected. In some embodiments, the connector348includes a switch349, similar to that shown inFIG.3, that has “on” and “off” modes. The switch349may be connected to the controller310of the PIM150. When the PIM150is connected to the charging base342, the switch349may be in the “off” mode which may prevent the PIM150from transmitting signals to other devices, such as the ultrasound device110. When the PIM150is disconnected from the charging base342, the switch349may be in the “on” mode, allowing transmissions from the PIM150to other devices. In some embodiments, the PIM150is powered by power over Ethernet (PoE). In this case, power may be input through the transmitter312or through another Ethernet connection on the PIM150. The power may then be distributed throughout the PIM150by the power distributor346. FIG.3is a diagrammatic schematic view of a PIM150that may be connected to charging cable370. In this case, the PIM150may be powered by the charging cable370or the power system340including the inductive charging base342. The charging cable370may be a Power over Ethernet (PoE) device or another type of power device, such as a custom power cable. The charging cable370may be connected to the power distributor346. An isolation device372may be disposed between the charging cable370and the power distributor346. In some embodiments, power is only passed through the isolation device372when the PIM150is connected to the charging cable370. This may allow a user to choose between using the PIM150in a wireless, battery-powered mode or a wired, cable-charged mode while still protecting the patient from high current levels.
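The dock-detect switch behavior described above amounts to a simple interlock: transmissions are permitted only while the PIM is undocked from the charging base. A minimal sketch of that logic follows; the class and method names are hypothetical, not taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class ChargingInterlock:
    """Gate transmissions on dock state, mirroring the 'on'/'off' switch modes."""
    docked: bool = False

    def on_dock(self) -> None:
        self.docked = True   # switch -> "off" mode: inhibit transmissions

    def on_undock(self) -> None:
        self.docked = False  # switch -> "on" mode: allow transmissions

    def may_transmit(self) -> bool:
        return not self.docked

pim = ChargingInterlock()
pim.on_dock()
assert not pim.may_transmit()   # charging: no signals to the ultrasound device
pim.on_undock()
assert pim.may_transmit()       # undocked: normal operation resumes
```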
Since this embodiment includes an isolation device372along the wired power input, the PIM150may be configured to transmit and receive signals even while the PIM150is connected to the charging cable370. FIG.4is a diagrammatic schematic view400of a PIM150transmitting signals to an ultrasound processing system160. In some embodiments, the PIM150includes a wireless transmitter404and the ultrasound processing system160includes a wireless receiver406. In some embodiments, the wireless receiver406is connected to a wireless router402. In the example ofFIG.4, imaging data (such as ultrasound echo signals) are transmitted through a cable408from an ultrasound device110positioned within a lumen of the patient to the PIM150. This imaging data may be digitized and processed by the PIM150. The digitized data may then be wirelessly transmitted from the PIM150to the ultrasound processing system160. In this case, the PIM150may be powered by a battery such that the PIM150is portable. As discussed above, the battery within the PIM150may be chargeable via an inductive charging base342. However, during the transmission and reception of signals, the PIM150may not be connected to the inductive charging base342. FIG.5is a diagrammatic schematic view500of a PIM150that is connected to an inductive charging base342. In this example, the PIM150may not transmit or receive signals from other devices such as the ultrasound device110or the ultrasound processing system160. As shown inFIG.5, the PIM150is not connected to the ultrasound device110and is not transmitting to the ultrasound processing system160. In other embodiments, when the PIM150is connected to the inductive charging base342, wireless transmission of signals may continue while wired transmission is stopped. For example, while the PIM150is connected to the charging base342, the PIM150may transmit wirelessly to the ultrasound processing system160but not transmit or receive signals from the ultrasound device110. FIG.6is a diagrammatic schematic view600of a processing system160that includes an integrated inductive charging base342. The PIM150may be placed on this inductive charging base342(and connected to the processing system160) to charge. The inductive charging base342may include a device343that prevents the PIM150from being connected to the ultrasound device110when connected. The device343may be any type of mechanical or electrical device for preventing connection between the PIM150and the ultrasound device110. For example, the device343may automatically turn a switch preventing connection (as discussed with reference toFIG.3) or the device343may physically plug or otherwise disable a connector on the PIM150such that the ultrasound device110may not be connected to the PIM150when the device343is activated. The device343may also electrically disable the transmission and reception of signals to and from the PIM150while the PIM150is connected to the inductive charging base342. FIG.7provides a flow diagram illustrating a method700of intravascular ultrasound imaging. As illustrated, the method700includes a number of enumerated steps, but embodiments of the method700may include additional steps before, after, and in between the enumerated steps. In some embodiments, one or more of the enumerated steps may be omitted, performed in a different order, or performed concurrently. The method700may be performed using any of the systems and devices referred to inFIGS.1-6. At step702, the method700may include positioning an ultrasound device in a body lumen of a patient.
The ultrasound device may be similar to the ultrasound device110as shown inFIGS.1-6. In particular, the ultrasound device may be an intravascular rotational ultrasound device with one or more imaging ultrasound transducer elements at the distal portion of a rotating drive cable. The step702can include placing a sheath and the imaging core/drive cable within the lumen of the anatomy. The drive cable can be disposed within the sheath of the ultrasound device. At step704, the method700may include providing power to a patient interface module (PIM) with a wireless charging system, such as a power system340as shown inFIGS.2and3. The PIM may be connected to the ultrasound device and configured to transmit signals to and receive signals from the ultrasound device when the PIM is not charging. In some embodiments, the wireless charging system includes an inductive charging base (such as a Qi inductive charging base) and a battery within the PIM. The wireless charging system may include one or more devices to prevent connection of the PIM to other devices while charging, such as the ultrasound device. At step706, the method700may include transmitting a first ultrasound signal with the ultrasound device into the lumen. The first ultrasound signal may be transmitted with one or more ultrasound elements of the ultrasound device. In some embodiments, the transmission of the first ultrasound signal may be controlled by a patient interface module (PIM) such as PIM150as shown inFIGS.1-6. For example, a controller of the PIM may be used to send a signal to the ultrasound device, which may in turn be transmitted into the lumen by the one or more ultrasound elements of the ultrasound device. Step706may be performed while the drive cable of the ultrasound device and the one or more ultrasound elements are rotating within the sheath positioned inside the lumen. In that regard, the method700can include connecting the ultrasound device and/or the drive cable to a movement device, such as a pullback device, that is configured to rotate and/or longitudinally translate the ultrasound device. The first ultrasound signal may be reflected off anatomy (e.g., tissue, blood vessel, plaque, etc.) within the lumen in the form of ultrasound echoes, some of which may travel back toward the first ultrasound element. These ultrasound echo signals may be received by the ultrasound device, such as with one or more transducer elements. At step708, an ultrasound echo signal associated with the first ultrasound signal may be transmitted to the PIM. In some embodiments, the ultrasound echo signal is received by a transmit/receive (T/R) switch, such as T/R switch328as shown inFIGS.2and3. The ultrasound echo signal may be processed by the PIM in preparation for its use in creating ultrasound images of the lumen. At step710, the ultrasound echo signal may be transmitted on a differential signal path within the PIM. In some embodiments, the differential signal path may help to reduce noise. The differential signal path may include a signal chain with one or more amplifiers and buffers. In some embodiments, the differential signal path includes, in order, a first bandpass filter, a first fixed amplifier, a variable gain amplifier, a first buffer amplifier, a second bandpass filter, and a second buffer amplifier. In other embodiments, the differential signal path includes other combinations of elements. At step712, the ultrasound echo signal may be digitized.
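A schematic end-to-end sketch of steps 706-716 follows, with stand-in signal content: the differential chain is reduced to the 20-40 MHz band-pass plus gain described earlier, and the ADC to uniform quantization. The sample rate, filter order, gain, and bit depth are assumed illustrative values, not parameters of the disclosed PIM.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 200e6  # assumed digitizer sample rate, samples/s (illustrative)

def transmit_pulse():
    """Step 706: trigger the transducer (no-op stand-in here)."""
    pass

def receive_echo():
    """Step 708: return a stand-in echo trace (30 MHz burst plus noise)."""
    t = np.arange(2048) / FS
    rng = np.random.default_rng(0)
    return np.sin(2 * np.pi * 30e6 * t) + 0.1 * rng.standard_normal(t.size)

def analog_chain(x):
    """Step 710: simplified stand-in for the differential signal chain --
    a 20-40 MHz band-pass (matching the filters described above) plus gain."""
    sos = butter(4, [20e6, 40e6], btype="bandpass", fs=FS, output="sos")
    return 10.0 * sosfiltfilt(sos, x)

def digitize(x, bits=12):
    """Step 712: quantize to unsigned integer codes, like an ADC."""
    lo, hi = x.min(), x.max()
    return np.round((x - lo) / (hi - lo) * (2**bits - 1)).astype(np.uint16)

def frame_for_transport(samples):
    """Steps 714-716: pack digitized samples for (wireless) Ethernet transport."""
    return samples.tobytes()

transmit_pulse()
payload = frame_for_transport(digitize(analog_chain(receive_echo())))
print(len(payload), "bytes ready for wireless transmission")
```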
In some embodiments, after passing along the differential signal path, the ultrasound echo signal is transmitted to an ADC within the PIM. The ADC may be used to digitize the ultrasound echo signal. The digitized ultrasound echo signal may then be transmitted to a controller within the PIM. At step714, the digitized ultrasound echo signal may be configured for transmission to another device, such as an ultrasound processing system. In some embodiments, this includes transmitting the digitized ultrasound echo signal to an Ethernet physical layer (PHY) with the controller of the PIM. Step714may also include passing the ultrasound echo signal to a wireless transmitter and configuring the digitized ultrasound echo signal for wireless transmission. At step716, the digitized ultrasound signal may be transmitted wirelessly to the processing system. The processing system may be the ultrasound processing system160as shown inFIG.1. Step716may be carried out by using a wireless transmitter on the PIM and a wireless receiver on the processing system. The wireless signals transmitted may be wireless Ethernet signals. The processing system may be used to further process the digitized ultrasound echo signal to produce ultrasound images of the lumen of the patient. At step718, an ultrasound image representative of the ultrasound echo signal may optionally be displayed on a display device. The display device may be similar to the monitor170as shown inFIG.1. For example, the image can be an IVUS image of a blood vessel. Persons skilled in the art will recognize that the apparatus, systems, and methods described above can be modified in various ways. While the present disclosure refers primarily to an intraluminal ultrasound device, an intraluminal ultrasound system and an intraluminal ultrasound imaging method, the device may be any sensing device configured to provide measurements within the body (e.g., physiological measurements such as pressure, flow velocity, or electrical activation signals), with a corresponding sensing system and sensing method. The sensing device substitutes for the intraluminal ultrasound device in the system and method disclosed herein for those alternative embodiments. Accordingly, persons of ordinary skill in the art will appreciate that the embodiments encompassed by the present disclosure are not limited to the particular exemplary embodiments described above. In that regard, although illustrative embodiments have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the foregoing without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure.
11857377

While the disclosed apparatus, compounds, methods and compositions are susceptible of embodiments in various forms, specific embodiments of the disclosure are illustrated (and will hereafter be described) with the understanding that the disclosure is intended to be illustrative, and is not intended to limit the claims to the specific embodiments described and illustrated herein.

DETAILED DESCRIPTION

The disclosure relates to an ingress-egress device, which greatly improves cleaning and removing of metal fragments during surgical burring. This device includes structure for continuous input (via an ingress port) and removal (via an egress port) of a sterile saline solution to irrigate a surgical work site during removal of a surgical implant with a surgical burr. A vortex forms at the end of the device which helps remove metal particles during surgical burring. Specifically, swirling, vortex flow of irrigating water in a circular base of the device suspends dispersed metal particles, whereupon the metal particles migrate radially outward (e.g., toward the device sidewall at its circular base) and can be removed from the surgical work site via the egress port at the device sidewall. This device also keeps the area cool during the burring procedure. Lastly, it is more sanitary than directly flooding the area with a saline solution, which can become contaminated and then fall back into the wound. This device ensures the rapid removal of any saline solution exposed to the wound, and therefore greatly improves the probability that no post-surgical infection will occur. FIGS.1-3illustrate an ingress-egress device100according to the disclosure. The device has a funnel shape110that can include several structural features: ingress and egress ports120,130, a handle140(e.g., for a user to hold the device100while shielding inlet/outlet lines122,132), a conforming soft base150to prevent contaminated fluid from leaking into surrounding tissues, and a soft awning (or skirt)160to limit splatter while providing unrestricted surgical burr220access to the surgical work site200and the surgical implant210. In an aspect, the disclosure relates to an ingress-egress apparatus100for protection of a surgical work site200during removal of a surgical implant210therefrom. The apparatus100generally includes an inverted frustum surface110having a sidewall111. The frustum surface110and sidewall111define a bottom open area112(e.g., having a width or diameter of at least 0.5 or 1 cm and/or up to 1.5, 2, or 3 cm; circular area) at a base portion114of the frustum surface110sidewall111, and an opposing top open area116(e.g., having a width or diameter of at least 2, 3, or 4 cm and/or up to 4, 6, 8, or 10 cm; circular area) at a top portion118of the frustum surface110sidewall111. The top open area116can have a larger (cross-sectional) area than the bottom open area112(e.g., as illustrated). The apparatus100further includes an ingress (or inlet) port120located at the base portion114of the frustum surface110sidewall111, which port120provides fluid communication access from (i) a region external to the ingress-egress apparatus100to (ii) the bottom open area112(e.g., and neighboring internal volume of the ingress-egress apparatus100and the surgical work site/implant site200when being used for implant210removal).
The apparatus100further includes an egress (or outlet) port130located at the base portion114of the frustum surface110sidewall111, which port130also provides fluid communication access from (i) a region external to the ingress-egress apparatus100to (ii) the bottom open area112(e.g., and neighboring internal volume of the ingress-egress apparatus100and the surgical work site/implant site200when being used for implant210removal). The frustum surface110can generally be a funnel shape or a frustoconical shape. The sidewalls111of the frustum110are generally outwardly sloping in an upward direction. The sidewalls111can be substantially straight as in the illustrated embodiments, or they can be sloped or curved, such as having an upper/top concave surface and a lower/bottom convex surface. The apparatus100and its components can be formed from any suitable material, for example metal (e.g., surgical stainless steel), a rigid plastic material, etc. The apparatus100can further include an inlet line122in fluid communication with the ingress port120, for example as tubing connected to the ingress port120at one end and connected to (or adapted to be connected to) a source of irrigating fluid (not shown), such as water, saline solution, etc. The apparatus100can further include an outlet line132in fluid communication with the egress port130, for example as tubing connected to the egress port130at one end and connected to (or adapted to be connected to) a suction source (not shown) for evacuating wash/irrigation fluid from the surgical work site200. In an embodiment, the ingress port120and the egress port130are spaced apart at an interior location of the bottom open area112. The ingress/egress ports120/130can be spaced apart on opposing sides of the base portion114sidewall111, for example at 90°-270°, at 135°-225°, or at about 180° from each other in a circular base. Spacing apart allows irrigating water/fluid to enter the bottom open area from the ingress port120, irrigate the surgical work site200as water flows across the site to cool the site and pick up particulate implant metal, and then be removed from the work site/bottom open area via the egress port130. Preferably, a circular base portion/bottom open area induces a vortex swirling flow therein to assist in metal particulate pick-up and removal at the egress port sidewall via centrifugal or cyclonic separation. In an embodiment, the inlet/outlet lines122/132can enter or pass through the sidewall111on/at the same side or relative location of the sidewall, and then one or both of the lines can curve or wrap around the bottom open area interior before having their exit orifice open into the interior of the apparatus100. This allows convenient wash water inlet/suction outlet at the same external physical location of the apparatus100, but still allows internal cross flow and/or vortex flow in the water flow path around the surgical work site200. The apparatus100can further include a conforming soft base150(e.g., rubber or other soft/flexible sealing material) attached to the inverted frustum surface110at the base portion114and around the bottom open area112. The base150can be a circular or cylindrical attachment, such as having a width or diameter of at least 0.5 or 1 cm and/or up to 1.5, 2, or 3 cm. The base150can fit or attach over an outer circumferential lip of an outer base portion114of the sidewall111.
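Since the ports sit on a circular base at a given angular separation, their planar positions follow directly from basic trigonometry. A minimal sketch, assuming an illustrative base radius; the function name and values are hypothetical.

```python
import math

def port_xy(radius_mm: float, angle_deg: float):
    """(x, y) position of a port on the circular base, angle from the +x axis."""
    a = math.radians(angle_deg)
    return (radius_mm * math.cos(a), radius_mm * math.sin(a))

base_radius_mm = 7.5                       # e.g., a 1.5 cm diameter base
ingress = port_xy(base_radius_mm, 0.0)
egress = port_xy(base_radius_mm, 180.0)    # opposing placement within the 90-270 deg range
print(ingress, egress)
```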
The apparatus100can further include a soft awning160(or skirt; rubber, fabric, or other soft/flexible covering material) cover attached to the inverted frustum surface110at the top portion118and around the top open area116. The awning160includes an interior opening162(e.g., sized for surgical burr220access). The interior opening162of the awning or skirt can be sized similarly to the bottom open area112, such as having a width or circular diameter of at least 0.5 or 1 cm and/or up to 1.5, 2, or 3 cm. The interior opening162of the awning or skirt160provides access to the surgical site200with the surgical burr220, but limits back-splatter out of the surgical work site200and out of the top of the apparatus100. Back-splatter stopped by the cover of the awning or skirt160can fall back down (e.g., along the internal surface of the sidewall111) into the bottom open area112where it can be recovered and evacuated via the egress port130. The flexible or soft material for the awning160allows the surgeon to bend or stretch the awning material if needed to access the surgical work site200at an angle with the surgical burr/surgical drill. In another aspect, the disclosure relates to a method for removing a surgical implant210. The method is generally performed on a surgical subject (or patient) having an (internal) surgical implant210to be removed. The method includes burring the surgical implant210with a surgical burr220to remove the surgical implant210. The surgical burr220can be a cutting, shaving, and/or filing burr with a contoured or toothed distal tip that can be used for cutting, shaving, and/or filing the surgical implant210for removal. The surgical burr220can be attached to or a component of a surgical drill (not shown). Burring the surgical implant210can include several steps as described. After burring or removal of the implant210, further surgical procedures can include removing the surgical burr220, removing the apparatus100, closing the wound at the incision point, etc. Surgical access is provided to the surgical implant210with the ingress-egress apparatus100according to any of its various embodiments. Providing surgical access to the surgical implant can include making a surgical opening (or incision) at a surgical work site200where the surgical implant210is located in the surgical subject, and then inserting a bottom portion of the apparatus100into the surgical subject at the surgical work site200and positioned above the surgical implant210. The bottom portion of the ingress-egress apparatus100could be the base portion114of the frustum110, or it could be the conforming soft base150when present, for example. The surgical implant210is accessed and burred with the surgical burr220(e.g., the distal tip thereof) through the bottom open area112of the apparatus100while burring the implant210. In an embodiment, burring the surgical implant210is performed to partially remove the implant210. For example, a portion of the implant210could be outside the desired implant area within the subject or patient, broken, and/or damaged, but another portion of the implant210could be inserted properly and functioning as desired. Accordingly, the method can be used to partially remove the undesired portion of the surgical implant210while leaving the desired portion of the implant210in place within the subject. In another embodiment, burring the surgical implant210is performed to completely remove the surgical implant210.
For example, the implant210could be no longer needed (e.g., it has served its purpose and should be removed from the subject). Alternatively, the implant210could have been damaged or otherwise needs to be removed, for example for possible replacement. The surgical implant210(e.g., and the surrounding surgical work site200) is irrigated with water (e.g., water-containing solution or mixture such as a saline solution) during burring. The water is delivered through the ingress port120of the apparatus100, thereby forming a wash fluid including the fresh inlet water or water solution and burring residue. The burring residue can include metal or other implant material particles or fragments, and can possibly include particles or fragments of released bone or other body tissue. Similarly, the wash fluid is removed during burring through the egress port130of the apparatus100. In an embodiment, irrigating the surgical implant210with water includes injecting the water through the ingress port120under pressure to attain a desired wash flow rate and velocity. Alternatively or additionally, removing the wash fluid can include applying suction through the egress port130. The inlet and outlet flow rates of (fresh) water and wash water, respectively, are preferably balanced or otherwise selected so that the surgical work site200remains sufficiently irrigated or otherwise covered with water during burring so that the work site200and corresponding body tissue do not become overheated to the point of possible injury. Similarly, the flow rates can be controlled or selected to provide sufficient agitation or mixing (e.g., vortex mixing) for pick-up and removal of metal implant particles resulting from burring. Further, the flow rates can be balanced to prevent or reduce flooding of wash fluid out of the surgical site200and/or ingress-egress apparatus100. In an embodiment, the surgical implant210is a mechanical fastening or joining means. For example, the implant210can be a screw, bolt, prosthesis, TTA cage, or other suitable mechanical fastening or joining means. The implant210can be partially or completely attached to or inserted into bone in the surgical subject, for example joining or fastening bones or bone sections together, or joining a bone or bone segment to another prosthetic implant or support device. The implant210location in the surgical work site200is generally near or adjacent to bones, joints, cartilage, and/or other soft tissue that could be injured by heat and/or (metal) implant210particles resulting from burring, if not removed from the surgical work site200. The surgical implant210can include a metal material. The implant210is suitably formed from or includes a metallic component or alloy, for example (surgical) stainless steel or other biocompatible metal or metallic alloy. Because other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the disclosure is not considered limited to the example chosen for purposes of illustration, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this disclosure. Accordingly, the foregoing description is given for clearness of understanding only, and no unnecessary limitations should be understood therefrom, as modifications within the scope of the disclosure may be apparent to those having ordinary skill in the art.
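The flow-rate balance described above can be expressed as a simple mass balance between inflow and suction. A minimal sketch with hypothetical, illustrative flow rates (not values given in this disclosure):

```python
def net_accumulation_ml_min(inflow_ml_min: float, suction_ml_min: float) -> float:
    """Net fluid accumulation at the work site.

    Positive values indicate pooling/flooding risk; negative values indicate
    the site may be under-irrigated and run dry (risking overheating).
    """
    return inflow_ml_min - suction_ml_min

# A slight suction excess keeps the site irrigated without overflow.
print(net_accumulation_ml_min(100.0, 110.0))  # -10.0 mL/min
```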
All patents, patent applications, government publications, government regulations, and literature references cited in this specification are hereby incorporated herein by reference in their entirety. In case of conflict, the present description, including definitions, will control. Throughout the specification, where the apparatus, compounds, compositions, methods, and processes are described as including components, steps, or materials, it is contemplated that the compositions, processes, or apparatus can also comprise, consist essentially of, or consist of, any combination of the recited components or materials, unless described otherwise. Component concentrations can be expressed in terms of weight concentrations, unless specifically indicated otherwise. Combinations of components are contemplated to include homogeneous and/or heterogeneous mixtures, as would be understood by a person of ordinary skill in the art in view of the foregoing disclosure.

PARTS LIST

100 ingress-egress apparatus
110 inverted frustum surface (e.g., funnel shape or a frustoconical shape)
111 frustum sidewall
112 bottom open area
114 base portion
116 top open area
118 top portion
120 ingress (or inlet) port
122 ingress (or inlet) line
130 egress (or outlet) port
132 egress (or outlet) line
140 handle
150 conforming soft base
160 soft awning (or skirt)
162 soft awning (or skirt) open area/burr access area
200 surgical work site
210 surgical implant
220 surgical burr
11857378

DETAILED DESCRIPTION

Aspects of the disclosure relate to systems and methods to register and/or track a head mounted display (HMD) in a coordinate system using, for example, a surgical navigation system, an image capture system, a video capture system, one or more depth sensors, one or more inertial measurement units (IMUs), or any other registration and/or tracking system known in the art and described, for example, in PCT application PCT/US19/15522, filed on Jan. 29, 2019, the entire contents of each of which is hereby incorporated by reference in its entirety. One or more fiducial markers, e.g. optical markers, e.g. QR codes or Aruco markers (Intel, Inc.), navigation markers, including infrared markers, RF markers, active markers (e.g. LEDs, including LEDs emitting infrared light or light from the spectrum visible to the human eye), passive markers (for example as described in PCT application PCT/US19/15522, the entire contents of each of which is hereby incorporated by reference in its entirety) can be used and can be directly or indirectly integrated into, attached to, connected to, or mounted onto a head mounted display. Optionally, the surgeon can wear a surgical helmet, for example, with a protective cover and clear see-through shield. Representative examples are, for example, the Stryker T5 surgical helmet and associated sterile cover for the surgical helmet or hoods or the Stryker Flyte helmet and associated cover for the surgical helmet or hoods. Aspects of the present disclosure relate to systems, devices and methods for performing a surgical step or surgical procedure with visual guidance using a head mounted display, e.g. by displaying virtual representations of one or more of a virtual surgical tool, virtual surgical instrument including a virtual surgical guide or cut block, virtual trial implant, virtual implant component, virtual implant or virtual device, a predetermined start point, predetermined start position, predetermined start orientation or alignment, predetermined intermediate point(s), predetermined intermediate position(s), predetermined intermediate orientation or alignment, predetermined end point, predetermined end position, predetermined end orientation or alignment, predetermined path, predetermined plane, predetermined cut plane, predetermined contour or outline or cross-section or surface features or shape or projection, predetermined depth marker or depth gauge, predetermined stop, predetermined angle or orientation or rotation marker, predetermined axis, e.g. rotation axis, flexion axis, extension axis, predetermined axis of the virtual surgical tool, virtual surgical instrument including virtual surgical guide or cut block, virtual trial implant, virtual implant component, implant or device, non-visualized portions for one or more devices or implants or implant components or surgical instruments or surgical tools, and/or one or more of a predetermined tissue change or alteration, on a live patient. In some embodiments, the head mounted display (HMD) is a see-through head mounted display. In some embodiments, an optical see through HMD is used. In some embodiments, a video see through HMD can be used, for example with a camera integrated into, attached to, or separate from the HMD, generating video feed. Aspects of the disclosure can be applied to knee replacement surgery, hip replacement surgery, shoulder replacement surgery, ankle replacement surgery, spinal surgery, e.g.
spinal fusion, brain surgery, heart surgery, lung surgery, liver surgery, spleen surgery, kidney surgery, vascular surgery or procedures, prostate, genitourinary, uterine or other abdominal or pelvic surgery, and trauma surgery. In some embodiments, one or more head mounted displays can display virtual data, e.g. virtual surgical guides, for knee replacement surgery, hip replacement surgery, shoulder replacement surgery, ankle replacement surgery, spinal surgery, e.g. spinal fusion, brain surgery, heart surgery, lung surgery, liver surgery, spleen surgery, kidney surgery, vascular surgery or procedures, prostate, genitourinary, uterine or other abdominal or pelvic surgery, and trauma surgery. In some embodiments, one or more head mounted displays can be used to display volume data or surface data, e.g. of a patient, of imaging studies, of graphical representations and/or CAD files. Aspects of the disclosure relate to a system or device comprising at least one head mounted display, the device being configured to generate a virtual surgical guide. In some embodiments, the virtual surgical guide is a three-dimensional representation in digital format which corresponds to at least one of a portion of a physical surgical guide, a placement indicator of a physical surgical guide, or a combination thereof. In some embodiments, the at least one head mounted display is configured to display the virtual surgical guide superimposed onto a physical joint based at least in part on coordinates of a predetermined position of the virtual surgical guide, and the virtual surgical guide is configured to align the physical surgical guide or a physical saw blade with the virtual surgical guide to guide a bone cut of the joint. In some embodiments, the at least one head mounted display is configured to display the virtual surgical guide superimposed onto a physical joint based at least in part on coordinates of a predetermined position of the virtual surgical guide, and the virtual surgical guide is configured to align the physical surgical guide or a physical drill, pin, burr, mill, reamer, broach, or impactor with the virtual surgical guide to guide a drilling, pinning, burring, milling, reaming, broaching or impacting of the joint. In some embodiments, the at least one head mounted display is configured to display the virtual surgical guide superimposed onto a physical spine based at least in part on coordinates of a predetermined position of the virtual surgical guide, and the virtual surgical guide is configured to align the physical surgical guide or a physical tool or physical instrument with the virtual surgical guide to guide an awl, a drill, a pin, a tap, a screwdriver or other instrument or tool. In some embodiments, the system or device comprises one, two, three or more head mounted displays. In some embodiments, the virtual surgical guide is configured to guide a bone cut in a knee replacement, hip replacement, shoulder joint replacement or ankle joint replacement. In some embodiments, the virtual surgical guide includes a virtual slot for a virtual or a physical saw blade. In some embodiments, the virtual surgical guide includes a planar area for aligning a virtual or a physical saw blade. In some embodiments, the virtual surgical guide includes two or more virtual guide holes or paths for aligning two or more physical drills or pins. In some embodiments, the predetermined position of the virtual surgical guide includes anatomical information and/or alignment information of the joint.
For example, the anatomic and/or alignment information of the joint can be based on at least one of coordinates of the joint, an anatomical axis of the joint, a biomechanical axis of the joint, a mechanical axis, or combinations thereof. In some embodiments, the at least one head mounted display is configured to align the virtual surgical guide based on a predetermined limb alignment. For example, the predetermined limb alignment can be a normal mechanical axis alignment of a leg. In some embodiments, the at least one head mounted display is configured to align the virtual surgical guide based on a predetermined femoral or tibial component rotation. In some embodiments, the at least one head mounted display is configured to align the virtual surgical guide based on a predetermined flexion of a femoral component or a predetermined slope of a tibial component. In some embodiments, the virtual surgical guide is configured to guide a proximal femoral bone cut based on a predetermined leg length. In some embodiments, the virtual surgical guide is configured to guide a bone cut of a distal tibia or a talus in an ankle joint replacement and the at least one head mounted display is configured to align the virtual surgical guide based on a predetermined ankle alignment, wherein the predetermined ankle alignment includes a coronal plane implant component alignment, a sagittal plane implant component alignment, an axial plane component alignment, an implant component rotation or combinations thereof. In some embodiments, the virtual surgical guide is configured to guide a bone cut of a proximal humerus in a shoulder joint replacement and the at least one head mounted display is configured to align the virtual surgical guide based on a predetermined humeral implant component alignment, wherein the humeral implant component alignment includes a coronal plane implant component alignment, a sagittal plane implant component alignment, an axial plane component alignment, an implant component rotation, or combinations thereof. In some embodiments, the predetermined position of the surgical guide is based on a pre-operative or intra-operative imaging study, one or more intra-operative measurements, intra-operative data or combinations thereof.
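As one worked example of alignment information derived from joint coordinates, the sketch below computes the coronal-plane deviation of the hip-knee-ankle line (the basis of varus/valgus assessment) from three joint-center coordinates. The coordinates are hypothetical, and mapping the sign of the deviation to varus versus valgus depends on side and axis conventions not specified in this disclosure.

```python
import numpy as np

def hka_deviation_deg(hip, knee, ankle):
    """Deviation (deg) of the knee from a straight hip-knee-ankle line.

    0 deg corresponds to a neutral mechanical axis; larger values indicate
    varus or valgus malalignment (sign/side interpretation is convention-
    dependent and not encoded here).
    """
    v1 = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
    v2 = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return 180.0 - np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical joint centers in mm: the knee sits 8 mm off the hip-ankle line.
print(hka_deviation_deg((0, 400, 0), (8, 0, 0), (0, -380, 0)))  # ~2.3 deg
```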
Aspects of the disclosure relate to a system or device comprising at least one head mounted display and a virtual bone cut plane, wherein the virtual bone cut plane is configured to guide a bone cut of a joint, wherein the virtual bone cut plane corresponds to at least one portion of a bone cut plane, and wherein the head mounted display is configured to display the virtual bone cut plane superimposed onto a physical joint based at least in part on coordinates of a predetermined position of the virtual bone cut plane. In some embodiments, the virtual bone cut plane is configured to guide a bone cut in a predetermined varus or valgus orientation or in a predetermined tibial slope or in a predetermined femoral flexion of an implant component or in a predetermined leg length. Aspects of the disclosure relate to a method of preparing a joint for a prosthesis in a patient. In some embodiments, the method comprises registering one or more head mounted displays worn by a surgeon or surgical assistant in a coordinate system, obtaining one or more intra-operative measurements from the patient's physical joint to determine one or more intra-operative coordinates, registering the one or more intra-operative coordinates from the patient's physical joint in the coordinate system, generating a virtual surgical guide, determining a predetermined position and/or orientation of the virtual surgical guide based on the one or more intra-operative measurements, displaying and superimposing the virtual surgical guide, using the one or more head mounted displays, onto the physical joint based at least in part on coordinates of the predetermined position of the virtual surgical guide, and aligning the physical surgical guide or a physical saw blade with the virtual surgical guide to guide a bone cut of the joint. In some embodiments, the one or more head mounted displays are registered in a common coordinate system. In some embodiments, the common coordinate system is a shared coordinate system. In some embodiments, the virtual surgical guide is configured to guide a bone cut in a knee replacement, hip replacement, shoulder joint replacement or ankle joint replacement. In some embodiments, the predetermined position of the virtual surgical guide determines a tibial slope for implantation of one or more tibial implant components in a knee replacement. In some embodiments, the predetermined position of the virtual surgical guide determines an angle of varus or valgus correction for a femoral and/or a tibial component in a knee replacement. In some embodiments, the virtual surgical guide corresponds to a physical distal femoral guide or cut block and the predetermined position of the virtual surgical guide determines a femoral component flexion. In some embodiments, the virtual surgical guide corresponds to a physical anterior or posterior femoral surgical guide or cut block and the predetermined position of the virtual surgical guide determines a femoral component rotation. In some embodiments, the virtual surgical guide corresponds to a physical chamfer femoral guide or cut block. In some embodiments, the virtual surgical guide corresponds to a physical multi-cut femoral guide or cut block and the predetermined position of the virtual surgical guide determines one or more of an anterior cut, posterior cut, chamfer cuts and a femoral component rotation.
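As a hedged illustration of how a virtual bone cut plane with a predetermined varus/valgus orientation and tibial slope could be parameterized, the following sketch rotates a neutral cut-plane normal by the two angles. The axis conventions, function names and values are assumptions for illustration, not taken from the disclosure.

```python
import numpy as np

def rot_about(axis, deg):
    """Rodrigues rotation matrix for a rotation of `deg` degrees about `axis`."""
    axis = axis / np.linalg.norm(axis)
    a = np.radians(deg)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

def tibial_cut_plane(entry_point, varus_deg, slope_deg):
    """Return (point, normal) of a virtual cut plane.

    Assumed conventions for this sketch: x = medial-lateral,
    y = antero-posterior, z = superior-inferior. The neutral cut is
    perpendicular to the tibial mechanical axis (normal = +z);
    varus/valgus tilts the plane about y, posterior slope about x."""
    normal = np.array([0.0, 0.0, 1.0])
    normal = rot_about(np.array([0.0, 1.0, 0.0]), varus_deg) @ normal
    normal = rot_about(np.array([1.0, 0.0, 0.0]), slope_deg) @ normal
    return entry_point, normal

point, normal = tibial_cut_plane(np.array([0.0, 0.0, -10.0]),
                                 varus_deg=0.0, slope_deg=3.0)
print("cut plane point:", point, "normal:", normal)
```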
In some embodiments, the virtual surgical guide is used in a hip replacement and the predetermined position of the virtual surgical guide determines a leg length after implantation. In some embodiments, the virtual surgical guide is a virtual plane for aligning the physical saw blade to guide the bone cut of the joint. In some embodiments, the one or more intraoperative measurements include detecting one or more optical markers attached to the patient's joint, the operating room table, fixed structures in the operating room or combinations thereof. In some embodiments, one or more cameras or image capture or video capture systems and/or a 3D scanner included in the head mounted display can detect one or more optical markers including their coordinates (x, y, z) and at least one or more of a position, orientation, alignment, direction of movement or speed of movement of the one or more optical markers. In some embodiments, registration of one or more of head mounted displays, surgical site, joint, spine, surgical instruments or implant components can be performed using spatial mapping techniques. In some embodiments, registration of one or more of head mounted displays, surgical site, joint, spine, surgical instruments or implant components can be performed using depth sensors. In some embodiments, the virtual surgical guide is configured to guide a bone cut of a distal tibia or a talus in an ankle joint replacement and the one or more head mounted displays are configured to align the virtual surgical guide based on a predetermined tibial or talar implant component alignment, wherein the predetermined tibial or talar implant component alignment includes a coronal plane implant component alignment, a sagittal plane implant component alignment, an axial plane component alignment, an implant component rotation, or combinations thereof. In some embodiments, the virtual surgical guide is configured to guide a bone cut of a proximal humerus in a shoulder joint replacement and the one or more head mounted displays are configured to align the virtual surgical guide based on a predetermined humeral implant component alignment, wherein the humeral implant component alignment includes a coronal plane implant component alignment, a sagittal plane implant component alignment, an axial plane component alignment, a humeral implant component rotation, or combinations thereof. Aspects of the disclosure relate to a system comprising at least one head mounted display and a library of virtual implants, wherein the library of virtual implants comprises at least one virtual implant component, wherein the virtual implant component has at least one dimension that corresponds to a dimension of the implant component or has a dimension that is substantially identical to the dimension of the implant component, wherein the at least one head mounted display is configured to display the virtual implant component in substantial alignment with a tissue intended for placement of the implant component, wherein the placement of the virtual implant component is intended to achieve a predetermined implant component position and/or orientation. In some embodiments, the system further comprises at least one user interface. Aspects of the disclosure relate to methods of selecting an implant or a prosthesis in three dimensions in a surgical site of a physical joint of a patient. In some embodiments, the method comprises registering, in a coordinate system, one or more head mounted displays worn by a user.
In some embodiments, the head mounted display is a see-through head mounted display. In some embodiments, the method comprises obtaining one or more intra-operative measurements from the physical joint of the patient to determine one or more intra-operative coordinates. In some embodiments, the method comprises registering the one or more intra-operative coordinates from the physical joint of the patient in the coordinate system. In some embodiments, the method comprises displaying a three-dimensional graphical representation of a first implant or prosthesis projected over the physical joint using the one or more head mounted displays. In some embodiments, the three-dimensional graphical representation of the first implant or prosthesis is from a library of three-dimensional graphical representations of physical implants or prostheses. In some embodiments, the three-dimensional graphical representation corresponds to at least one portion of the physical implant or prosthesis. In some embodiments, the method comprises moving the three-dimensional graphical representation of the first implant or prosthesis to align with or to be near with or to intersect one or more of an internal or external margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of one or more structures of the physical joint. In some embodiments, the method comprises visually evaluating the fit or alignment between the three-dimensional graphical representation of the first implant or prosthesis and the one or more of an internal or external margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of the one or more structures of the physical joint. In some embodiments, the method comprises repeating the steps of displaying, optionally moving and visually evaluating the fit or alignment with one or more three-dimensional graphical representations of one or more additional physical implants or prostheses, wherein the one or more additional physical implants or prostheses have one or more of a different dimension, size, diameter, radius, curvature, geometry, shape or surface than the first and subsequently evaluated implant or prosthesis. In some embodiments, the method comprises selecting a three-dimensional graphical representation of an implant or prosthesis with a satisfactory fit relative to the one or more structures of the physical joint from the library of three-dimensional graphical representations of physical implants or prostheses. In some embodiments, the method comprises obtaining one or more intra-operative measurements from the physical joint of the patient to determine one or more intra-operative coordinates and registering the one or more intra-operative coordinates from the physical joint of the patient in the coordinate system. In some embodiments, the step of visually evaluating the fit includes comparing one or more of a radius, curvature, geometry, shape or surface of the graphical representation of the first or subsequent prosthesis with one or more of an articular radius, curvature, shape or geometry of the joint.
In some embodiments, the graphical representation of the first or subsequent implant or prosthesis is moved to improve the fit between the one or more of a radius, curvature, geometry, shape or surface of the graphical representation of the first or subsequent prosthesis and the one or more of an articular radius, curvature, shape or geometry of the joint. In some embodiments, the one or more of the size, location, position, and orientation of the selected graphical representation of the implant or prosthesis with its final coordinates are used to develop or modify a surgical plan for implantation of the implant or prosthesis. In some embodiments, the one or more of the location, position or orientation of the selected graphical representation is used to determine one or more bone resections for implantation of the implant or prosthesis. In some embodiments, the one or more of an internal or external margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of one or more structures of the physical joint have not been surgically altered. In other embodiments, the one or more of an internal or external margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of one or more structures of the physical joint have been surgically altered. For example, the surgical alteration can include removal of bone or cartilage. In some embodiments, the bone removal can be a bone cut. In some embodiments, the HMD is a see-through head mounted display. In some embodiments, the head mounted display is a virtual reality type head mounted display and the joint of the patient is imaged using one or more cameras and the images are displayed by the HMD. In some embodiments, the satisfactory fit includes a fit within 1, 2, 3, 4 or 5 mm distance between the selected graphical representation of the prosthesis and at least portions of the one or more of an internal or external margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, radius, curvature, geometry, shape or surface of the one or more structures of the physical joint. In some embodiments, the one or more structures of the physical joint include one or more anatomic landmarks. In some embodiments, the one or more anatomic landmarks define one or more anatomical or biomechanical axes. In some embodiments, the steps of moving and visually evaluating the fit of the graphical representation of the prosthesis include evaluating the alignment of the graphical representation of the prosthesis relative to the one or more anatomic or biomechanical axes. In some embodiments, the step of moving the three-dimensional graphical representation of the prosthesis is performed with one, two, three, four, five or six degrees of freedom. In some embodiments, the step of moving the three-dimensional graphical representation of the prosthesis includes one or more of translation or rotation of the three-dimensional graphical representation of the prosthesis. In some embodiments, the step of visually evaluating the fit or alignment between the three-dimensional graphical representation of the first or subsequent prosthesis includes comparing one or more of an anteroposterior or mediolateral dimension of one or more of the prosthesis components with one or more of an anteroposterior or mediolateral dimension of the distal femur or the proximal tibia of the joint.
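The "satisfactory fit within 1 to 5 mm" criterion described above lends itself to a simple numeric check. The sketch below is illustrative only: the point sets are hypothetical samples of an implant margin and the corresponding joint structure, and the 5 mm threshold is one of the values named above.

```python
import numpy as np

def max_nearest_distance(implant_pts, joint_pts):
    """For each implant point, find the distance to the closest joint
    surface point; return the largest such distance (a crude fit metric)."""
    # Pairwise distances: shape (n_implant, n_joint).
    d = np.linalg.norm(implant_pts[:, None, :] - joint_pts[None, :, :], axis=2)
    return d.min(axis=1).max()

# Hypothetical sampled points (mm) on the implant margin and on the
# corresponding joint structure (e.g. a cortical rim), already registered
# into the same coordinate system.
implant_margin = np.array([[0.0, 0.0, 0.0], [10.0, 1.0, 0.0], [20.0, 0.5, 0.0]])
joint_rim = np.array([[0.5, 0.2, 0.0], [9.0, 0.8, 0.0], [21.0, 0.0, 0.0]])

worst = max_nearest_distance(implant_margin, joint_rim)
print(f"worst-case margin distance: {worst:.2f} mm")
print("satisfactory fit (<= 5 mm):", worst <= 5.0)
```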
In some embodiments, the step of visually evaluating the fit or alignment between the three-dimensional graphical representation of the first or subsequent prosthesis includes comparing one or more of a dimension, size, radius, curvature, geometry, shape or surface of at least portions of the prosthesis with one or more of a dimension, size, radius, curvature, geometry, shape or surface of at least portions of a medial condyle or a lateral condyle of the joint. In some embodiments, the joint is a knee joint and the prosthesis includes one or more components of a knee replacement device. In some embodiments, the joint is a hip joint and the prosthesis includes one or more components of a hip replacement device. In some embodiments, the joint is a shoulder joint and the prosthesis includes one or more components of a shoulder replacement device. In some embodiments, the joint is an ankle and the prosthesis includes one or more components of an ankle replacement device. In some embodiments, the library of three-dimensional graphical representations of physical implants or prostheses includes symmetrical and asymmetrical implant or prosthesis components. In some embodiments, the symmetrical or asymmetrical implant or prosthesis components include at least one of symmetrical and asymmetrical femoral components and symmetrical and asymmetrical tibial components. Aspects of the disclosure relate to methods of selecting a medical device in three dimensions in a physical site of a patient selected for implantation. In some embodiments, the method comprises registering, in a coordinate system, one or more HMDs worn by a user. In some embodiments, the method comprises obtaining one or more measurements from the physical site of the patient to determine one or more coordinates. In some embodiments, the method comprises registering the one or more coordinates from the physical site of the patient in the coordinate system. In some embodiments, the method comprises displaying a three-dimensional graphical representation of a first medical device projected over the physical site using the one or more HMDs. In some embodiments, the three-dimensional graphical representation of the first medical device is from a library of three-dimensional graphical representations of physical medical devices and the three-dimensional graphical representation corresponds to at least one portion of the physical first medical device. In some embodiments, the method comprises moving the three-dimensional graphical representation of the first medical device to align with or to be near with or to intersect one or more of an internal or external margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of one or more structures at the physical site. In some embodiments, the method comprises visually evaluating the fit or alignment between the three-dimensional graphical representation of the first medical device and the one or more of an internal or external margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of the one or more structures at the physical site.
In some embodiments, the method comprises repeating the steps of displaying, optionally moving and visually evaluating the fit or alignment with one or more three-dimensional graphical representations of one or more additional physical medical devices, wherein the one or more additional physical medical devices have one or more of a different dimension, size, diameter, radius, curvature, geometry, shape or surface than the first and subsequently evaluated medical device. In some embodiments, the method comprises selecting a three-dimensional graphical representation of a medical device with a satisfactory fit relative to the one or more structures at the physical site from the library of three-dimensional graphical representations of physical medical devices; a sketch of an automated variant of this selection follows this passage. In some embodiments, the one or more structures at the physical site include an anatomic or pathologic tissue intended for implantation. In some embodiments, the one or more structures at the physical site include an anatomic or pathologic tissue surrounding or adjacent or subjacent to the intended implantation site. In some embodiments, the one or more structures at the physical site include a pre-existing medical device near the implantation site or adjacent or subjacent or opposing or articulating with or to be connected with the medical device planned for implantation. In some embodiments, the one or more structures at the physical site include one or more of a tissue, organ or vascular surface, diameter, dimension, radius, curvature, geometry, shape or volume. In some embodiments, the one or more HMDs are registered with the physical surgical site, using, for example, one or more markers, e.g. attached to the surgical site or attached near the surgical site (for example by attaching the one or more markers to an anatomic structure), and/or one or more of a pre- or intra-operative imaging study. The one or more HMDs can display live images of the physical surgical site, one or more of a pre- or intra-operative imaging study, 2D or 3D images of the patient, graphical representations of one or more medical devices, and/or CAD files of one or more medical devices. In some embodiments, the one or more HMDs are registered in relationship to at least one marker, e.g. attached to the patient, for example a bony structure in a spine, knee, hip, shoulder or ankle joint, or attached to the OR table or another structure in the operating room. In some embodiments, the information from the one or more structures at the physical site and from the one or more of a pre- or intra-operative imaging study, 2D or 3D images of the patient, graphical representations of one or more medical devices, CAD files of one or more medical devices are used to select one or more of an anchor or attachment mechanism or fixation member. In some embodiments, the information from the one or more structures at the physical site and from the one or more of a pre- or intra-operative imaging study, 2D or 3D images of the patient, graphical representations of one or more medical devices, CAD files of one or more medical devices are used to direct one or more of an anchor or attachment mechanism or fixation member. In some embodiments, the medical device is one or more of an implant or an instrument. In some embodiments, the implant is an implant component.
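As referenced above, an automated variant of the select-from-a-library workflow could score each candidate representation against the target structures and keep the best fit. This is a hedged sketch: the library contents, the scoring function and all names are hypothetical, and a mean nearest-point distance stands in for the visual evaluation described in the disclosure.

```python
import numpy as np

def fit_error(device_outline, site_outline):
    """Mean nearest-point distance (mm) from device outline to site outline."""
    d = np.linalg.norm(device_outline[:, None, :] - site_outline[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Hypothetical library: candidate device outlines already registered into
# the coordinate system of the physical site (three sizes of one design).
library = {
    "size_2": np.array([[0.0, 0.0, 0.0], [18.0, 0.0, 0.0], [18.0, 12.0, 0.0]]),
    "size_3": np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0], [20.0, 14.0, 0.0]]),
    "size_4": np.array([[0.0, 0.0, 0.0], [22.0, 0.0, 0.0], [22.0, 16.0, 0.0]]),
}
site = np.array([[0.5, 0.0, 0.0], [20.2, 0.1, 0.0], [20.1, 13.8, 0.0]])

best = min(library, key=lambda name: fit_error(library[name], site))
print("best-fitting candidate:", best)
```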
In some embodiments, the medical device can be, but is not limited to, a joint replacement implant, a stent, a wire, a catheter, a screw, an otoplasty prosthesis, a dental implant, a dental implant component, a prosthetic disk, a guide wire, a coil, or an aneurysm clip. Aspects of the disclosure relate to methods of aligning an implant or a prosthesis in a joint of a patient. In some embodiments, the method comprises registering, in a coordinate system, one or more HMDs worn by a user. In some embodiments, the method comprises obtaining one or more intra-operative measurements from the physical joint of the patient to determine one or more coordinates of the physical joint. In some embodiments, the method comprises registering the one or more coordinates of the physical joint of the patient in the coordinate system. In some embodiments, the method comprises displaying a three-dimensional graphical representation of an implant or implant component or a prosthesis or prosthesis component projected over the physical joint using the one or more HMDs, wherein the three-dimensional graphical representation corresponds to at least one portion of the physical prosthesis. In some embodiments, the method comprises moving the three-dimensional graphical representation of the prosthesis to align with or to be near with or to intersect one or more of an internal or external margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of one or more structures of the physical joint. In some embodiments, the method comprises registering one or more coordinates from the graphical representation of the prosthesis in the coordinate system after the moving and aligning. In some embodiments, the moving of the three-dimensional graphical representation of the implant or prosthesis is performed using one or more of a computer interface (also referred to as a user interface), an acoustic interface, optionally including voice recognition, or a virtual interface, optionally including gesture recognition. In some embodiments, the one or more coordinates from the graphical representation of the prosthesis in the coordinate system after the moving and aligning are used to derive or modify a surgical plan. In some embodiments, the one or more coordinates from the graphical representation of the implant or prosthesis in the coordinate system after the moving and aligning are used to determine one or more of a location, orientation, or alignment or coordinates of a bone removal for placing the implant or prosthesis. In some embodiments, the bone removal is one or more of a bone cut, a burring, a drilling, a pinning, a reaming, or an impacting. In some embodiments, the surgical plan is used to derive one or more of a location, position, orientation, alignment, trajectory, plane, start point, or end point for one or more surgical instruments. In some embodiments, the one or more of a location, orientation, or alignment or coordinates of bone removal are used to derive one or more of a location, position, orientation, alignment, trajectory, plane, start point, or end point for one or more surgical instruments. In some embodiments, the one or more HMDs visualize the one or more of a location, position, orientation, alignment, trajectory, plane, start point, or end point for one or more surgical instruments projected onto and registered with the physical joint.
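Moving a graphical representation with up to six degrees of freedom, and registering its final coordinates for deriving a surgical plan as described above, is conveniently expressed as a 4x4 homogeneous transform. A minimal sketch; the rotation angles and translation values are arbitrary placeholders, not values from the disclosure.

```python
import numpy as np

def pose(rx_deg, ry_deg, rz_deg, tx, ty, tz):
    """Build a 4x4 homogeneous transform from three rotations (applied
    z, then y, then x) and a translation: one value per degree of freedom."""
    rx, ry, rz = np.radians([rx_deg, ry_deg, rz_deg])
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rx @ Ry @ Rz
    T[:3, 3] = [tx, ty, tz]
    return T

# Vertices of a hypothetical implant model in its own model coordinate
# system, expressed as homogeneous coordinates.
model_pts = np.array([[0.0, 0.0, 0.0, 1.0], [10.0, 0.0, 0.0, 1.0]])

# The pose after interactive moving/aligning; its coordinates can then be
# registered and used to derive bone-removal locations.
final_pose = pose(rx_deg=0, ry_deg=5, rz_deg=0, tx=2.0, ty=-1.0, tz=30.0)
placed_pts = (final_pose @ model_pts.T).T
print(placed_pts[:, :3])
```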
In some embodiments, the prosthesis is an acetabular cup of a hip replacement and a graphical representation of the acetabular cup is aligned with at least a portion of the physical acetabular rim of the patient. In some embodiments, the implant or prosthesis is a femoral component of a hip replacement and a graphical representation of the femoral component is aligned with at least a portion of the physical endosteal bone or cortical bone of the patient. In some embodiments, the aligning means positioning the femoral component in a substantially equidistant location between at least a portion of one or more of an anterior and a posterior endosteal or cortical bone or a medial and a lateral endosteal bone or cortical bone. In some embodiments, the femoral component includes a femoral neck. In some embodiments, the one or more coordinates from the femoral component in the coordinate system after the moving and aligning are used to determine at least one of a femoral component stem position, a femoral component stem orientation, a femoral component neck angle, a femoral component offset, and a femoral component neck anteversion. In some embodiments, the implant or prosthesis is a glenoid component of a shoulder replacement and a graphical representation of the glenoid component is aligned with at least a portion of the physical glenoid rim of the patient. In some embodiments, the implant or prosthesis is a humeral component of a shoulder replacement and a graphical representation of the humeral component is aligned with at least a portion of the physical endosteal bone or cortical bone of the patient. In some embodiments, the aligning means positioning the humeral component in a substantially equidistant location between at least a portion of one or more of an anterior and a posterior endosteal or cortical bone or a medial and a lateral endosteal bone or cortical bone. In some embodiments, the humeral component includes a humeral neck. In some embodiments, the one or more coordinates from the humeral component in the coordinate system after the moving and aligning are used to determine at least one of a humeral component stem position, a humeral component stem orientation, a humeral component neck angle, a humeral component offset, and a humeral component neck anteversion. In some embodiments, the one or more of a margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of one or more structures of the physical joint includes one or more of a cartilage, normal cartilage, damaged or diseased cartilage, subchondral bone or osteophyte. In some embodiments, the one or more of a margin, periphery, edge, perimeter, anteroposterior, mediolateral, oblique dimension, diameter, radius, curvature, geometry, shape or surface of one or more structures of the physical joint excludes one or more of a cartilage, normal cartilage, damaged or diseased cartilage, subchondral bone or osteophyte. In some embodiments, the one or more HMDs display, registered with and superimposed onto the physical joint, one or more of a pre- or intra-operative imaging study, 2D or 3D images of the patient, graphical representations of one or more medical devices, and/or CAD files of one or more medical devices, wherein the display assists with the moving and aligning of the three-dimensional graphical representation of the prosthesis.
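The "substantially equidistant" positioning between anterior/posterior and medial/lateral endosteal or cortical bone described above can be illustrated by centering a stem-axis point between paired wall points. All coordinates below are hypothetical and chosen only to make the arithmetic visible.

```python
import numpy as np

# Hypothetical endosteal wall points (mm) at one femoral canal level,
# e.g. measured intra-operatively or taken from a registered imaging study.
anterior = np.array([0.0, 12.0, -40.0])
posterior = np.array([0.0, -11.0, -40.0])
medial = np.array([-14.0, 0.0, -40.0])
lateral = np.array([13.0, 0.0, -40.0])

# A substantially equidistant stem-axis point at this level: x from the
# medial/lateral midpoint, y from the anterior/posterior midpoint.
ap_mid = 0.5 * (anterior + posterior)
ml_mid = 0.5 * (medial + lateral)
center = np.array([ml_mid[0], ap_mid[1], -40.0])
print("centered stem-axis point:", center)

# Repeating this at several canal levels yields a centered stem axis.
```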
In some embodiments, the implant or prosthesis is a femoral component or a tibial component of a knee replacement system, wherein the one or more coordinates from the graphical representation of the implant or prosthesis in the coordinate system after the moving and aligning include a center of the graphical representation of the femoral component or a center of the graphical representation of the tibial component. In some embodiments, the moving or aligning includes aligning the femoral component on the distal femur. In some embodiments, the aligning includes aligning the femoral component substantially equidistant to a medial edge of the medial femoral condyle and the lateral edge of a lateral femoral condyle. In some embodiments, the aligning includes aligning the femoral component tangent with the articular surface of at least one of the medial condyle and the lateral condyle in at least one of a distal weight-bearing zone or a weight-bearing zone at 5, 10, 15, 20, 25, 30, 40 or 45 degrees of knee flexion. In some embodiments, the moving or aligning includes aligning the tibial component on the proximal tibia. In some embodiments, the aligning includes aligning the tibial component substantially equidistant to a medial edge of the medial tibial plateau and the lateral edge of a lateral tibial plateau and/or the anterior edge of the anterior tibial plateau and the posterior edge of the posterior tibial plateau or centered over the tibial spines. In some embodiments, the aligning includes aligning the tibial component tangent with at least portions of the articular surface of at least one of the medial tibial plateau and the lateral tibial plateau. In some embodiments, the center of the graphical representation of the femoral component after the aligning and the center of the hip joint are used to determine a femoral mechanical axis. In some embodiments, the center of the graphical representation of the tibial component after aligning and the center of the ankle joint are used to determine a tibial mechanical axis. In some embodiments, the femoral and tibial mechanical axes are used to determine a desired leg axis correction relative to the mechanical axis of the leg. In some embodiments, the leg axis correction is one of a full correction to normal mechanical axis, partial correction to normal mechanical axis or no correction to normal mechanical axis. In some embodiments, the leg axis correction is used to determine the coordinates and/or alignment for the bone removal or bone cuts. In some embodiments, the bone removal or bone cuts for a full correction to normal mechanical axis or a partial correction to normal mechanical axis or no correction to normal mechanical axis are used to adjust the femoral and/or tibial prosthesis coordinates. In some embodiments, the bone removal or bone cuts are executed using at least one of robotic guidance, a surgical navigation system and visual guidance using the one or more HMDs. In some embodiments, the one or more HMDs project a graphical representation of one or more of a cut block, a cut plane or a drill path registered with and superimposed onto the physical joint for aligning one or more of a physical cut guide, a saw blade or a drill. Various exemplary embodiments will be described more fully hereinafter with reference to the accompanying drawings, in which some example embodiments are shown.
The present inventive concept may, however, be embodied in many different forms and should not be construed as limited to the example embodiments set forth herein. Rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present inventive concept to those skilled in the art. In the drawings, the sizes and relative sizes of layers and regions may be exaggerated for clarity. The term live data of the patient, as used herein, includes the surgical site, anatomy, anatomic structures or tissues and/or pathology, pathologic structures or tissues of the patient as seen by the surgeon's or viewer's eyes without information from virtual data, stereoscopic views of virtual data, or imaging studies. The term live data of the patient does not include internal or subsurface tissues or structures or hidden tissues or structures that can only be seen with assistance of a computer monitor or HMD. The terms real surgical instrument, actual surgical instrument, physical surgical instrument and surgical instrument are used interchangeably throughout the application; the terms real surgical instrument, actual surgical instrument, physical surgical instrument and surgical instrument do not include virtual surgical instruments. For example, the physical surgical instruments can be surgical instruments provided by manufacturers or vendors for spinal surgery, pedicle screw instrumentation, anterior spinal fusion, knee replacement, hip replacement, ankle replacement and/or shoulder replacement; physical surgical instruments can be, for example, cut blocks, pin guides, awls, reamers, impactors, broaches. Physical surgical instruments can be re-useable or disposable or combinations thereof. Physical surgical instruments can be patient specific. The term virtual surgical instrument does not include real surgical instrument, actual surgical instrument, physical surgical instrument and surgical instrument. The terms real surgical tool, actual surgical tool, physical surgical tool and surgical tool are used interchangeably throughout the application; the terms real surgical tool, actual surgical tool, physical surgical tool and surgical tool do not include virtual surgical tools. The physical surgical tools can be surgical tools provided by manufacturers or vendors. For example, the physical surgical tools can be pins, drills, saw blades, retractors, frames for tissue distraction and other tools used for orthopedic, neurologic, urologic or cardiovascular surgery. The term virtual surgical tool does not include real surgical tool, actual surgical tool, physical surgical tool and surgical tool. The terms real implant or implant component, actual implant or implant component, physical implant or implant component and implant or implant component are used interchangeably throughout the application; the terms real implant or implant component, actual implant or implant component, physical implant or implant component and implant or implant component do not include virtual implant or implant components. The physical implants or implant components can be implants or implant components provided by manufacturers or vendors. For example, the physical surgical implants can be a pedicle screw, a spinal rod, a spinal cage, a femoral or tibial component in a knee replacement, an acetabular cup or a femoral stem and head in hip replacement. 
The term virtual implant or implant component does not include real implant or implant component, actual implant or implant component, physical implant or implant component and implant or implant component. The terms "image capture system", "video capture system", "image or video capture system", "image and/or video capture system", and/or "optical imaging system" can be used interchangeably. In some embodiments, a single or more than one, e.g. two or three or more, image capture system, video capture system, image or video capture system, image and/or video capture system, and/or optical imaging system can be used in one or more locations (e.g. in one, two, three, or more locations), for example integrated into, attached to or separate from an HMD, attached to an OR table, attached to a fixed structure in the OR, integrated or attached to or separate from an instrument, integrated or attached to or separate from an arthroscope, integrated or attached to or separate from an endoscope, internal to the patient's skin, internal to a surgical site, internal to a target tissue, internal to an organ, internal to a cavity (e.g. an abdominal cavity or a bladder cavity or a cistern or a CSF space, or internal to a vascular lumen), internal to a vascular bifurcation, internal to a bowel, internal to a small intestine, internal to a stomach, internal to a biliary structure, internal to a urethra and/or ureter, internal to a renal pelvis, external to the patient's skin, external to a surgical site, external to a target tissue, external to an organ, external to a cavity (e.g. an abdominal cavity or a bladder cavity or a cistern or a CSF space, or external to a vascular lumen), external to a vascular bifurcation, external to a bowel, external to a small intestine, external to a stomach, external to a biliary structure, external to a urethra and/or ureter, and/or external to a renal pelvis. In some embodiments, the position and/or orientation and/or coordinates of the one or more image capture system, video capture system, image or video capture system, image and/or video capture system, and/or optical imaging system can be tracked using any of the registration and/or tracking methods described in the specification, e.g. direct tracking using optical imaging systems and/or a 3D scanner(s), in any of the foregoing locations and/or tissues and/or organs and any other location and/or tissue and/or organ described in the specification or known in the art. Tracking of the one or more image capture system, video capture system, image or video capture system, image and/or video capture system, and/or optical imaging system can, for example, be advantageous when they are integrated into or attached to an instrument, an arthroscope, an endoscope, and/or when they are located internal to any structures, e.g. inside a joint or a cavity or a lumen. In some embodiments, a single or more than one, e.g. two or three or more, 3D scanners can be present in one or more locations (e.g. in one, two, three, or more locations), for example integrated into, attached to or separate from an HMD, attached to an OR table, attached to a fixed structure in the OR, integrated or attached to or separate from an instrument, integrated or attached to or separate from an arthroscope, integrated or attached to or separate from an endoscope, internal to the patient's skin, internal to a surgical site, internal to a target tissue, internal to an organ, internal to a cavity (e.g.
an abdominal cavity or a bladder cavity or a cistern or a CSF space, and/or internal to a vascular lumen), internal to a vascular bifurcation, internal to a bowel, internal to a small intestine, internal to a stomach, internal to a biliary structure, internal to a urethra and/or ureter, internal to a renal pelvis, external to the patient's skin, external to a surgical site, external to a target tissue, external to an organ, external to a cavity (e.g. an abdominal cavity or a bladder cavity or a cistern or a CSF space, and/or external to a vascular lumen), external to a vascular bifurcation, external to a bowel, external to a small intestine, external to a stomach, external to a biliary structure, external to a urethra and/or ureter, and/or external to a renal pelvis. In some embodiments, the position and/or orientation and/or coordinates of the one or more 3D scanners can be tracked using any of the registration and/or tracking methods described in the specification, e.g. direct tracking using optical imaging systems and/or a 3D scanner(s), in any of the foregoing locations and/or tissues and/or organs and any other location and/or tissue and/or organ mentioned in the specification or known in the art. Tracking of the one or more 3D scanners can, for example, be advantageous when the one or more 3D scanners are integrated into or attached to an instrument, an arthroscope, an endoscope, and/or when they are located internal to any structures, e.g. inside a joint or a cavity or a lumen. In some embodiments, one or more image capture system, video capture system, image or video capture system, image and/or video capture system, and/or optical imaging system can be used in conjunction with one or more 3D scanners, e.g. in any of the foregoing locations and/or tissues and/or organs and any other location and/or tissue and/or organ described in the specification or known in the art. With surgical navigation, a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers, and the position and/or orientation of the first virtual instrument can be compared with the position and/or orientation of a corresponding second virtual instrument generated in a virtual surgical plan. Thus, with surgical navigation the positions and/or orientations of the first and the second virtual instruments are compared. Aspects of the disclosure relate to devices, systems and methods for positioning a virtual path, virtual plane, virtual tool, virtual surgical instrument or virtual implant component in a mixed reality environment using a HMD device, optionally coupled to one or more processing units. With guidance in a mixed reality environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the physical joint, spine or surgical site. Further, the physical guide, tool, instrument or implant can be aligned with the virtual surgical guide, tool, instrument or implant displayed or projected by the HMD. Thus, guidance in a mixed reality environment does not need to use a plurality of virtual representations of the guide, tool, instrument or implant and does not need to compare the positions and/or orientations of the plurality of virtual representations of the virtual guide, tool, instrument or implant.
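In such a mixed-reality workflow, the alignment of a tracked physical tool with the displayed virtual guide can be quantified directly, without a second virtual instrument. The sketch below is illustrative only; the tracked poses are placeholder values, and the two reported quantities (angular deviation from the planned trajectory and lateral offset of the tool tip from the planned entry line) are one plausible choice of feedback.

```python
import numpy as np

def deviation(tool_tip, tool_dir, plan_entry, plan_dir):
    """Angular deviation (deg) between the tracked tool axis and the
    planned trajectory, plus the lateral offset (mm) of the tool tip
    from the planned entry line."""
    tool_dir = tool_dir / np.linalg.norm(tool_dir)
    plan_dir = plan_dir / np.linalg.norm(plan_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(tool_dir, plan_dir), -1, 1)))
    # Lateral offset: distance from the tool tip to the planned line.
    v = tool_tip - plan_entry
    offset = np.linalg.norm(v - np.dot(v, plan_dir) * plan_dir)
    return angle, offset

# Placeholder tracking data in a shared coordinate system (mm).
angle, offset = deviation(tool_tip=np.array([1.0, 0.5, 0.0]),
                          tool_dir=np.array([0.02, 0.0, 1.0]),
                          plan_entry=np.array([0.0, 0.0, 0.0]),
                          plan_dir=np.array([0.0, 0.0, 1.0]))
print(f"angular deviation: {angle:.2f} deg, entry offset: {offset:.2f} mm")
```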
In some embodiments, the HMD can display one or more of a virtual surgical tool, virtual surgical instrument including a virtual surgical guide or virtual cut block, virtual trial implant, virtual implant component, virtual implant or virtual device, predetermined start point, predetermined start position, predetermined start orientation or alignment, predetermined intermediate point(s), predetermined intermediate position(s), predetermined intermediate orientation or alignment, predetermined end point, predetermined end position, predetermined end orientation or alignment, predetermined path, predetermined plane, predetermined cut plane, predetermined contour or outline or cross-section or surface features or shape or projection, predetermined depth marker or depth gauge, predetermined stop, predetermined angle or orientation or rotation marker, predetermined axis, e.g. rotation axis, flexion axis, extension axis, predetermined axis of the virtual surgical tool, virtual surgical instrument including virtual surgical guide or cut block, virtual trial implant, virtual implant component, implant or device, estimated or predetermined non-visualized portions for one or more devices or implants or implant components or surgical instruments or surgical tools, and/or one or more of a predetermined tissue change or alteration. In some embodiments, the one or more of a virtual surgical tool, virtual surgical instrument including a virtual surgical guide or virtual cut block, virtual trial implant, virtual implant component, virtual implant or virtual device, predetermined start point, predetermined start position, predetermined start orientation or alignment, predetermined intermediate point(s), predetermined intermediate position(s), predetermined intermediate orientation or alignment, predetermined end point, predetermined end position, predetermined end orientation or alignment, predetermined path, predetermined plane, predetermined cut plane, predetermined contour or outline or cross-section or surface features or shape or projection, predetermined depth marker or depth gauge, predetermined stop, predetermined angle or orientation or rotation marker, predetermined axis, e.g. rotation axis, flexion axis, extension axis, predetermined axis of the virtual surgical tool, virtual surgical instrument including virtual surgical guide or cut block, virtual trial implant, virtual implant component, implant or device, estimated or predetermined non-visualized portions for one or more devices or implants or implant components or surgical instruments or surgical tools, and/or one or more of a predetermined tissue change or alteration can be displayed by the HMD at one or more predetermined coordinates, e.g. indicating a predetermined position, predetermined orientation or combination thereof for superimposing and/or aligning a physical surgical tool, physical surgical instrument, physical implant, or a physical device.
In some embodiments, one or more of a virtual surgical tool, virtual surgical instrument including a virtual surgical guide or virtual cut block, virtual trial implant, virtual implant component, virtual implant or virtual device, predetermined start point, predetermined start position, predetermined start orientation or alignment, predetermined intermediate point(s), predetermined intermediate position(s), predetermined intermediate orientation or alignment, predetermined end point, predetermined end position, predetermined end orientation or alignment, predetermined path, predetermined plane, predetermined cut plane, predetermined contour or outline or cross-section or surface features or shape or projection, predetermined depth marker or depth gauge, predetermined stop, predetermined angle or orientation or rotation marker, predetermined axis, e.g. rotation axis, flexion axis, extension axis, predetermined axis of the virtual surgical tool, virtual surgical instrument including virtual surgical guide or cut block, virtual trial implant, virtual implant component, implant or device, estimated or predetermined non-visualized portions for one or more devices or implants or implant components or surgical instruments or surgical tools, and/or one or more of a predetermined tissue change or alteration displayed by the HMD can be a placement indicator for one or more of a physical surgical tool, physical surgical instrument, physical implant, or a physical device. Any of a position, location, orientation, alignment, direction, speed of movement, or force applied of a surgical instrument or tool, virtual and/or physical, can be predetermined using, for example, pre-operative imaging studies, pre-operative data, pre-operative measurements, intra-operative imaging studies, intra-operative data, and/or intra-operative measurements. Any of a position, location, orientation, alignment, sagittal plane alignment, coronal plane alignment, axial plane alignment, rotation, slope of implantation, angle of implantation, flexion of implant component, offset, anteversion, retroversion, and position, location, orientation, alignment relative to one or more anatomic landmarks, position, location, orientation, alignment relative to one or more anatomic planes, position, location, orientation, alignment relative to one or more anatomic axes, position, location, orientation, alignment relative to one or more biomechanical axes, position, location, orientation, alignment relative to a mechanical axis of a trial implant, an implant component or implant, virtual and/or physical, can be predetermined using, for example, pre-operative imaging studies, pre-operative data, pre-operative measurements, intra-operative imaging studies, intra-operative data, and/or intra-operative measurements. Intra-operative measurements can include measurements for purposes of registration, e.g. of a joint, a spine, a surgical site, a bone, a cartilage, a HMD, a surgical tool or instrument, a trial implant, an implant component or an implant. In some embodiments, measurements can include measurements of coordinate(s) or coordinate information. A coordinate can be a set of numbers used in specifying the location of a point on a line, on a surface, or in space, e.g. x, y, z. Coordinates can be predetermined, e.g. for a virtual surgical guide. In some embodiments, multiple coordinate systems can be used instead of a common or shared coordinate system.
In this case, coordinate transfers can be applied from one coordinate system to another coordinate system, for example for registering the HMD, live data of the patient including the surgical site, virtual instruments and/or virtual implants and physical instruments and physical implants.
Head Mounted Displays
In some embodiments, head mounted displays (HMDs) can be used. Head mounted displays can be of non-see through type (such as the Oculus VR HMD (Facebook, San Mateo, CA)), optionally with a video camera to image the live data of the patient as a video-see through head mounted display, or they can be of optical see through type as an optical see-through head mounted display or see-through optical head mounted display. A HMD can include a first display unit for the left eye and a second display unit for the right eye. The first and second display units can be transparent, semi-transparent or non-transparent. The system, comprising, for example, the HMD, one or more computer processors and/or an optional marker attached to the patient, can be configured to generate a first view of virtual data, e.g. a virtual surgical guide, for the first display unit and a second view of virtual data, e.g. a virtual surgical guide, for the second display unit. The virtual data can be a placement indicator for a physical surgical tool, physical surgical instrument, physical implant or physical device. The virtual data, e.g. a virtual surgical guide, can be a three-dimensional digital representation at one or more predetermined coordinates indicating, for example, a predetermined position, predetermined orientation or combination thereof for superimposing and/or aligning a physical surgical tool, physical surgical instrument, physical implant or physical device. The system can be configured to generate a first view displayed by a first display unit (e.g. for the left eye) and a second view displayed by a second display unit (e.g. for the right eye), wherein the first view and the second view create a 3D stereoscopic view of the virtual data, e.g. a virtual surgical guide, which can optionally be based on one or more predetermined coordinates. The system can be configured to display the 3D stereoscopic view by the HMD onto the patient. In some embodiments, a pair of glasses is utilized. The glasses can include an optical head-mounted display. An optical see through head-mounted display (OHMD) can be a wearable display that has the capability of reflecting projected images as well as allowing the user to see through it. Various types of OHMDs known in the art can be used in order to practice embodiments of the present disclosure. These include curved mirror or curved combiner OHMDs as well as wave-guide or light-guide OHMDs. The OHMDs can optionally utilize diffraction optics, holographic optics, polarized optics, and reflective optics. Traditional input devices that can be used with the HMDs include, but are not limited to, touchpad or buttons, smartphone controllers, speech recognition, and gesture recognition. Advanced interfaces are possible, e.g. a brain-computer interface. Optionally, a computer or server or a workstation can transmit data to the HMD. The data transmission can occur via cable, Bluetooth, WiFi, optical signals and any other method or mode of data transmission known in the art. The HMD can display virtual data, e.g. virtual data of the patient, in uncompressed form or in compressed form.
Virtual data of a patient can optionally be reduced in resolution when transmitted to the HMD or when displayed by the HMD. When virtual data are transmitted to the HMD, they can be in compressed form during the transmission. The HMD can then optionally decompress them so that uncompressed virtual data are being displayed by the HMD. Alternatively, when virtual data are transmitted to the HMD, they can be of reduced resolution during the transmission, for example by increasing the slice thickness of image data prior to the transmission. The HMD can then optionally increase the resolution, for example by re-interpolating to the original slice thickness of the image data or even thinner slices, so that virtual data with resolution equal to or greater than the original virtual data, or at least greater in resolution than the transmitted data, are being displayed by the HMD. In some embodiments, the HMD can transmit data back to a computer, a server or a workstation. Such data can include, but are not limited to:
Positional, orientational or directional information about the HMD or the operator or surgeon wearing the HMD
Changes in position, orientation or direction of the HMD
Data generated by one or more IMUs
Data generated by markers (radiofrequency, optical, light, other) attached to, integrated with or coupled to the HMD
Data generated by a surgical navigation system attached to, integrated with or coupled to the HMD
Data generated by an image and/or video capture system attached to, integrated with or coupled to the HMD
Parallax data, e.g. using two or more image and/or video capture systems attached to, integrated with or coupled to the HMD, for example one positioned over or under or near the left eye and a second positioned over or under or near the right eye
Distance data, e.g. parallax data generated by two or more image and/or video capture systems evaluating changes in distance between the HMD and a surgical field or an object
Motion parallax data
Data related to calibration or registration phantoms (see other sections of this specification)
Any type of live data of the patient captured by the HMD including image and/or video capture systems attached to, integrated with or coupled to the HMD
For example, alterations to a live surgical site
For example, use of certain surgical instruments detected by the image and/or video capture system
For example, use of certain medical devices or trial implants detected by the image and/or video capture system
Any type of modification to a surgical plan
Portions or aspects of a live surgical plan
Portions or aspects of a virtual surgical plan
Radiofrequency tags used throughout the embodiments can be of active or passive kind with or without a battery. Exemplary optical see through head mounted displays include the ODG R-7, R-8 and R-9 smart glasses from ODG (Osterhout Group, San Francisco, CA), the NVIDIA 942 3-D vision wireless glasses (NVIDIA, Santa Clara, CA), the Microsoft Hololens and Hololens 2 (Microsoft, Redmond, WI), the Daqri Smart Glass (Daqri, Los Angeles, CA), the Meta 2 (Meta Vision, San Mateo, CA), the Moverio BT-300 (Epson, Suwa, Japan), the Blade 3000 and the Blade M300 (Vuzix, West Henrietta, NY), and the Lenovo ThinkReality A6 (Lenovo, Beijing, China). The Microsoft HoloLens is manufactured by Microsoft. It is a pair of augmented reality smart glasses. Hololens is a see-through optical head mounted display (or optical see through head mounted display) 1125 (see FIG. 1).
An optical see through head mounted display can include a clear or transparent front portion or visor 10, a combiner 12 or mirror system, a nose pad 15, a front facing portion 18, a side facing portion 20, and an occipital portion 22. At least one or more cameras 21 can be included, which can be used, for example, for inside-out-tracking. Hololens can use the Windows 10 operating system. The front portion of the Hololens includes, among others, sensors, related hardware, several cameras and processors. The visor includes a pair of transparent combiner lenses, in which the projected images are displayed. The HoloLens can be adjusted for the interpupillary distance (IPD) using an integrated program that recognizes gestures. A pair of speakers is also integrated. The speakers do not exclude external sounds and allow the user to hear virtual sounds. A USB 2.0 micro-B receptacle is integrated. A 3.5 mm audio jack is also present. The HoloLens has an inertial measurement unit (IMU) with an accelerometer, gyroscope, and a magnetometer, four environment mapping sensors/cameras (two on each side), a depth camera with a 120°×120° angle of view, a 2.4-megapixel photographic video camera, a four-microphone array, and an ambient light sensor. Hololens has an Intel Cherry Trail SoC containing the CPU and GPU. HoloLens also includes a custom-made Microsoft Holographic Processing Unit (HPU). The SoC and the HPU each have 1 GB LPDDR3 and share 8 MB SRAM, with the SoC also controlling 64 GB eMMC and running the Windows 10 operating system. The HPU processes and integrates data from the sensors, as well as handling tasks such as spatial mapping, gesture recognition, and voice and speech recognition. HoloLens includes IEEE 802.11ac Wi-Fi and Bluetooth 4.1 Low Energy (LE) wireless connectivity. The headset uses Bluetooth LE and can connect to a clicker, a finger-operated input device that can be used for selecting menus and functions. A number of applications are available for Microsoft Hololens, for example a catalogue of holograms, HoloStudio, a 3D modelling application by Microsoft with 3D print capability, Autodesk Maya 3D creation application, FreeForm, integrating HoloLens with the Autodesk Fusion 360 cloud-based 3D development application, and others. HoloLens utilizing the HPU can employ sensual and natural interface commands: voice, gesture, and gaze. Gaze commands, e.g. head-tracking, allow the user to bring application focus to whatever the user is perceiving. Any virtual application or button can be selected using an air tap method, similar to clicking a virtual computer mouse. The tap can be held for a drag simulation to move a display. Voice commands can also be utilized. The HoloLens shell utilizes many components or concepts from the Windows desktop environment. A bloom gesture for opening the main menu is performed by opening one's hand, with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked and/or resized. Virtual windows or menus can be fixed at locations or physical objects. Virtual windows or menus can move with the user or can be fixed in relationship to the user, or they can follow the user as he or she moves around. The Microsoft HoloLens App for Windows 10 PC's and Windows 10 Mobile devices can be used by developers to run apps and to view live stream from the HoloLens user's point of view, and to capture augmented reality photos and videos. Almost all Universal Windows Platform apps can run on Hololens. These apps can be projected in 2D.
Select Windows 10 APIs are currently supported by HoloLens. HoloLens apps can also be developed on Windows 10 PCs. Holographic applications can use Windows Holographic APIs. Unity (Unity Technologies, San Francisco, CA) and Vuforia (PTC, Inc., Needham, MA) are some of the tools that can be utilized. Applications can also be developed using DirectX and Windows APIs. Many of the embodiments throughout the specification can also be implemented using non-see through head mounted displays, e.g. virtual reality head mounted displays. Non-see through head mounted displays can be used, for example, with one or more image or video capture systems (e.g. cameras) or 3D scanners to image the live data of the patient, e.g. a skin, a subcutaneous tissue, a surgical site, an anatomic landmark, an organ, or an altered tissue, e.g. a surgically altered tissue, as well as any physical surgical tools, instruments, devices and/or implants, or portions of the surgeon's body, e.g. his or her fingers, hands or arms. Non-see through HMDs can be used, for example, for displaying virtual data, e.g. pre- or intra-operative imaging data of the patient, virtual surgical guides, virtual tools, virtual instruments and/or virtual implants, for example together with live data of the patient, e.g. from the surgical site, imaged through the one or more cameras or video or image capture systems or 3D scanners, for knee replacement surgery, hip replacement surgery, shoulder replacement surgery, ankle replacement surgery, spinal surgery, e.g. spinal fusion, brain surgery, heart surgery, lung surgery, liver surgery, spleen surgery, kidney surgery, vascular surgery or procedures, prostate, genitourinary, uterine or other abdominal or pelvic surgery, and trauma surgery. Exemplary non-see through head mounted displays, e.g. virtual reality head mounted displays, are, for example, the Oculus Rift (Oculus VR/Facebook, Menlo Park, CA), the HTC Vive (HTC, Taipei, Taiwan) and the Totem (Vrvana, Apple, Cupertino, CA). When combined with a video camera, e.g. for streaming live images from a surgical site, these VR headsets can be configured as a video see through head mounted display.

Computer Graphics Viewing Pipeline

In some embodiments, the HMD uses a computer graphics viewing pipeline that consists of the following steps to display 3D objects or 2D objects positioned in 3D space or other computer-generated objects and models:
1. Registration
2. View Projection

Registration: In some embodiments, the different objects to be displayed by the HMD computer graphics system (for instance virtual anatomical models, virtual models of instruments, geometric and surgical references and guides) are initially all defined in their own independent model coordinate system. During the registration process, spatial relationships between the different objects are defined, and each object is transformed from its own model coordinate system into a common global coordinate system. Different techniques that are described below can be applied for the registration process. For augmented reality HMDs that superimpose computer-generated objects with live views of the physical environment, the global coordinate system is defined by the environment. A process called spatial mapping, described below, creates a computer representation of the environment that allows for merging and registration with the computer-generated objects, thus defining a spatial relationship between the computer-generated objects and the physical environment.
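A minimal sketch of this registration step, assuming purely rigid spatial relationships; the function names and numeric values are illustrative, not taken from the specification:

```python
import numpy as np

def make_pose(rotation_3x3, translation_3):
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = translation_3
    return T

def model_to_global(points_model, T_model_to_global):
    """Transform Nx3 model-space points into the common global coordinate system."""
    n = points_model.shape[0]
    homogeneous = np.hstack([points_model, np.ones((n, 1))])
    return (T_model_to_global @ homogeneous.T).T[:, :3]

# Example: a virtual instrument model registered into the global (environment) system.
R = np.eye(3)                        # no rotation in this toy example
t = np.array([0.10, -0.05, 0.30])    # 10 cm right, 5 cm down, 30 cm forward (meters)
T_instrument = make_pose(R, t)
tip_model = np.array([[0.0, 0.0, 0.0]])   # instrument tip in its own model system
tip_global = model_to_global(tip_model, T_instrument)
```

Once every object carries such a transform into the global system, the view projection step described next can treat all objects uniformly.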
View Projection

In some embodiments, once all objects to be displayed have been registered and transformed into the common global coordinate system, they are prepared for viewing on a display by transforming their coordinates from the global coordinate system into the view coordinate system and subsequently projecting them onto the display plane. This view projection step can use the viewpoint and view direction to define the transformations applied in this step. For stereoscopic displays, such as an HMD, two different view projections can be used, one for the left eye and the other one for the right eye. For see through HMDs (augmented reality HMDs), the position of the viewpoint and view direction relative to the physical environment can be known in order to correctly superimpose the computer-generated objects with the physical environment. As the viewpoint and view direction change, for example due to head movement, the view projections are updated so that the computer-generated display follows the new view.

Positional Tracking Systems

In some embodiments, the position and/or orientation of the HMDs can be tracked. For example, in order to calculate and update the view projection of the computer graphics viewing pipeline as described in the previous section and to display the computer-generated overlay images in the HMD, the view position and direction need to be known. Different methods to track the HMDs can be used. For example, the HMDs can be tracked using outside-in tracking. For outside-in tracking, one or more external sensors or cameras can be installed in a stationary location, e.g. on the ceiling, the wall or on a stand. The sensors or cameras capture the movement of the HMDs, for example through shape detection or markers attached to the HMDs or the user's head. The sensor data or camera image is typically processed on a central computer to which the one or more sensors or cameras are connected. The tracking information obtained on the central computer can then be used to compute the view projection for the HMD (including multiple HMDs). The view projection can be computed on the central computer or on the HMD. Outside-in tracking can be performed with use of a surgical navigation system using, for example, infrared and/or RF markers, active and/or passive markers. One or more external infrared or RF emitters and receivers or cameras can be installed in a stationary location, e.g. on the ceiling, the wall or a stand, or attached to the OR table. One or more infrared and/or RF markers, active and/or passive markers, can be applied to the HMD for tracking the coordinates and/or the position and/or orientation of the HMD. One or more infrared and/or RF markers, active and/or passive markers, can be applied to or near the anatomic structure for tracking the coordinates and/or the position and/or orientation of the anatomic structure. One or more infrared and/or RF markers, active and/or passive markers, can be applied to a physical tool, physical instrument, physical implant or physical device for tracking the coordinates and/or the position and/or orientation of the physical tool, physical instrument, physical implant or physical device. One or more infrared and/or RF markers, active and/or passive markers, can be applied to the surgeon. In some embodiments, outside-in tracking can be performed with use of an image capture or video capture system using, for example, optical markers, e.g. with geometric patterns.
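A minimal sketch of this kind of marker-based pose estimation, assuming four markers at known positions and using OpenCV's solvePnP; all coordinates and camera intrinsics here are illustrative, not from the specification:

```python
import numpy as np
import cv2  # OpenCV

# Known 3D marker positions in the marker coordinate system (meters) - illustrative.
MARKERS = np.array([[0.0, 0.0, 0.0],
                    [0.5, 0.0, 0.0],
                    [0.5, 0.5, 0.0],
                    [0.0, 0.5, 0.0]], dtype=np.float64)

def pose_from_markers(detected_px, camera_matrix, dist_coeffs):
    """Estimate a rigid pose from the 2D pixel positions of known markers.

    detected_px: 4x2 float array of marker centers found in the camera image.
    Returns the 4x4 transform from the marker coordinate system into the
    camera coordinate system. For outside-in tracking the camera is
    stationary and the markers ride on the HMD; for inside-out tracking the
    camera rides on the HMD and the markers are fixed in the environment.
    """
    ok, rvec, tvec = cv2.solvePnP(MARKERS, detected_px,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 rotation matrix
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```

The resulting transform can then be used to update the view projection as described above, on the central computer or on the HMD itself.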
One or more external cameras can be installed in a stationary location, e.g. on the ceiling, the wall or a stand, or attached to the OR table. One or more optical markers can be applied to the HMD for tracking the coordinates and/or the position and/or orientation of the HMD. One or more optical markers can be applied to or near the anatomic structure for tracking the coordinates and/or the position and/or orientation of the anatomic structure. One or more optical markers can be applied to a physical tool, physical instrument, physical implant or physical device for tracking the coordinates and/or the position and/or orientation of the physical tool, physical instrument, physical implant or physical device. One or more optical markers can be applied to the surgeon. In some embodiments, including for outside-in and inside-out tracking, a camera, image capture or video capture system can detect light from the spectrum visible to the human eye, e.g. from about 380 to 750 nanometers wavelength, or from about 400 to 720 nanometers wavelength, or from about 420 to 680 nanometers wavelength, or similar combinations. In embodiments throughout the specification, including for outside-in and inside-out tracking, a camera, image capture or video capture system can detect light from the spectrum not visible to the human eye, e.g. from the infrared spectrum, e.g. from 700 nm or above to, for example, 1 mm wavelength, 720 nm or above to, for example, 1 mm wavelength, 740 nm or above to, for example, 1 mm wavelength, or similar combinations. In embodiments throughout the specification, including for outside-in and inside-out tracking, a camera, image capture or video capture system can detect light from the spectrum visible and from the spectrum not visible to the human eye. In some embodiments, including for outside-in and inside-out tracking, a marker, e.g. a marker with a geometric pattern and/or a marker used with a navigation system, can be configured to reflect or emit light from the spectrum visible to the human eye, e.g. from about 380 to 750 nanometers wavelength, or from about 400 to 720 nanometers wavelength, or from about 420 to 680 nanometers wavelength, or similar combinations. In embodiments throughout the specification, including for outside-in and inside-out tracking, a marker, e.g. a marker with a geometric pattern and/or a marker used with a navigation system, can be configured to reflect or emit light from the spectrum not visible to the human eye, e.g. from the infrared spectrum, e.g. from 700 nm or above to, for example, 1 mm wavelength, 720 nm or above to, for example, 1 mm wavelength, 740 nm or above to, for example, 1 mm wavelength, or similar combinations. In embodiments throughout the specification, including for outside-in and inside-out tracking, a marker, e.g. a marker with a geometric pattern and/or a marker used with a navigation system, can be configured to reflect or emit light from the spectrum visible and from the spectrum not visible to the human eye. In some embodiments, outside-in tracking can be performed with use of a 3D scanner or a laser scanner. One or more external 3D scanners or laser scanners can be installed in a stationary location, e.g. on the ceiling, the wall or a stand, or attached to the OR table. The 3D scanner or laser scanner can be used to track objects directly, e.g. the HMD, the anatomic structure, the physical tool, physical instrument, physical implant or physical device, or the surgeon.
Optionally, markers can be applied to one or more of the HMD, the anatomic structure, the physical tool, physical instrument, physical implant or physical device, or the surgeon for tracking any of the foregoing using the 3D scanner or laser scanner. In some embodiments, the inside-out tracking method can be employed. One or more sensors or cameras can be attached to the HMD or the user's head or integrated with the HMD. The sensors or cameras can be dedicated to the tracking functionality. The cameras attached to or integrated into the HMD can include infrared cameras. Infrared LEDs or emitters can also be included in the HMD. The sensors or cameras attached to or integrated into the HMD can include an image capture system, a video capture system, a 3D scanner, a laser scanner, a surgical navigation system or a depth camera. In some embodiments, the data collected by the sensors or cameras are used for positional tracking as well as for other purposes, e.g. image recording or spatial mapping. Information gathered by the sensors and/or cameras is used to determine the HMD's position and orientation in 3D space. This can be done, for example, by detecting optical, infrared, RF or electromagnetic markers attached to the external environment. Changes in the position of the markers relative to the sensors or cameras are used to continuously determine the position and orientation of the HMD. Data processing of the sensor and camera information can be performed by a mobile processing unit attached to or integrated with the HMD, which can allow for increased mobility of the HMD user as compared to outside-in tracking. Alternatively, the data can be transmitted to and processed on the central computer. Inside-out tracking can also utilize markerless techniques. For example, spatial mapping data acquired by the HMD sensors can be aligned with a virtual model of the environment, thus determining the position and orientation of the HMD in the 3D environment. Alternatively, or additionally, information from inertial measurement units can be used. Potential advantages of inside-out tracking include greater mobility for the HMD user, a greater field of view not limited by the viewing angle of stationary cameras, and reduced or eliminated problems with marker occlusion.

Eye and Gaze Tracking Systems

Some aspects of the present disclosure provide for methods and devices of using the human eye, including eye movements and lid movements as well as movements induced by the peri-orbital muscles, for executing computer commands. Methods of executing computer commands by way of facial movements and movements of the head are provided. Command execution induced by eye movements and lid movements as well as movements induced by the peri-orbital muscles, facial movements and head movements can be advantageous in environments where an operator does not have his or her hands available to type on a keyboard or to execute commands on a touchpad or other hand-computer interface. Such situations include, but are not limited to, industrial applications including automotive and airplane manufacturing, chip manufacturing, medical or surgical procedures, and many other potential applications. In some embodiments, the head mounted display can include an eye tracking system. Different types of eye tracking systems can be utilized. The examples provided below are in no way thought to be limiting. Any eye tracking system known in the art can be utilized.
Eye movement can be divided into fixations and saccades: when the eye gaze pauses in a certain position, and when it moves to another position, respectively. The resulting series of fixations and saccades can be defined as a scan path. The central one or two degrees of the visual angle provide most of the visual information; the input from the periphery is less informative. Thus, the locations of fixations along a scan path show what information locations were processed during an eye tracking session, for example during a surgical procedure. Eye trackers can measure rotation or movement of the eye in several ways, for example via measurement of the movement of an object (for example, a form of contact lens) attached to the eye, optical tracking without direct contact to the eye, and measurement of electric potentials using electrodes placed around the eyes. If an attachment to the eye is used, it can, for example, be a special contact lens with an embedded mirror or magnetic field sensor. The movement of the attachment can be measured with the assumption that it does not slip significantly as the eye rotates. Measurements with tight-fitting contact lenses can provide very accurate measurements of eye movement. Additionally, magnetic search coils can be utilized, which allow measurement of eye movement in horizontal, vertical and torsion directions. Alternatively, non-contact, optical methods for measuring eye motion can be used. With this technology, light, optionally infrared, can be reflected from the eye and can be sensed by an optical sensor or a video camera. The information can then be measured to extract eye rotation and/or movement from changes in reflections. Optical sensor or video-based eye trackers can use the corneal reflection (the so-called first Purkinje image) and the center of the pupil as features to track, optionally over time. A more sensitive type of eye tracker, the dual-Purkinje eye tracker, uses reflections from the front of the cornea (first Purkinje image) and the back of the lens (fourth Purkinje image) as features to track. An even more sensitive method of tracking is to image features from inside the eye, such as the retinal blood vessels, and follow these features as the eye rotates and/or moves. Optical methods, particularly those based on optical sensors or video recording, can be used for gaze tracking. In some embodiments, optical or video-based eye trackers can be used. A camera focuses on one or both eyes and tracks their movement as the viewer performs a function such as a surgical procedure. The eye tracker can use the center of the pupil for tracking. Infrared or near-infrared non-collimated light can be utilized to create corneal reflections. The vector between the pupil center and the corneal reflections can be used to compute the point of regard on a surface or the gaze direction. Optionally, a calibration procedure can be performed at the beginning of the eye tracking. Bright-pupil and dark-pupil eye tracking can be employed. Their difference is based on the location of the illumination source with respect to the optics. If the illumination is co-axial relative to the optical path, then the eye acts as a retroreflector as the light reflects off the retina, creating a bright-pupil effect similar to red eye. If the illumination source is offset from the optical path, then the pupil appears dark because the retroreflection from the retina is directed away from the optical sensor or camera.
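A minimal sketch of mapping the pupil-center/corneal-reflection vector described above to a point of regard, assuming a per-user calibration in which the wearer fixates known targets; the second-order polynomial model and all values are illustrative assumptions:

```python
import numpy as np

def design_matrix(v):
    """Second-order polynomial features of the pupil-minus-glint vector (vx, vy)."""
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def calibrate(vectors, screen_points):
    """Fit the vector-to-gaze-point mapping by least squares."""
    A = design_matrix(vectors)
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs

def gaze_point(vector, coeffs):
    """Predict the point of regard for one measured vector."""
    return design_matrix(vector.reshape(1, 2)) @ coeffs

# Calibration: the user fixates a few known targets; values are illustrative.
v_cal = np.array([[-0.2, 0.1], [0.0, 0.1], [0.2, 0.1],
                  [-0.2, -0.1], [0.0, -0.1], [0.2, -0.1]])
targets = np.array([[100, 100], [640, 100], [1180, 100],
                    [100, 620], [640, 620], [1180, 620]], dtype=float)
coeffs = calibrate(v_cal, targets)
print(gaze_point(np.array([0.1, 0.0]), coeffs))
```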
Bright-pupil tracking can have the benefit of greater iris/pupil contrast, allowing more robust eye tracking with all iris pigmentation. It can also reduce interference caused by eyelashes. It can allow for tracking in lighting conditions that include darkness and very bright lighting situations. The optical tracking method can include tracking movement of the eye including the pupil as described above. The optical tracking method can also include tracking of the movement of the eye lids and also periorbital and facial muscles. In some embodiments, the eye tracking apparatus is integrated in an HMD. In some embodiments, head motion can be simultaneously tracked, for example using a combination of accelerometers and gyroscopes forming an inertial measurement unit (see below). In some embodiments, electric potentials can be measured with electrodes placed around the eyes. The eyes generate an electric potential field, which can also be detected if the eyes are closed. The electric potential field can be modelled as being generated by a dipole with the positive pole at the cornea and the negative pole at the retina. It can be measured by placing two electrodes on the skin around the eye. The electric potentials measured in this manner are called an electro-oculogram (EOG). If the eyes move from the center position towards the periphery, the retina approaches one electrode while the cornea approaches the opposing one. This change in the orientation of the dipole, and consequently the electric potential field, results in a change in the measured electro-oculogram signal. By analyzing such changes, eye movement can be assessed. Two separate movement directions, a horizontal and a vertical, can be identified. If a posterior skull electrode is used, an EOG component in the radial direction can be measured. This is typically the average of the EOG channels referenced to the posterior skull electrode. The radial EOG channel can measure saccadic spike potentials originating from extra-ocular muscles at the onset of saccades. EOG can be limited for measuring slow eye movement and detecting gaze direction. EOG is, however, well suited for measuring rapid or saccadic eye movement associated with gaze shifts and for detecting blinks. Unlike optical or video-based eye trackers, EOG allows recording of eye movements even with eyes closed. The major disadvantage of EOG is its relatively poor gaze direction accuracy compared to an optical or video tracker. Optionally, both methods, optical or video tracking and EOG, can be combined in select embodiments. A sampling rate of 15, 20, 25, 30, 50, 60, 100, 120, 240, 250, 500, 1000 Hz or greater can be used. Any sampling frequency is possible. In many embodiments, sampling rates greater than 30 Hz will be preferred.

Measuring Location, Orientation, Acceleration

The location, orientation, and acceleration of the human head, portions of the human body, e.g. hands, arms, legs or feet, as well as portions of the patient's body, e.g. the patient's head or extremities, including the hip, knee, ankle, foot, shoulder, elbow, hand or wrist and any other body part, can, for example, be measured with a combination of gyroscopes and accelerometers. In select applications, magnetometers may also be used. Such measurement systems using any of these components can be defined as inertial measurement units (IMUs).
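As one illustration of how such accelerometer and gyroscope data can be combined into an orientation estimate, a generic complementary filter is sketched below; the axis conventions and blending constant are assumptions, not taken from the specification:

```python
import numpy as np

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One update step fusing gyroscope and accelerometer data.

    gyro: angular rates (rad/s) about the x (roll) and y (pitch) axes.
    accel: accelerations (m/s^2) along x, y, z; the gravity vector supplies
    an absolute pitch/roll reference that corrects gyroscope drift.
    """
    # Integrate angular rate (responsive, but drifts over time).
    pitch_gyro = pitch + gyro[1] * dt
    roll_gyro = roll + gyro[0] * dt
    # Absolute angles from the gravity direction (noisy, but drift-free).
    ax, ay, az = accel
    pitch_acc = np.arctan2(-ax, np.hypot(ay, az))
    roll_acc = np.arctan2(ay, az)
    # Blend: trust the gyroscope short-term, the accelerometer long-term.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    return pitch, roll
```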
As used herein, the term IMU relates to an electronic device that can measure and transmit information on a body's specific force, angular rate, and, optionally, the magnetic field surrounding the body, using a combination of accelerometers and gyroscopes, and, optionally, magnetometers. An IMU or components thereof can be coupled with or registered with a navigation system or a robot, for example by registering a body or portions of a body within a shared coordinate system. Optionally, an IMU can be wireless, for example using WiFi networks or Bluetooth networks. Pairs of accelerometers extended over a region of space can be used to detect differences (gradients) in the proper accelerations of the frames of reference associated with those points. Single- and multi-axis models of accelerometer are available to detect magnitude and direction of the acceleration, as a vector quantity, and can be used to sense orientation (because the direction of weight changes), coordinate acceleration (so long as it produces g-force or a change in g-force), vibration, and shock. Micromachined accelerometers can be utilized in some embodiments to detect the position of the device or the operator's head. Piezoelectric, piezoresistive and capacitive devices can be used to convert the mechanical motion into an electrical signal. Piezoelectric accelerometers rely on piezoceramics or single crystals. Piezoresistive accelerometers can also be utilized. Capacitive accelerometers typically use a silicon micro-machined sensing element. Accelerometers used in some of the embodiments can include small micro-electro-mechanical systems (MEMS), consisting, for example, of little more than a cantilever beam with a proof mass. Optionally, the accelerometer can be integrated in the head mounted device, and both the outputs from the eye tracking system and the accelerometer(s) can be utilized for command execution. With an IMU, the following exemplary information can be captured about the operator and the patient and respective body parts, including a moving joint: speed, velocity, acceleration, position in space, positional change, angular orientation, change in angular orientation, alignment, orientation, and/or direction of movement and/or speed of movement (e.g. through sequential measurements). Operator and/or patient body parts about which such information can be transmitted by the IMU include, but are not limited to: head, chest, trunk, shoulder, elbow, wrist, hand, fingers, arm, hip, knee, ankle, foot, toes, leg, and inner organs, e.g. brain, heart, lungs, liver, spleen, bowel, bladder, etc. Any number of IMUs can be placed on the HMD, the operator and/or the patient and, optionally, these IMUs can be cross-referenced to each other within a single or multiple coordinate systems or, optionally, they can be cross-referenced in relationship to an HMD, a second and third or more HMDs, a navigation system or a robot and one or more coordinate systems used by such navigation system and/or robot. A navigation system can be used in conjunction with an HMD without the use of an IMU. For example, navigation markers including infrared markers, retroreflective markers, and RF markers can be attached to an HMD and, optionally, to portions or segments of the patient or the patient's anatomy.
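Conceptually, once the HMD and the patient anatomy are both tracked in one coordinate system, whether by IMUs, navigation markers or optical markers, keeping a virtual overlay locked to the patient reduces to composing the tracked poses. A minimal sketch under that assumption; all names are illustrative, not the specification's implementation:

```python
import numpy as np

def overlay_pose(T_tracker_from_patient, T_tracker_from_hmd):
    """Pose of the patient anatomy expressed in the HMD's coordinate system.

    Both inputs are 4x4 poses reported by the tracking system. Re-evaluating
    this product every frame keeps the virtual overlay aligned with the live
    patient as the HMD (and the wearer's head) moves.
    """
    return np.linalg.inv(T_tracker_from_hmd) @ T_tracker_from_patient

# With several HMDs tracked in the same coordinate system, each viewer gets
# an individually updated overlay from the same registration.
def overlay_poses(T_tracker_from_patient, hmd_poses):
    return [overlay_pose(T_tracker_from_patient, T) for T in hmd_poses]
```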
The HMD and the patient or the patient's anatomy can be cross-referenced using such navigation markers, or registered in one or more coordinate systems used by the navigation system, and movements of the HMD or the operator wearing the HMD can be registered in relationship to the patient within these one or more coordinate systems. Once the virtual data and the live data of the patient and the HMD are registered in the same coordinate system, e.g. using IMUs, optical markers, navigation markers including infrared markers, retroreflective markers, RF markers, and any other registration method described in the specification or known in the art, any change in position of the HMD in relationship to the patient measured in this fashion can be used to move virtual data of the patient in relationship to live data of the patient, so that the visual image of the virtual data of the patient and the live data of the patient seen through the HMD are always aligned, irrespective of movement of the HMD and/or the operator's head and/or the operator wearing the HMD. Similarly, when multiple HMDs are used, e.g. one for the primary surgeon and additional ones, e.g. two, three, four or more, for other surgeons, assistants, residents, fellows, nurses and/or visitors, the HMDs worn by the other staff, not the primary surgeon, will also display the virtual representation(s) of the virtual data of the patient aligned with the corresponding live data of the patient seen through the HMD, wherein the perspective of the virtual data is aligned with the patient and/or the surgical site and adjusted for the location, position, and/or orientation of the viewer's eyes for each of the HMDs used and each viewer. The foregoing embodiments can be achieved since the IMUs, optical markers, RF markers, infrared markers and/or navigation markers placed on the operator and/or the patient, as well as any spatial anchors, can be registered in the same coordinate system as the primary HMD and any additional HMDs. The position, orientation, alignment, and change in position, orientation and alignment in relationship to the patient and/or the surgical site of each additional HMD can be individually monitored, thereby maintaining alignment and/or superimposition of corresponding structures in the live data of the patient and the virtual data of the patient for each additional HMD, irrespective of their position, orientation, and/or alignment in relationship to the patient and/or the surgical site. One or more IMUs can also be attached to or integrated into a surgical helmet. When an HMD is integrated into or attached to the surgical helmet, the IMU integrated into or attached to the surgical helmet can be used for determining the position and/or orientation of the HMD. Alternatively, one or more IMUs can be integrated into or attached to the surgical helmet and the HMD. The data generated by the IMUs integrated into or attached to the surgical helmet can be compared to the data generated by the IMUs integrated into or attached to the HMD, and their position and/or orientation relative to each other can be determined by comparing the data using one or more computer processors.

User Interfaces

Aspects of the present disclosure provide a user interface where the human eye, including eye movements and lid movements as well as movements induced by the orbital, peri-orbital and select skull muscles, is detected by the eye tracking system and processed to execute predefined, actionable computer commands.
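A minimal sketch of how such detected events could be bound to predefined commands; the event names and command strings below are hypothetical placeholders, not part of the specification, and any of the eye, lid, facial or head movements in Tables 1-3 below could be mapped in the same way:

```python
# Hypothetical event-to-command bindings; the real bindings would be
# application specific and user configurable.
COMMAND_MAP = {
    ("blink", "left"): "click",
    ("blink", "right"): "scroll_down",
    ("double_blink", "both"): "zoom_in",
    ("close_hold_2s", "both"): "instrument_off",
}

def execute(event, command_map=COMMAND_MAP):
    """Translate a detected eye/lid event into a predefined actionable command."""
    command = command_map.get(event)
    if command is not None:
        print(f"executing: {command}")  # e.g. forwarded to an instrument or robot
    return command

execute(("blink", "left"))  # -> executing: click
```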
An exemplary list of eye movements and lid movements that can be detected by the system is provided in Table 1.

TABLE 1
Exemplary list of eye movements and lid movements detected by the eye tracking software
1 blink
2 blinks
3 blinks
Fast blink, for example less than 0.5 seconds
Slow blink, for example more than 1.0 seconds
2 or more blinks with fast time interval, e.g. less than 1 second
2 or more blinks with long time interval, e.g. more than 2 seconds (typically chosen to be less than the natural time interval between eye blinks)
Blink left eye only
Blink right eye only
Blink left eye and right eye simultaneously
Blink left eye first, then within short time interval (e.g. less than 1 second), blink right eye
Blink right eye first, then within short time interval (e.g. less than 1 second), blink left eye
Blink left eye first, then within long time interval (e.g. more than 2 seconds), blink right eye
Blink right eye first, then within long time interval (e.g. more than 2 seconds), blink left eye
Rapid eye movement to left
Rapid eye movement to right
Rapid eye movement up
Rapid eye movement down
Widen eyes, hold for short time interval, e.g. less than 1 second
Widen eyes, hold for long time interval, e.g. more than 2 seconds
Close both eyes for 1 second etc.
Close both eyes for 2 seconds or more etc.
Close both eyes, hold, then open and follow by fast blink
Close left eye only 1 second, 2 seconds etc.
Close right eye only 1 second, 2 seconds etc.
Close left eye, then right eye
Close right eye, then left eye
Blink left eye, then right eye
Blink right eye, then left eye
Stare at field, virtual button for 1, 2, 3 or more seconds; activate function, e.g. Zoom in or Zoom out

Any combination of blinks, eye movements, sequences, and time intervals is possible for encoding various types of commands. These commands can be computer commands that can direct or steer, for example, a surgical instrument or a robot. Methods of executing commands by way of facial movements and movements of the head are also provided. An exemplary list of facial movements and head movements that can be detected by the system is provided in Table 2. (This list is only an example and by no means meant to be exhaustive; any number or combination of movements is possible.)

TABLE 2
Exemplary list of facial movements and head movements detected
Move head fast to right and hold
Move head fast to left and hold
Move head fast up and hold
Move head fast down and hold
Move head fast to right and back
Move head fast to left and back
Move head fast up and back
Move head fast down and back
Tilt head to left and hold
Tilt head to right and hold
Tilt head to left and back
Tilt head to right and back
Open mouth and hold
Open mouth and close
Twitch nose once
Twitch nose twice etc.

Exemplary commands executed using eye movements, lid movements, facial movements and head movements are listed in Table 3.

TABLE 3
Exemplary list of commands that can be executed by tracking eye movement, lid movement, facial movement and head movement (this list is only an example and by no means meant to be exhaustive; any number or combination of commands is possible; application-specific commands can be executed in this manner as well)
Click
Point
Move pointer
Slow
Fast
Scroll, e.g. through images
Fast scroll
Slow scroll
Scroll up
Scroll down
Scroll left
Scroll right
Drag
Swoosh
Register
Toggle 2D vs. 3D
Switch imaging study
Overlay images
Fuse images
Register images
Cut
Paste
Copy
Undo
Redo
Delete
Purchase
Provide credit card information
Authorize
Go to shopping cart
HMD on
HMD off
Eye tracking on
Eye tracking off
Eye command execution on
Eye command execution off
Facial command execution on
Facial command execution off
Turn surgical instrument on (e.g. oscillating saw, laser etc.)
Turn surgical instrument off
Increase intensity, speed, energy deposited by surgical instrument
Reduce intensity, speed, energy deposited by surgical instrument
Change direction of surgical instrument
Change orientation of surgical instrument
Change any type of setting of the surgical instrument

In some embodiments, eye movements, lid movements, facial movements and head movements, alone or in combination, can be used to signal numerical codes or sequences of numbers or sequences of machine operations. Such sequences of numbers can, for example, be used to execute certain machine operating sequences. Other user interfaces can include a physical keyboard, physical track pad, physical mouse, physical joy stick, acoustic interface, voice recognition, virtual interface, virtual slider, virtual keyboard, virtual track pad, virtual mouse, virtual joy stick, gesture recognition, and eye tracking, e.g. in relationship to a virtual interface.

Fusing Physical World with Imaging and Other Data of a Patient

In some embodiments, an operator such as a surgeon may look through an HMD observing physical data or information on a patient, e.g. a surgical site or changes induced on a surgical site, while pre-existing data of the patient are superimposed onto the physical visual representation of the live patient. Systems, methods and techniques to improve the accuracy of the display of the virtual data superimposed onto the live data of the patient are described in International Patent Application No. PCT/US2018/012459, which is incorporated herein by reference in its entirety. The pre-existing data of the patient can be an imaging test or imaging data or other types of data including metabolic information or functional information. The pre-existing data of the patient, including one or more imaging tests or other types of data including metabolic or functional information, can be obtained at a time different from the time of the surgical procedure. For example, the pre-existing data of the patient can be obtained one, two, three or more days or weeks prior to the surgical procedure. The pre-existing data of the patient, including one or more imaging tests or other types of data including metabolic or functional information, are typically obtained with the patient or the surgical site being located in a different location or a different object coordinate system in the pre-existing data when compared to the location or the object coordinate system of the live patient or the surgical site in the live patient. Thus, pre-existing data of the patient or the surgical site are typically located in a first object coordinate system, and live data of the patient or the surgical site are typically located in a second object coordinate system; the first and the second object coordinate systems are typically different from each other. The first object coordinate system with the pre-existing data needs to be registered with the second object coordinate system with the live data of the patient including, for example, the live surgical site.
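One common way to perform this registration, when corresponding landmarks or fiducials can be identified in both coordinate systems, is a least-squares rigid (Kabsch/Horn-type) fit. A minimal sketch under that assumption; the values are illustrative, and the specification describes several other registration techniques:

```python
import numpy as np

def rigid_registration(points_preop, points_live):
    """Least-squares rigid transform mapping pre-operative fiducial
    coordinates onto the corresponding live (intra-operative) coordinates."""
    c_pre = points_preop.mean(axis=0)
    c_live = points_live.mean(axis=0)
    H = (points_preop - c_pre).T @ (points_live - c_live)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_live - R @ c_pre
    return R, t

# Example: three or more corresponding landmarks identified in both systems.
pre = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
live = pre + np.array([0.02, -0.01, 0.05])   # translated copy for the demo
R, t = rigid_registration(pre, live)
```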
Scan Technology

The following is an exemplary list of scanning and imaging techniques that can be used or applied for various aspects of the present disclosure; this list is not exhaustive, but only exemplary. Anyone skilled in the art can identify other scanning or imaging techniques that can be used in practicing the present disclosure:
- X-ray imaging, 2D, 3D, supine, upright or in other body positions and poses, including analog and digital x-ray imaging
- Digital tomosynthesis
- Cone beam CT
- Ultrasound
- Doppler ultrasound
- Elastography, e.g. using ultrasound or MRI
- CT
- MRI, including, for example, fMRI, diffusion imaging, stroke imaging, MRI with contrast media
- Functional MRI (fMRI), e.g. for brain imaging and functional brain mapping
- Magnetic resonance spectroscopy
- PET
- SPECT-CT
- PET-CT
- PET-MRI
- Upright scanning, optionally in multiple planes or in 3D, using any of the foregoing modalities, including x-ray imaging, ultrasound etc.
- Contrast media (e.g. iodinated contrast agents for x-ray and CT scanning, or MRI contrast agents; contrast agents can include antigens or antibodies for cell or tissue specific targeting; other targeting techniques, e.g. using liposomes, can also be applied; molecular imaging, e.g. to highlight metabolic abnormalities in the brain and target surgical instruments towards areas of metabolic abnormality; any contrast agent known in the art can be used in conjunction with the present disclosure)
- 3D optical imaging, including
  - Laser scanning
  - Confocal imaging, e.g. including with use of fiberoptics, single bundle, multiple bundles
  - Confocal microscopy, e.g. including with use of fiberoptics, single bundle, multiple bundles
  - Optical coherence tomography
  - Photogrammetry
  - Stereovision (active or passive)
  - Triangulation (active or passive)
  - Interferometry
  - Phase shift imaging
  - Active wavefront sampling
  - Structured light imaging
  - Other optical techniques to acquire 3D surface information
  - Combination of imaging data, e.g. optical imaging, wavefront imaging, interferometry, optical coherence tomography and/or confocal laser imaging or scanning
- Image fusion or co-display of different imaging modalities, e.g. in 2D or 3D, optionally registered, optionally more than two modalities combined, fused or co-displayed, e.g. optical imaging, e.g. direct visualization or through an arthroscope, and/or laser scan data, e.g. direct visualization or through an arthroscope, and/or virtual data, e.g. intra-articular, extra-articular, intra-osseous, hidden, not directly visible, and/or external to skin, and/or confocal imaging or microscopy images/data, e.g. direct visualization or through an arthroscope

For a detailed description of illustrative scanning and imaging techniques, see, for example, Bushberg et al., The Essential Physics of Medical Imaging, 3rd edition, Wolters Kluwer/Lippincott, 2012. In embodiments, 3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon's hands and/or fingers and/or the OR table and/or reference areas or points and/or markers, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. 3D scanning can be accomplished with multiple different modalities including combinations thereof, for example, optical imaging, e.g.
using a video or image capture system integrated into, attached to, or separate from one or more HMDs, laser scanning, confocal imaging, optical coherence tomography, photogrammetry, active and passive stereovision and triangulation, interferometry and phase shift principles and/or imaging, and wavefront sampling and/or imaging. One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, and speed of movement of:
- Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more HMDs, e.g. in a common coordinate system
- The surgeon's hands and/or fingers, e.g. for
  - Monitoring steps in a surgical procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual axis, e.g. a reaming, broaching or drilling axis, a virtual cut plane, a virtual instrument, a virtual implant component etc.
  - Executing virtual commands, e.g. using gesture recognition or a virtual interface, e.g. a virtual touch pad
- One or more HMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon's hands and/or fingers

The use of optical imaging systems and/or 3D scanners for registration, e.g. of the surgical site and/or one or more HMDs, can be helpful when markerless registration is desired, e.g. without use of optical markers, e.g. with geometric patterns, and/or IMUs, and/or LEDs, and/or navigation markers. The use of optical imaging systems and/or 3D scanners for registration can also be combined with the use of one or more of optical markers, e.g. with geometric patterns, and/or IMUs, and/or LEDs, and/or navigation markers. In embodiments, one or more 3D models and/or 3D surfaces generated by an optical imaging system and/or a 3D scanner can be registered with, superimposed with and/or aligned with one or more 3D models and/or 3D surfaces generated by another imaging test, e.g. a CT scan, MRI scan, PET scan, other scan, or combinations thereof, and/or a 3D model and/or 3D surfaces generated from or derived from an x-ray or multiple x-rays, e.g. using bone morphing technologies, as described in the specification or known in the art. With optical imaging systems or 3D scanners, a virtual 3D model can be reconstructed by postprocessing single images, e.g. acquired from a single perspective. In this case, the reconstruction cannot be performed in real time with continuous data capture. Optical imaging systems or 3D scanners can also operate in real time, generating true 3D data. For example, with confocal microscopy using, for example, an active triangulation technique, a projector can project a changing pattern of light, e.g. blue light, onto the surgical field, e.g. an articular surface exposed by arthroscopy or a bone or a soft-tissue, e.g. using projection grids that can have a transmittance random distribution and which can be formed by sub-regions containing transparent and opaque structures.
By using elements for varying the length of the optical path, it can be possible, for each acquired profile, to state a specific relationship between the characteristic of the light and the optical distance of the image plane from the imaging optics. A light source can produce an illumination beam that can be focused onto the surface of the surgical field, e.g. the articular surface. An image sensor can receive the observation beam reflected by the surface of the target object. A focusing system can focus the observation beam onto the image sensor. The light source can be split into a plurality of regions that can be independently regulated in terms of light intensity. Thus, the intensity of light detected by each sensor element can be a direct measure of the distance between the scan head and a corresponding point on the target object. Parallel confocal imaging can be performed, e.g. by shining an array of incident laser light beams, e.g. passing through focusing optics and a probing face, on the surgical field, e.g. an articular surface, a bone or a soft-tissue. The focusing optics can define one or more focal planes forward to the probe face in one or more positions which can be changed, e.g. by a motor or other mechanism. The laser light beams can generate illuminated spots or patterns on the surgical field, and the intensity of returning light rays can be measured at various positions of the focal plane, determining spot-specific positions yielding a maximum intensity of the reflected light beams. Data can be generated which can represent the topology of the three-dimensional structure of the surgical field, e.g. an articular surface, e.g. exposed and/or visible and/or accessible during arthroscopy, a bone or a soft-tissue. By determining surface topologies of adjacent portions or tissues, e.g. an adjacent articular surface or bone or soft-tissue, from two or more different angular locations and then combining such surface topologies, a complete three-dimensional representation of the entire surgical field can be obtained. Optionally, a color wheel can be included in the acquisition unit itself. In this example, a two-dimensional (2D) color image of the 3D structure of the surgical field, e.g. an articular surface, a bone or a soft-tissue, can also be taken at the same angle and orientation with respect to the structure. Thus, each point with its unique coordinates on the 2D image can correspond to a similar point on the 3D scan having the same x and y coordinates. The imaging process can be based on illuminating the target surface with three differently-colored illumination beams (e.g. red, green or blue light) combinable to provide white light, thus, for example, capturing a monochromatic image of the target portion of the surgical field, e.g. an articular surface, a bone, a cartilage or a soft-tissue, corresponding to each illuminating radiation. The monochromatic images can optionally be combined to create a full color image. Three differently-colored illumination beams can be provided by means of one white light source optically coupled with color filters. With optical coherence tomography (OCT), using, for example, a confocal sensor, a laser digitizer can include a laser source, e.g. coupled to a fiber optic cable, a coupler and a detector. The coupler can split the light from the light source into two paths. The first path can lead to the imaging optics, which can focus the beam onto a scanner mirror, which can steer the light to the surface of the surgical field, e.g.
an articular surface, e.g. as seen or accessible during arthroscopy, a cartilage, a bone and/or a soft-tissue. A second path of light from the light source can be coupled via the coupler to the optical delay line and to the reflector. The second path of light, e.g. the reference path, can be of a controlled and known path length, as configured by the parameters of the optical delay line. Light can be reflected from the surface of the surgical field, e.g. an articular surface, a cartilage, a bone and/or a soft-tissue, returned via the scanner mirror and combined by the coupler with the reference path light from the optical delay line. The combined light can be coupled to an imaging system and imaging optics via a fiber optic cable. By utilizing a low coherence light source and varying the reference path by a known variation, the laser digitizer can provide an optical coherence tomography (OCT) sensor or a low coherence reflectometry sensor. The focusing optics can be placed on a positioning device in order to alter the focusing position of the laser beam and to operate as a confocal sensor. A series of imaged laser segments on the object from a single sample/tissue position can be interlaced between two or multiple 3D maps of the sample/tissue from essentially the same sample/tissue position. The motion of the operator between each subframe can be tracked mathematically through reference points. Operator motion can optionally be removed. Active wavefront sampling and/or imaging can be performed using structured light projection. The scanning system can include an active three-dimensional imaging system that can include an off-axis rotating aperture element, e.g. placed in the illumination path or in the imaging path. Out-of-plane coordinates of object points can be measured by sampling the optical wavefront, e.g. with an off-axis rotating aperture element, and measuring the defocus blur diameter. The system can include a lens, a rotating aperture element and an image plane. The single aperture can help avoid overlapping of images from different object regions and can help increase spatial resolution. The rotating aperture can allow taking images at several aperture positions. The aperture movement can make it possible to record on a CCD element a single exposed image at different aperture locations. To process the image, localized cross correlation can be applied to reveal image disparity between image frames. In another embodiment, a scanner can use a polarizing multiplexer. The scanner can project a laser sheet onto the surgical site, e.g. an articular surface, e.g. as exposed or accessible during arthroscopy, a cartilage, damaged, diseased or normal, a subchondral bone, a cortical bone etc., and can then utilize the polarizing multiplexer to optically combine multiple views of the profile illuminated by the sheet of laser light. The scanner head can use a laser diode to create a laser beam that can pass through a collimating lens, which can be followed by a sheet generator lens that can convert the beam of laser light into a sheet of laser light. The sheet of laser light can be reflected by a folding mirror and can illuminate the surface of the surgical field. A system like this can optionally combine the light from two perspectives onto a single camera using passive or active triangulation. Such a system can be configured to achieve independence of lateral resolution and depth of field.
In order to achieve this independence, the imaging system can be physically oriented so as to satisfy the Scheimpflug principle. The Scheimpflug principle is a geometric rule that describes the orientation of the plane of focus of an optical system wherein the lens plane is not parallel to the image plane. This enables sheet-of-light-based triangulation systems to maintain the high lateral resolution required for applications requiring high accuracy, e.g. accuracy of registration, while providing a large depth of focus. A 3D scanner probe can sweep a sheet of light across one or more tissue surfaces, where the sheet of light projector and imaging aperture within the scanner probe can rapidly move back and forth along all or part of the full scan path, and can display, for example in near real-time, a live 3D preview of the digital 3D model of the scanned tissue surface(s). A 3D preview display can provide feedback on how the probe is positioned and oriented with respect to the target tissue surface. In other embodiments, the principle of active stereo photogrammetry with structured light projection can be employed. The surgical field can be illuminated by a 2D array of structured illumination points. 3D models can be obtained from a single image by triangulation with a stored image of the structured illumination onto a reference surface such as a plane. A single camera or multiple cameras can be used. To obtain information in the z-direction, the surgical site can be illuminated by a 2D image of structured illumination projected from a first angle with respect to the surgical site. Then the camera can be positioned at a second angle with respect to the surgical site, to produce a normal image containing two-dimensional information in the x and y directions as seen at that second angle. The structured illumination projected from a photographic slide can superimpose a 2D array of patterns over the surgical site, which can appear in the captured image. The information in the z-direction is then recovered from the camera image of the surgical site under the structured illumination by performing a triangulation of each of the patterns in the array on the image with reference to an image of the structured illumination projected on a reference plane, which can also be illuminated from the first angle. In order to unambiguously match corresponding points in the image of the surgical site and in the stored image, the points of the structured illumination can be spatially modulated with two-dimensional random patterns which can be generated and saved in a projectable medium. Random patterns are reproducible, so that the patterns projected onto the surgical site to be imaged are the same as the corresponding patterns in the saved image. Accordion fringe interferometry (AFI) can employ light from two point sources to illuminate an object with an interference fringe pattern. A high-precision digital camera can be used to record the curvature of the fringes. The degree of apparent fringe curvature coupled with the known geometry between the camera and laser source enables the AFI algorithms to digitize the surface of the object being scanned. AFI can offer advantages over other scanners, such as lower sensitivity to ambient light variations and noise, high accuracy, large projector depth of field, enhanced ability to scan shiny and translucent surfaces, e.g. cartilage, and the ability to scan without targets and photogrammetric systems. A grating and lens can be used.
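Fringe-based scanners like these ultimately recover shape from fringe phase. As one illustration, a minimal sketch of the generic four-step phase-shifting computation is shown below; this is a textbook formula, not a specific system's algorithm, and the wrapped phase would still need unwrapping and conversion to height using the system geometry:

```python
import numpy as np

def four_step_phase(i0, i90, i180, i270):
    """Wrapped fringe phase from four frames captured with 90-degree shifts.

    Each input is an intensity image I_k = A + B*cos(phase + k*90deg), e.g.
    recorded while a phase shifter steps the reference phase. With these
    four samples, 2B*sin(phase) = I270 - I90 and 2B*cos(phase) = I0 - I180.
    """
    return np.arctan2(i270 - i90, i0 - i180)   # wrapped to (-pi, pi]
```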
Alternatively, coherent point sources of electromagnetic radiation can also be generated without a grating and lens. For example, electromagnetic radiation can be emitted from a pair or pairs of optical fibers which can be used to illuminate target objects with interferometric fringes. Consequently, movement of a macroscopic grating, which requires several milliseconds or more to effect a phase shift, can be avoided. A fiber-based phase shifter can be used to change the relative phase of the electromagnetic radiation emitted from the exit ends of two optical fibers in a few microseconds or less. Optical radiation scattered from surfaces and subsurface regions of illuminated objects can be received by a detector array. Electrical signals can be generated by the detector array in response to the received electromagnetic radiation. A processor receives the electrical signals and calculates three-dimensional position information of tissue surfaces based on changes in the relative phase of the emitted optical radiation and the received optical radiation scattered by the surfaces. Sources of optical radiation with a wavelength between about 350 nm and 500 nm can be used; other wavelengths are possible. Other optical imaging systems and/or 3D scanners can use the principle of human stereoscopic vision and the principle of linear projection: if straight lines are projected onto an object, the lines will be curved around the object. This distortion of the lines allows conclusions to be drawn about the surface contour. When optical imaging and/or 3D scanning is performed in the context of an arthroscopy procedure, the optical imaging and/or 3D scanning apparatus can be integrated into the endoscope, including by sharing the same fiberoptic(s) or with use of separate fiberoptic(s), e.g. in the same housing or a separate housing. An arthroscopic optical imaging and/or 3D scanning probe can be inserted through the same portal as the one used for the arthroscope, including when integrated into the arthroscope or in a common housing with the arthroscope, or it can be inserted through a second, separate portal. An optical imaging and/or 3D scanning probe used with an arthroscopic procedure can optionally be tracked by tracking the position, location, orientation, alignment and/or direction of movement using optical markers, e.g. with one or more geometric patterns, e.g. in 2D or 3D, or LEDs, using one or more camera or video systems integrated into, attached to, or separate from one or more HMDs. The camera or video systems can be arranged at discrete, defined angles, thereby utilizing angular information including parallax information for tracking distances, angles, orientation or alignment of optical markers attached to the probe, e.g. the arthroscope and/or optical imaging and/or 3D scanning probe. An optical imaging and/or 3D scanning probe and/or an arthroscope used with an arthroscopic procedure can optionally be tracked by tracking the position, location, orientation, alignment and/or direction of movement using navigation markers, e.g. infrared or RF markers, and a surgical navigation system.
An optical imaging and/or 3D scanning probe and/or an arthroscope used with an arthroscopic procedure can optionally be tracked by tracking the position, location, orientation, alignment and/or direction of movement directly with one or more camera or video systems integrated into, attached to or separate from one or more HMDs, wherein a computer system and software processing the information can use image processing and pattern recognition to recognize the known geometry of the one or more probes and their location within a coordinate system, e.g. in relationship to the patient, the surgical site and/or the OR table. With any of the optical imaging and/or 3D scanner techniques, if there are holes in the acquisition and/or scan and/or 3D surface, repeat scanning can be performed to fill the holes. The scanned surface can also be compared against a 3D surface or 3D model of the surgical site, e.g. an articular surface, a cartilage, damaged or diseased or normal, a subchondral bone, a bone and/or a soft-tissue, obtained from an imaging study, e.g. an ultrasound, a CT or MRI scan, or obtained via bone morphing from x-rays as described in other parts of the specification. Discrepancies in surface geometry between the 3D model or 3D surface generated with the optical imaging system and/or the 3D scanner and the 3D surface or 3D model obtained from an imaging study or bone morphing from x-rays can be determined; similarly, it can be determined if the surfaces or 3D models display sufficient commonality to allow for registration of the intra-operative 3D surface or 3D model obtained with the optical imaging system and/or 3D scanner and the 3D surface or 3D model obtained from the pre-operative imaging study or bone morphing from x-rays. If there is not sufficient commonality, additional scanning can be performed using the optical imaging and/or 3D scanner technique, for example in order to increase the spatial resolution of the scanned data, the accuracy of the scanned data and/or to fill any holes in the model or surface. Any surface matching algorithm known in the art can be utilized to register overlapping surface areas and thereby transform all surface portions into the same coordinate space, for example the Iterative Closest Point method described in Besl et al., A Method for Registration of 3-D Shapes, 1992, IEEE Trans PAMI 14(2): 239-256. Optionally, with any of the foregoing embodiments, the optical imaging system or 3D scanner can have a form of boot or stabilization device attached to it, which can, for example, be rested against and moved over the target tissue, e.g. an articular surface, a bone or a soft-tissue. The boot or stabilization device can help maintain a constant distance between the scanner and the target tissue. The boot or stabilization device can also help maintain a constant angle between the scanner and the target tissue. For example, a boot or stabilization device can be used with an optical imaging system or scanner used during arthroscopy, maintaining, for example, a constant distance to the articular surface or intra-articular ligament, cartilage, bone or other structures, e.g. a femoral notch or a tibial spine or a tri-radiate cartilage region or fovea capitis in a hip.

Multi-Dimensional Imaging, Reconstruction and Visualization

Various embodiments can be practiced in one, two, three or more dimensions.
Multi-Dimensional Imaging, Reconstruction and Visualization

Various embodiments can be practiced in one, two, three or more dimensions. The following is an exemplary list of potential dimensions, views, projections, angles, or reconstructions that can be applied; this list is not exhaustive, but only exemplary. Anyone skilled in the art can identify additional dimensions, views, projections, angles or reconstructions that can be used in practicing the present disclosure. Exemplary dimensions are listed in Table 4.

TABLE 4
Exemplary list of potential dimensions, views, projections, angles, or reconstructions that can be displayed using virtual representations with HMD(s), optionally stereoscopic

1st dimension: superoinferior, e.g. patient physical data
2nd dimension: mediolateral, e.g. patient physical data
3rd dimension: anteroposterior, e.g. patient physical data
4th-6th dimension: head motion (and with it motion of glasses/HMD) in 1, 2 or 3 dimensions
7th-9th dimension: instrument motion in 1, 2 or 3 dimensions, e.g. in relationship to surgical field, organ or head including head motion
10th-13th dimension: arm or hand motion in 1, 2 or 3 dimensions, e.g. in relationship to surgical field, organ or head including head motion
14th-16th dimension: virtual 3D data of patient, obtained, for example, from a scan or intraoperative measurements
17th-19th dimension: vascular flow, in 1, 2 or 3 dimensions, e.g. in relationship to surgical field, organ or head including head motion
20th-22nd dimension: temperature map (including changes induced by cryo- or hyperthermia), thermal imaging, in 1, 2 or 3 dimensions, e.g. in relationship to surgical field
25th-28th dimension: metabolic map (e.g. using MRS, PET-CT, SPECT-CT), in 1, 2 or 3 dimensions, e.g. in relationship to surgical field
29th-32nd dimension: functional map (e.g. using fMRI, PET-CT, SPECT-CT, PET, kinematic imaging), in 1, 2 or 3 dimensions, e.g. in relationship to surgical field or patient
33rd-35th dimension: confocal imaging data and/or microscopy data in 1, 2, or 3 dimensions, e.g. in relationship to surgical field or patient, e.g. obtained through an endoscope or arthroscope or dental scanner or direct visualization/imaging of an exposed surface
36th-38th dimension: optical imaging data in 1, 2 or 3 dimensions, e.g. in relationship to surgical field or patient, e.g. obtained through an endoscope or arthroscope or dental scanner or direct visualization/imaging of an exposed surface
39th-40th dimension: laser scan data in 1, 2 or 3 dimensions, e.g. in relationship to surgical field or patient, e.g. obtained through an endoscope or arthroscope or dental scanner or direct visualization/imaging of an exposed surface

Any oblique planes are possible. Any perspective projections are possible. Any oblique angles are possible. Any curved planes are possible. Any curved perspective projections are possible. Any combination of 1D, 2D, and 3D data between the different types of data is possible. Any of the virtual data or virtual representations for display by one or more HMDs in Table 4 or described in the specification can be adjusted with regard to the focal plane or focal point of the display using any of the embodiments described in the specification.
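Purely as an illustration of how virtual-data layers of the kind listed in Table 4 might be organized in software, the following sketch keys each layer to its Table 4 dimension indices and a reference frame; the class, field, and layer names are hypothetical and not taken from any specific embodiment.

```python
from dataclasses import dataclass

@dataclass
class DisplayLayer:
    """One virtual-data layer in the spirit of Table 4 (hypothetical)."""
    name: str          # e.g. "vascular flow"
    dimensions: tuple  # Table 4 dimension indices the layer occupies
    frame: str         # reference frame: "patient", "surgical_field", "HMD"
    visible: bool = True

layers = [
    DisplayLayer("patient physical data", (1, 2, 3), "patient"),
    DisplayLayer("head motion", (4, 5, 6), "HMD"),
    DisplayLayer("instrument motion", (7, 8, 9), "surgical_field"),
    DisplayLayer("vascular flow", (17, 18, 19), "surgical_field"),
]

def layers_in_frame(frame):
    # Select the visible layers rendered relative to a given frame,
    # e.g. all layers that must be re-projected when the head moves.
    return [layer for layer in layers if layer.frame == frame and layer.visible]
```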
One or more computer processors can be integrated into, attached to, or separate from one or more HMDs, one or more surgical helmets, or one or more external computer servers. A first computer processor can process data from one or more cameras for eye tracking. A second computer processor can activate one or more electromagnetic, electric, piezoelectric actuators and/or motors, for example for moving an HMD in relationship to a surgical helmet. A third computer processor can receive tracking information, e.g. from one or more video cameras or a surgical navigation system. A fourth computer processor can activate and/or control a robot, e.g. a handheld robot or a robot with a robotic arm. The first, second, third, fourth, fifth etc. computer processor can be the same or different. The first, second, third, fourth, fifth etc. computer processor can be integrated into or attached to the HMD, the surgical helmet, or a computer or server separate from the HMD or surgical helmet. The computer processors can optionally be connected using an RF, WIFI, Bluetooth or LiFi signal. The computer processors can optionally be connected to a camera, an IMU, an eye tracking system, a navigation system, and/or a surgical robot using an RF, WIFI, Bluetooth or LiFi signal.

Surgical Helmets

In some embodiments, the surgical helmet can include one or more of the following portions or modules or components:
Frontal or front facing portion, module or component
Parietal portion, module or component
Occipital portion, module or component
Temporal portion, module or component
Vertex portion, module or component
Zygomatic portion, module or component
Nasal portion, module or component
Maxillary portion, module or component
Mandibular portion, module or component
Chin portion, module or component

Any of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components can include a skin, face or hair facing surface, portion, module or component and/or an external facing surface, portion, module or component. One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components and/or external facing surfaces, portions, modules or components, can be of mono-component design, e.g. with a singular body, or multi-component design, e.g. with multiple bodies, for example connected using mechanical, electric, magnetic or electromagnetic means and/or motors and/or actuators. Mono-component, singular body and multi-component, multi-body designs can be combined. For example, an internal (towards skin, face or hair) facing portion of a frontal, vertex and occipital portion can be of mono-component design, e.g. formed as a singular body, while a mandibular portion and chin portion can be of multi-component design with a chin portion body and two or more mandibular portion bodies formed as multiple bodies that can be connected to each other and, optionally, that can be connected to the single body frontal, vertex and occipital portion. Any combination of mono-component and multi-component designs is possible. One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components, can include one or more components, pieces, extenders, tabs to support, secure and/or stabilize the surgical helmet against portions of the surgeon's skin, hair, face on the surgeon's frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin area.
One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components, can include one or more fasteners, including optionally with adhesive, e.g. water soluble, to support, secure and/or stabilize the surgical helmet against portions of the surgeon's skin, hair, face on the surgeon's frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin area. One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components, can include one or more straps, including optionally with rubber or Velcro, to support, secure and/or stabilize the surgical helmet against portions of the surgeon's skin, hair, face on the surgeon's frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin area. One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components, can include one or more mechanical means to support, secure and/or stabilize the surgical helmet against portions of the surgeon's skin, hair, face on the surgeon's frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin area. The surgical helmet can include an optional fan, for example located near the occipital portion of the surgical helmet. Representative examples can be found, for example, in U.S. Pat. Nos. 6,481,019, 7,752,682, 8,819,869, and Patent Application Nos. US2012/0075464A1 and WO201460149A2, which are hereby incorporated by reference in their entireties. The surgical helmet can include an optional air filtration system, for example with a vent or an air intake located near the frontal portion of the surgical helmet or extending near a transparent facial shield, which can, for example, be part of a cover for the surgical helmet or hood or a surgical gown. Representative examples can be found, for example, in U.S. Pat. Nos. 5,054,480, 6,481,019, 7,752,682, 8,819,869, and Patent Application Nos. US2012/0075464A1 and WO201460149A2, which are hereby incorporated by reference in their entireties. In some embodiments, the air processed through the air filtration system can be exposed to an ultraviolet light source for purposes of reducing or killing any pathogens in the air that the surgeon is breathing. Representative examples can be found, for example, in U.S. Pat. Nos. 6,481,019, 7,752,682, 8,819,869, and Patent Application Nos. US2012/0075464A1 and WO201460149A2, which are hereby incorporated by reference in their entireties. In some embodiments, the air processed through the air filtration system can be exposed to ultrasound from an ultrasound transmitter for purposes of reducing or killing any pathogens in the air that the surgeon is breathing. The ultrasound transmitter can, for example, emit ultrasonic waves at a frequency ranging between 50 kHz and 10 MHz through air passing through the air filtration system. For example, the ultrasound transmitter can optionally be placed in front of the air intake of a fan or at the air exit from the fan.
The ultrasound transmitter can also be placed near an air intake near the face shield.

Head Mounted Display Components

In some embodiments, the HMD can include one or more of the following portions or modules or components:
Frontal or front facing portion, module or component (18, FIGS. 1, 2A)
Clear or transparent front portion or visor (10, FIG. 1)
Parietal portion, module or component (20, FIGS. 1, 2A)
Occipital portion, module or component (22, FIGS. 1, 2A)
Temporal portion, module or component
Vertex portion, module or component
Zygomatic portion, module or component
Nasal portion, module or component, e.g. with nasal pads (15, FIGS. 1, 2A)
Maxillary portion, module or component
Mandibular portion, module or component
Chin portion, module or component
A head holder portion (24, FIG. 2A)

Any of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components can include a skin, face or hair facing surface, portion, module or component and/or an external facing surface, portion, module or component. One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components and/or external facing surfaces, portions, modules or components, can be of mono-component design, e.g. with a singular body, or multi-component design, e.g. with multiple bodies, for example connected using mechanical, electric, magnetic or electromagnetic means. Mono-component, singular body and multi-component, multi-body designs can be combined. For example, an internal (towards skin, face or hair) facing portion of a frontal, vertex and occipital portion can be of mono-component design, e.g. formed as a singular body, while a mandibular portion and chin portion can be of multi-component design with a chin portion body and two or more mandibular portion bodies formed as multiple bodies that can be connected to each other and, optionally, that can be connected to the single body frontal, vertex and occipital portion. Any combination of mono-component and multi-component designs is possible. One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components, can include one or more components, pieces, extenders, tabs to support, secure and/or stabilize the HMD against portions of the surgeon's skin, hair, face on the surgeon's frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin area. One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components, can include one or more fasteners, including optionally with adhesive, e.g. water soluble, to support, secure and/or stabilize the HMD against portions of the surgeon's skin, hair, face on the surgeon's frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin area.
One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components, can include one or more straps, including optionally with rubber or Velcro, to support, secure and/or stabilize the HMD against portions of the surgeon's skin, hair, face on the surgeon's frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin area. One or more of the frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin portions, modules or components, including any skin, face or hair facing surfaces, portions, modules or components, can include one or more mechanical means to support, secure and/or stabilize the HMD against portions of the surgeon's skin, hair, face on the surgeon's frontal, parietal, occipital, temporal, vertex, zygomatic, nasal, maxillary, mandibular and/or chin area. The mechanical means can be a head holder, which can optionally be tightened over the user's head, for example using a dial with expandable or retractable elements (24, FIG. 2A). The frontal or front facing portion, module or component (18, FIGS. 1, 2A), parietal portion, module or component (20, FIGS. 1, 2A), and occipital portion, module or component (22, FIGS. 1, 2A) can optionally include at least one processor, camera, depth sensor and/or storage media. The front portion or visor (11, FIG. 2A), for example centered over or located over the eyes, can be transparent, e.g. in the case of an optical see through head mounted display, or can be opaque or non-transparent, e.g. in the case of a video see through head mounted display.

Gown

In some embodiments, the surgeon can wear a surgical gown, which can be sterile. Standard surgical gowns, such as, for example, the AAMI Level 4 and Level 3 gowns provided by Cardinal Health (Dublin, OH, USA), can be used. The gown can, for example, cover the arms, shoulders, chest, abdomen, and/or back portions of the surgeon. The gown can be worn in conjunction with a cover for the surgical helmet or hood.

Cover for Surgical Helmet, Hood

In some embodiments, the surgeon can elect to wear a cover for a surgical helmet or hood, for example in conjunction with a surgical helmet and an optional air filtration system. The cover for the surgical helmet or hood can be sterile. The cover for the surgical helmet or hood can be made of the same material or a similar material as the gown. The cover for the surgical helmet or hood can cover the entire head and, optionally, extend down to the surgeon's neck and/or shoulders. The cover for the surgical helmet or hood can, optionally, be connected to the gown, for example using fasteners, Velcro straps, or regular straps (which can, optionally, be tied or knotted). The cover for the surgical helmet or hood can include a transparent portion, e.g. an integrated, transparent face shield, through which the surgeon can observe the surgical field. The transparent portion, e.g. a transparent face shield, can be made of plastic. The transparent face shield can, optionally, include a UV light filter. The UV light filter function can be inherent to the plastic of the face shield. The UV light filter function can be achieved through a coating to the face shield. Optionally, the transparent portion, e.g. transparent face shield, can be exchangeable.
Exchanging the transparent portion can be desirable when, for example, too much blood or tissue has accumulated on the transparent portion, for example after sawing a bone. The transparent portion can be exchangeable using mechanical means, e.g. one or more snap-on mechanisms, which can be rounded and local, e.g. like multiple snap-ons studding the front portion of the cover for the surgical helmet or hood at the perimeter surrounding the face shield. The transparent portion can be exchangeable using a zip-lock-like mechanism, which can optionally surround the entire perimeter of the cover for the surgical helmet or hood portion framing the transparent portion. The transparent portion can be exchangeable using a zipper-like mechanism, which can optionally surround the entire perimeter of the cover for the surgical helmet or hood portion framing the transparent portion. The cover for a surgical helmet or hood can include visual and/or mechanical means for aligning the cover for the surgical helmet or hood with the surgical helmet. The cover for the surgical helmet or hood, transparent portion, e.g. transparent face shield, and/or the surgical helmet can include visual or mechanical means for aligning the HMD with the cover for the surgical helmet or hood, face shield, and/or the surgical helmet. The helmet, cover for the surgical helmet or hood, transparent portion, e.g. transparent face shield, and/or the surgical helmet can also be aligned with the HMD, for example if the surgeon places the HMD first on his or her head. The visual means in any of the embodiments can include aiming marks, e.g. on the cover for the surgical helmet or hood, transparent portion, e.g. transparent face shield, and/or the surgical helmet and/or the HMD. The aiming marks can be lines, circles, points, triangles, arrows or any other geometric shape or form. The aiming marks can deploy colors, e.g. red, green, yellow, blue etc. Each component can have one or more aiming marks, optionally with a corresponding aiming mark on the other component. For example, the HMD can have an aiming mark, with a corresponding aiming mark on the surgical helmet, so that the user can align the corresponding aiming marks and/or then, for example, physically connect the HMD and the surgical helmet, for example using a mechanical connector, e.g. a snap-on, ratchet, or dovetail-like mechanism, e.g. with male and female parts. The surgical helmet can have an aiming mark, with a corresponding aiming mark on the transparent portion, e.g. transparent face shield, so that the user can align the corresponding aiming marks and then, for example, align and/or physically connect the transparent portion, e.g. transparent face shield, and the surgical helmet, for example using a mechanical connector, e.g. a snap-on, ratchet, or dovetail-like mechanism, e.g. with male and female parts. The HMD can have an aiming mark, with a corresponding aiming mark on a transparent portion, e.g. transparent face shield, so that the user can align the corresponding aiming marks and then, for example, align and/or physically connect the HMD and the transparent portion, e.g. transparent face shield, for example using a mechanical connector, e.g. a snap-on, ratchet, or dovetail-like mechanism, e.g. with male and female parts. Fiducial markers can have an aiming mark, with a corresponding aiming mark on the surgical helmet, so that the user can align the corresponding aiming marks and/or then, for example, align and/or physically connect the fiducial markers, e.g.
on an arm or extender, and the surgical helmet, for example using a mechanical connector, e.g. a snap-on, ratchet, or dovetail-like mechanism, e.g. with male and female parts. Fiducial markers can have an aiming mark, with a corresponding aiming mark on the HMD, so that the user can align the corresponding aiming marks and then, for example, align and/or physically connect the fiducial markers, e.g. on an arm or extender, and the HMD, for example using a mechanical connector, e.g. a snap-on, ratchet, or dovetail-like mechanism, e.g. with male and female parts. Fiducial markers can have an aiming mark, with a corresponding aiming mark on the transparent portion, e.g. transparent face shield, so that the user can align the corresponding aiming marks and then, for example, align and/or physically connect the fiducial markers, e.g. on an arm or extender, and the face shield, for example using a mechanical connector, e.g. a snap-on, ratchet, or dovetail-like mechanism, e.g. with male and female parts. Fiducial markers can have an aiming mark, with a corresponding aiming mark on the surgical helmet and/or the HMD and/or the transparent portion, e.g. transparent face shield, so that the user can align the corresponding aiming marks and/or then, for example, align and/or physically connect the fiducial markers, e.g. on an arm or extender, and the surgical helmet and/or the HMD and/or the transparent portion of the cover, e.g. transparent face shield, for example using a mechanical connector, e.g. a snap-on, ratchet, or dovetail-like mechanism, e.g. with male and female parts. In any part of the specification, the mechanical means or mechanical connectors can be, for example, snap-on mechanisms, ratchet-like mechanisms, e.g. wherein a female part can engage with a male part, or dovetail-like mechanisms, e.g. on the cover for the surgical helmet or hood, the transparent portion, e.g. transparent face shield, the surgical helmet, one or more fiducial markers, including any fiducial markers mounted onto holding structures or arms or fixtures, and/or the HMD.

Transparent Portion, e.g. Transparent Face Shield

The transparent portion of the cover, e.g. transparent face shield, can have dimensions that can be less than, approximately the same as, or greater than the facial dimensions of the user. The transparent portion, e.g. transparent face shield, can have dimensions that can be less than, approximately the same as, or greater than the field of view of the user. The transparent portion, e.g. transparent face shield, can have dimensions that can be less than, approximately the same as, or greater than the field of view of the HMD. The transparent portion, e.g. transparent face shield, can be made of plastic. The transparent portion, e.g. transparent face shield, can be transparent, partially transparent or semi-transparent. The transparent portion, e.g. transparent face shield, can include a UV light filter, e.g. integrated into the plastic or, for example, applied as a coating. In some embodiments, an HMD can be integrated into a surgical helmet or can be attached to a surgical helmet, e.g. using one or more mechanical connectors. The surgical helmet can also have an integrated fan. One or more fiducial markers can be attached to or integrated into a surgical helmet.
One or more fiducial markers can be mounted on one or more holding arms or holding members attached to an HMD and/or a surgical helmet and/or a transparent portion, e.g. transparent face shield, which can optionally extend to or extend through the cover for the surgical helmet or hood (for example, if the cover for the surgical helmet or hood includes some holes or openings through which the holding arms or holding members can pass to the outward facing side of the cover for the surgical helmet or hood). The holding arm or member can include a magnet, optionally located subjacent to the cover for the surgical helmet or hood and/or external or superjacent to the cover for the surgical helmet or hood. A second holding arm or member with an optional second magnet can be placed over the first holding arm or member outside the cover for the surgical helmet or hood. The two magnets can attract each other, thereby fixing the second holding arm or member inside and/or outside the cover for the surgical helmet or hood in a fixed spatial relationship to the first holding arm or member and the HMD and/or the surgical helmet (for example, if the HMD is integrated into the surgical helmet or attached to the surgical helmet) and/or the transparent portion, e.g. transparent face shield. The second holding arm or member can include one or more fiducial markers, e.g. located outside the cover for the surgical helmet or hood and visible to the navigation, image capture or other registration system. Multiple holding arms or holding members can be used, e.g. a first, second, third, fourth, fifth and more holding arm. Using a first, second, third, fourth, fifth or more magnets, the first, second, third, fourth, fifth or more holding arms or holding members can be connected. One or more magnets can be integrated into or attached to the portion of the holding arm subjacent to the cover for the surgical helmet. One or more magnets can be integrated into or attached to the base of the fiducial marker, or a holding arm with an integrated or attached fiducial marker, or the fiducial marker, external to or superjacent to the cover for the surgical helmet. The holding arm subjacent to the cover for the surgical helmet can be made of metal. The base of the fiducial marker, or a holding arm with an integrated or attached fiducial marker, or the fiducial marker, external to the cover for the surgical helmet, can likewise be made of metal. One or more magnets can engage or couple with a second one or more magnets. One or more magnets can engage or couple with a metal.
In the foregoing embodiments, instead of a magnet, a lock, locking mechanism, an attachment, an attachment mechanism, mechanical means and/or mechanical connector connecting a first and second holding arm or member or a base of a fiducial marker or a fiducial marker can be used; in this embodiment, the first holding arm or member can extend through an optional hole in the cover for the surgical helmet or hood to allow for attaching, locking and/or connecting of the second holding arm or member with the attached fiducial marker or to allow for attaching, locking and/or connecting of the base of the fiducial marker or the fiducial marker. In any of the embodiments, a fiducial marker can be an optical marker, a geometric pattern, a bar code, a QR code, an alphanumeric code, a radiofrequency marker, an infrared marker, a retroreflective marker, an active marker, or a passive marker. Multiple holding arms or holding members can be used, e.g. a first, second, third, fourth, fifth and more holding arm. Using a first, second, third, fourth, fifth or more attachment, attachment mechanism, lock, locking mechanism, mechanical means and/or mechanical connector, the first, second, third, fourth, fifth or more holding arm or holding members can be connected or the first, second, third, fourth, fifth or more base of a fiducial marker or fiducial markers can be connected. In another embodiment, one or more markers can be directly attached to the HMD, for example on portions of the HMD not covered by the cover for the surgical helmet or hood, and/or visible through the transparent portion of the cover for the surgical helmet or hood. The markers can be active markers, e.g. LEDs emitting infrared or visible light for detection by the navigation, image capture or other registration system. In another embodiment, fiducial markers can be mounted on one or more holding arms or holding members attached to the HMD and/or the surgical helmet, which can optionally extend to the clear see through portion or transparent portion of the cover for the surgical helmet or hood. The holding arm or member can include a magnet, optionally located subjacent to the clear see through portion or transparent portion of the cover for the surgical helmet or hood. A second holding arm or member with an optional second magnet can be placed over the first holding arm or member outside the clear see through portion or transparent portion of the cover for the surgical helmet or hood. The two magnets can attract each other, thereby fixing the second holding arm or member outside the clear see through portion of the cover for the surgical helmet or hood in a fixed spatial relationship to the first holding arm or member and the HMD and/or the surgical helmet (for example, if the HMD is integrated into the surgical helmet or attached to the surgical helmet). The second holding arm or member can include one or more fiducial markers, located outside the clear see through portion of the cover for the surgical helmet or hood and visible to the navigation, image capture or other registration system. A base of a fiducial marker or a fiducial marker with an integrated or attached optional second magnet can be placed over the first holding arm or member outside the clear see through portion or transparent portion of the cover for the surgical helmet or hood.
The two magnets can attract each other, thereby fixing the base of the fiducial marker or the fiducial marker outside the clear see through portion or transparent portion of the cover for the surgical helmet or hood in a fixed spatial relationship to the first holding arm or member and the HMD and/or the surgical helmet (for example, if the HMD is integrated into the surgical helmet or attached to the surgical helmet). The base of the fiducial marker or the fiducial marker can comprise one or more fiducial markers, located outside the clear see through portion or transparent portion of the cover for the surgical helmet or hood and visible to the navigation, image capture or other registration system. Multiple holding arms or holding members can be used, e.g. a first, second, third, fourth, fifth and more holding arm. Multiple bases or fiducial markers can be used, e.g. a first, second, third, fourth, fifth and more base or fiducial marker. Using a first, second, third, fourth, fifth or more magnets, the first, second, third, fourth, fifth or more holding arm or holding members can be connected. Using a first, second, third, fourth, fifth or more magnets, the first, second, third, fourth, fifth or more base or fiducial marker can be connected. Instead of a magnet, a lock, locking mechanism, an attachment, an attachment mechanism, mechanical means and/or mechanical connector connecting the first and second holding arms or members, or the base or fiducial marker, can be used; in this embodiment, the first holding arm or member can extend through an optional hole in the clear see through portion of the cover for the surgical helmet or hood to allow for attaching or locking of the second holding arm or member with the attached fiducial marker or for attaching or locking the base of the fiducial marker or the fiducial marker. Multiple holding arms or holding members can be used, e.g. a first, second, third, fourth, fifth and more holding arm. Using a first, second, third, fourth, fifth or more attachment, attachment mechanism, lock, locking mechanism, mechanical means and/or mechanical connector, the first, second, third, fourth, fifth or more holding arm or holding members can be connected or attached or the first, second, third, fourth, fifth or more bases of fiducial markers or fiducial markers can be connected or attached. In another embodiment, an HMD can be attached to a surgical helmet. The attachment can be permanent. The attachment can be a removable connection. The surgical helmet can also have an integrated fan. The fiducial markers can be attached to or integrated into the surgical helmet and/or the HMD. The fiducial markers can be mounted on one or more holding arms or holding members or holding fixtures or bases attached to the helmet and/or the HMD, which can optionally extend to the cover for the surgical helmet or hood. The holding arm or member can include a magnet, optionally located subjacent to the cover for the surgical helmet or hood. A second holding arm or member with an optional second magnet can be placed over the first holding arm or member outside the cover for the surgical helmet or hood. The two magnets can attract each other, thereby fixing the second holding arm or member, or base or fiducial marker, inside and/or outside the cover for the surgical helmet or hood in a fixed spatial relationship to the first holding arm or member and the surgical helmet and/or the HMD.
The second holding arm or member, or base or fiducial marker, can include one or more fiducial markers, located outside the cover for the surgical helmet or hood and visible to the navigation, image capture or other registration system. Multiple holding arms or holding members, or bases or fiducial markers, can be used, e.g. a first, second, third, fourth, fifth and more holding arm, or base or fiducial marker. Using a first, second, third, fourth, fifth or more magnets, the first, second, third, fourth, fifth or more holding arm or holding members or bases or fiducial markers can be connected. Thus, in any of the embodiments throughout the specification, multiple fiducial markers and multiple holding arms or holding members or holding fixtures for fiducial markers or bases for fiducial markers or fiducial markers can be used. Since, in this and other embodiments, the HMD can be attached or integrated into the surgical helmet, or the surgical helmet can be attached or integrated into the HMD, the spatial coordinates of the HMD in relationship to the helmet can be known; thus, a coordinate transfer can be used to track the HMD in its relationship to the one or more fiducial markers attached to the helmet, and a coordinate transfer can be used to track the surgical helmet in its relationship to the one or more fiducial markers attached to the HMD. Instead of a magnet, an attachment, an attachment mechanism, a lock, locking mechanism, mechanical means and/or mechanical connector connecting the first and second holding arms or members or bases of fiducial markers or fiducial markers can be used; in this embodiment, the first holding arm or member can extend through an optional hole in the cover for the surgical helmet or hood to allow for attaching or locking of the second holding arm or member with the attached fiducial marker or for attaching or locking of the base of the fiducial marker or of the fiducial marker. In some embodiments, the HMD can be an integral part of a surgical helmet. In some embodiments, the surgical helmet can be an integral part of the HMD.

Adjusting Mechanism for HMD

The attachment of the HMD to the surgical helmet can be adjustable, for example, using an adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, to adjust for the different facial and head shapes of different users. The adjustment mechanism can be mechanical, electric, electromagnetic, piezoelectric etc. The amount of adjustment in the x, y, and/or z-direction can be determined, e.g. on an attached scale, and can optionally be included in the coordinates of the HMD in relationship to the surgical helmet to which it is attached and any fiducial markers attached to the helmet. This can be, for example, useful when the HMD is attached to, coupled to, or connected to the surgical helmet and one or more markers, e.g. fiducial markers, are also connected to the surgical helmet. Alternatively, at least one marker, e.g. via a holding member attaching and/or coupling the marker, can be connected and/or attached to the HMD; in this case, the HMD can be directly referenced/registered relative to the attached marker, irrespective of any adjustments of the position of the HMD, e.g. up, down, forward, backward and/or rotation/tilt to adjust the HMD position in relationship to the user's eyes and, optionally, a transparent portion of a cover for the surgical helmet.
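The coordinate transfer described above can be sketched as a composition of rigid transforms using 4 x 4 homogeneous matrices. This is a minimal illustration; the frame names and the idea of folding a scale-measured x/y/z adjustment into the helmet-to-HMD transform follow the description above, but the function names are hypothetical.

```python
import numpy as np

def translation(dx, dy, dz):
    # 4 x 4 homogeneous translation, e.g. the x/y/z adjustment of the
    # HMD on the helmet as read off an attached scale.
    t = np.eye(4)
    t[:3, 3] = [dx, dy, dz]
    return t

def hmd_pose_in_camera(t_camera_marker, t_marker_helmet, t_helmet_hmd,
                       adjustment=None):
    # Coordinate transfer: t_a_b maps coordinates in frame b to frame a.
    # The navigation system tracks only the fiducial markers
    # (t_camera_marker); the marker-to-helmet and helmet-to-HMD
    # transforms are known from the mechanical mounting, so the HMD
    # pose follows by chaining them, optionally including the measured
    # adjustment of the HMD relative to the helmet.
    if adjustment is None:
        adjustment = np.eye(4)
    return t_camera_marker @ t_marker_helmet @ adjustment @ t_helmet_hmd
```

The same chaining works in the other direction, e.g. tracking the helmet from markers attached to the HMD, by inverting the corresponding transforms.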
The adjusting mechanism can be configured to provide a forward or backward movement of the HMD in relationship to the user's face or eyes. The adjusting mechanism can be configured to provide an upward or downward movement of the HMD in relationship to the user's face or eyes. The adjusting mechanism can be configured to provide a tilting of the HMD in relationship to the user's face or eyes. The adjusting mechanism can be configured to provide a rotation of the HMD in relationship to the user's face or eyes, e.g. in a frontal (coronal) plane and/or in a sagittal plane. The adjusting mechanism can be configured to center the HMD and/or the display of the virtual data in relationship to the surgeon's eyes and/or the surgeon's visual field. The adjusting mechanism can be operated prior to the surgery to adjust the position (including a forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) of the HMD. The adjusting mechanism can be operated during the surgery to adjust the position (including the orientation, rotation, tilt) of the HMD, e.g. by the surgeon or a nurse. In some embodiments, at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can be subjacent to the cover for the surgical helmet and can be operated through the cover of the surgical helmet. In some embodiments, at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can extend external to the cover for the surgical helmet and can be touched directly, without an interposed cover for the surgical helmet. In the latter embodiment, at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can optionally be provided sterile. In some embodiments, the adjusting mechanism can include at least one magnet. The at least one magnet can couple at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, external or superjacent to the cover for the surgical helmet with at least portions of the adjusting mechanism located subjacent to the cover for the surgical helmet. The at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, external or superjacent to the cover for the surgical helmet can be provided sterile. In some embodiments, the adjusting mechanism can be electric, electromagnetic, or piezoelectric and the position of the HMD (including forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) can be adjusted using a computer processor and a user interface, as described in the specification or known in the art. In some embodiments, the adjusting mechanism can be electric, electromagnetic, e.g. using a motor, or piezoelectric and the position of the HMD (including forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) can be adjusted automatically using a computer processor and, optionally, eye tracking as described in the specification or known in the art.
Adjusting Mechanism for Camera

If a computer processor operates a camera (21, FIG. 1) integrated into or attached to the HMD, e.g. for inside-out tracking, an adjusting mechanism can be configured to provide a forward or backward movement of the camera, e.g. in relationship to the HMD, the surgical helmet, and/or the user's eyes or face. The adjusting mechanism can be configured to provide an upward or downward movement of the camera, e.g. in relationship to the HMD, the surgical helmet, and/or the user's eyes or face. The adjusting mechanism can be configured to provide a tilting of the camera, e.g. in relationship to the HMD, the surgical helmet, and/or the user's eyes or face. The adjusting mechanism can be configured to provide a rotation of the camera, e.g. in relationship to the HMD, the surgical helmet, and/or the user's eyes or face, e.g. in a frontal (coronal) plane and/or in a sagittal plane. The adjusting mechanism can be configured to center the camera, e.g. in relationship to the HMD, the surgical helmet, and/or the user's eyes or face and/or the surgeon's visual field. The adjusting mechanism can be operated prior to the surgery to adjust the position (including a forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) of the camera. The adjusting mechanism can be operated during the surgery to adjust the position (including the orientation, rotation, tilt) of the camera, e.g. by the surgeon or a nurse. In some embodiments, at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can be subjacent to the cover for the surgical helmet and can be operated through the cover of the surgical helmet. In some embodiments, at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can extend external to the cover for the surgical helmet and can be touched directly, without an interposed cover for the surgical helmet. In the latter embodiment, at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can optionally be provided sterile. In some embodiments, the adjusting mechanism can include at least one magnet. The at least one magnet can couple at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, external or superjacent to the cover for the surgical helmet with at least portions of the adjusting mechanism located subjacent to the cover for the surgical helmet. The at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, external or superjacent to the cover for the surgical helmet can be provided sterile. In some embodiments, the adjusting mechanism can be electric, electromagnetic, or piezoelectric and the position of the camera (including forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) can be adjusted using a computer processor and a user interface, as described in the specification or known in the art. In some embodiments, the adjusting mechanism can be electric, electromagnetic, e.g. using a motor, or piezoelectric and the position of the camera (including forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) can be adjusted automatically using a computer processor and, optionally, eye tracking as described in the specification or known in the art.
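One way to picture the automatic, eye-tracking-driven camera adjustment just described is a simple proportional control loop. The sketch below is schematic only: it assumes a hypothetical eye tracker that reports the gaze point in camera-image pixels and a hypothetical motorized stage with pan/tilt commands; neither interface is taken from a specific embodiment.

```python
def recenter_camera(eye_tracker, stage, image_w=1920, image_h=1080,
                    gain=0.01, deadband_px=20):
    # Proportional re-centering: read the tracked gaze point in image
    # coordinates and command the motorized/piezoelectric stage so the
    # gaze point drifts toward the image center. eye_tracker.gaze_px()
    # and stage.move_degrees() are hypothetical interfaces; 'gain'
    # converts a pixel error into a small angular step.
    gx, gy = eye_tracker.gaze_px()
    err_x, err_y = gx - image_w / 2, gy - image_h / 2
    if abs(err_x) > deadband_px:
        stage.move_degrees(pan=-gain * err_x)   # rotate toward gaze
    if abs(err_y) > deadband_px:
        stage.move_degrees(tilt=-gain * err_y)
```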
Adjusting Mechanism for Surgical Helmet

The attachment of the surgical helmet to the HMD can be adjustable, for example, using an adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, to adjust for the different facial and head shapes of different users. The adjustment mechanism can be mechanical, electric, electromagnetic, piezoelectric etc. The amount of adjustment in the x, y, and/or z-direction can be determined, e.g. on an attached scale, and can be included in the coordinates of the HMD in relationship to the surgical helmet to which it is attached and the fiducial markers attached to the helmet. This can be, for example, useful when the HMD is attached to, coupled to, or connected to the surgical helmet and one or more markers, e.g. fiducial markers, are also connected to the surgical helmet. Alternatively, at least one marker, e.g. via a holding member attaching and/or coupling the marker, can be connected and/or attached to the HMD; in this case, the HMD can be directly referenced/registered relative to the attached marker, irrespective of any adjustments of the position of the HMD, e.g. up, down, forward, backward and/or rotation/tilt to adjust the HMD position in relationship to the user's eyes and, optionally, a transparent portion of a cover for the surgical helmet. The adjusting mechanism can be configured to provide a forward or backward movement of the surgical helmet in relationship to the user's face or eyes and/or the HMD. The adjusting mechanism can be configured to provide an upward or downward movement of the surgical helmet in relationship to the user's face or eyes and/or the HMD. The adjusting mechanism can be configured to provide a tilting of the surgical helmet in relationship to the user's face or eyes and/or the HMD. The adjusting mechanism can be configured to provide a rotation of the surgical helmet in relationship to the user's face or eyes and/or the HMD, e.g. in a frontal (coronal) plane and/or in a sagittal plane. The adjusting mechanism can be configured to center the surgical helmet in relationship to the surgeon's eyes and/or the surgeon's visual field and/or the HMD. The adjusting mechanism can be operated prior to the surgery to adjust the position (including a forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) of the surgical helmet. The adjusting mechanism can be operated during the surgery to adjust the position (including the orientation, rotation, tilt) of the surgical helmet, e.g. by the surgeon or a nurse. In some embodiments, at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can be subjacent to the cover for the surgical helmet and can be operated through the cover of the surgical helmet. In some embodiments, at least portions of the adjusting mechanism, e.g.
a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can extend external to the cover for the surgical helmet and can be touched directly, without an interposed cover for the surgical helmet. In the latter embodiment, at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, can optionally be provided sterile. In some embodiments, the adjusting mechanism can include at least one magnet. The at least one magnet can couple at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, external or superjacent to the cover for the surgical helmet with at least portions of the adjusting mechanism located subjacent to the cover for the surgical helmet. The at least portions of the adjusting mechanism, e.g. a dial, knob, button, lever, slider, handle, handle bar, thread, ratchet, screw, ring, key, wrench or combinations thereof, external or superjacent to the cover for the surgical helmet can be provided sterile. In some embodiments, the adjusting mechanism can be electric, electromagnetic, or piezoelectric and the position of the surgical helmet (including forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) can be adjusted using a computer processor and a user interface, as described in the specification or known in the art. In some embodiments, the adjusting mechanism can be electric, electromagnetic, e.g. using a motor, or piezoelectric and the position of the surgical helmet (including forward-backward, up-down movement, orientation adjustment, e.g. rotation, tilt) can be adjusted automatically using a computer processor and, optionally, eye tracking as described in the specification or known in the art. In some embodiments, the system can comprise a head mounted display and an adjusting mechanism, wherein the head mounted display can be configured to be worn on a head of a user under a cover of a surgical helmet so that the display of the head mounted display can be adjacent to a transparent portion of the cover, wherein the adjusting mechanism can be configured to adjust at least a position of the head mounted display in relationship to the user's eyes, and wherein the system can comprise a connecting mechanism configured to couple the head mounted display to the surgical helmet. The adjusting mechanism can comprise the connecting mechanism. The adjusting mechanism can be configured to adjust the head mounted display in relationship to the user's eyes in an x, a y, or a z-direction or combinations thereof. The adjustment of the head mounted display can comprise a translation of about 30 mm, about 25 mm, about 20 mm, about 15 mm, about 10 mm, about 7 mm, about 5 mm, about 4 mm, about 3 mm, about 2 mm, about 1 mm, about 0.5 mm, or about 0.25 mm in at least one of an x, a y, or a z-direction or combinations thereof. The adjusting mechanism can be configured to rotate, tilt or rotate and tilt the head mounted display in relationship to at least one of the user's eyes. The rotating or tilting or rotating and tilting of the head mounted display can comprise a change in angular orientation of about 30°, about 25°, about 20°, about 15°, about 10°, about 9°, about 8°, about 7°, about 6°, about 5°, about 4°, about 3°, about 2°, about 1°, about 0.5°, or about 0.25° in at least one direction.
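A small sketch can make the bounded adjustment ranges quoted above concrete: a requested translation or tilt is clamped to the supported limits before being applied by the mechanical or motorized adjusting mechanism. The 30 mm and 30° limits here are taken from the largest values listed above as an illustration; the function name is hypothetical.

```python
def clamp_adjustment(dx_mm, dy_mm, dz_mm, tilt_deg,
                     max_translation_mm=30.0, max_rotation_deg=30.0):
    # Limit a requested HMD adjustment to the supported range in each
    # axis before the adjusting mechanism applies it.
    clamp = lambda value, limit: max(-limit, min(limit, value))
    return (clamp(dx_mm, max_translation_mm),
            clamp(dy_mm, max_translation_mm),
            clamp(dz_mm, max_translation_mm),
            clamp(tilt_deg, max_rotation_deg))
```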
The adjusting mechanism can be configured for adjusting the position of the head mounted display prior to a surgical procedure or during the surgical procedure or prior to and during the surgical procedure. The adjusting mechanism can comprise a mechanical mechanism configured to adjust the position of the head mounted display. The mechanical mechanism can comprise one or more mechanical elements, wherein the one or more mechanical elements can comprise a dial, knob, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof. The mechanical mechanism can comprise one or more mechanical elements configured to be subjacent to the cover for the surgical helmet, wherein the mechanical element can be configured to be operated through the cover for the surgical helmet. The mechanical mechanism can comprise one or more mechanical elements configured to be operated external to the cover for the surgical helmet. The mechanical element external to the cover for the surgical helmet can be provided sterile. The adjusting mechanism can comprise a motorized element, an electric element, an electromagnetic element, a piezoelectric element or combinations thereof configured to adjust the position of the head mounted display. The system can comprise a processor and a user interface, wherein the user interface can be configured to activate the motorized element, the electric element, the electromagnetic element, the piezoelectric element or combinations thereof, wherein the user interface can comprise a graphical user interface, voice recognition, gesture recognition, a virtual interface displayed by the head mounted display, a virtual keyboard displayed by the head mounted display, a virtual slider, a virtual button, an eye tracking system, a physical keyboard, a physical computer mouse, a physical button, a physical joystick, a physical track pad, or a combination thereof. The system can comprise a processor and an eye tracking system, wherein the processor can be configured to receive one or more inputs from the eye tracking system and to generate one or more outputs for activating the motorized element, the electric element, the electromagnetic element, the piezoelectric element or combinations thereof for adjusting the position of the head mounted display. The eye tracking system can comprise at least one camera configured to be integrated into or attached to the head mounted display or the surgical helmet, wherein the at least one camera can be configured to track at least one eye of the user. The surgical procedure can comprise a knee replacement, hip replacement, shoulder joint replacement, ankle joint replacement, or a spinal procedure. The connecting mechanism can be configured to be attachable to the surgical helmet. The connecting mechanism can be configured to be integrated into the surgical helmet. The connecting mechanism can be configured to be attachable to the head mounted display. The connecting mechanism can be configured to be integrated into the head mounted display. The connecting mechanism can be configured to be integrated into a portion of a housing of the head mounted display. The connecting mechanism can be configured to attach the head mounted display to the surgical helmet. The at least one head mounted display can be an optical see through head mounted display. The at least one head mounted display can be a video see through head mounted display.
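As an illustration of the user-interface path just described, the following sketch maps a recognized user-interface event (for example a virtual button displayed by the HMD or a voice command) to an activation of the motorized, electric, electromagnetic or piezoelectric elements. The command strings, axis names and actuator interface are hypothetical, not taken from a specific embodiment.

```python
from enum import Enum

class Axis(Enum):
    FORWARD_BACKWARD = "z"
    UP_DOWN = "y"
    TILT = "tilt"

def handle_ui_command(command, actuators, step_mm=1.0, step_deg=1.0):
    # Map a user-interface event (virtual button, recognized voice
    # command, etc.) to an actuator activation. 'actuators' is a
    # hypothetical driver for the motorized/electric/piezoelectric
    # elements that move the head mounted display.
    if command == "display up":
        actuators.translate(Axis.UP_DOWN, +step_mm)
    elif command == "display down":
        actuators.translate(Axis.UP_DOWN, -step_mm)
    elif command == "display closer":
        actuators.translate(Axis.FORWARD_BACKWARD, -step_mm)
    elif command == "display farther":
        actuators.translate(Axis.FORWARD_BACKWARD, +step_mm)
    elif command == "tilt up":
        actuators.rotate(Axis.TILT, +step_deg)
    elif command == "tilt down":
        actuators.rotate(Axis.TILT, -step_deg)
```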
In some embodiments, the system can comprise a head mounted display, at least one holding member, at least one marker, and at least one magnetic coupling mechanism, wherein the at least one holding member can be configured to be integrated into or configured to be attached to the head mounted display, wherein the at least one magnetic coupling mechanism is configured to removably attach the at least one marker to the at least one holding member. Each of the at least one holding member can comprise a proximal and a distal end, wherein the proximal end of the holding member can be attached to the head mounted display, and wherein the at least one marker can be attached to the distal end of the holding member. The at least one holding member can be attached directly to the head mounted display. The at least one holding member can be attached indirectly to the head mounted display. The at least one magnetic coupling mechanism can comprise at least one magnetic element, wherein the at least one magnetic element can be positioned at the distal end of the at least one holding member. The at least one magnetic element can be attached to the distal end of the at least one holding member, to the at least one marker, or to the distal end of the at least one holding member and to the at least one marker. The at least one magnetic coupling mechanism can comprise a first magnetic element attached to the distal end of the at least one holding member and a second magnetic element attached to the at least one marker. At least one holding member can be configured to be removably attached to the head mounted display. The head mounted display can be configured to be worn under a cover of a surgical helmet by a user. The holding member can be configured to extend subjacent to a cover for a surgical helmet, wherein the at least one marker can be configured to be attached to the at least one holding member superjacent to the cover for the surgical helmet. The processor can be configured to register the head mounted display in a coordinate system using the at least one marker. The processor can be integrated into the head mounted display. The processor can be external to the head mounted display. The holding member can be composed of at least one of a metal, plastic, magnetic material, or combinations thereof. The holding member can be rigid. The at least one marker can be part of a marker structure, wherein the marker structure can comprise two or more markers, wherein the two or more markers can be arranged in a geometrically predetermined orientation. The at least one magnetic coupling mechanism can be configured to position the at least one marker in a geometrically predetermined position and orientation relative to the head mounted display. The at least one marker can be configured to be mounted in a geometrically predetermined position and orientation with an accuracy of about 2 mm, about 1.5 mm, about 1 mm, about 0.5 mm, about 0.25 mm, about 0.1 mm, or about 0.05 mm in at least one direction, or an accuracy of about 2°, about 1.5°, about 1°, about 0.5°, about 0.25°, about 0.1°, or about 0.05° in at least one direction. The at least one magnetic coupling mechanism can comprise one or more neodymium magnets or other magnets.
The at least one marker, the at least one holding member, the at least one magnetic coupling mechanism, or combinations thereof can comprise one or more mating features, wherein the one or more mating features can be configured to position the at least one marker in a geometrically predetermined position and orientation relative to the head mounted display. The predetermined position and orientation can be adjusted for a thickness of the cover for the surgical helmet. At least a portion of the holding member can be provided sterile. The at least one marker can be provided sterile. The at least one holding member can be configured to be integrated into, attached to or linked to the surgical helmet. The at least one marker can comprise at least one of an optical marker, a geometric pattern, a bar code, a QR code, an alphanumeric code, a radiofrequency marker, an infrared marker, a retroreflective marker, an active marker, and a passive marker. The at least one holding member can be configured to be removably connected to the head mounted display using at least one of a magnetic mechanism, a mechanical attachment mechanism, an electromagnetic attachment mechanism, or combinations thereof. The at least one holding member can be configured to be removably connected to the surgical helmet using at least one of a magnetic mechanism, a mechanical attachment mechanism, an electromagnetic attachment mechanism, or combinations thereof. The at least one marker can comprise a base, wherein the base is mounted on the distal end of the at least one holding member using the at least one magnetic element. The at least one magnetic element can be configured to be integrated into or attached to at least one of the at least one marker, the base holding the at least one marker, the at least one holding member, the head mounted display, the surgical helmet, the cover for the surgical helmet or combinations thereof. The head mounted display can be an optical see through head mounted display. The head mounted display can be a video see through head mounted display. The system can be configured to track the head mounted display during a surgical procedure. The system can use an outside in tracking. The system can use an inside out tracking. The system can use an outside in and an inside out tracking. In some embodiments, the system can track a head mounted display during a surgical procedure and can comprise a head mounted display; at least one marker; and at least one holding member, wherein the head mounted display can be configured to be integrated into or attached to a surgical helmet, wherein the at least one holding member can be integrated into or configured to be attached to or connected to the head mounted display, the surgical helmet or combinations thereof, wherein at least a portion of the at least one holding member can be configured to extend through at least one opening of a cover for the surgical helmet, wherein the at least one marker can be configured to be integrated into or attached to the at least one holding member external to the cover. The system can comprise a processor, wherein the processor can be configured to register the head mounted display in a coordinate system using the at least one marker. The processor can be integrated into the head mounted display. The processor can be external to the head mounted display. The system can be configured to track the head mounted display during a surgical procedure.
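The registration of the head mounted display in a coordinate system using the markers, referenced above, can be sketched as a rigid point-set alignment. The snippet below is a generic Kabsch/SVD best fit, not the disclosure's specific registration method; it assumes the markers' geometrically predetermined positions in the HMD frame and their detected positions in the tracking system frame are already matched point-for-point.

```python
import numpy as np

def register_hmd(markers_hmd: np.ndarray, markers_cam: np.ndarray):
    """markers_hmd: (N, 3) marker positions in the HMD frame (known by design).
    markers_cam: (N, 3) the same markers as detected by the tracking system.
    Returns rotation R (3, 3) and translation t (3,) with cam = R @ hmd + t."""
    c_h = markers_hmd.mean(axis=0)
    c_c = markers_cam.mean(axis=0)
    H = (markers_hmd - c_h).T @ (markers_cam - c_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_c - R @ c_h
    return R, t
```

If the marker set is mounted superjacent to the cover, the cover thickness could be folded into the known HMD-frame marker coordinates before this fit, consistent with the thickness adjustment described above.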
The head mounted display can be configured to be integrated into or attached to the surgical helmet worn on a head of a user under the cover so that the display of the head mounted display can be adjacent to a transparent portion of the cover so as to permit the user to view the surgical site. At least a portion of the holding member can be provided sterile. At least the portion of the holding member that extends external to the cover for the surgical helmet can be provided sterile. The at least one marker can be provided sterile. The at least one holding member can be configured to be integrated into, attached to or connected to the surgical helmet. The at least one marker can comprise one or more of an optical marker, a geometric pattern, a bar code, a QR code, an alphanumeric code, a radiofrequency marker, an infrared marker, a retroreflective marker, an active marker, a passive marker, or combinations thereof. The at least one marker can be configured to be removably attached to the at least one holding member using a magnetic mechanism, a mechanical attachment mechanism, an electromagnetic attachment mechanism, or combinations thereof. The at least one holding member can be configured to be removably attached to the head mounted display using a magnetic mechanism, a mechanical attachment mechanism, an electromagnetic attachment mechanism, or combinations thereof. The at least one holding member can be configured to be attached to the surgical helmet using a magnetic mechanism, a mechanical attachment mechanism, an electromagnetic attachment mechanism, or combinations thereof. The at least one marker can comprise a base, wherein the base can be mounted on the at least one holding member. The head mounted display can be an optical see through head mounted display. The head mounted display can be a video see through head mounted display. The following are additional non-limiting illustrative embodiments describing various aspects of the invention. FIG. 2A is an illustrative example of a head mounted display (HMD). The frontal or front facing portion, module or component 18, the parietal portion, module or component 20, and the occipital portion, module or component 22 can optionally include at least one processor, camera, depth sensor, or storage medium. The front portion or visor 11, for example centered over or located over the eyes, can be transparent, e.g. in the case of an optical see through head mounted display, or can be opaque or non-transparent, e.g. in the case of a video see through head mounted display. The HMD can have extender tabs (grey) 28 (FIGS. 2B, 2C). Extender tabs 28 can be attachable and/or detachable. Extender tabs 28 can be used for attaching one or more fiducial markers and/or for attaching one or more holding arms, holding members or holding fixtures. FIG. 2D is an illustrative example of a snap on structure 30 with one or more snap on mechanisms 32 and one or more fiducial markers 35. The snap on structure 30 can, for example, be attached to the extender tab(s) 28. FIGS. 2E, 2F show a fiducial marker 35 set attached to the HMD, in this example with extender tabs 28 and a snap on 30 fiducial marker 35 set. By attaching the one or more fiducial markers 35 to the extender tabs 28 and/or one or more holding arms or holding members (not shown), the one or more fiducial markers 35 can be placed in an area that is not obscured by the cover for the surgical helmet or hood, but, for example, in an area that is visible through the transparent portion of the cover, e.g. a transparent face shield.
The transparent portion can be configured with regard to its dimensions, e.g. anteroposterior (AP), mediolateral (ML), and superoinferior (SI), so that the fiducial markers 35 can be detected by a navigation system, image capture system, video system etc. through the transparent portion of the cover, e.g. a transparent face shield. FIGS. 3A-C show an illustrative embodiment where a holding member 40 or holding arm 40 or extender 40 is integrated into or attached to an HMD. The holding member 40 can comprise a mechanical connector or attachment at the end or, in the example in FIGS. 3A and 3B, a magnet or magnetic connector 45. The holding members can be configured to stay clear of the surgical helmet (not shown) on the surgeon's head, or to connect to the surgical helmet, or to connect to the HMD and the surgical helmet. The holding members 40 can optionally extend to/subjacent to the cover for the surgical helmet 50 or hood, or through the cover for the surgical helmet or hood if the cover for the surgical helmet or hood has openings to accommodate the holding members. In this example, after a cover for the surgical helmet or hood 50 is applied, one or more fiducial markers 35 attached to a second holding member 55 (or a base) with an optional second magnet 60 and/or a second mechanical connector can be connected to the holding member 40 connected to the HMD, with the cover 50 for the surgical helmet or hood interposed. In this example, three fiducial markers 35 are seen on the top left and three fiducial markers 35 are seen on the top right. A first mechanical connector can be of male type and a second mechanical connector can be of female type; alternatively, the first mechanical connector can be of female type and the second mechanical connector can be of male type. The fiducial markers 35 can be in a predetermined position and/or orientation relative to the HMD using the one or more holding members 40, 55 (or bases) and the one or more magnets 45, 60. The predetermined position and orientation, and resultant coordinates of the HMD and/or the fiducial markers, can be adjusted for a thickness 51 of the cover for the surgical helmet; for example, the thickness of the cover can be added to and/or subtracted from the relevant coordinates of the marker and/or the HMD and/or the surgical helmet. In some embodiments, the magnets, for example as seen in FIG. 3B, can be substituted or supplemented by a male piece, for example connected or linked to the marker, the base of the marker or a holding member connected to the marker, and a female piece, for example linked, connected to, or integrated into the holding member attached to the HMD and/or the surgical helmet. The male piece can optionally be configured to comprise a sharp portion, wherein the user can utilize the sharp portion to pierce the cover for the surgical helmet and to advance the male portion to mate or connect with the female portion. The cover can optionally include at least one transparent portion in a location configured to visualize or make visible the female portion; in this manner, the user can advance the male portion with the sharp portion through the cover, piercing the cover, into the female portion under visual control. Alternatively, the user can palpate the female portion under the cover and advance the male portion with the sharp portion through the cover under tactile control. FIGS. 4A-D are illustrative examples of an integration of an HMD 70 with a surgical helmet 75. Surgical helmets can be used, for example, for joint replacement surgery. The HMD 70 can be mechanically, e.g. removably, attached to the frame of the surgical helmet 75.
Two holding members 40 extend superiorly from the HMD 70, comprising magnets 45 at the superior tip. A cover 50 for the surgical helmet with a transparent portion 80 can be applied over the surgical helmet 75, and a fiducial marker structure 85 with multiple fiducial markers 35 can be attached with use of optional mating magnets (not shown) attached to the base 90 of the fiducial marker structure 85. The HMD can be protected from blood and body fluids that may be released from the knee or hip or spine or other surgical site during surgery. A cover 50 for the surgical helmet with a transparent portion 80 can be applied over the surgical helmet 75, and a fiducial marker structure 85 with optionally one or multiple fiducial markers 35 can be attached using the magnets. FIG. 4D shows the base 90 of the fiducial marker structure 85, which can include, for example, an optional receptacle 95 for accepting a magnet (not shown). FIGS. 5A-D are illustrative examples of an integration of an HMD 70 with a surgical helmet 75. The HMD 70 can be mechanically, e.g. removably, attached to the frame of the surgical helmet 75. Two holding members 40 extend superiorly from the HMD 70. A cover 50 for the surgical helmet with a transparent portion 80 can be applied over the surgical helmet 75, and a fiducial marker structure 85 with multiple fiducial markers 35 can be attached, for example using a mechanical connection or coupling mechanism, e.g. with a press fit and/or a lock and/or mating features. The HMD can be protected from blood and body fluids that may be released from the knee or hip or spine or other surgical site during surgery. A cover 50 for the surgical helmet with a transparent portion 80 can be applied over the surgical helmet 75, and a fiducial marker structure with optionally one or multiple fiducial markers 35 can be attached using the mechanical connection or coupling mechanism. The cover 50 can include one or more openings 90 through which the at least one holding member 40 can extend from the HMD and/or the surgical helmet, located underneath the cover 50, to the area above the cover for placement of the fiducial marker structure 85 with optionally one or multiple fiducial markers 35 located above the cover 50. The one or more fiducial markers 35 can also be placed directly on the at least one holding member 40, above the cover 50. FIGS. 6A-D are illustrative examples of an HMD 70 with holding members 40 attached to the front of the HMD with magnetic end portions 45 and/or mechanical connectors (not shown). The holding arm 40 or member can also be attached to the clear, see through portion of the HMD if an optical see through head mounted display is used (as compared to a video see through head mounted display). The holding arm 40 or member can also be attached to a non-see through portion of the HMD if a video see through head mounted display is used. After a sterile cover 50 or hood for the surgical helmet is applied, a second holding arm 55 or member with a second magnet 60 and/or a second mechanical connector (not shown here) can be connected, with the cover 50 for the surgical helmet or hood material interposed, to the first holding arm 40 or member with the first magnet attached to the HMD 70 front portion. Fiducial markers 35 are shown external to the cover for the surgical helmet (not shown). FIGS. 7A-D are illustrative examples of an HMD 70 with holding arms 55 or bases or fiducial markers 35 attached to the outer lens face or front portion of the visor 11. In FIGS. 7A-B, a left and a right cluster of fiducial markers 35 on holding members 55 are shown.
Individual fiducial markers can also be directly attached to the outer lens face or front portion of the visor 11 or, in FIGS. 7C-D, they can be attached using a base or small holding member 95. FIGS. 8A-E are illustrative examples of an integration of an HMD 70 with a surgical helmet 100. The surgical helmet 100 can include at least one holding member 40 with magnetic end portions or magnets/magnetic connectors 45 and/or mechanical connectors (not shown). The surgical helmet 100 can include external or internal attachment mechanisms 110, locking mechanisms 110 or coupling mechanisms 110 (for example using mechanical connectors, electromagnetic connectors, or magnets) for connecting an HMD 70. The HMD 70 can comprise external or internal attachment mechanisms 120, locking mechanisms 120 or coupling mechanisms 120 (for example using mechanical connectors, electromagnetic connectors, or magnets) for connecting to the surgical helmet 100. The attachment mechanisms 120, locking mechanisms 120 or coupling mechanisms 120 of the HMD 70 can be male or female; the attachment mechanisms 110, locking mechanisms 110 or coupling mechanisms 110 of the surgical helmet can be the corresponding counterpart male or female, e.g. with the HMD side being male, the surgical helmet side can be female; or with the surgical helmet side being male, the HMD side can be female. FIG. 8D shows the HMD 70 connected to the surgical helmet 100 using HMD side connectors or attachments 120 mating with surgical helmet side connectors 110. The occipital portion, module or component 22 of the HMD 70 is visible; the head holder portion 24 of the HMD 70 is also shown. The holding members 40 can optionally extend to/subjacent to the cover for the surgical helmet 50 or hood, or through the cover for the surgical helmet or hood if the cover for the surgical helmet or hood has openings to accommodate the holding members. In this example, after a cover for the surgical helmet or hood 50 is applied, one or more fiducial markers 35 attached to a second holding member 55 (or a base) with an optional second magnet and/or a second mechanical connector can be connected to the holding member 40 connected to the HMD, with the cover 50 for the surgical helmet or hood interposed. In this example, three fiducial markers 35 are seen on the top left and three fiducial markers 35 are seen on the top right. The fiducial markers 35 can be in a predetermined position and/or orientation relative to the HMD using the one or more holding members 40, 55 (or bases) and the one or more magnets 45, 60. The predetermined position and orientation, and resultant coordinates of the HMD and/or the fiducial markers, can be adjusted for a thickness 51 of the cover for the surgical helmet. FIGS. 9A-D are illustrative examples of an adjusting mechanism configured to adjust at least a position of the head mounted display attached or connected to a surgical helmet in relationship to the user's eyes. The head mounted display can be configured to be worn on a head of a user under a cover of a surgical helmet so that the display of the HMD is adjacent to a transparent portion of the cover (not shown). The system can comprise a connecting mechanism configured to couple the head mounted display to the surgical helmet. The adjusting mechanism can be configured to adjust the head mounted display in relationship to the user's eyes in an x, a y, or a z-direction or combinations thereof. The adjusting mechanism can be configured to rotate, tilt or rotate and tilt the head mounted display in relationship to at least one of the user's eyes.
The adjusting mechanism can comprise a mechanical mechanism configured to adjust the position of the head mounted display; the mechanical mechanism can comprise one or more mechanical elements, wherein the one or more mechanical elements can comprise a dial, knob 140, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof. In FIG. 9A, markers 35 are seen attached to a marker structure 85, which can be attached to the HMD 70. An adjusting mechanism, e.g. a mechanical mechanism configured to adjust the position of the head mounted display, in this example a knob 140, is seen, which can be operated, for example through an overlying or superjacent cover for a surgical helmet, to move, rotate and/or tilt the HMD or the display unit 130 of the HMD 70, including with the HMD connected or attached to the surgical helmet. In FIG. 9B, the adjusting mechanism 140 has been operated to tilt the HMD including its display unit 130. FIG. 9C is an illustrative example showing the HMD on the user's head. The HMD including its display unit 130 is not well aligned with the user's eyes 150. In FIG. 9D, the adjusting mechanism 140 has been operated to move, e.g. translate, rotate, or tilt, the HMD; the display unit 130 of the HMD is now centered over the user's eyes 150. Depending on the surgeon's head shape, the surgical helmet with a connected and/or attached HMD may move the HMD into a position and/or orientation where the HMD is not aligned with or centered over the surgeon's eyes and/or pupils and/or visual field. The adjusting mechanism can be used to move the HMD 70, including the display unit 130 of the HMD, forward, backward, up and down, or to rotate and/or tilt it in relationship to the user's eyes. The virtual display can be centered in this manner in relationship to the surgeon's eyes and/or pupils and/or the surgeon's visual field, even in the presence of a mechanical connection or attachment between the HMD and the surgical helmet. FIGS. 10A-G are illustrative examples of an adjusting mechanism using a mechanical mechanism with mechanical elements comprising ratchet like elements 160 and knob like elements 170 for moving an attached HMD (not shown). The adjusting mechanism and mechanical elements can be integrated into or attached to the HMD, the surgical helmet or both. FIGS. 11A-J are illustrative, non-limiting examples of an adjusting mechanism configured to adjust at least a position of the head mounted display in relationship to the user's eyes, while the user is wearing a surgical helmet. The HMD can be attached to the surgical helmet using various attachment means or connecting mechanisms. The HMD can be removably attachable. The head mounted display can be configured to be worn on a head of a user under a cover of a surgical helmet so that the display of the head mounted display is adjacent to a transparent portion of the cover. FIG. 11A is an illustrative example of a surgical helmet. The surgical helmet can optionally comprise a chin portion 180, a forehead or frontal portion 190, side members 185 connecting the chin portion 180 with the frontal portion 190, a head band or felt portion 195, an air duct 200, e.g. for circulating air inside the cover for the surgical helmet, and a support element 210, in this example connecting the air duct 200 with the frontal portion 190.
Frame 215 outlines the general area for the views, including magnified views, shown in FIGS. 11B-J. FIG. 11B is a magnified side view showing the air duct 200, support element 210, frontal portion 190, and side members 185. FIG. 11C is another side view showing the air duct 200, support element 210, frontal portion 190, and side members 185. An attachment member or mechanism 220 for attaching the HMD (not shown) to the surgical helmet is shown. In this example, the attachment member or mechanism attaches to the support element 210. The attachment member or mechanism 220 can attach to any other portion or element of the surgical helmet. FIG. 11D is a view from the rear showing a cross-section of the air duct 200, a cross-section of the support element 210, the rear (forehead facing) side of the frontal portion 190, and side members 185. The attachment member or mechanism 220 for attaching the HMD (not shown) to the surgical helmet is shown, in this example partially encircling the support element 210. Tabs or struts 225 extend from the attachment member or mechanism 220 over the top portion 215 of the support member 210, thereby securing the attachment member or mechanism 220 to the support member 210. The attachment member or mechanism 220 can attach to any other portion or element of the surgical helmet. FIG. 11E shows the air duct 200, support element 210, and frontal portion 190, with the attachment member or mechanism 220 attached to the support element 210. Attachment member or mechanism 220 comprises an optional recess or receptacle 230 for receiving a magnet (not shown) for attaching an HMD. The magnets can be configured for removable attachment of the HMD. Similarly, the attachment member or mechanism 220 can be configured to be removably attachable, for example using partially flexible tabs or struts (225, FIG. 11D). Attachment member or mechanism 220 also comprises lateral centering elements 235, configured to receive centering arms or struts (not shown) for attaching and aligning an HMD. FIGS. 11F-G show illustrative, non-limiting examples of an attachment member or mechanism 240, for example for attachment to or integration into an HMD. The attachment member or mechanism 240 can comprise holding members 40, for example for accepting or attaching one or more markers, e.g. fiducial markers. The attachment member or mechanism 240 can include a recess or receptacle, for example for receiving a magnet or corresponding metal component or mechanical attachment mechanism, to couple and/or connect the attachment member 240 (with the attached HMD) to the attachment member 220 attached to the surgical helmet. At least one magnet, for example placed in recess or receptacle 230 or placed in recess or receptacle 250, can be used for attaching the attachment member 240 (with the attached HMD) to the attachment member 220 attached to the surgical helmet; alternatively, at least one mechanical connector and/or mechanical connecting element can be used. Centering members or struts 255 can be configured to align with, to be placed in, or to mate with centering elements 235, thereby aligning the attachment member 240 (with the attached HMD) with the attachment member 220 and the attached surgical helmet in a predetermined pose/position and/or orientation. FIG. 11H shows the air duct 200, support element 210, and frontal portion 190, with the attachment member or mechanism 220 attached to the support element 210.
Attachment member or mechanism 220 comprises an adjusting mechanism 260 to adjust at least a position of the head mounted display attached to or connected to the surgical helmet in relationship to the user's eyes. The adjustment mechanism 260 comprises an optional thread 270 to accept an optional screw (not shown). The adjustment mechanism 260 also comprises a slot 280, for example for slideably engaging an attachment member of an HMD or for slideably engaging a portion of an HMD housing or holder or support member. FIGS. 11I-J show the air duct 200 and frontal portion 190, with the attachment member or mechanism 220 attached to the support element 210. Attachment member or mechanism 220 comprises an adjusting mechanism 260 to adjust at least a position of the head mounted display (not shown) attached to or connected to the surgical helmet in relationship to the user's eyes. The adjustment mechanism 260 comprises an optional thread to accept an optional screw 290. The screw 290 can be threaded into an extension 245 of the attachment member 240 integrated into or attached to the HMD/HMD housing (not shown). The adjustment mechanism 260 can comprise a wing nut, knob, lever 300, or other mechanical element or motor, for example for moving the screw and for moving, rotating, and/or tilting the attachment member 240 integrated into or attached to the HMD/HMD housing, thereby moving, rotating or tilting the HMD to adjust at least a position of the HMD in relationship to the user's eyes while the HMD is coupled to the surgical helmet, e.g. moving the HMD up, down, forward, backward, or rotating the HMD in relationship to the user's eyes, optionally centering the HMD over the user's eyes/pupils. In some embodiments, the adjusting mechanism can be configured to center the HMD, and with it the display unit of the HMD and/or the display of virtual data, over the user's eyes/pupils. In some embodiments, the adjusting mechanism can be configured to adjust the position of the HMD, and with it the display unit of the HMD and/or the display of virtual data over the user's eyes/pupils, so that the display is adjacent to a transparent portion of the cover for the surgical helmet and so that the user can observe the surgical site through the see through HMD and the transparent portion. In some embodiments, the adjusting mechanism can be activated pre-operatively to adjust the position of the HMD, and with it the display unit of the HMD and/or the display of virtual data in relationship to the user's eyes/pupils. In some embodiments, the adjusting mechanism can be activated intra-operatively to adjust the position of the HMD, and with it the display unit of the HMD and/or the display of virtual data in relationship to the user's eyes/pupils. The adjusting mechanism can comprise a mechanical mechanism configured to adjust the position of the head mounted display, wherein the mechanical mechanism can comprise one or more mechanical elements, wherein the one or more mechanical elements can comprise a dial, knob, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof. In some embodiments, the mechanical elements can be configured so that they can be operated with the cover for the surgical helmet overlying the mechanical elements, i.e. with the mechanical elements, e.g. a dial, knob, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof, subjacent to the cover for the surgical helmet. For example, in the latter embodiment, the user can palpate the mechanical elements, e.g.
a dial, knob, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof, through the cover and control them through the cover. In some embodiments, the cover for the surgical helmet can include at least one opening, with at least portions of one or more mechanical elements, e.g. a dial, knob, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof, being positioned/located superjacent and/or external to the cover of the surgical helmet. Optionally, the at least portions of the one or more mechanical elements being positioned/located superjacent and/or external to the cover of the surgical helmet can be provided in sterile fashion. In some embodiments, the mechanical elements, e.g. a dial, knob, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof, can comprise a metal base and can be located external to the cover for the surgical helmet and can be coupled, without an opening in the cover, to a holding member, connector or second mechanical element comprising a magnet. In some embodiments, the mechanical elements, e.g. a dial, knob, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof, can comprise a magnet and can be located external to the cover for the surgical helmet and can be coupled, without an opening in the cover, to a holding member, connector or second mechanical element comprising a metal coupling portion. In some embodiments, the mechanical elements, e.g. a dial, knob, button, lever, slider, handle, handle bar, ring, key, wrench or combinations thereof, can comprise a first magnet and can be located external to the cover for the surgical helmet and can be coupled, without an opening in the cover, to a holding member, connector or second mechanical element comprising a second magnet. In some embodiments, the adjusting mechanism can comprise a motor, a motorized element, an electric element, an electromagnetic element, a piezoelectric element or combinations thereof, and a user interface can be configured to control the motor, the motorized element, the electric element, the electromagnetic element, the piezoelectric element or combinations thereof for adjusting the position of the HMD in relationship to the user's eyes. In some embodiments, the adjusting mechanism can comprise a motor, a motorized element, an electric element, an electromagnetic element, a piezoelectric element or combinations thereof, and an eye tracking system, including optionally one or more cameras directed at the eyes/pupils, can be configured to control the motor, the motorized element, the electric element, the electromagnetic element, the piezoelectric element or combinations thereof for adjusting the position of the HMD in relationship to the user's eyes. Reference is made to PCT application PCT/US19/15522, filed Jan. 29, 2019, and PCT application PCT/US18/12459, filed on Jan. 5, 2018, which are hereby incorporated by reference in their entireties. All publications, patent applications and patents mentioned herein are hereby incorporated by reference in their entirety as if each individual publication or patent was specifically and individually indicated to be incorporated by reference.
11857379

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

As an improvement of known surface scanning systems, devices, controllers and methods for intraoperative surface scanning of soft tissue anatomical organ(s) during a surgical procedure, the present disclosure provides inventions for constructing an intraoperative scanned volume model of an anatomical organ based upon a sensing of a contact force applied by a surface scanning end-effector of a scanning robot to the anatomical organ, whereby the contact force is indicative of a defined surface deformation offset of the anatomical organ. To facilitate an understanding of the various inventions of the present disclosure, the following description of FIGS. 1A and 1B teaches embodiments of a force sensed surface scanning method 10 and a force sensed surface scanning system 20 in accordance with the inventive principles of the present disclosure. From this description, those having ordinary skill in the art will appreciate how to practice various and numerous embodiments of force sensed surface scanning methods and force sensed surface scanning systems in accordance with the inventive principles of the present disclosure. Also from this description, those having ordinary skill in the art will appreciate an application of the force sensed surface scanning methods and force sensed surface scanning systems of the present disclosure in support of surgical procedures utilizing fusion of preoperative imaging and intraoperative imaging. Examples of such surgical procedures include, but are not limited to, a cardio-thoracic surgery, a prostatectomy, a splenectomy, a nephrectomy and a hepatectomy. Referring to FIG. 1B, force sensed surface scanning system 20 employs a volume imaging modality 31, a robotic system 40 and a surface scanning controller 50. Volume imaging modality 31 is an imaging modality for generating a preoperative volume image of an anatomical region as known in the art of the present disclosure (e.g., a computed tomography imaging, a magnetic resonance imaging, an ultrasound imaging modality, a positron emission tomography imaging, and a single photon emission computed tomography imaging of a thoracic region, a cranial region, an abdominal region or a pelvic region). Robotic system 40 employs a scanning robot 41, a robot controller 42, a surface scanning end-effector 43 and an ultrasound imaging end-effector 44. A scanning robot 41 is any type of robot, known in the art of the present disclosure or hereinafter conceived, that is structurally configured or structurally configurable with one or more end-effectors utilized in the performance of a surgical procedure. Further, scanning robot 41 is equipped with pose tracking technology and force sensing technology as known in the art of the present disclosure. In one exemplary embodiment, a scanning robot 41 is a snake scanning robot equipped with a rotary encoder embedded in each joint of the snake scanning robot for tracking a pose of the snake scanning robot as known in the art of the present disclosure, and further equipped with a force sensor, a pressure sensor, or an optical fiber for sensing a contact force between an end-effector of the snake scanning robot and an anatomical organ as known in the art of the present disclosure. Robot controller 42 controls a pose of scanning robot 41 within a relevant coordinate system in accordance with robot position commands 55 issued by surface scanning controller 50 as known in the art of the present disclosure.
Surface scanning end-effector 43 is utilized to construct an intraoperative scanned volume model 17 of the anatomical organ in accordance with the inventive principles of the present invention as will be further explained herein. In practice, surface scanning end-effector 43 may be any type of end-effector having a calibration scan reference thereon as known in the art of the present disclosure. In exemplary embodiments, surface scanning end-effector 43 may include a mount holding a tool pointer having a spherical distal tip serving as a calibrated scanning reference, or may include a mount holding an ultrasound laparoscope having an ultrasound transducer serving as a calibrated scanning reference. Surgical imaging end-effector 44 is utilized to intraoperatively image an external surface and/or internal structures within the anatomical organ in support of a surgical procedure as known in the art of the present disclosure. In an exemplary embodiment, surgical imaging end-effector 44 may be an ultrasound laparoscope, which may also serve as surface scanning end-effector 43. In practice, surface scanning end-effector 43 is mounted onto scanning robot 41, whereby robot controller 42 controls scanning robot 41 in accordance with robot position commands 55 from surface scanning controller 50 to implement a robotic surface scanning 12 of force sensed surface scanning method 10 of FIG. 1A as will be further explained herein. Subsequently, surgical imaging end-effector 44 is mounted onto scanning robot 41, whereby robot controller 42 controls scanning robot 41 in accordance with interactive or planned commands from an operator of robotic system 40 during a surgical procedure as will be further explained herein. Alternatively in practice, surface scanning end-effector 43 is affixed to scanning robot 41, whereby robot controller 42 controls scanning robot 41 in accordance with robot position commands 55 from surface scanning controller 50 to implement a robotic surface scanning 12 of force sensed surface scanning method 10 of FIG. 1A as will be further explained herein. Subsequently, surgical imaging end-effector 44 is affixed to or mounted onto an additional scanning robot 41, whereby robot controller 42 controls the additional scanning robot 41 in accordance with interactive or planned commands from an operator of robotic system 40 during a surgical procedure as will be further explained herein. Surface scanning controller 50 controls an implementation of force sensed surface scanning method 10 (FIG. 1A) of the present disclosure as will now be described herein. Referring to FIGS. 1A and 1B, force sensed surface scanning method 10 involves a scan path planning phase 11, a robotic surface scanning phase 12 and a volume model registration phase 13. Prior to path planning phase 11 of method 10, an imaging controller 30 is operated for controlling a generation by a volume imaging modality 31 of a preoperative volume image of an anatomical region as known in the art of the present disclosure (e.g., a computed tomography imaging, a magnetic resonance imaging, an ultrasound imaging modality, a positron emission tomography imaging, and a single photon emission computed tomography imaging of a thoracic region, a cranial region, an abdominal region and a pelvic region). Path planning phase 11 of method 10 encompasses a communication of volume image data 14 representative of the preoperative volume image of the anatomical organ to surface scanning controller 50 by any communication technique known in the art of the present disclosure (e.g., a data upload or a data streaming).
Surface scanning controller 50 processes volume image data 14 to generate a preoperative image segmented volume model 15 of an anatomical organ within the anatomical region as known in the art of the present disclosure (e.g., a segmented volume model of a liver, a heart, a lung, a brain, a stomach, a spleen, a kidney, a pancreas, a bladder, etc.). Alternatively, imaging controller 30 may process volume image data 14 to generate preoperative image segmented volume model 15 of the anatomical organ as known in the art of the present disclosure, whereby path planning phase 11 of method 10 encompasses a communication of preoperative image segmented volume model 15 of the anatomical organ to surface scanning controller 50 by any communication technique known in the art of the present disclosure (e.g., a data upload or a data streaming). Path planning phase 11 of method 10 further encompasses surface scanning controller 50 executing a scan path planning 51 involving a definition of a path along one or more segments or an entirety of a surface of preoperative image segmented volume model 15 of the anatomical organ as known in the art of the present disclosure. In one embodiment of scan path planning 51, surface scanning controller 50 implements an operator or systematic delineation, as known in the art of the present disclosure, of a line sampling scan path on preoperative image segmented volume model 15 of the anatomical organ involving a continuous contact between surface scanning end-effector 43 and the anatomical organ as surface scanning end-effector 43 is traversed along one or more lines over a surface segment or an entire surface of preoperative image segmented volume model 15 of the anatomical organ. For example, FIG. 2A illustrates an exemplary delineation of a line sampling scan path 15a including a plurality of lines traversing the surface of a preoperative image segmented volume model of a liver. In practice, the lines may be disconnected as shown or connected to any degree by an operator or system delineation of path 15a. Alternatively in practice, a line sampling scan path may be defined independent of the preoperative image segmented volume. For example, the line sampling scan path may be defined as a geometric pattern (e.g., a spiral pattern, a zigzag pattern, etc.) or as a random pattern (e.g., a white noise sampling scheme) or a combination thereof. In a second embodiment of scan path planning 51, surface scanning controller 50 implements an operator or systematic delineation, as known in the art of the present disclosure, of a point sampling scan path on preoperative image segmented volume model 15 of the anatomical organ involving a periodic contact between surface scanning end-effector 43 and the anatomical organ as surface scanning end-effector 43 is traversed over a surface segment or an entire surface of preoperative image segmented volume model 15 of the anatomical organ. For example, FIG. 2B illustrates an exemplary delineation of a point sampling scan path 15b including a plurality of points marked on a surface of a preoperative image segmented volume model of a liver. In practice, as designed by an operator or system delineation of path 15b, the points may be arranged in a uniform pattern as shown or in a non-uniform pattern. Alternatively in practice, a point sampling scan path may be defined independent of the preoperative image segmented volume. For example, the point sampling scan path may be defined as a geometric pattern (e.g., a spiral pattern, a zigzag pattern, etc.)
or as a random pattern (e.g., a white noise sampling scheme) or a combination thereof. Further in practice, scan path planning 51 may also involve any combination of a line sampling scan path and a point sampling scan path delineated on preoperative image segmented volume model 15 of the anatomical organ. Additionally in practice, scan path planning 51 may be omitted from surface scanning controller 50 or not used by surface scanning controller 50 for a particular procedure. In this scenario, an operator of system 20 may control a navigation of scanning robot 41 in implementing an operator defined sampling scan path. Still referring to FIGS. 1A and 1B, robotic surface scanning phase 12 of method 10 encompasses an image guidance of surface scanning end-effector 43 in proximity of the anatomical organ, whereby surface scanning controller 50 is operated to issue robot position commands 55 to robot controller 42 for controlling a navigation of surface scanning end-effector 43 relative to the anatomical organ in accordance with the planned sampling scan path delineated on preoperative image segmented volume model 15 of the anatomical organ. More particularly, to facilitate a model registration 53 in accordance with the inventive principles of the present disclosure as will be further described herein, robotic system 40 communicates surface sensing data 16 to surface scanning controller 50, whereby surface scanning controller 50 implements a model construction 52 of an intraoperative volume model 17 of the anatomical organ in accordance with the inventive principles of the present disclosure as will be further described herein. More particularly, surface sensing data 16 includes robot position data 45 communicated by robot controller 42 to surface scanning controller 50, whereby robot position data 45 is informative of a current pose of scanning robot 41 within a coordinate system registered to the anatomical organ or preoperative segmented volume model as known in the art of the present disclosure. Surface sensing data 16 further includes force sensing data 46 informative of a contact force applied by the surface scanning end-effector 43 to the anatomical organ and, for imaging embodiments of surface scanning end-effector 43, surface sensing data 16 further includes scan image data 47 representative of a current image slice of the anatomical organ. Surface scanning controller 50 processes robot position data 45, force sensing data 46 and scan image data 47 (if applicable) to construct an intraoperative volume model 17 of the anatomical organ based on a physical behavior of a soft tissue of an anatomical organ under a minor deformation by surface scanning end-effector 43 (e.g., a tissue deformation in nanometers). Specifically, model construction 52 is premised on an assumption that the physical behavior of the soft tissue of an anatomical organ under a minor deformation is both linearly elastic and one-dimensional. Under such conditions, an offset between undeformed anatomical tissue and deformed anatomical tissue may be calculated using the equation u = f/k, where u is a tissue displacement (offset), f is the sensed contact force between surface scanning end-effector 43 and the deformed anatomical tissue, and k is a parameter describing viscoelastic properties of the anatomical organ.
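A worked instance of the linear-elastic offset model above, with illustrative values chosen only to land in the stated nanometer-scale regime (the numbers are assumptions, not from the disclosure):

```python
# Linear-elastic, one-dimensional offset model u = f / k for minor deformations.
def surface_deformation_offset(f_newtons: float, k_newtons_per_meter: float) -> float:
    """Tissue displacement u in meters for sensed contact force f and a scalar
    viscoelastic parameter k; valid only for very small deformations."""
    return f_newtons / k_newtons_per_meter

# Example with assumed values: f = 2e-6 N and k = 20 N/m give
# u = 2e-6 / 20 = 1e-7 m, i.e. a 100 nm deformation offset.
u = surface_deformation_offset(2e-6, 20.0)
```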
From this assumption, model construction 52 involves a designation of a defined scanning force parameter fDC and of a defined viscoelastic property parameter k, whereby a surface deformation offset uSDO may be calculated to support the construction of the intraoperative volume model 17 of the anatomical organ as will be further explained herein. In one embodiment of model construction 52, an operator of surface scanning controller 50, via input devices and/or graphical interfaces, provides or selects a viscoelastic property parameter k as a constant value representative of the viscoelastic properties of the subject anatomical organ, and further provides or selects a scanning force parameter fDC at which the surface of the anatomical organ will be scanned (e.g., a contact force in millinewtons). A surface deformation offset uSDO is calculated from the provided/selected viscoelastic property parameter k and scanning force parameter fDC to support the construction of the intraoperative volume model 17 of the anatomical organ. Alternatively, the present disclosure recognizes that a viscoelastic behavior of a soft tissue of an anatomical organ under deformation may be a very complex process. First, the viscoelastic parameters for any unevenly distributed force may be described by a multi-dimensional matrix, which takes into account the direction of the force and the topology of the surface. Second, a linearity of the deformation holds true only for very small deformations (e.g., in the order of nanometers). Third, the viscoelastic property parameter k of the soft tissue of the anatomical organ may be unknown, whether due to tissue abnormalities or due to patient-specific anatomical characteristics. Thus, in a second embodiment of model construction 52, surface deformation offset uSDO is empirically defined as will be further explained herein. Still referring to FIGS. 1A and 1B, as surface scanning controller 50 controls a navigation of surface scanning end-effector 43 relative to the anatomical organ in accordance with the planned sampling scan path delineated on preoperative image segmented volume model 15 of the anatomical organ, robotic surface scanning phase 12 of method 10 further encompasses surface scanning controller 50 recording each position of the calibrated scanned reference of surface scanning end-effector 43 that corresponds to a contact force applied by surface scanning end-effector 43 to the anatomical organ equaling scanning force parameter fDC. In practice, the sensed contact force equaling the scanning force parameter fDC may be enforced within an acceptable margin of error. Each recorded position of the calibrated scanned reference of surface scanning end-effector 43 is deemed a digitized model point suitable for a generation of a sparse point cloud representation of the anatomical organ on the assumption of a uniform deformation offset at each recorded position of a digitized model point. In practice, as will be further explained herein, a line sampling scan path generates a sparse point cloud representation of the anatomical organ in view of a subset of positions of the calibrated scanned reference of surface scanning end-effector 43 corresponding to a contact force applied by surface scanning end-effector 43 to the anatomical organ equaling scanning force parameter fDC, and further in view of a subset of positions of the calibrated scanned reference of surface scanning end-effector 43 failing to correspond to a contact force applied by surface scanning end-effector 43 to the anatomical organ equaling scanning force parameter fDC.
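The recording rule just described, in which the calibrated scanned reference position is digitized only when the sensed contact force equals scanning force parameter fDC within an acceptable margin of error, might look as follows. The robot and sensor interface callables are hypothetical stand-ins for robot position data 45 and force sensing data 46; this is a sketch, not the disclosure's implementation.

```python
import numpy as np

def digitize_scan(move_to, read_reference_position, read_contact_force,
                  scan_poses, f_dc: float, rel_tol: float = 0.05):
    """Traverse a planned sampling scan path and return an (M, 3) sparse point
    cloud of digitized model points. move_to, read_reference_position and
    read_contact_force are hypothetical robot/sensor interface callables."""
    points = []
    for pose in scan_poses:                       # planned sampling scan path
        move_to(pose)                             # command the scanning robot
        f = read_contact_force()                  # sensed contact force (N)
        if abs(f - f_dc) <= rel_tol * f_dc:       # enforce f_DC within margin
            points.append(read_reference_position())  # digitized model point
    return np.asarray(points)
```

With a line sampling scan path the same gate simply drops the positions whose sensed force misses f_DC; with a point sampling scan path the pose would instead be adjusted until the gate is satisfied.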
Also in practice, as will be further explained herein, a point sampling scan path generates a sparse point cloud representation of the anatomical organ based on the spatial delineation of the points on preoperative image segmented volume model 15 of the anatomical organ. For non-imaging embodiments of surface scanning end-effector 43, robotic surface scanning phase 12 of method 10 further encompasses surface scanning controller 50 constructing intraoperative volume model 17 as a mesh created from the sparse point cloud representation via any mesh construction technique known in the art of the present disclosure (e.g., a Delaunay triangulation). Due to the defined deformation offset, the mesh will have a comparable shape to a shape of the preoperative image segmented volume model 15 of the anatomical organ for registration purposes, but the mesh will not necessarily have a comparable size to a size of the preoperative image segmented volume model 15 of the anatomical organ. While not necessary for most registration processes, to achieve comparable sizes, surface scanning controller 50 may further calculate normal vectors at each vertex as a function of the defined deformation offset via any mesh normalization technique known in the art of the present disclosure (e.g., a Mean Weighted Equally technique), and displace each point of the mesh in a direction of the associated normal vector to increase the size yet maintain the shape of the mesh. For imaging embodiments of surface scanning end-effector 43, robotic surface scanning phase 12 of method 10 further encompasses surface scanning controller 50 stitching images associated with each point of the mesh, unsized or sized, to thereby render intraoperative volume model 17 as an image of the anatomical organ. In practice, while stitching images associated with each point of the mesh, surface scanning controller 50 may interpolate images missing from the mesh due to unrecorded positions of the calibrated scanned reference of surface scanning end-effector 43. To facilitate an understanding of the various inventions of the present disclosure, the following description of FIGS. 3A-3C illustrates exemplary recorded positions of digitized model points in accordance with the inventive principles of the present disclosure. From this description, those having ordinary skill in the art will further appreciate how to practice various and numerous embodiments of force sensed surface scanning methods and force sensed surface scanning systems in accordance with the inventive principles of the present disclosure. Referring to FIG. 3A, surface scanning end-effector 43 is shown deforming an anatomical organ prior to a scanning of the surface of the anatomical organ. More particularly, surface scanning controller 50 controls a positioning of scanning end-effector 43 relative to the anatomical organ to initially apply a contact force unto the tissue of the anatomical organ, resulting in an OFFSET1 between undeformed anatomical tissue UAT and deformed anatomical tissue DAT1. The positioning of scanning end-effector 43 is adjusted until a sensed contact force SCF1 per force sensing data FSD equals a desired contact force DCF, whereby OFFSET1 between undeformed anatomical tissue UAT and deformed anatomical tissue DAT1 is deemed to equate to the defined surface deformation offset uSDO of the anatomical organ as previously described herein.
Consequently, from a corresponding robot position RP1 per robot position data 45, surface scanning controller 50 records calibrated scanned reference position SRP of surface scanning end-effector 43, represented by the black dot, as the initial digitized model point DMP1. During a scanning of the surface of the anatomical organ, FIG. 3B illustrates a repositioning of scanning end-effector 43 to a robot position RPX relative to the anatomical organ resulting in OFFSETX between undeformed anatomical tissue UAT and deformed anatomical tissue DATX with a sensed contact force SCFX per force sensing data FSD that equals a desired contact force DCF, and FIG. 3C illustrates a repositioning of scanning end-effector 43 to a robot position RPY relative to the anatomical organ resulting in OFFSETY between undeformed anatomical tissue UAT and deformed anatomical tissue DATY with a sensed contact force SCFY per force sensing data FSD that does not equal a desired contact force DCF. For point sampling scan path embodiments, the repositioning of scanning end-effector 43 is adjusted until a sensed contact force SCF per force sensing data FSD equals a desired contact force DCF as shown in FIG. 3B, whereby OFFSETX between undeformed anatomical tissue UAT and deformed anatomical tissue DATX is deemed to equate to the defined surface deformation offset uSDO of the anatomical organ as previously described herein. Consequently, from a corresponding robot position RPX per robot position data 45, surface scanning controller 50 records calibrated scanned reference position SRP of surface scanning end-effector 43, represented by the black dot, as an additional digitized model point DMPX. This process is repeated for each point in the point sampling scan path. For line sampling scan path embodiments, as surface scanning end-effector 43 is traversed along a line over the surface of the anatomical organ, surface scanning controller 50 will digitize robot positions RPX as shown in FIG. 3B and will not digitize robot positions RPY as shown in FIG. 3C or any other robot position failing to sense a contact force equaling the scanning force parameter fDC. The result for either embodiment is a sparse point cloud representation of the anatomical organ facilitating an unsized or resized mesh creation of intraoperative volume model 17. Referring back to FIGS. 1A and 1B, volume model registration 13 of method 10 encompasses surface scanning controller 50 implementing a model registration 53 of preoperative segmented volume model 15 and intraoperative volume model 17 via a registration technique as known in the art of the present disclosure. In mesh embodiments of intraoperative volume model 17, surface scanning controller 50 may execute a point-by-point registration technique for registering preoperative segmented volume model 15 and intraoperative volume model 17. Examples of such a point-by-point registration technique include, but are not limited to, a rigid or non-rigid Iterative Closest Point (ICP) registration, a rigid or non-rigid Robust Point Matching (RPM) registration and a particle filter based registration. In stitched image embodiments of intraoperative volume model 17, surface scanning controller 50 may execute an image registration technique for registering preoperative segmented volume model 15 and intraoperative volume model 17. Examples of such an image registration technique include, but are not limited to, an internal anatomical landmark based image registration (e.g., bifurcations or calcifications), an internal implanted marker based image registration and a mutual information based image registration.
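A generic rigid Iterative Closest Point loop of the kind named above, shown only as an illustration and not as the disclosure's specific registration, alternates closest-point matching against points sampled from the preoperative model's surface with a best-fit rigid transform update:

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp(src: np.ndarray, dst: np.ndarray, iters: int = 30):
    """src: (N, 3) intraoperative points; dst: (M, 3) preoperative surface points.
    Returns R (3, 3), t (3,) mapping src into the preoperative coordinate system."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                   # closest-point correspondences
        q = dst[idx]
        cs, cq = cur.mean(axis=0), q.mean(axis=0)
        H = (cur - cs).T @ (q - cq)                # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # reflection guard
        Ri = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # incremental rotation
        ti = cq - Ri @ cs
        cur = cur @ Ri.T + ti                      # update moving cloud
        R, t = Ri @ R, Ri @ t + ti                 # compose total transform
    return R, t
```

Because the defined deformation offset preserves shape, such a rigid fit can succeed even when the intraoperative mesh has not been resized to match the preoperative model.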
Still referring to FIGS. 1A and 1B, upon completion of the scanning process, surface scanning controller 50 may implement a model fusion 54 based on model registration 53 as known in the art of the present disclosure, whereby a registered model fusion 56 may be displayed within an applicable coordinate system as symbolically shown. In one embodiment, registered model fusion 56 includes an overlay of preoperative segmented volume model 15 onto intraoperative volume model 17. In another embodiment, registered model fusion 56 includes an overlay of preoperative segmented volume model 15 onto the anatomical organ as registered to the coordinate system of robotic system 40. To facilitate an understanding of the various inventions of the present disclosure, the following description of FIGS. 4 and 5 teaches additional embodiments of a force sensed surface scanning system 100 and a force sensed surface scanning method 140 in accordance with the inventive principles of the present disclosure. From this description, those having ordinary skill in the art will further appreciate how to practice various and numerous embodiments of force sensed surface scanning methods and force sensed surface scanning systems in accordance with the inventive principles of the present disclosure. Referring to FIG. 4, force sensed surface scanning system 100 employs a snake scanning robot 110, a tool pointer 113, an ultrasound laparoscope 114 and an endoscope 115. For scanning purposes, tool pointer 113 or ultrasound laparoscope 114 may be mounted onto snake scanning robot 110 as known in the art of the present disclosure. Snake scanning robot 110 is equipped with force/pressure sensor(s) 111 and/or optical fiber(s) 112 for sensing a contact force applied by a mounted tool pointer 113 or ultrasound laparoscope 114 to an anatomical organ as known in the art of the present disclosure. Endoscope 115 is mountable on an additional snake scanning robot 110 for purposes of viewing a positioning of tool pointer 113 or ultrasound laparoscope 114 in proximity of a surface of an anatomical organ. Force sensed surface scanning system 100 further employs a workstation 120 and a scanning control device 130. Workstation 120 includes a known arrangement of a monitor 121, a keyboard 122 and a computer 123 as known in the art of the present disclosure. Scanning control device 130 employs a robot controller 131, a surface scanning controller 132 and a display controller 137, all installed on computer 123. In practice, robot controller 131, surface scanning controller 132 and display controller 137 may embody any arrangement of hardware, software, firmware and/or electronic circuitry for implementing a force sensed surface scanning method as shown in FIG. 5 in accordance with the inventive principles of the present disclosure as will be further explained herein. In one embodiment, robot controller 131, surface scanning controller 132 and display controller 137 each may include a processor, a memory, a user interface, a network interface, and a storage interconnected via one or more system buses. The processor may be any hardware device, as known in the art of the present disclosure or hereinafter conceived, capable of executing instructions stored in memory or storage or otherwise processing data. In a non-limiting example, the processor may include a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or other similar devices.
The memory may include various memories, as known in the art of the present disclosure or hereinafter conceived, including, but not limited to, L1, L2, or L3 cache or system memory. In a non-limiting example, the memory may include static random access memory (SRAM), dynamic RAM (DRAM), flash memory, read only memory (ROM), or other similar memory devices.
The user interface may include one or more devices, as known in the art of the present disclosure or hereinafter conceived, for enabling communication with a user such as an administrator. In a non-limiting example, the user interface may include a command line interface or graphical user interface that may be presented to a remote terminal via the network interface.
The network interface may include one or more devices, as known in the art of the present disclosure or hereinafter conceived, for enabling communication with other hardware devices. In a non-limiting example, the network interface may include a network interface card (NIC) configured to communicate according to the Ethernet protocol. Additionally, the network interface may implement a TCP/IP stack for communication according to the TCP/IP protocols. Various alternative or additional hardware or configurations for the network interface will be apparent.
The storage may include one or more machine-readable storage media, as known in the art of the present disclosure or hereinafter conceived, including, but not limited to, read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, or similar storage media. In various non-limiting embodiments, the storage may store instructions for execution by the processor or data upon which the processor may operate. For example, the storage may store a base operating system for controlling various basic operations of the hardware. The storage may further store one or more application modules in the form of executable software/firmware.
More particularly, still referring toFIG.4, robot controller131includes application module(s) for controlling a navigation of snake scanning robot110within a robotic coordinate system as known in the art of the present disclosure, and display controller137includes application module(s) for controlling a display of images, graphical user interfaces, etc. on monitor121as known in the art of the present disclosure. Surface scanning controller132includes application modules in the form of a scanning commander133, a model constructor134, a model registor135and a model fuser136for controlling the implementation of the force sensed surface scanning method as shown inFIG.5in accordance with the inventive principles of the present disclosure as will be further explained herein.
In practice, scanning control device130may be alternatively or concurrently installed on other types of processing devices including, but not limited to, a tablet or a server accessible by workstations and tablets, or may be distributed across a network supporting an execution of a surgical procedure utilizing a force sensed surface scanning method of the present disclosure as shown inFIG.5. Also in practice, controllers131,132and137may be integrated components, segregated components or logically partitioned components of scanning control device130.
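One way to picture the logical partitioning of surface scanning controller132into its four application modules is the following minimal Python sketch; the class layout and every method name are assumptions introduced here for illustration, not structures taken from the disclosure.

```python
# Hypothetical sketch of surface scanning controller 132 composed of its
# four application modules (133-136); names and methods are illustrative.

class SurfaceScanningController:
    def __init__(self, commander, constructor, registor, fuser):
        self.commander = commander      # scanning commander 133
        self.constructor = constructor  # model constructor 134
        self.registor = registor        # model registor 135
        self.fuser = fuser              # model fuser 136

    def run(self, preoperative_model):
        # Mirrors stages S142-S146 of flowchart 140 (FIG. 5).
        self.commander.pre_scan()                        # stage S142
        samples = self.commander.scan()                  # stage S144
        intraop_model = self.constructor.construct(samples)
        transform = self.registor.register(intraop_model,
                                           preoperative_model)  # stage S146
        return self.fuser.fuse(intraop_model, preoperative_model, transform)
```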
FIG.5illustrates a flowchart140representative of a force sensed surface scanning method in accordance with the inventive principles of the present disclosure that is implemented by application modules133-136of surface scanning controller132as will now be described herein.
Referring toFIG.5, a stage S142of flowchart140encompasses pre-scanning activities implemented by scanning commander133(FIG.4). These pre-scanning activities include, but are not limited to:
1. scanning commander133controlling a registration of snake scanning robot110to a preoperative segmented volume model as known in the art;
2. scanning commander133controlling a planning of a sampling scan path for snake scanning robot110as previously described herein in connection with the description ofFIGS.1A and1B, particularly a line sampling scan path or a point sampling scan path;
3. scanning commander133controlling a graphical user interface for an operator provision or an operator selection of viscoelastic property parameter k and scanning force parameter f; and
4. scanning commander133controlling an initial offset positioning of a surface sensing end-effector, such as, for example, an initial positioning of tool pointer113as shown inFIG.6Aor an initial positioning of ultrasound laparoscope114as shown inFIG.7A.
More particularly, a defined surface deformation offset u is calculated from the provided/selected viscoelastic property parameter k and scanning force parameter f (a worked form of this calculation is sketched following this passage), whereby scanning commander133controls the initial offset positioning of the surface sensing end-effector to equate a sensed contact force to scanning force parameter f and to thereby achieve a defined surface deformation offset u between an undeformed anatomical tissue and a deformed anatomical tissue of the anatomical organ as previously described herein.
For embodiments whereby viscoelastic property parameter k is unknown, defined surface deformation offset u may be empirically defined by:
1. scanning commander133controlling a graphical user interface for operator control of an initial offset positioning of a surface sensing end-effector at a selected non-zero sensed contact force, such as, for example, an initial positioning of tool pointer113as shown inFIG.6Aor an initial positioning of ultrasound laparoscope114as shown inFIG.7A;
2. scanning commander133retracting the surface sensing end-effector until such time as the sensed contact force is zero; and
3. scanning commander133defining scanning force parameter f as the selected non-zero sensed contact force associated with the initial offset positioning of the surface sensing end-effector, and further defining surface deformation offset u as the retraction distance of the surface sensing end-effector.
Alternatively in practice, a sampling scan path may be defined independent of the preoperative image segmented volume during stage S142, thereby omitting a requirement to register snake scanning robot110to the preoperative segmented volume model. For example, the sampling scan path may be defined as a geometric pattern (e.g., a spiral pattern, a zigzag pattern, etc.), as a random pattern (e.g., a white noise sampling scheme), or a combination thereof.
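The disclosure states that u is calculated from k and f without giving the formula; under the natural, but here assumed, reading of a linear (Hookean) elastic tissue model, the relationship would be:

```latex
% Assumption: linear elastic (Hookean) tissue model -- not stated
% explicitly in the disclosure.
f = k\,u \quad\Longrightarrow\quad u = \frac{f}{k}
```

where f is the scanning force parameter, k is the viscoelastic property parameter, and u is the defined surface deformation offset. The empirical alternative described above instead measures u directly, as the retraction distance at which the sensed contact force returns to zero.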
For such an alternative embodiment of stage S142, a surface of the anatomical organ is exposed via a surgical port, and the snake scanning robot110is inserted through the surgical port to the surface of the anatomical organ until reaching the initial offset positioning of the surface sensing end-effector or a position for an empirical definition of the surface deformation offset u. Thereafter snake scanning robot110is manually or controller operated to follow a predefined geometric pattern, to randomly traverse the surface of the anatomical organ, or a combination thereof.
Still referring toFIG.5, a stage S144of flowchart140encompasses scanning activities implemented by scanning commander133(FIG.4) and model constructor134(FIG.4). These scanning activities include, but are not limited to:
1. scanning commander133controlling a navigation of snake scanning robot110relative to the anatomical organ in accordance with the planned sampling scan path as previously described herein in connection with the description ofFIGS.1A and1B; and
2A. model constructor134constructing an intraoperative volume mesh as previously described herein in connection with the description ofFIGS.1A and1B, such as, for example, an intraoperative volume mesh160shown inFIG.6E; or
2B. model constructor134stitching an intraoperative volume image as previously described herein in connection with the description ofFIGS.1A and1B, such as, for example, an intraoperative volume image180shown inFIG.7E.
More particular to embodiments of stage S144utilizing tool pointer113, the navigation of snake scanning robot110will result in a digitization of sample points indicating a sensed contact force equating scanning force parameter f as shown inFIG.6Band a non-digitization of sample points indicating a sensed contact force not equating scanning force parameter f as shown inFIG.6C.
Referring toFIG.6D, a graph150may be displayed to an operator of workstation120(FIG.4) to thereby visualize digitization time periods152and154of specific sample point(s) and non-digitization time periods151,153and155of the remaining sample point(s). In one embodiment, non-digitization time period151represents a pre-scanning positioning of tool pointer113relative to the anatomical region, with digitization time periods152and154representing multiple digitized sample points during a line sampling scan of the anatomical organ. In another embodiment, non-digitization time period151represents a pre-scanning positioning of tool pointer113relative to the anatomical region, with digitization time periods152and154each representing a single digitized sample point during a point sampling scan of the anatomical organ.
Referring back toFIG.4, more particular to embodiments of stage S144utilizing ultrasound laparoscope114, the navigation of snake scanning robot110will result in a digitization of sample points indicating a sensed contact force equating scanning force parameter f as shown inFIG.7Band a non-digitization of sample points indicating a sensed contact force not equating scanning force parameter f as shown inFIG.7C. Referring toFIG.7D, a graph170may be displayed to an operator of workstation120(FIG.4) to thereby visualize digitization time periods172and174of specific sample point(s) and non-digitization time periods171,173and175of the remaining sample point(s).
In one embodiment, non-digitization time period171represents a pre-scanning positioning of ultrasound laparoscope114relative to the anatomical region, with digitization time periods172and174representing multiple digitized sample points during a line sampling scan of the anatomical organ. In another embodiment, non-digitization time period171represents a pre-scanning positioning of ultrasound laparoscope114relative to the anatomical region, with digitization time periods172and174each representing a single digitized sample point during a point sampling scan of the anatomical organ.
Referring back toFIG.5, a stage S146of flowchart140encompasses post-scanning activities implemented by model constructor134(FIG.4) and/or model registor135. These post-scanning activities include, but are not limited to:
1A. model constructor134optionally controlling resizing of the intraoperative volume mesh as a function of the defined surface deformation offset as previously described herein in connection with the description ofFIGS.1A and1B, such as, for example, a resizing of an intraoperative volume mesh160to an intraoperative volume mesh161as shown inFIG.6F(note the resizing will normally be in nanometers, thus the resizing as shown inFIG.6Fis exaggerated to visualize the concept; a minimal sketch of such a resizing follows this passage); and
2A. model registor135registering the unsized/resized intraoperative volume mesh to the preoperative segmented volume model as previously described herein in connection with the description ofFIGS.1A and1B; or
1B. model constructor134optionally controlling resizing of the intraoperative volume image as a function of the defined surface deformation offset as previously described herein in connection with the description ofFIGS.1A and1B, such as, for example, a resizing of an intraoperative volume image180to an intraoperative volume image181as shown inFIG.7F(note the resizing will normally be in nanometers, thus the resizing as shown inFIG.7Fis exaggerated to visualize the concept); and
2B. model registor135registering the unsized/resized intraoperative volume image to the preoperative segmented volume model as previously described herein in connection with the description ofFIGS.1A and1B.
Upon completion of stage S146, model fuser136implements a fusion technique as known in the art of the present disclosure for generating a registered model fusion138as previously described herein, whereby display controller137controls a display of registered model fusion138as shown.
Referring toFIGS.1-7, those having ordinary skill in the art will appreciate numerous benefits of the present disclosure including, but not limited to, an improvement over prior surface scanning systems, devices, controllers and methods. The inventions of the present disclosure provide a construction of an intraoperative scanned volume model of an anatomical organ based upon a sensing of a contact force applied by a surface scanning end-effector of a scanning robot to the anatomical organ, whereby the contact force is indicative of a defined surface deformation offset of the anatomical organ, thereby enhancing a registration of the intraoperative surface scanned volume model of the anatomical organ with a preoperative image segmented volume model of the anatomical organ.
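Returning to the optional resizing of stage S146: a minimal sketch of resizing a surface mesh as a function of the defined surface deformation offset might displace each vertex along its outward surface normal, as below. The algorithm is an illustrative assumption; the disclosure does not prescribe a particular resizing method.

```python
# Illustrative sketch: displace each mesh vertex along its outward unit
# normal by the defined surface deformation offset u (assumed algorithm;
# the disclosure does not prescribe one).
import numpy as np

def resize_mesh(vertices: np.ndarray, normals: np.ndarray, offset_u: float) -> np.ndarray:
    """vertices and normals are (N, 3) arrays, with normals assumed to be
    unit-length and outward-pointing; offset_u is typically very small
    (the resizing in FIGS. 6F and 7F is exaggerated for visualization)."""
    return vertices + offset_u * normals
```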
Furthermore, as one having ordinary skill in the art will appreciate in view of the teachings provided herein, features, elements, components, etc. described in the present disclosure/specification and/or depicted in the Figures may be implemented in various combinations of electronic components/circuitry, hardware, executable software and executable firmware and provide functions which may be combined in a single element or multiple elements. For example, the functions of the various features, elements, components, etc. shown/illustrated/depicted in the Figures can be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions can be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which can be shared and/or multiplexed. Moreover, explicit use of the term "processor" should not be construed to refer exclusively to hardware capable of executing software, and can implicitly include, without limitation, digital signal processor ("DSP") hardware, memory (e.g., read only memory ("ROM") for storing software, random access memory ("RAM"), non-volatile storage, etc.) and virtually any means and/or machine (including hardware, software, firmware, circuitry, combinations thereof, etc.) which is capable of (and/or configurable to) perform and/or control a process.
Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (e.g., any elements developed that can perform the same or substantially similar function, regardless of structure). Thus, for example, it will be appreciated by one having ordinary skill in the art in view of the teachings provided herein that any block diagrams presented herein can represent conceptual views of illustrative system components and/or circuitry embodying the principles of the invention. Similarly, one having ordinary skill in the art should appreciate in view of the teachings provided herein that any flow charts, flow diagrams and the like can represent various processes which can be substantially represented in computer readable storage media and so executed by a computer, processor or other device with processing capabilities, whether or not such computer or processor is explicitly shown.
Furthermore, exemplary embodiments of the present disclosure can take the form of a computer program product or application module accessible from a computer-usable and/or computer-readable storage medium providing program code and/or instructions for use by or in connection with, e.g., a computer or any instruction execution system. In accordance with the present disclosure, a computer-usable or computer readable storage medium can be any apparatus that can, e.g., include, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device. Such exemplary medium can be, e.g., an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include, e.g., a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), flash (drive), a rigid magnetic disk and an optical disk.
Current examples of optical disks include compact disk—read only memory (CD-ROM), compact disk—read/write (CD-R/W) and DVD. Further, it should be understood that any new computer-readable medium which may hereafter be developed should also be considered as computer-readable medium as may be used or referred to in accordance with exemplary embodiments of the present disclosure.
Having described preferred and exemplary embodiments of novel and inventive force sensed surface scanning systems, devices, controllers and methods (which embodiments are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons having ordinary skill in the art in light of the teachings provided herein, including the Figures. It is therefore to be understood that changes can be made in/to the preferred and exemplary embodiments of the present disclosure which are within the scope of the embodiments disclosed herein. Moreover, corresponding and/or related systems incorporating and/or implementing the device, or as may be used/implemented in a device, in accordance with the present disclosure are also contemplated and considered to be within the scope of the present disclosure. Further, corresponding and/or related methods for manufacturing and/or using a device and/or system in accordance with the present disclosure are also contemplated and considered to be within the scope of the present disclosure. | 45,992 |
11857380 | When practical, similar reference numbers denote similar structures, features, or elements.
DETAILED DESCRIPTION
Implantable medical devices (IMDs), such as cardiac pacemakers or implantable cardioverter defibrillators (ICDs), provide therapeutic electrical stimulation to the heart of a patient. This electrical stimulation may be delivered via electrodes on one or more implantable endocardial or epicardial leads that are positioned in or on the heart. This electrical stimulation may also be delivered using a leadless cardiac pacemaker disposed within a chamber of the heart. Therapeutic electrical stimulation may be delivered to the heart in the form of electrical pulses or shocks for pacing, cardioversion or defibrillation.
An implantable cardiac pacemaker may be configured to facilitate the treatment of cardiac arrhythmias. The devices, systems and methods of the present disclosure may be used to treat cardiac arrhythmias including, but not limited to, bradycardia, tachycardia, atrial flutter and atrial fibrillation. Resynchronization pacing therapy may also be provided. While embodiments of the present disclosure refer to a cardiac pacing system, it is understood that the implantable medical device may additionally be an implantable defibrillator used to treat disruptive cardiac arrhythmias.
A cardiac pacemaker consistent with the present disclosure may include a pulse generator implanted adjacent the rib cage of the patient, for example, on the ribcage under the pectoral muscles, laterally on the ribcage, within the mediastinum, subcutaneously on the sternum of the ribcage, and the like. One or more leads may be connected to the pulse generator. A lead may be inserted, for example, between two ribs of a patient so that the distal end of the lead is positioned within the mediastinum of the patient adjacent, but not touching, the heart. The distal end of the lead may include an electrode for providing electrical pulse therapy to the patient's heart and may also include at least one sensor for detecting a state of the patient's organs and/or systems.
The cardiac pacemaker may include a unitary design where the components of the pulse generator and lead are incorporated within a single form factor. For example, a first portion of the unitary device may reside within the subcutaneous tissue while a second portion of the unitary device is placed through an intercostal space into a location within the mediastinum.
FIG.1is a front-view100of a pulse generator102having features consistent with implementations of the current subject matter. The pulse generator102may be referred to as a cardiac pacemaker. The pulse generator102can include a housing104, which may be hermetically sealed. In the present disclosure, and commonly in the art, housing104and everything within it may be referred to as a pulse generator, despite there being elements inside the housing other than those that generate pulses (for example, processors, storage, battery, etc.). Housing104can be substantially rectangular in shape and the first end106of the housing104may include a tapered portion108. The tapered portion can include a first tapered edge110, tapered inwardly toward the transverse plane. The tapered portion108can include a second tapered edge112tapered inwardly toward the longitudinal plane. Each of the first tapered edge110and the second tapered edge112may have a similar tapered edge generally symmetrically disposed on the opposite side of tapered portion108, to form two pairs of tapered edges.
The pairs of tapered edges may thereby form a chisel-shape at the first end106of pulse generator102. When used in the present disclosure, the term "chisel-shape" refers to any configuration of a portion of housing104that facilitates the separation of tissue planes during placement of pulse generator102into a patient. The "chisel-shape" can facilitate creation of a tightly fitting and properly sized pocket in the patient's tissue in which the pulse generator may be secured. For example, a chisel-shape portion of housing104may have a single tapered edge, a pair of tapered edges, two pairs of tapered edges, and the like. Generally, the tapering of the edges forms the shape of a chisel or the shape of the head of a flat head screwdriver. In some variations, the second end114of the pulse generator can be tapered. In other variations, one or more additional sides of the pulse generator102can be tapered.
Housing104of pulse generator102can include a second end114. The second end114can include a port assembly116. Port assembly116can be integrated with housing104to form a hermetically sealed structure. Port assembly116may be configured to facilitate the egress of conductors from housing104of pulse generator102while maintaining a seal. For example, port assembly116may be configured to facilitate the egress of a first conductor118and a second conductor120from housing104. The first conductor118and the second conductor120may combine within port assembly116to form a twin-lead cable122. In some variations, the twin-lead cable122can be a coaxial cable. The twin-lead cable122may include a connection port124remote from housing104. Connection port124can be configured to receive at least one lead, for example, a pacing lead. Connection port124of the cable122can include a sealed housing126. Sealed housing126can be configured to envelop a portion of the received lead(s) and form a sealed connection with the received lead(s).
Port assembly116may be made from a different material than housing104. For example, housing104may be made from a metal alloy and port assembly116may be made from a more flexible polymer. While port assembly116may be manufactured separately from housing104and then integrated with it, port assembly116may also be designed to be part of housing104itself. The port assembly116may be externalized from the housing104as depicted inFIG.1. The port assembly116may be incorporated within the shape of housing104of pulse generator102.
FIG.2is a rear-view200of pulse generator102showing the back-side128of housing104. As shown, pulse generator102can include one or more electrodes or sensors disposed within housing104. As depicted in the example ofFIG.2, housing104includes a first in-housing electrode130and a second in-housing electrode132. The various electrodes illustrated and discussed herein may be used for delivering therapy to the patient, sensing a condition of the patient, and/or a combination thereof. A pulse generator consistent with the present disclosure installed at or near the sternum of a patient can monitor the heart, lungs, major blood vessels, and the like through sensor(s) integrated into housing104.
FIG.3is an illustration300of a simplified schematic diagram of an exemplary pulse generator102having features consistent with the current subject matter. Pulse generator102can include signal processing and therapy circuitry to detect various cardiac conditions. Cardiac conditions can include ventricular dyssynchrony, arrhythmias such as bradycardia and tachycardia, and the like.
Pulse generator102can be configured to sense and discriminate atrial and ventricular activity and then deliver appropriate electrical stimuli to the heart based on a sensed state of the heart. Pulse generator102can include one or more components. The one or more components may be hermetically sealed within the housing104of pulse generator102.
Pulse generator102can include a controller302configured to control the operation of the pulse generator102. The pulse generator102can include an atrial pulse generator304and may also include a ventricular pulse generator306. Controller302can be configured to cause the atrial pulse generator304and the ventricular pulse generator306to generate electrical pulses in accordance with one or more protocols that may be loaded onto controller302. Controller302can be configured to control pulse generators304,306to deliver electrical pulses, with the amplitudes, pulse widths, frequency, or electrode polarities specified by the therapy protocols, to one or more atria or ventricles. Controller electronic storage308can store instructions configured to be implemented by the controller to control the functions of pulse generator102.
Controller302can include a processor(s). The processor(s) can include any one or more of a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or analog logic circuitry. The functions attributed to controller302herein may be embodied as software, firmware, hardware or any combination thereof.
The pulse generator102can include a battery310to power the components of the pulse generator102. In some variations, battery310can be configured to charge a capacitor. Atrial pulse generator304and ventricular pulse generator306can each include a capacitor charged by the battery310. The electrical energy stored in the capacitor(s) can be discharged as controlled by controller302. The electrical energy can be transmitted to its destination through one or more electrode leads312,314. The leads can include a ventricular pulsing lead312, an atrial pulsing lead314, and/or other leads.
Pulse generator102can include one or more sensors322. Sensor(s)322can be configured to monitor various aspects of a patient's physiology. Sensor(s)322may be embedded in the housing of pulse generator102, incorporated into leads312,314or be incorporated into separate leads. Sensors322of pulse generator102can be configured to detect, for example, signals from a patient's heart. The signals can be decoded by controller302of the pulse generator to determine a state of the patient. In response to detecting a cardiac arrhythmia, controller302can be configured to cause appropriate electrical stimulation to be transmitted through leads312and314by atrial pulse generator304and/or ventricular pulse generator306. Sensor(s)322can be further configured to detect other physiological states of the patient, for example, a respiration rate, blood oximetry, and/or other physiological states.
In variations where the pulse generator102utilizes a plurality of electrodes, controller302may be configured to alter the sensing and delivery vectors between available electrodes to enhance the sensitivity and specificity of arrhythmia detection and improve efficacy of the therapy delivered by the electrical impulses from the pulse generator102.
Pulse generator102can include a transceiver316. The transceiver can include an antenna318.
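Before turning to telemetry, the sense-and-respond behavior attributed to controller302above can be pictured with a minimal Python sketch. Everything here, the threshold, the rate-based detection rule and all names, is an illustrative assumption; the disclosure does not specify the detection logic.

```python
# Illustrative sketch only: a simplified sense-and-pace step for
# controller 302. The bradycardia threshold and detection rule are
# assumptions, not taken from the disclosure.
BRADYCARDIA_THRESHOLD_BPM = 60  # hypothetical value

def control_step(sensed_rate_bpm, atrial_generator, ventricular_generator, protocol):
    """Decode the sensed state of the heart and, if an arrhythmia is
    detected, command the pulse generators per the loaded protocol."""
    if sensed_rate_bpm < BRADYCARDIA_THRESHOLD_BPM:
        # Deliver pulses with the amplitude, width, frequency and
        # polarity specified by the therapy protocol.
        atrial_generator.pulse(**protocol["atrial"])
        ventricular_generator.pulse(**protocol["ventricular"])
```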
The transceiver316can be configured to transmit and/or receive radio frequency signals. The transceiver316can be configured to transmit and/or receive wireless signals using any wireless communication protocol. Wireless communication protocols can include Bluetooth, Bluetooth low energy, Near-Field Communication, WiFi, and/or other radio frequency protocols. The transceiver316can be configured to transmit and/or receive radio frequency signals to and/or from a programmer320. The programmer320can be a computing device external to the patient. Programmer320may comprise a transceiver configured to transmit and/or receive radio frequency signals to and/or from the transceiver316of the pulse generator102. Transceiver316can be configured to wirelessly communicate with programmer320through induction, radio-frequency communication or other short-range communication methodologies.
In some variations, programmer320can be configured to communicate with the pulse generator102through longer-range remote connectivity systems. Such longer-range remote connectivity systems can facilitate remote access, by an operator, to pulse generator102without the operator being in close proximity with the patient. Longer-range remote connectivity systems can include, for example, remote connectivity through the Internet, and the like. When an operator connects with pulse generator102through longer-range remote connectivity systems, a local device can be positioned within a threshold distance of the patient. The local device can communicate using one or more radio-frequency wireless connections with the pulse generator102. The local device can, in turn, include hardware and/or software features configured to facilitate communication between it and an operator device at which the operator is stationed. The local device can be, for example, a mobile computing device such as a smartphone, tablet, laptop, and the like. The local device can be a purpose-built local device configured to communicate with the pulse generator102. The local device can be paired with the pulse generator102such that the communications between the pulse generator102and the local device are encrypted. Communications between the local device and the operator device can be encrypted.
Programmer320can be configured to program one or more parameters of the pulse generator102. The parameter(s) can include timing of the stimulation pulses of the atrial pulse generator, timing of the stimulation pulses of the ventricular pulse generator, timing of pulses relative to certain sensed activity of the anatomy of the patient, the energy levels of the stimulation pulses, the duration of the stimulation pulses, the pattern of the stimulation pulses and other parameters. The programmer320can facilitate the performance of diagnostics on the patient or the pulse generator102. Programmer320can be configured to facilitate an operator of the programmer320in defining how the pulse generator102senses electrical signals, for example ECGs, and the like. The programmer320can facilitate an operator of the programmer320in defining how the pulse generator102detects cardiac conditions, for example ventricular dyssynchrony, arrhythmias, and the like. The programmer320can facilitate defining how the pulse generator102delivers therapy, and communicates with other devices. An operator can fine-tune parameters through the programmer320. For example, the sensitivity of sensors embodied in the housing of the pulse generator102, or within leads, can be modified.
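As a concrete illustration of the kinds of parameters programmer320might write to pulse generator102, consider the hypothetical structure below; every field name and value is an assumption introduced for illustration, not taken from the disclosure.

```python
# Hypothetical example of programmable therapy parameters; all field
# names and values are illustrative only.
therapy_parameters = {
    "atrial_pulse":      {"amplitude_v": 2.5, "width_ms": 0.4},
    "ventricular_pulse": {"amplitude_v": 2.5, "width_ms": 0.4},
    "timing":            {"av_delay_ms": 150, "lower_rate_bpm": 60},
    "sensing":           {"sensitivity_mv": 0.5},  # fine-tunable by the operator
}
```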
Programmer320can facilitate setting up communication protocols between the pulse generator102and another device such as a mobile computing device. Programmer320can be configured to facilitate modification of the communication protocols of the pulse generator102, such as adding security layers, or preventing two-way communication. Programmer320can be configured to facilitate determination of which combination of implanted electrodes is best suited for sensing and therapy delivery.
Programmer320can be used during the implant procedure. For example, programmer320can be used to determine if an implanted lead is positioned such that acceptable performance will be possible. If the performance of the system is deemed unacceptable by programmer320, the lead may be repositioned by the physician, or an automated delivery system, until the lead resides in a suitable position. Programmer320can also be used to communicate feedback from sensors disposed on the leads and housing104during the implant procedure.
In some cases, concomitant devices such as another pacemaker, an ICD, or a cutaneous or implantable cardiac monitor, can be present in a patient, along with pulse generator102. Pulse generator102can be configured to communicate with such concomitant devices through transceiver316wirelessly, or the concomitant device may be physically connected to pulse generator102. Physical connection between devices may be accomplished using a lead emanating from pulse generator102that is compatible with the concomitant device. For example, the distal end of a lead emanating from pulse generator102may be physically and electrically connected to a port contained on the concomitant device. Physical connection between devices may also be accomplished using an implantable adaptor that facilitates electrical connection between the lead emanating from pulse generator102and the concomitant device. For example, an adapter may be used that will physically and electrically couple the devices despite the devices not having native components to facilitate such connection. Concomitant devices may be connected using a "smart adapter" that provides electrical connection between concomitant devices and contains signal processing capabilities to convert signal attributes from each respective device such that the concomitant devices are functionally compatible with each other.
Pulse generator102can be configured to have a two-way conversation or a one-way conversation with a concomitant device. Controller302can be configured to cause the concomitant device to act in concert with pulse generator102when providing therapy to the patient, or controller302can gather information about the patient from the concomitant device. In some variations, pulse generator102can be configured to be triggered via one-way communication from a concomitant device to pulse generator102.
FIGS.4A and4Bare illustrations showing exemplary placements of elements of a cardiac pacing system having features consistent with the present disclosure. Pulse generator102can be disposed in a patient, adjacent an outer surface of ribcage404. For example, pulse generator102can be disposed on the sternum402of the patient's ribcage404. A lead414, attached to pulse generator102, may also be disposed in the patient by traversing through intercostal muscle410of the patient.
Lead414may optionally pass through a receptacle408in intercostal muscle410to guide the lead, fix the lead, and/or electrically insulate the lead from the tissue of the intercostal muscle410(examples of such receptacles are described herein with respect toFIGS.13-16). In other variations, pulse generator102can be disposed outside of a patient's ribcage in a pectoral position, outside of the patient's ribcage in a lateral position, below (inferior to) the patient's ribcage in a subxiphoid or abdominal position, within the patient's mediastinum, or the like.
Lead414may be passed through the ribcage so the distal end of the lead and its electrodes are disposed on, or pass through, the inner surface of the rib or inner surface of the innermost intercostal muscle, or may alternatively traverse further within the thoracic cavity, but without physically contacting the tissue comprising the heart. This placement may be referred to herein as intracostal or intracostally. Leads may be inserted between any two ribs within the thoracic cavity, for example, as shown inFIG.4A. In some variations, it is desirable to insert the lead through one of the intercostal spaces associated with the cardiac notch of the left lung420, for example, between the fourth and fifth ribs or between the fifth and sixth ribs. Due to variations in anatomy, the rib spacing associated with the cardiac notch of the left lung420may differ. In some patients the cardiac notch of the left lung420may not be present, or other cardiac anomalies such as dextrocardia may require the insertion through alternative rib spaces. Lead414may be inserted into such a location through an incision406, as shown inFIG.4A. Lead414may optionally be inserted into such a location through a receptacle408, as shown inFIG.4B.
Precise placement of a distal end of lead414, which may include electrode(s) for defibrillation, pacing or sensing, is now described further with reference to the anatomical illustrations ofFIGS.4A,4B and4C. In some variations, the distal end of lead414can be located within the intercostal space or intercostal muscle410. In such variations, the distal end of lead414is preferably surrounded by a receptacle408that electrically insulates the distal end of the lead414from the intercostal muscle410. In another variation, the distal end of lead414may be placed just on or near the inner surface of a rib or on or near the inner surface of the innermost intercostal muscle. In such instances, and in other placements, the lead414may include electrical insulation disposed around the electrode. For example, the lead414may include an electrode that is insulated on all sides other than one exposed side. This lead configuration can facilitate a placement where the insulated portions of the lead touch the intercostal muscle, or surrounding tissue, while allowing the electrically active portion of the electrode on the lead to be directional (e.g., directed toward the pericardium and the heart). When electrical stimulation is required, the directional electrode emanates the desired electrical stimulation while electrically insulating the surrounding muscle and tissue from the stimulating energy. In some instances, the electrode may be at the distal tip of the lead, and the insulation surrounds the entire circumference of the lead, but leaves exposed the distal tip.
In other instances, an electrode located away from the distal tip of the lead may be insulated over a significant portion of the lead's circumference; for example, approximately 50% or 75% of the circumference may be insulated, leaving only 50% or 25% of the electrode exposed.
The distal end of lead414can also be positioned so as to abut the parietal pleura of the lung426. In other variations, the distal end of lead414can be positioned so as to terminate within the mediastinum428of the thoracic cavity of the patient, proximate the heart418, but not physically in contact with the heart418or the pericardium432of heart418. Alternatively, the distal end of lead414can be placed to abut the pericardium432, but not physically attach to the epicardial tissue comprising the heart.
A portion of lead414may be configured to include a particular preformed shape (e.g., including a 45 degree angle bend, a 90 degree angle bend, a coil, or the like) that enables the preformed portion of lead414to be directed towards a preferred location as it is inserted into the patient. For example, the distal end of lead414may be preformed so it creates an angle of 90 degrees relative to the main body of lead414. While lead414is being implanted, a sheath or delivery tool may be used to constrain the preformed portion of lead414into a straight shape. However, as lead414is deployed from the sheath or delivery tool, the preformed portion of lead414can revert to its preformed shape. In one instance, the preformed portion of lead414reverts to a shape that enables the distal end of lead414to reside along and against the posterior surface of the anterior chest wall. Alternatively, a stylet may be used to straighten the preformed shape during the insertion process. Upon removal of the stylet, the preformed shape is again assumed. Any number of preformed shapes are contemplated to facilitate the placement of lead(s) in the positions and particular orientations disclosed herein.
The distal end of lead414may be physically affixed to cartilage or bone found within the thoracic cavity, for example, to a rib, to cartilage of a rib, or to other bone or cartilage structure in the thoracic cavity. In one variation, the lead can be disposed such that it is wrapped around the patient's sternum402or a patient's rib. For certain placements, lead414can be adequately fixed by direct physical contact with surrounding tissue. In other variations, an additional fixation mechanism may be used at various points along the body of the lead414. For example, the distal end of lead414can incorporate a fixation mechanism such as a tine, hook, spring, screw, or other fixation device. The fixation mechanism can be configured to secure the lead in the surrounding tissue, cartilage, bone, or other tissue, to prevent the lead from migrating from its original implantation location or orientation.
FIG.5is an illustration500of an exemplary method of implanting a cardiac pacing system into a patient consistent with the present disclosure. At502, a pulse generator102may be implanted, in a manner described above, adjacent the sternum402of a patient. Optionally, pulse generator102may be at least partially chisel-shaped to facilitate implantation and the separation of tissue planes. At504, a lead414may be inserted into an intercostal space410of a patient. As described above, lead414may optionally be inserted into a receptacle408disposed within intercostal space410.
At506, the distal end of lead414is delivered to one of a number of suitable final locations for pacing or defibrillation, as described above.
FIG.6Ais an illustration600of a pulse generator delivery system602for facilitating positioning of pulse generator102into a patient, the delivery system602having features consistent with the current subject matter.FIG.6Bis an illustration604of the delivery system602as illustrated inFIG.6Awith the pulse generator102mounted in it. Delivery system602can be configured to facilitate implantation of the pulse generator102into the thoracic region of a patient. Delivery system602includes a proximal end606and a distal end608. The distal end608of delivery system602contains a receptacle610in which the housing of the pulse generator102is loaded. Where the pulse generator102contains a connection lead, the delivery system602can be configured to accommodate the connection lead so that the connection lead will not be damaged during the implantation of the pulse generator102.
When pulse generator102is fully loaded into delivery system602, pulse generator102is substantially embedded into the receptacle610. In some variations, a portion of the pulse generator102's distal end can be exposed, protruding from the end of receptacle610. The tapered shape of the distal end106of pulse generator102can be used in conjunction with the delivery system602to assist with separating tissue planes as delivery system602is used to advance pulse generator102to its desired location within the patient. In some variations, the entirety of pulse generator102can be contained within receptacle610of the delivery system602. The pulse generator102in such a configuration will not be exposed during the initial advancement of delivery system602into the patient. The distal end608of delivery system602may be designed to itself separate tissue planes within the patient as delivery system602is advanced to the desired location within the patient.
The pulse generator delivery system602may be made from a polymer, a metal, a composite material or other suitable material. Pulse generator delivery system602can include multiple components. Each component of the pulse generator delivery system602can be formed from a material suitable to the function of the component. The pulse generator delivery system602can be made from a material capable of being sterilized for repeated use with different patients.
Pulse generator delivery system602may include a handle612. Handle612can facilitate advancement of delivery system602and pulse generator102into a patient's body. Handle612can be disposed on either side of the main body614of the delivery system602, as illustrated inFIGS.6A and6B. In some variations, handle612can be disposed on just one side of the main body614of the delivery system602. The handle612can be configured to be disposed parallel to the plane of insertion and advancement616of pulse generator delivery system602within the body. In some variations, handle612can be located orthogonally to the plane of insertion and advancement616of the delivery system602. Handle612can be configured to facilitate the exertion of pressure, by a physician, onto the pulse generator delivery system602, to facilitate the advancement and positioning of the delivery system602at the desired location within the patient.
Pulse generator delivery system602can include a pulse generator release device618. The release device618can be configured to facilitate disengagement of the pulse generator102from the delivery system602.
In some variations, release device618can include a plunger620. Plunger620can include a distal end configured to engage with the proximal end606of the pulse generator delivery system602. The plunger620can engage with the proximal end606of the pulse generator delivery system602when the pulse generator102is loaded into the receptacle610of the delivery system602. The proximal end622of the plunger620can extend from the proximal end606of the delivery system602.
Plunger620can include a force applicator624. Force applicator624can be positioned at the proximal end622of plunger620. Force applicator624can be configured to facilitate application of a force to the plunger620to advance the plunger620. Advancing plunger620can force pulse generator102from the delivery system602. In some variations, the force applicator624can be a ring member. The ring member can facilitate insertion of a physician's finger. Pressure can be applied to the plunger620through the ring member, forcing the pulse generator102out of the receptacle610of the delivery system602into the patient at its desired location. In some variations, the proximal end622of the plunger620can include a flat area, for example, similar to the flat area of a syringe, that allows the physician to apply pressure to the plunger620. In some variations, the plunger620can be activated by a mechanical means such as a ratcheting mechanism.
The distal end608of the pulse generator delivery device602can include one or more sensors. The sensor(s) can be configured to facilitate detection of a state of patient tissues adjacent distal end608of the pulse generator delivery device602. Various patient tissues can emit, conduct and/or reflect signals. The emitted, conducted and/or reflected signals can provide an indication of the type of tissue encountered by the distal end608of the pulse generator delivery device602. Such sensor(s) can be configured, for example, to detect the electrical impedance of the tissue adjacent the distal end608of the pulse generator delivery device602. Different tissues can have different levels of electrical impedance. Monitoring the electrical impedance can facilitate a determination of the location, or tissue plane, of the distal end608of the delivery device602.
In addition to delivery of the pulse generator, delivery of at least one lead for sensing and/or transmitting therapeutic electrical pulses from the pulse generator is typically required. Proper positioning of the distal end of such lead(s) relative to the heart is very important. Delivery systems are provided that can facilitate the insertion of one or more leads to the correct location(s) in the patient. The delivery systems can facilitate finding the location of the initial insertion point for the lead. The initial insertion point may optionally be an intercostal space associated with a patient's cardiac notch of the left lung. The intercostal spaces associated with the cardiac notch commonly include the left-hand-side fourth, fifth and sixth intercostal spaces. Other intercostal spaces on either side of the sternum may be used, especially when the patient is experiencing conditions that prevent use of the fourth, fifth and sixth intercostal spaces, or due to anatomical variations.
When making the initial insertion through the epidermis and the intercostal muscles of the patient, it is important to avoid damaging important blood-filled structures of the patient. Various techniques can be employed to avoid damaging important blood-filled structures.
For example, sensors can be used to determine the location of the blood-filled structures. Such sensors may include accelerometers configured to monitor pressure waves caused by blood flowing through the blood-filled structures. Sensors configured to emit and detect light-waves may be used to facilitate locating tissues that absorb certain wavelengths of light and thereby locate different types of tissue. Temperature sensors may be configured to detect differences in temperature between blood-filled structures and surrounding tissue. Lasers and detectors may be employed to scan laser light across the surface of a patient to determine the location of subcutaneous blood-filled structures.
Conventional medical devices may be employed to locate the desired initial insertion point into the patient. For example, x-ray machines, MRI machines, CT scanning machines, fluoroscopes, ultrasound machines and the like, may be used to facilitate determination of the initial insertion point for the leads as well as to facilitate advancing the lead into the patient.
FIG.19is an illustration of a medical procedure guide1910having features consistent with the current subject matter. Medical procedure guides can be utilized to bolster the reliability of locating a desired point on a patient for performing a medical procedure. For example, a medical procedure can include inserting or delivering a lead to a portion of an anatomy of a patient. Medical procedure guides can also identify critical structures to be avoided, for example while inserting the lead during the medical procedure. For example, the medical procedure guide1910may contain markers or regions on the medical procedure guide1910meant to be disposed over anatomical locations on the patient. Once the physician has found those anatomical locations (e.g., the xiphoid process), the physician can place the medical procedure guide1910so that the markers or desired regions on the medical procedure guide1910correlate with those anatomical locations. With the medical procedure guide1910properly positioned on the patient, the physician can then use markings on the medical procedure guide1910to locate a desired initial insertion point1940or to determine the position at which to commence a medical procedure.
The medical procedure guide1910can be used with many medical procedures including, but not limited to, insertion of a cardiac therapy lead for pacing or defibrillation. The medical procedure guide1910can be configured to allow for puncture or incision through the guide during the medical procedure. Markings, such as critical anatomy markings, on the medical procedure guide1910can also indicate structures to be avoided during the lead delivery process. For example, the medical procedure guide1910can be configured to further facilitate a determination of the presence or absence of an interposed lung or facilitate a determination of a distance between a sternal margin and a thoracic vein or a thoracic artery.
As used herein, "markings" or "marking regions" refer to marks, recesses, ridges, or other structural features of the medical procedure guide1910that are added to the medical procedure guide1910(e.g., coloration, changes in opacity, etc.). Markings or marking regions also refer to features that are added to or subtracted from the material that makes up the medical procedure guide1910, for example, ridges, scoring, recesses, openings and the like.
In some implementations, the medical procedure guide1910can have a shape configured to overlay portions of an anatomy of the patient. Portions of the anatomy can include, for example, skin, exposed organs, muscles, tissues, bones, and the like. The shape of the medical procedure guide1910can be rectangular, square, circular, oval, or irregular. The medical procedure guide1910can be similar to a sheet and have a thickness and an area bounded by a perimeter that overlays the portion of the anatomy. As shown inFIG.19, the thickness of the medical procedure guide1910is variable, and the depiction shows a greater thickness for illustrative purposes. The medical procedure guide1910can be flexible and configured to at least partially form to the anatomy of the patient. The medical procedure guide1910can be configured to be affixed to the patient, for example by the inclusion of an adhesive applied to a surface of the medical procedure guide1910.
The medical procedure guide1910can also include alignment markings1920on the medical procedure guide1910to facilitate proper placement of the medical procedure guide1910on the patient. As one example, the alignment markings1920can be configured to line up with at least a portion of the patient's sternum and at least one rib. Procedure markings1940can also be included on the medical procedure guide1910to facilitate determination of a position at which to commence a medical procedure. For example, the procedure markings1940can be configured to locate a position proximate the patient's sternum, in the region of a cardiac notch. Also, imaging markers may be incorporated with the medical procedure guide1910to facilitate commencement or completion of the medical procedure in conjunction with imaging.
As used herein, "imaging markers" refer to any markers that are added to or otherwise included with medical procedure guide1910. A marker can be, in some implementations, an object inserted into or integral with medical procedure guide1910. In other implementations, the marker can be a feature such as a dye or other material that can be detected by an imaging device or discerned by the human eye. For example, medical procedure guide1910can be used with conventional imaging devices such as CT, x-ray, fluoroscopes, MRI, and the like, that can discern the shape and/or location of imaging markers, such as radiopaque markers. In certain embodiments, the medical procedure guide1910may contain markers spaced at known intervals that are visible with the imaging devices.
FIG.20is an illustration of medical procedure guide1910having imaging markers2010and2020consistent with the current subject matter. In some implementations, imaging markers2010can be located at particular known depths within the medical procedure guide1910to facilitate completion of the medical procedure. This is illustrated inFIG.20, where imaging markers2010are shown at several depths proximate to the procedure marking. The imaging markers2010can, for example, facilitate determination of a proper depth of insertion for a cardiac therapy lead, a distance between a posterior surface of a sternum and a pericardium, or the determination of the patient's sternum thickness. In other implementations, medical procedure guide1910can include imaging markers2020oriented across the face of guide1910, or at a common depth. As shown inFIG.20, imaging markers2020may be spaced on the surface of medical procedure guide1910.
In one implementation, imaging markers2020may form a grid pattern, which can facilitate the location of particular anatomy relative to the grid upon imaging. These reference marks can be radiopaque and/or visible, as described herein. The imaging markers2020can facilitate locating a position relevant for a medical procedure, for example, locating a position to make a puncture through medical procedure guide1910in order to insert a cardiac therapy lead. These imaging markers2010(which may be radiopaque markers) may also include a complementing marker that is visible to the eye.
Radiopaque markers on or within the medical procedure guide may also be configured to be visible only in certain x-ray or fluoroscopy orientations. For example, certain radiopaque markers can be seen predominantly in a sagittal view, while other radiopaque markers can be predominantly viewed while in an AP (anterior-posterior) view. Such orientation specific radiopaque markings can ensure that medical procedure guide1910is properly oriented, but can also provide the ability to obtain positional and thickness measurements for the physician. For example, using medical procedure guide1910with x-ray or fluoroscopy, the physician can visualize the rib spacing, the presence or absence of interposed lung, the distance between the posterior surface of the sternum and the pericardium, the distance between the sternal margin to the thoracic vein or artery, and the patient's sternum thickness. Having this information, the physician can then determine the ideal intercostal spaces for insertion and ultimate placement and orientation of a lead.
As another example, the medical procedure guide1910may include critical anatomy markings or facilitate the location of critical anatomy to avoid damage during a medical procedure. The medical procedure guide1910may be used with x-ray or fluoroscopy to obtain measurements for the thickness of the subcutaneous tissue between the surface of the skin and the anterior surface of the sternum. With these measurements, the physician can then determine whether the pulse generator will fit well over the sternum, or if other anatomical locations described above are better suited for the pulse generator placement. Additionally, using the medical procedure guide1910to obtain measurements related to the thickness of the sternum, the physician can calculate the minimum insertion depth that is necessary to obtain the entry point into the intracostal space. The physician can additionally determine the insertion depth that is necessary for the particular insertion technique (e.g., surgical, percutaneous, etc.) or lead delivery system, as described in detail below.
In one implementation, the medical procedure guide1910may consist of a flexible material where the skin facing side of the medical procedure guide1910includes a means for temporarily and reversibly adhering to the patient's skin. The medical procedure guide1910is positioned in the desired location as described earlier and then adhered to the patient's skin. When viewed under x-ray or fluoroscopy, the caretaker can then determine the desired rib space for lead insertion (for example, above the ventricle) and directly correlate the insertion point with the unique marker on the medical procedure guide1910. The medical procedure guide1910can be a non-sterile tool that can be used prior to sterile preparation of the patient for identifying the proper insertion point.
Medical procedure guide1910may include any of the aforementioned alignment markings, procedure markings or imaging markers and each may be used to identify particular important locations for a medical procedure. Medical procedure guide1910may be designed so that the locations can be identified, for example, by puncturing through guide1910and thereby marking the patient, or alternatively by making markings on the patient adjacent to guide1910, or within openings in guide1910.

Medical procedure guide1910may consist of a thin sterile barrier material that, once properly oriented, is placed on the patient within the sterile field. The medical procedure guide1910is adhered to the patient's skin and can remain in place throughout the lead insertion process. In this application, the medical procedure guide1910material has properties allowing an incision, by scalpel, needle, or the like, to be made directly through the sterile barrier material of the medical procedure guide1910. As described above, the sterile barrier medical procedure guide1910may contain unique visible and radiopaque markers to assist with placement, orientation, and lead insertion.

Advancing a lead into a patient can also present the risk of damaging physiological structures of the patient. Sensors may be employed to monitor the characteristics of tissues within the vicinity of the distal end of an advancing lead. Readings from sensors associated with the characteristics of tissues can be compared against known characteristics to determine the type of tissue in the vicinity of the distal end of the advancing lead. Sensors, such as pH sensors, thermocouples, accelerometers, electrical impedance monitors, and the like, may be used to detect the depth of the distal end of the electrode in the patient. Physiological characteristics of the body change the further a lead ventures into it. Measurements performed by sensors at, or near, the distal end of the advancing lead may facilitate the determination of the type of tissue in the vicinity of the distal end of the lead, as well as its depth into the patient.

Various medical imaging procedures may be used on a patient to determine the location of the desired positions in the heart for the distal end of the lead(s). This information can be used, in conjunction with sensor readings of the kind described herein, to determine when the distal end of the lead has advanced to a desired location within the patient.

Components may be used to first create a channel to the desired location for the distal end of the lead. Components can include sheathes, needles, cannulas, balloon catheters and the like. A component may be advanced into the patient with the assistance of sensor measurements to determine the location of the distal end of the component. Once the component has reached the desired location, the component may be replaced with the lead or the lead may be inserted within the component. An example of a component can include an expandable sheath. Once the sheath has been advanced to the desired location, a cannula extending the length of the sheath may be expanded, allowing a lead to be passed through the cannula. The sheath may then be removed from around the lead, leaving the lead in situ with the distal end of the lead at the desired location. Determination of the final placement of the distal end of a lead is important for the delivery of effective therapeutic electrical pulses for pacing the heart.
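The comparison of live sensor readings against known tissue characteristics can be illustrated with a short sketch. This is a minimal example assuming hypothetical reference values and a simple nearest-profile match; the profile numbers and names are illustrative placeholders, not clinical data from this disclosure:

```python
# Hypothetical sketch: classify the tissue at a lead's distal tip by
# comparing live sensor readings against known reference characteristics.
# Reference values below are illustrative placeholders, not clinical data.

TISSUE_PROFILES = {
    # tissue: (typical pH, typical impedance in ohms)
    "subcutaneous_fat": (7.1, 2500.0),
    "intercostal_muscle": (7.0, 900.0),
    "pleural_space": (7.6, 400.0),
}

def classify_tissue(ph: float, impedance_ohms: float) -> str:
    """Return the tissue profile closest to the measured readings."""
    def distance(profile):
        ref_ph, ref_z = profile
        # Normalize each term so pH and impedance contribute comparably.
        return abs(ph - ref_ph) / 0.5 + abs(impedance_ohms - ref_z) / 1000.0
    return min(TISSUE_PROFILES, key=lambda t: distance(TISSUE_PROFILES[t]))

# Example: readings consistent with intercostal muscle.
print(classify_tissue(ph=7.02, impedance_ohms=950.0))
```

A practical classifier would combine more signal types and validated reference ranges, but the structure, comparing measured characteristics against known profiles and picking the closest, mirrors the approach described above.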
The present disclosure describes multiple technologies to assist in placement of a lead in the desired location, for example, the use of sensors on the pulse generator, on the distal end of leads, or on delivery components. In addition, when a lead or component is advanced into a patient, balloons may be employed to avoid damaging physiological structures of the patient. Inflatable balloons may be disposed on the distal end of the lead or component, on the sides of a lead body of the lead, or may be circumferentially disposed about the lead body. The balloons may be inflated to facilitate the displacement of tissue from the lead to avoid causing damage to the tissue by the advancing lead. A lead delivery assembly may also be used to facilitate delivery of the lead to the desired location. In some variations, the lead delivery assembly may be configured to automatically deliver the distal end of the lead to the desired location in the patient.

FIG.7is an illustration700of an exemplary process flow illustrating a method of delivering a lead having features consistent with the present disclosure. At702, the location of blood-filled structures, in the vicinity of an intercostal space, can be determined. The intercostal space can be an intercostal space associated with the cardiac notch of the patient. Determining the location of the blood-filled structures may be facilitated by one or more sensors configured to detect the location of blood-filled structures. At704, a region can be chosen for advancing a lead through intercostal muscles associated with the cardiac notch. The region chosen may be based on the determined location of blood-filled structures of the patient in that region. It is important that damage to blood-filled structures, such as arteries, veins, and the like, is avoided when advancing a lead into a patient. At706, a lead can be advanced through the intercostal muscles associated with the cardiac notch of the patient. Care should be taken to avoid damaging important physiological structures. Sensors, of the kind described herein, may be used to help avoid damage to important physiological structures. At708, advancement of the lead through the intercostal muscles can be ceased. Advancement may be ceased in response to an indication that the distal end of the lead has advanced to the desired location. Indication that the distal end of the lead is at the desired location may be provided through measurements obtained by one or more sensors of the kind described herein. The lead advanced through the intercostal muscles associated with the cardiac notch of the patient can be configured to transmit therapeutic electrical pulses to pace or defibrillate the patient's heart.

FIG.8Ais an illustration800aof an exemplary lead802having features consistent with the present disclosure. For the lead to deliver therapeutic electrical pulses for pacing or defibrillating the heart, a proximal end804of lead802is configured to couple with the pulse generator102. The proximal end804of lead802may be configured to couple with a connection port124. The connection port can be configured to couple the proximal end804of lead802to one or more conductors, such as conductors118and120. When the proximal end804of lead802couples with connection port124, a sealed housing may be formed between them. In some variations, the materials of connection port124and the proximal end804of lead802may be fused together.
In some variations, the proximal end804of lead802may be configured to be pushed into the sealed housing126, or vice versa. Optionally, the external diameter of the inserted member may be slightly greater than the internal diameter of the receiving member, causing a snug, sealed fit between the two members. Optionally, a mechanism, such as a set-screw or mechanical lock, may be implemented upon the connection port124or proximal lead end804in order to prevent unintentional disconnection of the lead802from pulse generator102.

Also shown inFIG.8Ais the distal end806of lead802. The distal end806of lead802may comprise an electrode808. In some variations, lead802may include a plurality of electrodes. In such variations, lead802may include a multiple-pole lead. Individual poles of the multiple-pole lead can feed into separate electrodes. Electrode808at the distal end806of lead802may be configured to deliver electrical pulses to pace or defibrillate the heart when located in the desired position for pacing the heart. Electrodes used for sensing cardiac activity may be oriented on one side of the distal end806of lead802so that they are facing towards the pericardium and heart, and away from the skeletal muscles in the anterior chest wall and/or surrounding intercostal tissue. Electrodes used for sensing extracardiac activity may be oriented on one or both sides of the distal end806of lead802or circumferentially around the lead802. In certain applications, directing electrodes away from the pericardial surface can result in enhanced sensing of extracardiac signals.

The distal end806of lead802can include one or more sensors810. Sensor(s)810can be configured to monitor physiological characteristics of the patient while the distal end806of lead802is being advanced into the patient. Sensors can be disposed along the length of lead802. For example, sensor812is disposed some distance from the distal end806. In such an example, sensor812may reside in the subcutaneous tissue between the anterior surface of the ribcage and the surface of the skin, providing unique sensing from such a location. Such sensors incorporated onto the lead can detect subtle physiological, chemical and electrical differences that distinguish the lead's placement within the desired location, as opposed to other locations in the patient's thoracic cavity. In some variations, the proximal end804of lead802may be coupled with pulse generator102prior to the distal end806of lead802being advanced through the intercostal space of the patient. In some variations, the proximal end804of the lead802may be coupled with pulse generator102after the distal end806of lead802has been advanced to the desired location.

To assist in the placement of the lead, various medical instruments may be used. The medical instruments may be used alone, or in combination with sensors disposed on the lead that is being placed. Medical instruments may be used to help the physician to access the desired location for the placement of a lead and/or confirm that the distal end of the lead has reached the desired location. For example, instruments, such as an endoscope or laparoscopic camera, with its long, thin, flexible (or rigid) tube, light, and video camera, can assist the physician in confirming that the distal end806of lead802has reached the desired location within the thoracic cavity.
Other tools known to one skilled in the art such as a guidewire, guide catheter, or sheath may be used in conjunction with medical instruments, such as the laparoscopic camera, and may be advanced alongside and to the location identified by the medical instruments. Medical instruments such as a guidewire can be advanced directly to the desired location for the distal end of the lead with the assistance of acoustic sound, ultrasound, real-time spectroscopic analysis of tissue, real-time density analysis of tissue, or by delivery of contrast media that may be observed by real-time imaging equipment.

In some variations, the patient may have medical devices previously implanted that may include sensors configured to monitor physiological characteristics of the patient. The physiological characteristics of the patient may change based on the advancement of the lead through the intercostal space of the patient. The previously implanted medical device may have sensors configured to detect movement of the advancing lead. The previously implanted medical device can be configured to communicate this information back to the physician to verify the location of the advancing lead.

Sensors disposed on the lead, such as sensors810disposed on distal end806of the lead, may be used to facilitate the delivery of the lead to the desired location. Sensor(s)810can be configured to facilitate determination of a depth of the distal end806of lead802. As described above, the depth of the desired location within the patient can be determined using one or more medical instruments. This can be determined during implantation of the lead802or prior to the procedure taking place. Although sensor(s)810is illustrated as a single element inFIG.8A, sensor(s)810can include multiple separate sensors. The sensors810can be configured to facilitate placement of the distal end806of the lead802at a desired location and verification thereof. Sensor(s)810can be configured to transmit sensor information during advancement to the desired location. Sensor(s)810may transmit signals associated with the monitored physiological characteristics of the tissue within the vicinity of the distal end806of the lead802. In some variations, the signals from sensor(s)810may be transmitted to a computing device(s) configured to facilitate placement of the lead802in the desired location. In such variations, the computing device(s) can be configured to assess the sensor information individually, or in the aggregate, to determine the location of the distal end806of lead802. The computing device(s) can be configured to present alerts and/or instructions associated with the position of the distal end806of lead802. In some variations, lead802can be first coupled with connection port124of pulse generator102. Signals generated by sensor(s)810can be transmitted to a computing device(s) using transceiver316in pulse generator102, as illustrated inFIG.3.

An accelerometer may be used to facilitate delivery of the distal end806of lead802to the desired location. An accelerometer may be disposed at the distal end806of lead802. The accelerometer may be configured to monitor the movement of the distal end806of lead802. The accelerometer may transmit this information to a computing device or the physician. The computing device, or the physician, can determine the location of the distal end806of the lead802based on the continuous movement information received from the accelerometer as the lead802is advanced into the patient.
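One way such continuous movement information might be turned into a position estimate is dead reckoning: integrating the accelerometer samples twice, starting from the known entry point. The sketch below is a minimal illustration of that idea, assuming ideal, noise-free samples at a fixed rate; a practical system would also need drift correction and filtering:

```python
import numpy as np

# Hypothetical sketch: estimate the distal tip's displacement from its
# known entry position by twice integrating accelerometer samples
# (dead reckoning). Assumes zero initial velocity and noise-free data.

def integrate_path(accel_samples: np.ndarray, dt: float) -> np.ndarray:
    """accel_samples: (N, 3) accelerations in m/s^2; returns (N, 3) positions."""
    velocity = np.cumsum(accel_samples * dt, axis=0)
    position = np.cumsum(velocity * dt, axis=0)
    return position

# Example: a constant 0.01 m/s^2 push along one axis for 2 seconds.
samples = np.tile([0.01, 0.0, 0.0], (200, 1))
path = integrate_path(samples, dt=0.01)
print(path[-1])  # displacement of the tip relative to the entry point
```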
The computing device or the physician may know the initial entry position for lead802. The movement information can indicate a continuous path taken by the lead802as it advanced into the body of the patient, thereby providing an indication of the location of the distal end806of lead802.

Pressure waves from the beating heart may differ as absorption changes within deepening tissue planes. These pressure wave differences may be used to assess the depth of the distal end of the electrode. The accelerometer can also be configured to monitor acoustic pressure waves generated by various anatomical structures of the body. For example, the accelerometer can be configured to detect acoustic pressure waves generated by the heart or by other anatomical structures of the body. The closer the accelerometer gets to the heart, the greater the acoustic pressure waves generated by the heart will become. By comparing the detected acoustical pressure waves with known models, a location of the distal end806of lead802can be determined.

Pressure waves or vibrations can be artificially generated to cause the pressure waves or vibrations to traverse through the patient. The pressure waves or vibrations can be generated in a controlled manner. The pressure waves or vibrations may be distorted as they traverse through the patient. The level or type of distortion that is likely to be experienced by the pressure waves or vibrations may be known. The pressure waves or vibrations detected by the accelerometer can be compared to the known models to facilitate determination or verification of the location of the distal end806of lead802.

Different tissues within a body exhibit different physiological characteristics. The same tissues situated at different locations within the body can also exhibit different physiological characteristics. Sensors disposed on the distal end806of lead802can be used to monitor the change in the physiological characteristics as the distal end806is advanced into the body of the patient. For example, the tissues of a patient through which a lead is advanced can demonstrate differing resistances, physiological properties, electrical impedance, temperature, pH levels, pressures, and the like. These different physiological characteristics, and the change in physiological characteristics experienced as a sensor traverses through a body, can be known or identified. For example, even if the actual degree is not known ahead of time, the change in sensor input when the sensor traverses from one tissue media to another may be identifiable in real-time. Consequently, sensors configured to detect physiological characteristics of a patient can be employed to facilitate determining and verifying the location of the distal end806of lead802.

Different tissues can exhibit different insulative properties. The insulative properties of tissues, or the change in insulative properties of tissues, between the desired entry-point for the lead and the desired destination for the lead can be known. Sensor810can include an electrical impedance detector. An electrical impedance detector can be configured to monitor the electrical impedance of the tissue in the vicinity of the distal end806of lead802.
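Because a transition from one tissue plane to another can show up as a measurable jump in impedance, a simple change detector is one plausible way to act on such readings. A minimal sketch, assuming an illustrative threshold and toy readings rather than validated clinical values:

```python
# Hypothetical sketch: flag a transition from one tissue plane to another
# as a sudden change in measured impedance. The threshold and readings are
# illustrative placeholders, not validated clinical values.

def detect_plane_transitions(impedance_readings, threshold_ohms=150.0):
    """Yield indices where consecutive readings jump by more than threshold."""
    for i in range(1, len(impedance_readings)):
        if abs(impedance_readings[i] - impedance_readings[i - 1]) > threshold_ohms:
            yield i

readings = [2500, 2480, 2470, 950, 940, 930, 400, 395]  # fat -> muscle -> pleura
print(list(detect_plane_transitions(readings)))  # [3, 6]
```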
The electrical impedance of the tissue monitored by the electrical impedance detector can be compared with the known insulative properties of the tissues between the entry point and the destination to determine the location of the distal end of lead802. Alternatively, a transition from one tissue plane to another may be recognized by a measurable change in the measured impedance.

Varying levels of electrical activity can be experienced at different locations within the body. Electrical signals emitted from the heart or other muscles can send electrical energy through the body. This electrical energy will dissipate the further it gets from its source. Various tissues will distort the electrical energy in different ways. Sensors configured to detect the electrical energy generated by the heart and/or other anatomical structures can monitor the electrical energy as the lead is advanced. By comparing the monitored electrical energy with known models, a determination or verification of the location of the distal end806of lead802can be made. The sensors may be configured to identify sudden changes in the electrical activity caused by advancement of the sensor into different tissue planes.

Tissues throughout the body have varying pH levels. The pH levels of tissues can change with depth into the body. Sensor(s)810can include a pH meter configured to detect the pH levels of the tissue in the vicinity of the sensor(s)810as the sensor(s) advance through the patient. The detected pH levels, or detected changes in pH levels, can be compared with known models to facilitate determination or verification of the location of the distal end806of lead802. The pH meter may be configured to identify sudden changes in the pH level caused by advancement of the meter into different tissue planes.

Different tissues can affect vibration-waves or sound-waves in different ways. Sensor(s)810can include acoustic sensors. The acoustic sensors can be configured to detect vibration waves or sound waves travelling through tissues surrounding sensor(s)810. The vibration waves can be emitted by vibration-emitting devices embedded in the lead802. The vibration waves can be emitted by vibration-emitting devices located on a hospital gurney, positioned on the patient, or otherwise remote from lead802. Sensor(s)810can be configured to transmit detected vibration-wave information to a computing device configured to determine the location of the distal end806of lead802based on the detected vibration-wave information.

Different tissues can have different known effects on emitted electromagnetic waves. Sensors can be used to detect the effect that the tissue in the vicinity of the sensors has on the electromagnetic waves. By comparing the effect that the tissue has on the electromagnetic waves with known electromagnetic effects, the identity of the tissue can be obtained and the location of the lead can be determined or verified. For example, sensor(s)810can include electromagnetic wave sensors. Electromagnetic wave sensors can include an electromagnetic wave emitter and an electromagnetic wave detector. The electromagnetic waves will be absorbed, reflected, deflected, and/or otherwise affected by tissue surrounding sensor(s)810. Sensor(s)810can be configured to detect the change in the reflected electromagnetic waves compared to the emitted electromagnetic waves.
By comparing the effect the tissue in the vicinity of the sensor(s)810has on the electromagnetic waves with known models, a determination or verification of the location of lead802can be made. The sensors may be configured to identify sudden changes in the electromagnetic activity caused by advancement of the sensor into different tissue planes.

FIG.9Ais an illustration900of the distal end of an exemplary delivery system902having features consistent with the presently described subject matter. WhileFIG.9Ais described with reference to a delivery system, one of ordinary skill in the art can appreciate and understand that the technology described herein could be applied directly to the end of a lead, such as lead802. The present disclosure is intended to apply to a delivery system, such as delivery system902, as well as a lead, such as lead802. Delivery system902can facilitate placement of the distal end of a lead, such as lead802illustrated inFIG.8, to a desired location by use of electromagnetic waves, such as light waves. Delivery system902may comprise a delivery catheter body904. Delivery catheter body904may be configured to facilitate advancement of delivery catheter body904into the patient to a desired location. The distal tip906of delivery catheter body904may comprise a light source908. Light source908can be configured to emit photons having a visible wavelength, infrared wavelength, ultraviolet wavelength, and the like. Delivery catheter body904may comprise a light detector910. Light detector910may be configured to detect light waves, emitted by the light source908, reflected by tissues surrounding distal tip906of delivery catheter body904.

FIG.9Bis an illustration912of an exemplary process for using the delivery system illustrated inFIG.9A. Light detector910can be configured to detect light waves reflected by the tissue adjacent the distal tip906of delivery system902. Information associated with the detected light waves may be transmitted to a computing device. The computing device can be configured to interpret the information transmitted from light detector910and determine a difference between the light emitted and the light detected. At914, light source908can be activated. Light source908may emit light-waves into the tissue in the general direction of the intended advancement of delivery system902. At916, the tissue can absorb a portion of the emitted light waves. At918, light detector910can detect the reflected light waves, reflected by tissues surrounding light source908. At920, a determination of a change in the absorption of the light waves by tissues surrounding the distal tip906of delivery system902can be made. At922, in response to an indication that the absorption of light waves has not changed, delivery system902can continue to be advanced into the patient. In some variations, a physician can advance delivery system902into the patient. In other variations, the delivery system902can be advanced into the patient automatically. At924, in response to an indication that the absorption of light waves has changed, an alert can be provided to the physician. In some variations, the alert can be provided to the physician through a computing device configured to facilitate positioning of delivery system902into the patient.
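The loop of steps914through924can be summarized in a short sketch. This is a minimal illustration with duck-typed placeholder interfaces; light_source, light_detector, actuator, alert, and the tolerance are assumptions for illustration, not names or values from the disclosure:

```python
# Hypothetical sketch of the FIG. 9B loop: emit light, measure the
# reflected fraction, and advance only while absorption is unchanged;
# a changed absorption triggers an alert instead.

def advance_with_light_sensing(light_source, light_detector, actuator,
                               alert, tolerance=0.05):
    light_source.on()                                  # step 914
    baseline = light_detector.reflected_fraction()     # steps 916/918
    while actuator.can_advance():
        current = light_detector.reflected_fraction()
        if abs(current - baseline) > tolerance:        # step 920
            alert("tissue change detected at distal tip")  # step 924
            break
        actuator.advance(step_mm=0.5)                  # step 922
```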
The computing device can be configured to alert the physician to the type of tissue in the vicinity of distal tip906of delivery system902. In some variations, the computing device can be configured to alert the physician when the distal tip906reaches a tissue having characteristics consistent with the desired location of the distal tip906of delivery system902. For example, when the characteristics of the tissue in the vicinity of the distal tip906match those within the intercostal tissues, or a particular location within the mediastinum, an alert may be provided.

Blood vessels, both venous and arterial, absorb red, near infrared and infrared (IR) light waves to a greater degree than surrounding tissues. When illuminating the surface of the body with red, near infrared and infrared (IR) light waves, blood-rich tissues, for example veins, will absorb more of this light than other tissues, and other tissues will reflect more of this light than the blood-rich tissues. Analysis of the pattern of reflections can enable the blood-rich tissues to be located. A positive or negative image can be projected on the skin of the patient at the location of the vein. In some variations, the vein can be represented by a bright area and the absence of a vein can be represented as a dark area, or vice versa.

Delivery system902can include a subcutaneous visualization enhancer. The subcutaneous visualization enhancer may be configured to enhance visualization of veins, arteries, and other subcutaneous structures of the body. The subcutaneous visualization enhancer can include moving laser light sources to detect the presence of blood-filled structures, such as venous or arterial structures below the surface of the skin. The subcutaneous visualization enhancer can include systems configured to project an image onto the surface of the skin that can show an operator the pattern of the detected subcutaneous blood-filled structures. Laser light from laser light sources can be scanned over the surface of the body using mirrors. A light detector can be configured to measure the reflections of the laser light and use the pattern of reflections to identify the targeted blood-rich structures. Such subcutaneous visualization enhancers can be used to facilitate determination of the location for the initial approach for inserting a lead, such as lead802, through the intercostal space associated with the cardiac notch of the patient. In some variations, the visualization enhancers can be disposed remote from the delivery system and/or can be configured to augment visualization enhancers disposed on the delivery system. With the provision of a visualization of the detected subcutaneous structures, the physician can assess the position of subcutaneous structures, such as the internal thoracic artery, or other structures of the body, while concurrently inserting components of the delivery system into the body, thereby avoiding those subcutaneous structures.

In some variations, during advancement of lead802through the intercostal space associated with the cardiac notch, sensor(s)810can be configured to transmit obtained readings to a computing device for interpretation. In some variations, the computing device is pulse generator102. In some variations, pulse generator102is used to transmit the readings to an external computing device for interpretation. In any event, the sensor information from the various sensors can be used individually, or accumulatively, to determine the location of the distal end of lead802.
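The reflection-pattern analysis performed by the subcutaneous visualization enhancer described above can be reduced to a very small sketch: blood-rich structures reflect less IR light, so low reflected intensity marks a likely vessel. The grid values and cutoff below are illustrative placeholders, not measured data:

```python
import numpy as np

# Hypothetical sketch: locate blood-rich structures from a scanned grid of
# reflected IR intensities. Blood absorbs more IR, so low reflection marks
# a likely vessel; the cutoff is an assumed value.

def vessel_mask(reflection_grid: np.ndarray, cutoff: float = 0.4) -> np.ndarray:
    """Return a boolean mask: True where a blood-rich structure is likely."""
    return reflection_grid < cutoff

scan = np.array([[0.8, 0.7, 0.3, 0.7],
                 [0.8, 0.6, 0.2, 0.8],
                 [0.9, 0.7, 0.3, 0.7]])
print(vessel_mask(scan))  # the low-reflection column traces a likely vein
```

A mask of this kind is what a projection system could render back onto the skin as the bright or dark vein image described above.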
FIG.10is a schematic illustration of a delivery control system1000having features consistent with the current subject matter. The delivery control system1000can be configured to automatically deliver a lead to the desired position within the patient. For example, the delivery control system1000can be configured to automatically deliver a distal tip of a lead through the intercostal space associated with the cardiac notch. Delivery control system1000can be configured to receive a plurality of inputs. The inputs can come from one or more sensors disposed in, or on, the patient. For example, delivery control system1000can be configured to receive subcutaneous structure visualization information1002, information associated with delivery insertion systems1004, information associated with sensors1006, and the like. Delivery control system1000can be configured to use remote sensors1006to facilitate determination of the insertion site for the lead. Sensors1006can be disposed in various instruments configured to be inserted into the patient. Sensors1006can also be disposed in various instruments configured to remain external to the patient.

Delivery control system1000can be configured to perform depth assessments1008. The depth assessments1008can be configured to determine the depth of the distal end of an inserted instrument, such as a lead802illustrated inFIG.8A. Depth assessments1008can be configured to determine the depth of the distal end of the inserted instrument through light detection systems1010, pressure wave analysis1012, acoustic analysis, and the like. Depth assessments1008can be configured to determine the depth of the delivery system, or lead, through pressure wave analysis systems1012. Pressure waves can be detected by accelerometers as herein described. Depth assessments1008can be configured to determine the depth of the delivery system through acoustic analysis systems1014. Acoustic analysis system1014can be configured to operate in a similar manner to a stethoscope. The acoustic analysis system1014can be configured to detect the first heart sound (S1), the second heart sound (S2), or other heart sounds. Based on the measurements obtained by the acoustic analysis system1014, a depth and/or location of the distal end of a delivery system and/or inserted medical component can be determined. The acoustic analysis system1014can be configured to measure the duration, pitch, shape, and tonal quality of the heart sounds. By comparing the duration, pitch, shape, and tonal quality of the heart sounds with known models, a determination or verification of the location of the lead can be made. Sudden changes in the degree of heart sounds may be used to indicate advancement into a new tissue plane.

In some variations, the lead can include markers or sensors that facilitate the correct placement and orientation of the lead. Certain markers, such as visual scale, radiopaque, magnetic, and ultrasound markers, and the like, can be positioned at defined areas along the length of the lead so that the markers can be readily observed by an implanting physician, or automated system, on complementary imaging instruments such as fluoroscopy, x-ray, ultrasound, or other imaging instruments known in the art. Through the use of these markers, the physician, or automated implantation device, can guide the lead to the desired location within the intercostal muscle, pleural space, mediastinum, or other desired position, as applicable, and also ensure the correct orientation.
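As a concrete illustration of the acoustic depth assessment described above: because detected heart-sound amplitude grows as the sensor nears the heart, a measured amplitude can be matched against a known model to yield a rough depth estimate. The model table below is a toy placeholder, not clinical data:

```python
# Hypothetical sketch: compare a measured S1 amplitude against a known
# depth-vs-amplitude model to estimate the sensor's depth. The model
# values are illustrative placeholders, not clinical data.

DEPTH_MODEL = [
    # (depth_mm from the skin surface, expected S1 amplitude, arbitrary units)
    (0, 0.1), (10, 0.2), (20, 0.45), (30, 0.8), (40, 1.0),
]

def estimate_depth(measured_amplitude: float) -> int:
    """Return the modeled depth whose expected amplitude is closest."""
    depth, _ = min(DEPTH_MODEL,
                   key=lambda pair: abs(pair[1] - measured_amplitude))
    return depth

print(estimate_depth(0.78))  # ~30 mm under this toy model
```

A real acoustic analysis system would also weigh duration, pitch, shape, and tonal quality, as the text notes, rather than amplitude alone.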
Avoiding damage to tissues in the vicinity of the path-of-travel for the lead is important. Moving various tissues from the path of the lead without damaging them is also important.FIGS.11A and11Bare illustrations1100and1102of an exemplary lead802having features consistent with the present disclosure for moving and avoiding damage to tissues during lead delivery. Lead802can comprise a distal tip1104. Distal tip1104can include at least one electrode and/or sensor1106. Having leads directly touch the tissue of a patient can be undesirable and can damage the tissue. Consequently, the distal tip1104of lead802can include an inflatable balloon1108. Balloon1108can be inflated when the distal tip1104of lead802encounters an anatomical structure obstructing its path, or prior to moving near sensitive anatomy during lead delivery. The balloon may be configured to divert the obstacle and/or the lead to facilitate circumventing the anatomical structure or may indicate that the lead has reached its intended destination. To inflate the balloon, lead802can include a gas channel1110. At the end of gas channel1110there can be a valve1112. Valve1112can be controlled through physical manipulation of a valve actuator, through electrical stimulation, through pressure changes in gas channel1110, and/or controlled in other ways. In some variations, the valve1112may be disposed at the proximal end of the lead802.

When positioning lead802into a patient, lead802may cause damage to, or perforations of, the soft tissues of the patient. When lead802is being installed into a patient, distal tip1104of lead802can encounter soft tissue of the patient that should be avoided. In response to encountering the soft tissue of the patient, gas can be introduced into gas channel1110, valve1112can be opened, and balloon1108can be inflated, as shown inFIG.11B. Inflating balloon1108can cause the balloon to stretch and push into the soft tissue of the patient, moving the soft tissue out of the way and/or guiding distal tip1104of lead802around the soft tissue. When distal tip1104of lead802has passed by the soft tissue obstruction, valve1112can be closed and the balloon deflated.

In some variations, a delivery component or system is used to facilitate delivery of a lead, such as lead802, to the desired location.FIG.12is an illustration1200of an exemplary delivery system for a lead having features consistent with the present disclosure. An example of the delivery system is an expandable sheath1202. Expandable sheath1202can be inserted into the patient at the desired insertion point, identified using one or more of the technologies described herein. Expandable sheath1202can include a tip1204. In some variations, tip1204may be radiopaque. A radiopaque tip1204may be configured to facilitate feeding of the expandable sheath1202to a desired location using one or more radiography techniques known in the art and described herein. Such radiography techniques can include fluoroscopy, CT scan, and the like. Tip1204can include one or more sensors for facilitating the placement of the lead. The sensors included in tip1204of the expandable sheath1202can be the same or similar to the sensors described herein for monitoring physiological characteristics of the body and other characteristics for facilitating positioning of a lead in a body. Expandable sheath1202can include a channel1206running through a hollow cylinder1208of expandable sheath1202.
When tip1204of expandable sheath1202is at the desired location, gas or liquid can be introduced into hollow cylinder1208. The gas or liquid can be introduced into hollow cylinder1208through a first port1210. Hollow cylinder1208can expand, under the pressure of the gas or liquid, causing channel1206running through hollow cylinder1208to increase in size. A lead, such as lead802illustrated inFIG.8A, can be inserted into channel1206through a central port1212. Hollow cylinder1208can be expanded until channel1206is larger than the lead. In some variations, channel1206can be expanded to accommodate leads of several French sizes. Once the lead is in the desired place, expandable sheath1202can be removed, by allowing the lead to pass through channel1206. In some variations, liquid or gas can be introduced into or removed from channel1206through a second port1214. Using expandable sheath1202can provide an insertion diameter smaller than the useable diameter. This can facilitate a reduction in the risk of damage to tissues and vessels within the patient when placing the lead.

When electricity is brought within the vicinity of muscle tissue, the muscle will contract. Consequently, having a lead for carrying electrical pulses traversing through intercostal muscle tissue may cause the intercostal muscle tissue to contract. Electrical insulation can be provided in the form of a receptacle disposed in the intercostal muscle, where the receptacle is configured to electrically insulate the intercostal muscle from the lead.

FIG.13is an illustration1300of an intercostal space1302associated with the cardiac notch of the left lung with an exemplary lead receptacle1304having features consistent with the present disclosure. Lead receptacle1304can facilitate the placement of leads and/or other instruments and avoid the leads and/or instruments physically contacting the intercostal tissue. When the distal end of the lead is positioned to terminate in the intercostal muscle, the lead can be passed through lead receptacle1304that has been previously placed within the patient's intercostal muscles. Lead receptacle1304can be configured to be electrically insulated so that electrical energy emanating from the lead will not stimulate the surrounding intercostal and skeletal muscle tissue, but will allow the electrical energy to traverse through and stimulate cardiac tissue.

The intercostal space1302is the space between two ribs, for example, rib1306aand rib1306b. Intercostal muscles1308a,1308band1308ccan extend between two ribs1306aand1306b, filling intercostal space1302. Various blood vessels and nerves can run between the different layers of intercostal muscles. For example, the intercostal vein1310, intercostal artery1312, and intercostal nerve1314can be disposed under a flange1316of upper rib1306aand between the innermost intercostal muscle1308cand its adjacent intercostal muscle1308b. Similarly, collateral branches1318can be disposed between the innermost intercostal muscle1308cand its adjacent intercostal muscle1308b. The endothoracic fascia1320can abut the inner-most intercostal muscle1308cand separate the intercostal muscles from the parietal pleura1322. The pleural cavity1324can be disposed between the parietal pleura1322and the visceral pleura1326. The visceral pleura1326can abut the lung1328.

FIG.14is an illustration1400of an exemplary lead fixation receptacle1304illustrated inFIG.13, having features consistent with the present disclosure.
Lead receptacle1304may comprise a cylindrical body, or lumen1328, extending from an outer side of an outermost intercostal muscle to an inner side of an innermost intercostal muscle of an intercostal space. Lumen1328may be configured to support a lead traversing through it. Lumen1328may comprise an electrically insulating material configured to inhibit traversal of electrical signals through walls of lumen1328. In some variations, end1336of the receptacle1304may pass through the innermost intercostal muscle1308c. In some variations, end1338of receptacle1304can pass through outermost intercostal muscle1308a. Lumen1328can terminate adjacent the pleural space1324. In some variations, the lumen1328can terminate in the mediastinum. In some variations, receptacle1304can be configured to be screwed into the intercostal muscles1308a,1308b, and1308c. Receptacle1304can also be configured to be pushed into the intercostal muscles1308a,1308band1308c.

Lead receptacle1304may include a fixation flange1330a. Fixation flange1330amay be disposed on the proximal end of the lumen1328and configured to abut the outermost intercostal muscle1308a. Lead receptacle1304may include a fixation flange1330b. Fixation flange1330bcan be disposed on the distal end of the lumen1328and configured to abut the innermost intercostal muscle1308c.

Lead receptacle1304can be implanted into the intercostal muscles1308a,1308b, and1308cby making an incision in the intercostal muscles1308a,1308b, and1308c, stretching the opening and positioning lead receptacle1304into the incision, taking care to ensure that the incision remains smaller than the outer diameter of flanges1330aand1330b. In some variations flanges1330aand1330bcan be configured to be retractable allowing for removal and replacement of the lead fixation receptacle1304. Lead receptacle1304can be fixed in place by using just flanges1330aand1330b. Lead receptacle1304may also be fixed in place by using a plurality of surgical thread eyelets1332. Surgical thread eyelets1332can be configured to facilitate stitching lead receptacle1304to the intercostal muscles1308aand1308cto fix lead receptacle1304in place. Receptacle1304can include an internal passage1334. Internal passage1334can be configured to receive one or more leads and facilitate their traversal through the intercostal space1302.

Lead receptacle1304can be formed from an electrically insulating material. The electrically insulating material can electrically isolate the intercostal muscles1308a,1308band1308cfrom the leads traversing through lead receptacle1304. Lead receptacle1304can be formed from materials that are insulative. The material can include certain pharmacological agents, for example, antibiotic agents, immunosuppressive agents to avoid rejection of lead receptacle1304after implantation, and the like. In some variations, lead receptacle1304can be comprised of an insulative polymer coated or infused with an analgesic. In some variations, the lead receptacle1304can be comprised of an insulative polymer coated or infused with an anti-inflammatory agent. The polymer can be coated or infused with other pharmacological agents known to one skilled in the art to treat acute adverse effects from the implantation procedure or chronic adverse effects from the chronic implantation of the lead or receptacle within the thoracic cavity.

FIG.15is an illustration of lead receptacle1304having features consistent with the present disclosure.
The lead fixation receptacle can comprise a septum1340, or multiple septums, disposed transversely within lumen1338. Septum1340can be selectively permeable such that when a lead is inserted through septum1340, septum1340can be configured to form a seal around the lead traversing through lumen1338to prevent the ingress or egress of gas, fluid, other materials, and the like, through lumen1338. Septum1340may optionally permit the egress of certain gas and fluid but prevent ingress of such materials through lumen1338.

In some variations, the lead receptacle can comprise multiple lumens. For example, the lead receptacle can comprise a second lumen configured to traverse from an outermost side of an outermost intercostal muscle to an innermost side of an innermost intercostal muscle. The second lumen can be configured to facilitate dispensing of pharmacological agents into the thorax of the patient. The lumens for such a lead receptacle can be used for differing purposes in addition to the passage of a single lead into the pleural space or mediastinum. The multiple lumens can provide access for multiple leads to be passed into the pleural space or mediastinum.

FIG.16is an illustration of an exemplary lead fixation receptacle1342having features consistent with the present disclosure. Lead fixation receptacle1342can include a first lumen1344, similar to lumen1338of the lead receptacle1304illustrated inFIGS.14and15. Lead fixation receptacle1342can include an additional lumen1346. Additional lumen1346can be provided as a port to provide access to the thoracic cavity of the patient. Access can be provided to facilitate dispensing of pharmacological agents, such as pharmacological agents to treat various adverse effects such as infection or pain in the area surrounding lead receptacle1342, the pleural space, the mediastinum, and/or other areas surrounding the thoracic cavity of the patient. Additional lumen1346can provide access for treatment of other diseases or disorders affecting organs or other anatomical elements within the thoracic cavity. For example, additional lumen1346can facilitate the evacuation of gas or fluid from the thorax, and the like.

The lead receptacle as described with reference toFIGS.13-16can be fixated to cartilage or bone within the thoracic cavity. In some variations, the lead receptacle can be configured to be disposed between the intercostal muscles and a rib, thereby potentially reducing damage to the intercostal muscles caused by its insertion. The lead receptacle can be in passive contact with tissue surrounding the cardiac notch. For example, the lead receptacle can abut the superficial fascia on the outermost side and the endothoracic fascia or the parietal pleura on the innermost side. In some variations, the lead receptacle can be actively fixed into position using one end of the lead receptacle. For example, only one flange can include surgical thread holes to facilitate sewing of the flange into the intercostal muscles. Active fixation, whether at flanges, or along the lumen of the lead fixation receptacle, can include, for example, the use of tines, hooks, springs, screws, flared wings, flanges and the like. Screws can be used to screw the lead fixation receptacle into bone or more solid tissues within the thoracic cavity. Hooks, tines, springs, and the like, can be used to fix the lead fixation receptacle into soft tissues within the thoracic cavity.
In some variations, the lead receptacle can be configured to facilitate in-growth of tissue into the material of which the lead fixation receptacle is comprised. For example, the lead fixation receptacle can be configured such that bone, cartilage, intercostal muscle tissue, or the like, can readily grow into pockets or fissures within the surface of the lead receptacle. Facilitating the growth of tissue into the material of the lead receptacle can facilitate fixation of the receptacle.

In some variations, the receptacle can be configured to actively fix between layers of the intercostal muscle. With reference toFIG.13, the layered nature of the intercostal muscle layers1308a,1308band1308ccan be used to facilitate fixation of the lead receptacle into the intercostal space. For example, flanges can be provided that extend between the intercostal muscle layers. Incisions can be made at offset positions at each layer of intercostal muscle such that when the lead receptacle is inserted through the incisions, the intercostal muscles apply a transverse pressure to the lead receptacle keeping it in place. For example, a first incision can be made in the first intercostal muscle layer1308a, a second incision can be made in the second intercostal muscle layer1308b, offset from the first incision, and a third incision can be made to the third intercostal muscle layer1308cin-line with the first incision. Inserting the lead receptacle through the incisions, such that the lead receptacle is situated through all three incisions, will cause the second intercostal muscle layer1308bto apply a transverse pressure to the lead receptacle that is countered by the first intercostal muscle layer1308aand the third intercostal muscle layer1308c, facilitating keeping the lead receptacle in place.

Sensing and detection will be performed using one or more available signals to determine when pacing should be delivered or inhibited. Cardiac signals will be measured from one or more electrodes. Additional non-cardiac sensors may also be used to enhance the accuracy of sensing and detection. Such sensors include, but are not limited to, rate response sensors, posture/positional sensors, motion/vibration sensors, myopotential sensors and exogenous noise sensors. One or more algorithms will be utilized to make decisions about pacing delivery and inhibition. Such algorithms will evaluate available signal attributes and relationships, including but not limited to analysis of morphology, timing, signal combinations, signal correlation, template matching or pattern recognition.

A pulse generator, such as pulse generator102illustrated inFIG.1, can be configured to monitor physiological characteristics and physical movements of the patient. Monitoring can be accomplished through sensors disposed on, or in, the pulse generator, and/or through sensors disposed on one or more leads disposed within the body of the patient. The pulse generator can be configured to monitor physiological characteristics and physical movements of the patient to properly detect heart arrhythmias, dyssynchrony, and the like. Sensor(s) can be configured to detect an activity of the patient. Such activity sensors can be contained within or on the housing of the pulse generator, such as pulse generator102illustrated inFIG.1. Activity sensors can comprise one or more accelerometers, gyroscopes, position sensors, and/or other sensors, such as location-based technology, and the like.
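The template matching named among the detection algorithms above can be sketched briefly. This minimal example, assuming a stored intrinsic-beat template and an illustrative correlation threshold (neither drawn from the disclosure), inhibits pacing when a sensed beat's morphology matches the template:

```python
import numpy as np

# Hypothetical sketch of template matching for pacing decisions: correlate
# a sensed beat's morphology against a stored intrinsic-beat template and
# inhibit pacing when intrinsic activity is recognized. The threshold is
# an assumed value.

def should_inhibit_pacing(sensed_beat: np.ndarray,
                          template: np.ndarray,
                          threshold: float = 0.9) -> bool:
    """Inhibit pacing if the sensed beat correlates with the template."""
    r = np.corrcoef(sensed_beat, template)[0, 1]
    return r >= threshold

template = np.sin(np.linspace(0, np.pi, 50))
sensed = template + np.random.default_rng(0).normal(0, 0.05, 50)
print(should_inhibit_pacing(sensed, template))  # likely True: intrinsic beat
```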
Sensor information measured by the activity sensors can be cross-checked with activity information measured by any concomitant devices. In some variations, an activity sensor can include an accelerometer. The accelerometer can be configured to detect accelerations in any direction in space. Acceleration information can be used to identify potential noise in signals detected by other sensor(s), such as sensor(s) configured to monitor the physiological characteristics of the patient, and the like, and/or confirm the detection of signals indicating physiological issues, such as arrhythmias or other patient conditions.

In some variations, a lead, such as lead802inFIG.8, can be configured to include sensors that are purposed solely for monitoring the patient's activity. Such sensors may not be configured to provide additional assistance during the implantation procedure. These sensors can include pulmonary, respiratory, minute ventilation, accelerometer, hemodynamic, and/or other sensors. Sensors known in the art that are used to monitor a patient's cardiac activity in real time, or periodically, can be provided in the leads. These sensors are purposed to allow the implanted device to sense, record, and, in certain instances, communicate the sensed data from these sensors to the patient's physician. In alternative embodiments, the implanted medical device may alter the programmed therapy regimen of the implanted medical device based upon the activity from the sensors.

In some variations, sensors, such as sensors810and812ofFIG.8A, may be configured to detect the condition of various organs and/or systems of the patient. Sensor(s)810,812can be configured to detect movement of the patient to discount false readings from the various organs and/or systems. Sensor(s)810,812can be configured to monitor patient activity. With a distal end806of lead802positioned in the cardiac notch abutting the parietal pleura, sensor(s)810,812can collect information associated with the organs and/or systems of the patient in that area, for example the lungs, the heart, esophagus, arteries, veins and other organs and/or systems. Sensor(s)810can include sensors to detect cardiac ECG, sensors to detect pulmonary function, sensors to detect respiratory function, sensors to determine minute ventilation, hemodynamic sensors, and/or other sensors. Sensors can be configured independently to monitor several organs or systems and/or configured to monitor several characteristics of a single organ simultaneously. For example, using a first sensor pair, the implanted cardiac pacing system may be configured to monitor the cardiac ECG signal from the atria, while simultaneously, a second sensor pair is configured to monitor the cardiac ECG signal from the ventricles.

A lead disposed in the body of a patient, such as lead802ofFIG.8A, can include sensors at other areas along the lead, for example, sensors812. The location of sensors812along lead802can be chosen based on proximity to organs, systems, and/or other physiological elements of the patient. The location of sensors812can be chosen based on proximity to other elements of the implanted cardiac pacing system.

Additional leads may be used to facilitate an increase in the sensing capabilities of the implantable medical device. In one embodiment, in addition to at least one lead disposed within the intercostal muscle, pleural space or mediastinum, another lead is positioned subcutaneously and electrically connected to the implantable medical device.
The subcutaneously placed lead can be configured to enhance the implantable medical device's ability to sense and analyze far-field signals emitted by the patient's heart. In particular, the subcutaneous lead enhances the implantable medical device's ability to distinguish signals from particular chambers of the heart, and therefore, appropriately coordinate the timing of the required pacing therapy delivered by the implantable medical device. Additional leads in communication with the implantable medical device or pulse generator, and/or computing device, can be placed in other areas within the thoracic cavity in order to enhance the sensing activity of the heart, and to better coordinate the timing of the required pacing therapy delivered by the implantable medical device. In certain embodiments, these additional leads are physically attached to the implantable medical device of the present disclosure.

The leads used to deliver therapeutic electrical pulses to pace the heart can comprise multiple poles. Each pole of the lead can be configured to deliver therapeutic electrical pulses and/or obtain sensing information. The different leads can be configured to provide different therapies and/or obtain different sensing information. Having multiple sensors at multiple locations can increase the sensitivity and effectiveness of the provided therapy.

FIG.8Bis an illustration800bof an exemplary lead802having features consistent with the present disclosure. In some variations, lead802can comprise a yoke816. The yoke can be configured to maintain a hermetically sealed housing for the internal electrical cables of lead802, while facilitating splitting of the internal electrical cables into separate end-leads818a,818b,818c. Yoke816can be disposed toward the distal end of lead802. While three end-leads818a,818b,818care illustrated inFIG.8B, the current disclosure contemplates fewer end-leads as well as a greater number of end-leads emanating from yoke816. The different end-leads818a,818b,818c, can include different electrodes and/or sensors. For example, end-lead818bcan include an electrode808bat the distal end806bof end-lead818bthat differs from electrode808aat distal end806aof end-lead818a. Electrode808bcan have flanges820. Flanges820can be configured to act as an anchor, securing the distal end806bof end-lead818bin position within the patient. Electrode808bwith flanges820can be suitable for anchoring into high-motion areas of the body where end-lead818bwould otherwise move away from the desired location without the anchoring effect provided by flanges820. Similarly, electrode808cat the distal end806cof end-lead818ccan be configured for a different function compared to the electrodes at the end of other end-leads.

Lead802can be a multi-pole lead. Each pole can be electronically isolated from the other poles. The lead802can include multiple isolated poles, or electrodes, along its length. The individual poles can be selectively activated. The poles may include sensors for monitoring cardiac or other physiological conditions of the patient, or electrodes for delivering therapy to the patient. Because the sensing characteristics of a patient can change over time, or can change based on a patient's posture, a multi-pole lead permits the implantable medical device to monitor a patient's state through multiple sensing devices, without requiring intervention to reposition a lead.
Furthermore, a multi-pole lead can be configured to facilitate supplementary sensing and therapy delivery vectors, such as sensing or stimulating from one pole to a plurality of poles, sensing or stimulating from a plurality of poles to a single pole, or sensing or stimulating between a plurality of poles to a separate plurality of poles. For example, should one particular vector be ineffective at treating a particular arrhythmia, the implantable medical device, or pulse generator, can be configured to switch vectors between the poles on the lead and reattempt therapy delivery using this alternative vector. This vector switching is also applicable to sensing. Sensing characteristics can be monitored, and if a sensing vector becomes ineffective at providing adequate sensing signals, the implantable medical device can be configured to switch vectors or use a combination of one or more sensor pairs to create a new sensing signal.

In some variations, at yoke816, each of the poles of the multi-pole lead can be split into their separate poles. Each of the end-leads emanating from the yoke816can be associated with a different pole of the multi-pole lead. Some of the end-leads emanating from yoke816can be configured to provide sensor capabilities and/or therapeutic capabilities to the patient's heart. Others of the end-leads emanating from yoke816can be configured to provide sensor capabilities and/or therapeutic capabilities that are unrelated to the heart. Similarly, the cardiac pacing system herein described can include leads802, or medical leads, that provide functionality unrelated to the heart.

In some variations, the lead can be bifurcated. A bifurcated lead can comprise two cores within the same lead. In some variations, the different cores of the bifurcated lead can be biased to bend in a predetermined manner and direction upon reaching a cavity. Such a cavity can, for example, be the mediastinum. Bifurcated lead cores can be comprised of shape memory materials, for example, nitinol or other material known in the art to deflect in a predetermined manner under certain conditions. The conditions under which the bifurcated lead cores will deflect include electrical stimulation, pressure, temperature, or other conditions. In some variations, each core of the bifurcated lead can be configured so that it is steerable by the physician, or an automated system, to facilitate independent advancement of each core of the bifurcated lead, in different directions.

In some variations, sensors from the cardiac pacing system may be selected to optimize sensing characteristics of the cardiac signals. Sensing signals, composed from one or more sensor pairs, may be selected via manual operation of the programming system or automatic operation of the implanted cardiac pacing system. Sensing signals may be evaluated using one of several characteristics including signal amplitude, frequency, width, morphology, signal-to-noise ratio, and the like. The cardiac pacing system can be configured to use multiple sensors to generate one or more input signals, optionally apply filtering of varying levels to these signals, perform some form of verification of acceptance upon the signals, and use the signals to measure levels of intrinsic physiological activity to, subsequently, make therapy delivery decisions.
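Automatic selection among sensing vectors, as described above, can be sketched as a small search over pole pairs. The SNR threshold, table, and function names below are illustrative assumptions for the sketch, not values from the disclosure:

```python
import itertools

# Hypothetical sketch: evaluate sensing vectors between pole pairs of a
# multi-pole lead and switch to the pair with the best signal-to-noise
# ratio when the current vector degrades. Threshold and SNRs are assumed.

def best_sensing_vector(poles, measure_snr, current_vector, min_snr=10.0):
    """Return the current vector if adequate, else the best available pair."""
    if measure_snr(*current_vector) >= min_snr:
        return current_vector
    return max(itertools.combinations(poles, 2), key=lambda v: measure_snr(*v))

# Example with a toy SNR table between poles 0..3.
snr_table = {(0, 1): 4.0, (0, 2): 12.0, (0, 3): 7.0,
             (1, 2): 9.0, (1, 3): 15.0, (2, 3): 6.0}
measure = lambda a, b: snr_table[tuple(sorted((a, b)))]
print(best_sensing_vector(range(4), measure, current_vector=(0, 1)))  # (1, 3)
```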
Methods to perform such activities in part or in total include hardware, software, and/or firmware based signal filters, signal amplitude/width analysis, timing analysis, morphology analysis, morphological template comparison, signal-to-noise analysis, impedance analysis, acoustic wave and pressure analysis, or the like. The described analyses may be configured manually via the programming system or via automatic processes contained within the operating software of the cardiac pacing system.
As previously discussed, placing the distal end of the pacing lead in the proper location is important for successful monitoring of a patient's heart and for efficient delivery of therapy. Furthermore, during placement of the lead, a physician must avoid damaging important blood vessels and other anatomical structures of the patient. The provision of a stable platform from which to deliver the leads can reduce the likelihood of collateral damage to anatomical structures of the patient. However, if a delivery platform is remote from the patient, the patient can move relative to the platform. The present disclosure describes a lead delivery system configured for placement on an anatomical structure of the patient, thereby reducing the risk of altering the relative location between the delivery system and the patient during delivery. When the term lead delivery system is used in the present disclosure, it is contemplated that such may also be capable of delivering components other than leads; for example, the lead delivery system may also be utilized in conjunction with delivery assist components. The lead delivery system may also be referred to as a component delivery system.
FIG.17Ais an illustration1700of a side view of an exemplary lead delivery system1702for facilitating delivery of a lead, having features consistent with the present disclosure. Lead delivery system1702can be provided to facilitate placement of one or more leads into the patient. In some variations, lead delivery system1702can be configured to facilitate placement of the lead(s) into and/or through an intercostal space of the patient. For example, lead delivery system1702can be configured to facilitate placement of the lead(s) into the intercostal spaces of the patient to the right-hand side of the sternum. Alternatively, lead delivery system1702can be configured to facilitate placement of the lead(s) into the intercostal spaces of the patient to the left-hand side of the sternum. Optionally, lead delivery system1702can be configured to facilitate placement of the lead(s) into the intercostal space of the patient in the region of the cardiac notch and further through to the mediastinum.FIG.17Bis an illustration1718of a front view of the exemplary lead delivery system1702illustrated inFIG.17A.FIG.17Cis an illustration1726of a top-down view of the exemplary lead delivery system1702illustrated inFIG.17A.
Lead delivery system1702can be configured to be affixed to a patient at a desired location such that it remains stationary relative to the patient. Stable fixation to the patient provides an additional benefit where multiple medical instruments are used in concert with lead delivery system1702. For example, if a device for assisting in lead delivery is first inserted into delivery system1702prior to insertion of the lead itself, the physician will have increased confidence that the system did not move between insertion of the two devices. Optionally, lead delivery system1702can be handheld and not affixed to the patient.
Lead delivery system1702may include base1712and lead delivery device1714. Base1712can be configured to secure lead delivery device1714to one or more anatomical structures of the patient. In some variations, lead delivery system1702can be secured to an anatomical structure of the patient by use of an adhesive. For example, base1712can include an adhesive pad. In some variations, an adhesive pad can be reversibly secured to the patient. Proper placement of the adhesive pad to the patient can be accomplished based upon well-known anatomical landmarks, by imaging equipment, or the like. Lead delivery system1702may also be secured to the patient by way of a screw mechanism that securely, but reversibly, affixes lead delivery system1702to bone, cartilage or other material within the patient's body.
In some variations, base1712of lead delivery system1702can include a clamp1704. Clamp1704can be configured to secure base1712to the patient. Clamp1704can be configured to secure lead delivery device1714to one or more anatomical structures of the patient. Clamp1704can include a movable clamp platform1706and a stationary platform1715. A hook portion1708can be disposed at one end of clamp platform1706. Hook portion1708can be configured to engage with a known anatomical region of the patient. For example, hook portion1708can be configured to extend or retract to forcibly engage with the edge of the patient's sternum, while the opposite edge of the patient's sternum engages with stationary platform1715. At least a portion of clamp platform1706may rest on the sternum of the patient. In some variations, the patient's sternum will be exposed, and clamp1704can be secured directly to the sternum. In some variations, clamp platform1706can include an adhesive portion configured to be disposed between clamp platform1706and the patient to cause clamp platform1706to stick to the patient.
In some variations, clamp1704can be configured to clamp onto a single rib, multiple ribs, the xiphoid, or other anatomical structures. Clamp1704can be engaged around any portion of the chosen anatomical structure. For example, clamp1704can be configured to clamp onto the sides of an anatomical structure. In some variations, clamp1704can be configured to clamp on the top and bottom of an anatomical structure. In some variations, clamp1704can be configured to engage outwardly to secure lead delivery system1702between two anatomical structures. When secured to two anatomical structures, lead delivery system1702can be secured by expansion forces exerted by clamp1704outwardly from clamp platform1706, against the two anatomical structures. For example, clamp1704can be configured to facilitate exerting an outward pressure against two ribs of the patient. The resultant force exerted back against clamp1704can keep clamp1704in place, relative to the patient. Clamp1704can be tightened when clamp1704has been positioned on, around, and/or between the intended anatomical structure(s). In some variations, a screw, an adjustable latch, a ratcheting mechanism, or the like, can be used to adjust clamp1704. The pressure of clamping clamp1704on the anatomical structure may be adjusted with an adjustment handle1720. Adjustment handle1720can also be configured to make adjustments, or refinements, to the location of lead delivery system1702as it may become necessary to fine tune the position of lead delivery system1702after it has been secured to an anatomical structure of the patient.
As previously noted, it is important to avoid certain critical structures and vessels during lead delivery, such as the heart, lungs, pericardium, internal thoracic artery, and other major vessels of the anterior thoracic region. Exemplary lead delivery system1702, depicted inFIG.17, can facilitate the avoidance of critical structures by locating the lead insertion point proximate the lateral margin of the sternum, especially when system1702is clamped to a patient's sternum (utilizing, for example, stationary platform1715and retractable hook1708, as discussed further below). In this implementation, distal end1713of cannula1716is located proximate stationary platform1715and will result in a lead insertion location proximate the lateral margin of the sternum.
Lead delivery devices and systems of the present disclosure are not required to have a clamp1704, as depicted inFIG.17, or to necessarily be affixed to the patient in any way. For example, lead delivery devices and systems similar to those previously described and depicted inFIG.12andFIG.17(without clamp1704) may be used without fixation to the patient. Such systems have been described, for example, as facilitating the insertion of lead(s) to the side of the sternum and in the region of the cardiac notch. In one implementation, lead delivery systems1702can effect such placement by way of a physician (or other trained healthcare provider) palpating the lateral margin of the sternum, at an intercostal space, prior to making an incision or other method for point of entry (e.g., puncture). Alternatively, lead delivery systems1702may be configured to allow for a distal end of the system to be pressed against the sternum of a patient and slid until reaching the lateral margin, then dropping through the intercostal muscles to create a path for lead(s). For example, in one implementation, following the physician identifying an insertion point above a patient's sternum, stationary platform1715may be inserted through the incision down to the sternum. The physician may then slide the distal tip of stationary platform1715across and against the sternum of the patient until reaching the lateral margin, wherein the pressure applied to the lead delivery device1714causes stationary platform1715to rest against the sternum, and the distal tip of stationary platform1715to insert through the intercostal muscles at the lateral margin of the sternum. The bottoming out of stationary platform1715against the sternum prevents over insertion of lead delivery system1702, and specifically the distal tip of stationary platform1715. Once positioned, distal end1713of cannula1716can be inserted to deploy lead(s) as described herein. In one implementation, such a distal end may be configured to puncture the tissue, for example with a relatively blunt access tip, to facilitate entry into the intercostal space without requiring a surgical incision to penetrate through the intercostal muscles. A blunt access tip, while providing the ability to puncture and push through tissue, does not cut, thereby reducing the potential for damage to the pericardium or other internal organs the tip may contact should such contact occur.
Lead delivery systems configured for lead insertion proximate the lateral margin of the sternum may optionally be designed to effect lead placement to a substernal location. For example, a distal end of the lead delivery system may be shaped or curved, or may be articulable to move after passing the sternum.
Alternatively, the lead itself may be articulable in a similar manner. When lead delivery systems1702are configured to be pressed against the sternum of the patient, slid across the sternum until reaching the lateral margin, and then dropped down through the intercostal tissue immediately lateral to the sternal margin, this process may be utilized after a physician has made an incision above the sternum. Such an incision may have been made, for example, to insert a pulse generator, as previously described. In such cases, the lead delivery system may easily traverse the sternum prior to puncturing the intercostal muscles and creating a path to the mediastinum for insertion of lead(s). Proper lead delivery system and lead insertion depth determinations in such cases are facilitated by the fact that sternum and rib cage thicknesses are similar across patient populations. As such, the insertion depth of the lead delivery system may be set at a nominal sternum thickness or slightly less, and thereafter be adjusted deeper incrementally so that the lead delivery system does not extend too far within the mediastinum.
However, in some cases, lead delivery systems may be utilized in a percutaneous manner, without an incision above the sternum (or without an incision at other entry locations described herein). In these cases, the thickness of a patient's subcutaneous tissue must be accounted for. Numerous methods for proper lead depth determination have been described herein including systems, methods and software for automating the lead delivery process. These and other implementations may be modified to further account for subcutaneous tissue thickness estimations. In one example, an implanting physician may assess the thickness of subcutaneous tissue based upon specific patient attributes such as height, weight, sex, waist size, chest size, sternum length, etc. These patient attributes may be assessed individually or in combination to predict subcutaneous tissue thickness. Alternatively, direct measurement of the subcutaneous tissue thickness may be made by means such as a needle probe, ultrasound, CT scan, MRI, or the like. Information related to items such as the distance between the posterior surface of the sternum and the pericardium, the distance between the sternal margin to the thoracic vein or artery, and sternum thickness may then be used by the physician, or by an automated delivery system, to adjust the intended lead implantation location, orientation and depth.
With further reference toFIG.17, exemplary lead delivery system1702can include a lead delivery device1714configured to facilitate delivery of a lead into the patient to a desired location. Lead delivery device1714can include a lead advancer, which can be configured to incrementally advance a lead into a patient. The lead may be advanced into a patient by a predefined amount. The lead advancer can be configured to facilitate the delivery of leads to the correct position, orientation and depth within the patient. Leads delivered by lead delivery system1702may be leads configured to deliver therapeutic electrical pacing to the heart of the patient. Leads delivered by lead delivery system1702can also be leads configured to obtain physiological information about the patient, such as heart function, lung function, the performance of various blood vessels, and the like. Lead delivery device1714can be configured to advance a lead through an intercostal space of the patient and, optionally, into the mediastinum of the patient.
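The subcutaneous-thickness and insertion-depth determinations discussed above can be illustrated with a short numerical sketch in Python. The coefficients below are entirely hypothetical placeholders; the disclosure states only that attributes such as height, weight, sex, waist size, chest size, and sternum length may be assessed, individually or in combination, to predict thickness:

def estimate_subcutaneous_thickness_mm(height_cm, weight_kg, chest_cm):
    # Illustrative-only linear estimate; the coefficients are placeholders
    # and are not clinically derived values from the disclosure.
    bmi = weight_kg / (height_cm / 100.0) ** 2
    return 2.0 + 0.4 * bmi + 0.05 * (chest_cm - 90.0)

def insertion_depth_mm(sternum_mm, subcutaneous_mm, margin_mm=1.0):
    # Total travel before the distal tip should stop advancing: the tissue
    # above the sternum plus the sternum itself, less a small safety margin.
    return sternum_mm + subcutaneous_mm - margin_mm

tissue = estimate_subcutaneous_thickness_mm(height_cm=175, weight_kg=80, chest_cm=100)
print(round(insertion_depth_mm(sternum_mm=12.0, subcutaneous_mm=tissue), 1))  # 23.9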
Lead delivery device1714can be configured to position the distal end of the lead at any of the positions described in the present disclosure. Lead delivery device1714can also be configured to control an angle at which a lead is inserted into the patient. Lead delivery system1702can include a cannula1716, which may extend through the length of lead delivery device1714. Cannula1716can also extend through the lead advancer. Cannula1716can be configured to receive a lead for insertion and may be configured to accompany a lead as it is inserted into the patient. In some variations, cannula1716can be configured to receive delivery assist components (discussed below) for insertion into the patient. In some variations, lead delivery system1702can include multiple cannulas for simultaneous delivery of leads and/or delivery assist components into the patient. In some variations, a screw, an adjustable latch, a ratcheting mechanism, or the like, can be used to adjust the distance between the distal end1713of cannula1716and stationary platform1715. Such adjustments or refinements may become necessary to fine tune the position of lead delivery system1702and the location of the distal end1713of cannula1716after it has been secured to an anatomical structure of the patient. In some instances, a smaller cannula opening may ease the insertion through tissue. As such, in additional variations, the size of the cannula opening may be variably controlled by the operator. The cannula may, for example, be comprised of two cannula halves, or multiple cannula segments, that expand or separate to a desired opening size. The variably selected cannula opening size may be controlled via a screw, an adjustable latch, a ratcheting mechanism, a lever, or the like, in order to facilitate delivery of a variety of lead shapes and sizes.
Lead delivery system1702may utilize delivery assist component(s) such as a needle, a guide wire, guide catheter, sheath, expandable catheter, balloon catheter and the like. A delivery assist component can be configured to facilitate delivery of a lead into the patient. Delivery assist components may be configured to be inserted into a patient and advanced to the desired location prior to lead insertion. Alternatively, a delivery assist component can be configured to be inserted into the patient with a lead and advanced with the lead to the desired location. The delivery assist component can be used to create space and minimize damage to surrounding tissue prior to, or in connection with, the deployment of a lead into the patient. The delivery assist component can be removed from the patient once the lead has been placed at the desired location. Delivery assist components can be inserted into the patient by the lead delivery system1702in much the same way as a lead. Delivery assist components may incorporate sensors. Such sensors can include the sensor types described in the present disclosure for use on leads to monitor the location of leads with respect to patient anatomy. It is understood that delivery assist components may interact with lead delivery system1702in much the same way as leads themselves, as described herein. Careful advancement of the component into the patient is desirable. Lead delivery system1702can include a lead advancer, which can be configured to incrementally advance a lead into a patient in response to an interaction by an operator. Limiting movement of the lead advancing into the body can avoid accidental perforation and damage to anatomical structures.
In some variations, lead delivery system1702can include a trigger1724, which can be configured to activate a ratcheting mechanism to advance the lead. One pull on trigger1724connected to the ratcheting mechanism can cause the lead to be advanced a known, prescribed, amount. For example, the amount can be set to 1 mm, 2 mm, or the like. In some variations, this length can be set or programmed by the physician. In some variations, a partial pull on trigger1724can result in a partial advancement of the lead by a partial amount of the set amount. For example, where depressing the trigger fully results in an advancement of 1 mm, a partial depression of the trigger can be set to result in an advancement of 0.5 mm. The lead delivery system1702can include a limit on the number of trigger1724pulls permitted within an interval. For example, the lead delivery system1702may restrict the physician from pulling trigger1724more than one time per second. In another option, the lead delivery system1702may require the physician to actively set a trigger limit in order to permit trigger1724pulls in excess of the permitted interval.
Lead delivery system1702can include a locking mechanism activated by a locking switch1728that can reversibly lock a lead with respect to the lead delivery system1702. When locked, the lead being delivered to the patient can be engaged with delivery system1702such that it cannot be moved independent of movement from, say, the ratcheting system. When unlocked, the lead can move freely within cannula1716of lead delivery system1702. Lead delivery system1702can be further configured to only permit movement of the lead in one direction when locking switch1728is in the unlocked position. Where a delivery assist component is used and unlocked from lead delivery system1702, the physician can remove the delivery assist component, such as a needle, from cannula1716. The physician may then insert another delivery assist component, or a lead, into cannula1716of lead delivery system1702. The physician can lock the lead, or the new delivery assist component, for example, into the ratcheting mechanism of lead delivery system1702. The physician, or an automated system, can then advance the lead within the patient to a depth indicated by the previous component's readout. In some variations, the physician can use the previous depth readout with sensors or physical markers on the lead to ensure proper placement of the lead.
While the lead is being inserted into the patient to the desired location, the movement of the lead can be metered. Transverse movement of the lead can be metered, as well as the depth of the lead into the patient. Metering the movement of the lead can avoid excessive movement of the lead. In some variations, movement can be metered by a ratcheting mechanism and the magnitude of the movement of the lead can be presented to the operator. For example, a reading indicating the amount of movement can be presented to the operator, such as through reading window1722. In some variations, lead delivery system1702can be configured to coordinate with real-time imaging equipment to assess the relative location of the lead being delivered by lead delivery system1702. Sensor(s) associated with lead delivery system1702can facilitate delivery. Sensor(s) can be disposed on the lead delivery device1714, remote from the lead delivery device1714, such as on a gurney, or in an operating room, on the lead itself, or in other locations.
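The metered, trigger-driven advancement and the one-pull-per-interval limit described earlier in this passage can be sketched as follows. The class name LeadAdvancer and its interface are hypothetical; the 1 mm per full pull and one-pull-per-second values follow the examples given in the text:

import time

class LeadAdvancer:
    """Sketch of trigger-metered lead advancement; illustrative only."""
    def __init__(self, full_pull_mm=1.0, min_interval_s=1.0):
        self.full_pull_mm = full_pull_mm      # programmable per-pull advancement
        self.min_interval_s = min_interval_s  # rate limit between pulls
        self.total_mm = 0.0
        self._last_pull = None

    def pull(self, fraction=1.0, now=None):
        """Advance by a fraction of the set amount; reject pulls that arrive
        faster than the permitted interval. Returns millimeters advanced."""
        now = time.monotonic() if now is None else now
        if self._last_pull is not None and now - self._last_pull < self.min_interval_s:
            return 0.0  # too soon: the pull is ignored and the lead does not move
        self._last_pull = now
        step = self.full_pull_mm * max(0.0, min(1.0, fraction))
        self.total_mm += step
        return step

adv = LeadAdvancer()
print(adv.pull(now=0.0))       # 1.0  (full pull advances 1 mm)
print(adv.pull(0.5, now=0.5))  # 0.0  (rejected: within 1 s of the last pull)
print(adv.pull(0.5, now=1.5))  # 0.5  (half pull advances 0.5 mm)
print(adv.total_mm)            # 1.5

Rejecting early pulls outright, rather than queuing them, matches the stated aim of limiting movement of the advancing lead to avoid accidental perforation.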
Sensor(s) may be utilized to help determine an appropriate insertion point for the lead by, for example, identifying blood-filled vessels such as arteries and veins below the surface of the skin. An example of such identification of subcutaneous vessels is described in relation toFIGS.9A and9B. Similarly, sensors can be used to identify the location of ribs, or other anatomy. The use of sensors of the types identified herein facilitates determination of an appropriate insertion point that will avoid damage to critical anatomy. Sensor(s) can also be utilized to determine a proper depth in the patient for the distal end of the lead, or proper positioning with respect to specific anatomy. As previously described, different tissues within the patient's body can demonstrate varying characteristics. The differing physiological characteristics of the tissues of the body can facilitate placement of the delivery system and/or lead at the desired location. Lead delivery system1702can be configured to monitor the physiological characteristics of the tissues surrounding the distal end, or advancing end, of the lead and/or delivery assist component being delivered to the desired position. Physiological sensors, such as pressure sensors, impedance sensors, accelerometers, pH sensors, temperature sensors, and the like, can monitor the characteristics of the anatomy at the end of the implanted, or advancing, lead or device. Lead delivery system1702can be configured to determine the location of the lead being implanted based on the detected physiological characteristics as has been described with reference toFIGS.10-13and at other locations within the present disclosure. Lead delivery system1702can be configured to provide real-time feedback to an implanting physician based on readings from the above-mentioned sensors. Feedback can be provided with indicators, alarms or the like.
Lead delivery system1702can be automated. Automating the lead delivery system can allow a physician to set up the system and then rely on sensors and computer control of lead delivery system1702to deliver the lead to the desired location. In some variations, the lead delivery system1702can be semi-automatic, where measurements and advancements made by lead delivery system1702occur automatically, but only after the physician reviews certain measurements or replies to prompts provided by lead delivery system1702.
FIG.18is an illustration of a schematic diagram1800showing components of lead delivery system1702having features consistent with the current subject matter. Lead delivery system1702can include, or be associated with, a computing device1802that can be configured to control the operation of delivery system1702. Computing device1802can include processor(s)1804configured to cause computing device1802to transmit signals to the various elements of the lead delivery system1702and/or other devices to control lead delivery system1702. Computing device1802can also be configured to control other devices in concert with lead delivery system1702. Computing device1802can include electronic storage1806to store computer-readable instructions for execution by processor(s)1804. The computer-readable instructions can cause processor(s)1804to perform functions consistent with the present disclosure. The functions that can be performed include the functions described herein attributable to a physician.
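As a hedged illustration of how detected physiological characteristics might be mapped to a location determination, consider the following Python sketch. The threshold values are placeholders; the disclosure states only that tissues demonstrate varying characteristics, such as pressure and impedance, that the system can use to determine the location of the advancing lead and to provide feedback:

def classify_tip_location(reading):
    """Illustrative classifier for sensor-guided placement; the thresholds
    are hypothetical and would require clinical derivation in practice."""
    impedance = reading["impedance_ohm"]
    pressure = reading["pressure_mmhg"]
    if impedance > 1500:
        return "high impedance: possible air-filled space, warn operator"
    if pressure > 25:
        return "elevated pressure: possible vessel or cardiac contact, warn operator"
    return "soft tissue: advancement may continue"

print(classify_tip_location({"impedance_ohm": 600, "pressure_mmhg": 8}))
# soft tissue: advancement may continue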
Sensors disposed on an advancing lead, a delivery assist component, or the lead delivery system1702(all referred to as component sensors1808inFIG.18), can be used to facilitate the identification of an insertion point, and the delivery of a lead, as discussed herein. External sensor machinery1810such as x-ray machines, fluoroscopy machines, ultrasound machines, and the like, can also be used to assist in the lead delivery process. Computing device1802can be in communication with one or more component sensors1808and/or external sensors1810. Computing device1802may communicate with such sensors through wired or wireless communication systems. As described throughout the present disclosure, such sensors can be used by computing device1802to determine an insertion point that is optimally placed with respect to anatomy such as the sternum, ribs, or critical arteries. The sensors can also be used by computing device1802to determine a safe path of advancement and fixation for a lead, which will avoid damage to critical structures and provide optimal distal end placement for effective pacing and sensing. Optimal placement effected by an automated delivery system1702, in conjunction with computing device1802, can result in the distal end of a lead being placed in any of the locations described within the present disclosure (for example, intercostally into the mediastinum, or to just beyond the innermost intercostal muscle, etc.).
Computing device1802can be further configured to control one or more actuators1814disposed on a lead delivery system1702. The lead delivery system can comprise motors configured to advance or retract a delivery assist component and/or lead, or to effect lateral movements, or to change the angle of advancement or retraction of the lead. Computing device1802and the automated lead delivery system can be further configured to present information associated with the placement of a lead and/or delivery assist component via indicators, alarms, or on a screen. Computing device1802can be in electronic communication with a display1812. Computing device1802can be configured to cause a presentation on display1812of information associated with the advancing lead. For example, measurements obtained by sensor(s)1808and/or1810can be processed by processor(s)1804to provide images or representations of anatomy in the vicinity of an advancing lead. Computing device1802can be configured to cause presentation of warnings on display1812. For example, computing device1802can be configured to cause an indication to be presented on display1812that the end of the lead has reached the desired location within the patient. Display1812can display an indication of damage to tissues caused by an advancing lead. Display1812can display an indication of future potential damage of tissues allowing the operator to stop the procedure or determine solutions to circumvent problems. In some variations, processor(s)1804can be configured to determine solutions to circumvent problems and cause the solutions to be presented on the display1812.
While components have been described herein in their individual capacities, it will be readily appreciated that the functionality of individually described components can be attributed to one or more other components or can be split into separate components. This disclosure is not intended to be limiting to the exact variations described herein, but is intended to encompass all implementations of the presently described subject matter.
In the descriptions above, phrases such as "at least one of" or "one or more of" may occur followed by a conjunctive list of elements or features. The term "and/or" may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases "at least one of A and B;" "one or more of A and B;" and "A and/or B" are each intended to mean "A alone, B alone, or A and B together." A similar interpretation is also intended for lists including three or more items. For example, the phrases "at least one of A, B, and C;" "one or more of A, B, and C;" and "A, B, and/or C" are each intended to mean "A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together." Use of the term "based on," above, is intended to mean "based at least in part on," such that an unrecited feature or element is also permissible.
The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. | 130,548
11857381 | DETAILED DESCRIPTION OF THE INVENTION
With reference now to the drawings, and in particular toFIGS.1through10thereof, a new anatomical marker embodying the principles and concepts of an embodiment of the disclosure and generally designated by the reference numeral10will be described. As best illustrated inFIGS.1through10, the anatomical localization device10generally comprises an elongate handle12, which is configured to be grasped in a hand of a user proximate to its first end14. As is shown inFIG.2, the elongate handle12comprises a plurality of nested sections16and thus is selectively extensible. The plurality of nested sections16may comprise two nested sections16, which provides for sufficient compaction of the anatomical localization device10to facilitate storage when not in use, or more than two nested sections16. A marker assembly18is attached to a second end20of the elongate handle12and is configured for selective actuation to impart a mark to skin of a subject. The marker assembly18comprises a plastic composite so that the marker assembly18is radiolucent. The marker assembly18may comprise, for example, polyetherimide, polycarbonate, polyethylene, polypropylene, polyoxymethylene, or the like. The elongate handle12may be comprised of any conventional rigid material providing sufficient stability when using the marker assembly18, particularly as the elongate handle12need not be radiolucent. Thus, while plastics may be utilized, other materials such as stainless steel, titanium or other metals may be preferred for their durability and mass.
One configuration anticipated for the marker assembly18is depicted inFIGS.7and8, wherein the marker assembly18comprises a first tube22, a second tube24, a marking pen26, and a biaser28. The second tube24is slidably attached to the first tube22, with an upper limit30of the second tube24being positioned within the first tube22and a lower limit32of the second tube24being external to the first tube22. The first tube22and the second tube24may be cylindrical, as is shown inFIGS.1-5. As shaping of the first tube22and the second tube24is not critical to functioning of the anatomical localization device10, alternative shaping is anticipated by the present invention, such as, but not limited to, cuboid, polyhedric, or the like. The first tube22is internally threaded adjacent to its upper end34. A cap36is selectively threadedly attachable to the first tube22to close the upper end34. A channel38extends axially through the cap36. As is shown inFIG.6, the elongate handle12extends substantially radially from the first tube22. The present invention also anticipates the elongate handle12extending angularly from the first tube22, for example, toward the upper end34of the first tube22. As will become apparent, angling of the elongate handle12toward the upper end34of the first tube22may facilitate pressing of the marker assembly18against skin of a subject. The marking pen26, which is attached to and positioned substantially within the first tube22, extends also into the second tube24. The marking pen26is circumferentially complementary to and positioned through the channel38so that the marking pen26is frictionally attached to the cap36. The present invention also anticipates the marking pen26being attached to the cap36or to the first tube22by other attachment means, such as, but not limited to, adhesives, clips, or the like.
The biaser28is attached to the first tube22and is operationally engaged to the second tube24so that the second tube24is biased to a first position, as is shown inFIGS.5and7, wherein the second tube24extends from the first tube22past a tip40of the marking pen26. With the second tube24in the first position, the tip40is prevented from marking the skin of the subject. The second tube24is configured to be pressed against the skin over an anatomical target, such as a shoulder joint, knee joint, or the like, such that the second tube24retracts into the first tube22, as is shown inFIGS.6and8, and the marking pen26imparts a mark to the skin. As is shown inFIGS.7and8, a first ring42is attached to and positioned within the first tube22proximate to its upper end34. A second ring44is attached to and extends radially from the upper limit30of the second tube24. The biaser28comprises a spring46, which is positioned in the first tube22between the first ring42and the second ring44. The spring46is configured to be tensioned upon pressing of the lower limit32of the second tube24against the skin of the subject and to rebound upon lifting of the marker assembly18from the skin. The present invention anticipates other configurations of the marker assembly18, such as, but not limited to, the marking pen26being spring loaded and selectively actuatable, a stamp actuated by a trigger attached to the elongate handle12proximate to its first end14, or the like. An indicator48, which is attached to the marker assembly18, is radiopaque and thus configured to be radiographically visualized. The indicator48comprises barium sulfate, aluminum, stainless steel, titanium, gold, platinum, tantalum, or the like. As is shown inFIGS.5and6, the indicator48is attached to the lower limit32of the second tube24. The present invention also anticipates the indicator48being attached to a lower end50of the first tube22. The elongate handle12is configured to be manipulated by the user to motivate the marker assembly18across an anatomical region of a subject who is being radiographically visualized. Such radiographic imaging is performed routinely using fluoroscopy, radiography, and X-ray computed tomography. The indicator48thus can be localized over an anatomical target, enabling the user to selectively actuate the marker assembly18to mark the skin over the anatomical target. The elongate handle12allows the user to utilize the marker assembly18without positioning their hand within an area being exposed to the radiological imaging. The fully extended length of elongate handle12will typically be up to about 24.0 inches. In use, the anatomical localization device10enables a radiographically guided method of marking an anatomical target52. The method52comprises a first provision step54, which entails providing a radiographic imaging device56. A second provision step58of the method52is providing an anatomical localization device10, according to the specification above. A first use step60of the method52is positioning a subject for imaging by the radiographic imaging device56. A second use step62of the method52is grasping the elongate handle12and positioning the marker assembly18on the skin of the subject proximate to the anatomical target. A third use step64of the method52is actuating the radiographic imaging device. A fourth use step66of the method52is manipulating the elongate handle12to position the indicator48over the anatomical target. 
The elongate handle12, in being elongated, allows the user to manipulate the marker assembly18with little or no risk of radiation exposure from the radiographic imaging device56. A fifth use step68of the method52is actuating the marker assembly18to mark the skin over the anatomical target. With the marker assembly18comprising a first tube22, a second tube24, a marking pen26, and a biaser28, as described in the specification above, the step of actuating the marker assembly18comprises application of a force to the elongate handle12to press the second tube24against the skin of the subject. In so doing, the second tube24is retracted into the first tube22and the tip40of the marking pen26contacts the skin over the anatomical target.
With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of an embodiment enabled by the disclosure, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by an embodiment of the disclosure. Therefore, the foregoing is considered as illustrative only of the principles of the disclosure. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the disclosure to the exact construction and operation shown and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the disclosure.
In this patent document, the word "comprising" is used in its non-limiting sense to mean that items following the word are included, but items not specifically mentioned are not excluded. A reference to an element by the indefinite article "a" does not exclude the possibility that more than one of the element is present, unless the context clearly requires that there be only one of the elements. | 8,946
11857382 | DETAILED DESCRIPTION
Referring toFIGS.1-7, an imaging element cleaning apparatus configured in accordance with one or more embodiments of the disclosures made herein (i.e., the imaging element cleaning apparatus100) is shown. The imaging element cleaning apparatus100includes a chassis103, a surgical system attachment body105, a sheath mount110, a sheath assembly115, a rotational movement actuator120(e.g., a motor) and an axial movement actuator125(e.g., a motor). The rotational movement actuator120and the axial movement actuator125jointly define a motion control device128. The chassis103, the surgical system attachment body105, the sheath mount110and the motion control device128jointly define a drive unit129. The surgical system attachment body105is adapted for being engaged with one or more structural components of a robotic surgical system for at least partially securing the imaging element cleaning apparatus100thereto. A visualization scope131, as shown inFIG.4, is one example of such a structural component of a robotic surgical system. In one or more embodiments, as shown, the surgical system attachment body105can be (or comprise) a scope attachment body106adapted to be secured to a structural component of the visualization scope131. More specifically, in one or more embodiments, the scope attachment body106includes a central passage126(i.e., a securement portion) adapted for having an extension portion132of the visualization scope131disposed therein. Examples of robotic surgical systems include, but are not limited to, those available from Intuitive Surgical, Zimmer Biomet, Medtronic, Stryker, Siemens Healthineers, Johnson & Johnson, and Auris Health. Although disclosed in the context of a robotic surgical system, imaging element cleaning apparatuses configured in accordance with one or more embodiments of the disclosures made herein may be implemented in a manner adapted for manual laparoscopic imaging devices and surgical methodologies. Examples of commercially available manual laparoscopic imaging devices include, but are not limited to, endoscopes manufactured under brand names of Karl Storz, Linvatec, Olympus, Richard Wolf, Stryker and the like.
The sheath assembly115comprises a scope sheath130, a cleaning member135and a coupling element140. The coupling element140is movably attached to the scope sheath130. For example, the coupling element140(i.e., a control wire) can extend through a passage of the scope sheath such as an open or closed channel, groove, or the like. The cleaning member135is attached to a first end portion137of the coupling element140and is adjacent a first end portion145of the scope sheath130. The scope sheath130may be detachably or fixedly attached to the sheath mount110. The central passage126preferably has a centerline longitudinal axis L1that extends colinearly with a centerline longitudinal axis L2of the scope sheath130. The scope sheath130may be a thin-walled tube made from a metallic, composite and/or polymeric material. The coupling element140may be a flexible small-diameter wire, cable, tubular structure or the like made from a metallic, fibrous, polymeric material and/or the like. In some embodiments, the coupling element140is characterized by an elongated small diameter structure that offers at least a limited degree of bendability in combination with high torsional rigidity.
In other embodiments, the coupling element140is characterized by an elongated small diameter structure that offers a given (e.g., predictable) amount of torsional compliance. Based on these characterizing attributes, examples of coupling element140include, but are not limited to, solid metallic wire, tube, spiraled metal wire, a polymeric filament(s), a composite filament(s) or the like. In one or more embodiments, the coupling element140may be used to deliver a flowable material (e.g., gas or liquid material) to the first end portion145of the scope sheath130.
As shown, the sheath assembly115(i.e., a "cleaning cartridge") docks to the drive unit129at two locations. The sheath assembly115comprises a sheath mounting body150attached to a second end portion146of the scope sheath130. The sheath mounting body150is detachably attached to the sheath mount110such as by pins151(i.e., protrusions) that each engage a mating groove (i.e., mating features for providing an interlocked interface). Such mating features may be configured for defining/providing a desired angular "clocking" of the sheath assembly115relative to the drive unit129. The interlocked interface either alone or in combination with a supplemental interlocking structure (e.g., a spring-loading structure) may be configured to secure the scope sheath130in a positionally and rotationally locked configuration relative to the sheath mount110. The sheath assembly115also comprises a coupling element engagement body155attached to a second end portion142of the coupling element140such as via a fastener that secures the coupling element at a fixed location along a length of the coupling element140. The coupling element engagement body155is located on the coupling element140at a prescribed distance from the cleaning member135. As shown, the coupling element engagement body155may be embodied as a tapered body. The coupling element engagement body155may be selectively and securely engageable with a mating engagement body160of the rotational movement actuator120. These engagement arrangements of the scope sheath130and the coupling element140individually and jointly provide for a simple, yet effective and efficient approach for mechanically securing the sheath assembly115to the drive unit129such as to enable selective interchangeability/replacement of sheath assemblies (e.g., 0-degree wiper sheath assembly or 30-degree wiper sheath assembly).
A robotic arm mount of a robotic surgical system is another example of a structural component of a robotic surgical system through which an imaging element cleaning apparatus in accordance with one or more embodiments of the disclosures made herein may be at least partially secured. As shown inFIGS.6and7, a robotic arm connector165may be moveably attached to the scope sheath130to provide for connection with a robotic arm mount such as by the scope sheath130passing through central passages166of the robotic arm connector165. Typically, connection of an imaging element cleaning apparatus configured in accordance with one or more embodiments of the disclosures made herein via the robotic arm connector165is used in combination with one or more other robotic arm connections located remotely from the robotic arm connector165. Spacing between the robotic arm connections serves to provide for stability and rigidity of the as-mounted imaging element cleaning apparatus.
Referring toFIGS.6and7, the robotic arm connector165may be configured to mimic the structural support provided for by a cannula typically mounted on a robotic arm and through which a surgical instrument (e.g., a visualization scope) passes. As is well known, a length of the cannula serves to provide an elongated bearing-surface upon which an extension portion of the surgical instrument (e.g., the extension portion132of the visualization scope131) bears and slides and to provide support against loadings resulting from forces generated by pivoting of the surgical instrument while the surgical instrument extends through a patient's abdominal wall. To this end, the robotic arm connector165may be configured in a similar manner with spaced apart and/or elongated support portions166that each carry a respective and linearly aligned central passage167. The robotic arm connector165includes an arm engaging portion168configured for being engaged by a mating engagement portion of an arm of a robotic surgical system.
As related to the motion control device128best shown inFIGS.1and2, the rotational movement actuator120is a rotational movement imparting portion thereof and the axial movement actuator125is an axial movement imparting portion thereof. Thus, a person of ordinary skill will appreciate that the rotational movement actuator120is a first movement actuator adapted to provide rotational movement of a structure coupled to a motion imparting portion thereof and the axial movement actuator125is a second movement actuator adapted to provide axial movement of a structure coupled to (e.g., directly attached to) a motion imparting portion thereof. In one or more embodiments, as shown, a mounting portion of the axial movement actuator125is attached to the chassis103or the surgical system attachment body105, a mounting portion of the rotational movement actuator120is attached to an axial movement imparting portion of the axial movement actuator125, and the coupling element140is attached to a rotational movement imparting portion of the rotational movement actuator120. The rotational movement actuator120can be attached to the chassis103or the surgical system attachment body105for limiting translational movement of the entire rotational movement actuator120to being along a particular axial translation axis, e.g., via a motion control device127(e.g., a slide rail) that limits movement to being along an axial translation axis thereof.
This arrangement of the motion control device128enables independent rotational movement and axial movement of the cleaning member135relative to the first end portion145of the scope sheath130via selective actuation of the rotational movement actuator120and the axial movement actuator125. It is contemplated and disclosed herein that, in one or more other embodiments, such independent rotational movement and axial movement of the cleaning member135may be provided via a single movement actuator or more than two movement actuators. It is also contemplated and disclosed herein that, in one or more other embodiments, such single movement actuator or more than two movement actuators may each be an actuator (e.g., motor) integral with a structural component of a robotic surgical system, e.g., motors integral with a robotic arm of a robotic surgical system controlled by a control apparatus of the robotic surgical system. In one or more embodiments, as shown, operation of the motion control device128(e.g., the rotational movement actuator120and the axial movement actuator125) may be controlled via a movement controller161(e.g., one or more micro-controllers comprising basic programmable control logic instructions/code/software). For example, in response to the movement controller161receiving a cleaning event trigger signal (e.g., via a manual actuation button or system-issued signal), the movement controller161issues one or more corresponding signals for causing the motion control device128to correspondingly move the cleaning member rotationally and, optionally, axially.
Disclosed now is a method of facilitating cleaning of an imaging element of a visualization scope of a robotic surgical system in accordance with one or more embodiments of the disclosures made herein. The objective of such a method includes arriving at an imaging element cleaning apparatus configured in accordance with one or more embodiments of the disclosures made herein (e.g., as discussed above in reference toFIGS.1-7), arriving at a cleaning device enabled visualization scope (e.g., as discussed above in reference toFIGS.4and6) or the like. Following are examples of steps that may be taken for performing such a method of facilitating cleaning of an imaging element of a visualization scope of a robotic surgical system, in the order described or otherwise. A step may be performed to provide a drive unit, such as that discussed above, where the drive unit comprises one or more surgical system attachment bodies, a sheath mount attached to at least one of the surgical system attachment bodies and a motion control device having a mounting portion thereof attached to the surgical system attachment body and where the surgical system attachment body is adapted to be engaged with one or more structural components of the robotic surgical system. A step may be performed to provide a sheath assembly, such as that discussed above, where the sheath assembly comprises a scope sheath, a cleaning member and a coupling element, where the coupling element is movably attached to the scope sheath and where the cleaning member is attached to a first end portion of the coupling element and is adjacent a first end portion of the scope sheath. A step may be performed to attach a second end portion of the scope sheath to the sheath mount. A step may be performed to attach a second end portion of the coupling element to a rotational movement imparting portion of the motion control device.
A step may be performed to engage at least one of the surgical system attachment bodies with one or more structural components of the robotic surgical system. Although the invention has been described with reference to several exemplary embodiments, it is understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of the invention in all its aspects. Although the invention has been described with reference to particular means, materials and embodiments, the invention is not intended to be limited to the particulars disclosed; rather, the invention extends to all functionally equivalent technologies, structures, methods and uses such as are within the scope of the appended claims. | 14,487 |
11857383 | DETAILED DESCRIPTION
The present disclosure encompasses washing appliances, washing systems, washing controllers, and methods of use thereof designed, developed, and built by the inventors. The appliances, systems, and controllers are capable of high throughput cleaning of medical devices and accessory devices normally relegated to waste such as those used during cardiovascular procedures. The ability to clean medical devices and accessories can be used during reprocessing of single-use devices in conjunction with other reprocessing methods such as those used for inspection of lumen devices for occlusions in the lumen. The washing appliances, washing systems, washing controllers, and methods of use thereof will be understood from the accompanying drawings, taken in conjunction with the accompanying description. It is noted that, for purposes of illustrative clarity, certain elements in various drawings may not be drawn to scale. Several variations of the system are presented herein. It should be understood by those of skill in the art, that various components, parts, and features of the different variations may be combined together and/or interchanged with one another, all of which are within the scope of the present application, even though not all variations and particular variations are shown in the drawings. It should also be understood that the mixing and matching of features, elements, and/or functions between various variations is expressly contemplated herein so that one of ordinary skill in the art would appreciate from this invention that the features, elements, and/or functions of one variation may be incorporated into another variation as appropriate, unless described otherwise.
I. Washing Appliance
One aspect of the present disclosure encompasses a washing appliance for washing devices. The washing appliance comprises a rinse tank, manifolds for suspending devices to be washed in the washing appliance, and spray assemblies for spraying devices suspended in the rinse tank.
(a) Devices
The washing appliance of the instant disclosure can be used to wash any device, including single-use medical devices used during cardiovascular procedures (cath lab) and accessory devices. Accordingly, as used herein, the term "medical device" comprises a medical device used during a medical procedure and any other accessory that may be used during the procedure. In some aspects, the washing appliance is used to wash a medical device used during cardiovascular procedures (cath lab) and accessory devices. Non-limiting examples of medical devices and accessories include medical devices comprising lumens, catheters, endoscopes, dilators, dilator accessories, introducer sheaths, guidewires, balloons, vascular closure devices, atherectomy devices, fractional flow reserve (FFR) wires, racetrack tubing, cables and wiring, medical tubing, and hypodermic, transseptal, and other needles. In some aspects, the medical device comprises one or more lumen such as catheters and endoscopes. Catheters and endoscopes are extensively used to perform an array of minimally invasive procedures. An endoscope is an illuminated optical, typically slender, and tubular instrument (a type of borescope) used to look deep into the body by way of openings such as the mouth or anus. Endoscopes use tubes which are only a few millimeters thick to transfer illumination in one direction and high-resolution images in real time in the other direction, resulting in minimally invasive surgeries.
Endoscopes are used to examine internal organs such as the throat or esophagus. Specialized endoscopes are named after their target organ. Examples include the cystoscope (bladder), nephroscope (kidney), bronchoscope (bronchus), arthroscope (joints), colonoscope (colon), and laparoscope (abdomen or pelvis). Endoscopes can be used to visually examine and diagnose, or to assist in surgery such as an arthroscopy. For non-medical uses, similar instruments are called borescopes. Some endoscopes comprise working channels comprising a lumen to allow entry of medical instruments or manipulators. As used herein, the term "lumen device" refers to a medical device comprising a lumen and accessories used during procedures employing the lumen device. A catheter is a thin tube made from medical-grade materials serving a broad range of functions in medicine. Catheters can be inserted into the body to treat diseases or perform a surgical procedure. By modifying the material or adjusting the way catheters are manufactured, it is possible to tailor catheters for cardiovascular, urological, gastrointestinal, neurovascular, and ophthalmic applications. In most uses, a catheter is a thin, flexible tube ("soft" catheter), though catheters are available in varying levels of stiffness depending on the application. A catheter left inside the body, either temporarily or permanently, may be referred to as an "indwelling catheter" (for example, a peripherally inserted central catheter). A permanently inserted catheter may be referred to as a "permcath." Catheters can be inserted into a body cavity, duct, vessel, brain, skin, or adipose tissue. Functionally, catheters allow drainage and administration of fluids or gases, access by surgical instruments, and can also perform a wide variety of other tasks depending on the type of catheter. Placement of a catheter into a particular part of the body may allow:
Administration of fluids (i.e., heparinized saline, contrast dyes) during an electrophysiology, or related, study;
Fluid sampling during an electrophysiology, or related, study;
Direct blood pressure measurement during an electrophysiology, or related, study;
Angioplasty, angiography, balloon septostomy, balloon sinuplasty, cardiac catheterization, and catheter ablation;
Draining urine from the urinary bladder, as in urinary catheterization, e.g., intermittent catheters or the Foley catheter, or even when the urethra is damaged, as in suprapubic catheterization;
Drainage of urine from the kidney by percutaneous (through the skin) nephrostomy;
Drainage of fluid collections, e.g., an abdominal abscess;
Drainage of air from around the lung (pigtail catheter);
Administration of intravenous fluids, medication, or parenteral nutrition with a peripheral venous catheter;
Direct measurement of blood pressure in an artery or vein;
Direct measurement of intracranial pressure;
Direct measurement of blood flow;
Intravascular ultrasound;
Optical coherence tomography (OCT) imaging;
Near-infrared spectroscopy (NIRS);
Administration of anesthetic medication into the epidural space, the subarachnoid space, or around a major nerve bundle such as the brachial plexus;
Administration of oxygen, volatile anesthetic agents, and other breathing gases into the lungs using a tracheal tube;
Subcutaneous administration of insulin or other medications, with the use of an infusion set and insulin pump;
Administering drugs or fluids into a large-bore catheter positioned either in a vein near the heart or just inside the atrium;
Measuring pressures in the heart;
Inserting fertilized embryos from in vitro fertilization into the uterus;
Providing quick access to the central circulation of premature infants using an umbilical line;
Attaching catheters to various other devices;
Hemodialysis using a double or triple lumen, external catheter; and
Artificial insemination.
Non-limiting examples of needles used in the medical field include:
Abrams' needle: A biopsy needle designed to reduce the danger of introducing air into tissues; used in pleural biopsy.
Agar cutting needle: A needle with a sharpened punch end and an obturator to pick up and transfer a sample of agar media.
Aneurysm needle: A needle with a handle, used in ligating blood vessels.
Aspirating needle: A long, hollow needle for removing fluid from a cavity.
Brockenbrough needle: A curved steel transseptal needle within a Brockenbrough transseptal catheter; used to puncture the interatrial septum.
Cataract needle: A needle used in removing a cataract.
Chiba needle: A common type of thin, flexible biopsy needle with a small-diameter needle and a stylet in the needle lumen.
Cope's needle: A blunt-ended hook-like needle with a concealed cutting edge and snare, used in biopsy of the pleura, pericardium, peritoneum, and synovium.
Deschamps' needle: A needle with the eye near the point and a long handle attached; used in ligating deep-seated arteries.
Discission needle: A special form of cataract needle.
Emulsifying needle: A small tube with Luer fittings at each end for mixing a liquid and an emulsifying agent by pushing the liquids through the tubing into opposing syringes. A simple type of static mixer.
Hagedorn's needles: Surgical needles that are flat from side to side and have a straight cutting edge near the point and a large eye.
Hasson trocar: A blunt trocar inserted into the peritoneal cavity after a celiotomy. Used for insufflation and introduction of a laparoscope.
Knife needle: A slender knife with a needle-like point, used in discission of a cataract and other ophthalmic operations, as in goniotomy and goniopuncture.
Ligature needle: A slender steel needle with a long handle and an eye in its curved end, used for passing a ligature underneath an artery.
Menghini needle: A needle that does not require rotation to cut loose the tissue specimen in a biopsy of the liver. This represented a significant advance over the previously slow and hazardous methods of liver biopsy.
Reverdin's needle: A surgical needle having an eye that can be opened and closed by means of a slide.
Seldinger needle: A needle with a blunt, tapered external cannula with a sharp obturator; used for the initial percutaneous insertion characteristic of the Seldinger technique for arterial or venous access.
Silverman needle: An instrument for taking tissue specimens, consisting of an outer cannula, an obturator, and an inner split needle with longitudinal grooves in which the tissue is retained when the needle and cannula are withdrawn.
Stop needle: A needle with a shoulder that prevents it from being inserted beyond a certain distance.
Transseptal needle: A needle used to puncture the interatrial septum in transseptal catheterization.
Tuohy needle: One in which the opening at the end is angled so that a catheter exits at an angle. The end of the Tuohy needle provides controlled penetration during the administering of spinal anesthesia and placement of an epidural spinal catheter.
Veress needle: Named for János Veress, a Hungarian doctor. A Veress needle is a spring-loaded needle originally used to drain ascites and evacuate fluid and air from the chest. Veress needles were later adapted for use in laparoscopy.
The diameter of a lumen in a medical device can range from about 0.1 mm to about 5 mm. For instance, the diameter of a lumen can range from about 0.001″ to about 0.1″, or from about 0.01″ to about 0.05″, internal diameter. The length of a lumen of a device can range from about 1 cm to a few meters. For instance, the length of a lumen can range from about 5 cm to about 5 m, from about 20 cm to about 4 m, or from about 50 cm to about 2 m. In some aspects, the length of a lumen can range from about 50 cm to about 150 cm. The gauge of a needle can range from about 50 ga to about 5 ga, from about 40 ga to about 10 ga, or from about 30 ga to about 15 ga. Some devices can comprise multiple lumens, each performing one or more functions. These lumens can serve as inflation ports, fluid-transfer channels, guidewire access points, or even steering lumens, among others. As such, devices can have one lumen or multiple lumens. For instance, a device can have 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 or more lumens. The lumens can be formed as a multi-lumen tube extruded or joined into a single tube, or can be separately bundled inside a device. (b) Rinse Tank The washing appliance comprises a rinse tank. The rinse tank comprises an interior space defined by a top, a bottom, and walls. The rinse tank also comprises an opening in the walls, and an access door for reversibly sealing the opening. A drain in the bottom of the rinse tank is operable to drain wash and spray fluids from the washing appliance during operation. The rinse tank can be constructed from any appropriate material, including chemically and biologically inert, easy-to-clean materials. In some aspects, the rinse tank is partially or completely constructed of a transparent material, which can allow monitoring of the washing process during use. Dimensions of the rinse tank can and will vary depending on the device to be washed and the number of devices to be washed in a single washing appliance, among other variables. For instance, as the washing appliance is operable to suspend a device during washing, the rinse tank comprises a height capable of accommodating the full length of the device when suspended in the rinse tank.
Additionally, the width and depth of the rinse tank will vary to accommodate the number of spray assemblies and manifolds needed for the number of devices to be washed. The opening of the rinse tank and the access door for reversibly sealing the opening are generally sufficiently wide to allow access to the internal components of the washing appliance, including the one or more spray assemblies and one or more manifolds. For instance, in one aspect of a method of using the washing appliance, a manifold can be removed from the rinse tank, loaded with devices to be washed, returned to and attached in the rinse tank, and removed again from the rinse tank to collect the washed devices. Accordingly, an opening of the rinse tank is generally of a width and height that permits the loading and unloading of devices and accessories before and after washing, such that the washed device can be collected without contaminating it. (c) Spray Assembly The washing appliance of the instant disclosure comprises one or more spray assemblies disposed in the interior space of the rinse tank. A spray assembly comprises a spray tube disposed in the interior space of the rinse tank. The spray tube extends through a spray tube opening in the top of the rinse tank into the interior space of the rinse tank. The spray tube extends along a longitudinal axis extending from the top to the bottom of the rinse tank. The spray tube comprises a spray fluid opening at a spray tube proximal end external to the rinse tank and a spray tube distal end in the interior space of the rinse tank. The spray fluid opening is in fluid communication with a source of spray fluid. The spray assembly also comprises one or more spray nozzles attached to the spray tube. The one or more nozzles are in fluid communication with the source of spray fluid through the spray tube at the spray fluid opening. As described in Section I(f) herein below, a washing appliance comprises at least one manifold defining a volume of space extending along the longitudinal axis below the manifold in the interior space of the rinse tank, where the medical device or accessory to be washed hangs. Accordingly, the nozzles are generally operable to direct a spray stream towards that volume of space. Further, the length of the spray assembly, the number of spray assemblies, the number of spray nozzles, and the type of spray nozzle can and will vary depending on the height of the rinse tank, the length of the device to be washed, and the type of nozzle, among other variables, provided the combination of spray assembly length, spray nozzle type, and number of spray nozzles provides full coverage of the external surface of the device to be washed. A washing appliance comprises any number of spray assemblies, provided the spray assemblies can provide full coverage of the external surface of the device to be washed. A washing appliance can comprise 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or more spray assemblies directed to the volume of space below the manifold. In some aspects, the washing appliance comprises one spray assembly operable to direct the spray fluid into the volume of space below the manifold. In some aspects, the washing appliance comprises a pair of spray assemblies on either side of the manifold with the nozzles directed towards the volume of space where a device hangs. Such an arrangement allows the one or more nozzles to spray the spray fluid from two directions into the volume of space and thereby provide complete coverage of the external surface of the device to be washed.
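The coverage requirement above lends itself to a quick feasibility check. The following is a minimal Python sketch, assuming flat-fan nozzles with a known fan angle and a fixed standoff distance between the spray tube and the hanging device; the function name and all numbers are illustrative assumptions, not values taken from the disclosure:

```python
import math

def nozzles_needed(device_length_m: float,
                   standoff_m: float,
                   fan_angle_deg: float) -> int:
    """Estimate how many nozzles stacked vertically on one spray tube are
    needed so their fan patterns cover the full hanging length of a device,
    assuming each flat-fan nozzle covers a vertical band of
    2 * standoff * tan(fan_angle / 2) at the device surface."""
    band_m = 2.0 * standoff_m * math.tan(math.radians(fan_angle_deg) / 2.0)
    return max(1, math.ceil(device_length_m / band_m))

# Example: a 1.5 m catheter, nozzles 0.2 m from the device, 80-degree fans.
print(nozzles_needed(1.5, 0.2, 80.0))  # -> 5 nozzles per spray tube
```

In practice the selection would also account for overlap between adjacent fans and the droplet-size considerations discussed below; the sketch only captures the geometric lower bound.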
When the washing appliance comprises more than one manifold, the washing appliance can comprise one spray assembly disposed alternately between the manifolds. For instance, when the washing appliance comprises more than one manifold, the washing appliance can comprise three or more pairs of spray assemblies and two or more manifolds disposed alternately between the pairs of spray assemblies. In such an arrangement, each spray assembly disposed between a first and a second manifold can comprise a pair of spray nozzles, wherein one spray nozzle is operable to spray the spray fluid into the volume of space below the first manifold and the other spray nozzle is directed to the volume of space below the second manifold. In some aspects, the washing appliance comprises three or more spray assemblies and two or more manifolds disposed alternately between the pairs of spray assemblies. In other aspects, the washing appliance comprises three spray assemblies and two manifolds disposed alternately between the pairs of spray assemblies. A spray assembly can comprise 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, or more nozzles. In some aspects, the spray assembly comprises seven nozzles. In some aspects, the nozzles are arranged in pairs on the tube of the spray assembly, wherein one spray nozzle is operable to spray the spray fluid into the volume of space below the first manifold and the other spray nozzle is directed to the volume of space below the second manifold. A spray assembly can comprise 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, or more pairs of nozzles. In some aspects, the spray assembly comprises seven pairs of nozzles. Spray nozzles break the liquid into droplets, form the spray pattern, and propel the droplets in the proper direction. Nozzles can determine the amount of spray volume at a given operating pressure, travel speed, and spacing. The nozzle can be a factor in determining the amount of spray applied to an area, the uniformity of application, and the coverage obtained on the target surface, among other variables. Selecting nozzles that produce the appropriate droplet size while providing adequate coverage at the intended application rate and pressure can be done using methods known to individuals of skill in the art. Non-limiting examples of nozzle types include flat-fan nozzles, flood nozzles, rain-drop nozzles, hollow-cone nozzles, and full-cone nozzles.
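Before turning to the manifold, the alternating arrangement described above can be summarized programmatically. This is a minimal sketch under the assumption that the layout is a simple interleaving of spray-assembly pairs and manifolds, so n manifolds sit between n + 1 pairs of spray assemblies; the identifiers are hypothetical:

```python
def alternating_layout(num_manifolds: int) -> list[str]:
    """Build the alternating arrangement: a pair of spray assemblies on
    each side of every manifold, interleaved along the rinse tank."""
    layout = ["spray_pair_0"]
    for i in range(num_manifolds):
        layout.append(f"manifold_{i}")
        layout.append(f"spray_pair_{i + 1}")
    return layout

# Two manifolds -> three pairs of spray assemblies, as in one aspect above;
# four manifolds -> five pairs, as in the aspect shown in the figures.
print(alternating_layout(2))
# ['spray_pair_0', 'manifold_0', 'spray_pair_1', 'manifold_1', 'spray_pair_2']
```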
(d) Manifold The washing appliance comprises one or more manifolds, wherein each manifold is operable to hang a device in a volume of space extending along the longitudinal axis below the manifold in the interior space of the rinse tank. The manifold is attached in the interior space of the rinse tank at a first manifold surface to a manifold support structure at the top of the rinse tank. The manifold can be attached to the top of the rinse tank using any number of mechanical attachment methods. Non-limiting examples of mechanical attachment methods include glue, magnets, a notch, a groove, a hook and loop fastener, mated threads on the manifold and rinse tank, nuts and bolts, clips, clamps such as band clamps, or any combination thereof. The manifold can be permanently or removably attached in the rinse tank. In some aspects, the manifold is removably attached in the rinse tank. In some aspects, the manifold comprises protruding edges and the manifold support structure comprises channels operable to engage the protruding edges of the manifold. When the manifold is removably attached to the rinse tank, the manifold support structure can further comprise a locking mechanism operable to secure the manifold to the manifold support structure. Locking mechanisms will vary depending on the mechanical attachment method used to attach the manifold, and are known in the art. In one aspect, when the manifold is removably attached to the rinse tank using channels and a lip, the locking mechanism is as shown in FIG. 12. A washing appliance can comprise any number of manifolds depending on the particular application. For instance, a washing appliance can comprise 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or more manifolds. In some aspects, the washing appliance comprises one manifold. In other aspects, the washing appliance comprises two manifolds. In some aspects, the washing appliance comprises four manifolds. The manifold comprises device attachment points at a second manifold surface opposite the first surface of the manifold. The attachment points are operable to removably accept a device attachment accessory, wherein the device attachment accessory is operable to hang a device in a volume of space extending along the longitudinal axis below the manifold in the interior space of the rinse tank. The number of attachment points as well as the types of attachment accessories can and will vary depending on the intended application. The manifold can comprise 1, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, or 100 or more attachment points. In some aspects, the manifold comprises forty-eight attachment points. Further, the type of device attachment accessory can and will vary depending on the device to be washed. Any accessory that can be used to attach or hang an item can be used as a device attachment accessory. As described above, the devices to be washed can include medical devices used during cardiovascular (cath lab) procedures and accessory devices, such as medical devices comprising lumens, catheters, endoscopes, dilators, dilator accessories, introducer sheaths, guidewires, balloons, vascular closure devices, atherectomy devices, fractional flow reserve (FFR) wires, racetrack tubing, cables and wiring, medical tubing, and hypodermic, transseptal, and other needles. Accordingly, a device attachment accessory can be any type of accessory operable to hang such devices and accessories in the rinse tank. For instance, when a device to be washed is a guidewire, the device attachment accessory can be a hook connected to the attachment point, upon which the guidewire can be hung. Alternatively, when the device to be hung is a device comprising a lumen comprising a Luer connector, the device attachment accessory can be a Luer connector connected to the attachment point.
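The pairing of device types with attachment accessories can be expressed as a simple lookup, as in the sketch below. The hook and Luer connector entries follow the two examples just given, and the 48-point default matches the forty-eight attachment points mentioned above; the remaining entries and all names are illustrative assumptions:

```python
# Hypothetical mapping of device types to attachment accessories.
ACCESSORY_FOR_DEVICE = {
    "guidewire": "hook",
    "catheter": "luer_connector",
    "endoscope": "luer_connector",
    "racetrack_tubing": "hook",
}

def load_manifold(devices: list[str],
                  attachment_points: int = 48) -> list[tuple[str, str]]:
    """Assign each device a suitable accessory for hanging on the manifold.
    A manifold may use all of its points or only a fraction of them."""
    if len(devices) > attachment_points:
        raise ValueError("more devices than attachment points on this manifold")
    return [(device, ACCESSORY_FOR_DEVICE[device]) for device in devices]

print(load_manifold(["guidewire", "catheter"]))
# [('guidewire', 'hook'), ('catheter', 'luer_connector')]
```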
It will be recognized that a manifold can comprise more than one type of attachment accessory to hang more than one type of device on the same manifold. Further, a manifold can have all of its attachment points in use during cleaning, or only a fraction of them. In some aspects, the washing appliance can be used to wash the lumen of a device comprising a lumen. When the appliance is used for washing the lumen of a lumen device, the manifold further comprises a wash fluid opening in the manifold to which one end of a wash fluid adapter can be connected. A second end of the wash fluid adapter can extend through the rinse tank wall and can be in fluid communication with a source of wash fluid. When the appliance is used for washing the lumen of a lumen device, one or more of the attachment points also comprise a wash fluid delivery opening for delivery of wash fluid to the lumen of an attached device. The wash fluid delivery opening is in fluid communication with the source of wash fluid through a wash fluid flow path extending from the wash fluid delivery opening to the source of wash fluid through a manifold channel in the manifold extending between the wash fluid delivery opening and the wash fluid opening or the wash fluid adapter opening. (e) Other Components The washing appliance can further comprise other components that can aid in cleaning of devices or optimize cleaning parameters. Non-limiting examples of other components include sensors, filters, connectors, probes, samplers, valves, seals, gaskets, and other fluid containment and control elements to direct the spray fluid and wash fluid during operation of the washing appliance. In some aspects, the washing appliance further comprises a pressure pump or compressor (not shown) and associated piping, control valves, and mechanisms disposed between the spray fluid opening and the spray fluid source (not shown), wherein the pressure pump causes the spray fluid to spray from the one or more nozzles at a predetermined pressure and/or delivers wash fluid to the lumen of a lumen device. In other aspects, pressure can be provided by gravity, using tanks placed at a height suitable for providing pressure. Non-limiting examples of sensors that may be used in conjunction with a washing appliance of the instant disclosure include sensors for fluid flow, temperature, pH, oxygen, pressure, and concentration, and sensors that can detect specific compounds in a wash or spray fluid. Fluid flow sensors can sense the rate of reagent or solvent addition, which can be adjusted in an adaptive response to real-time, or near-real-time, touchless measurements. Other devices can include compression fittings, quick disconnects, aseptic and sterile connectors and other such fittings that allow for the creation of sterile connections, septums for sampling, filters, bearings such as agitator shaft bearings and bearing assemblies, viewports, and probe ports.
The washing appliances of the instant disclosure can further comprise contact or contactless measuring systems, which may comprise instruments operable to measure, for example, quantity (i.e., volume, weight, etc.), cleaner or contaminant identity and/or concentration, flow rate, temperature, pressure, turbidity, and color, and to verify that a cleaning endpoint has been reached. The measurement can be performed using spectroscopic analysis or optical detection. Verification of cleanliness can be performed using a range of analytical instruments, such as liquid chromatography (LC) and high-performance liquid chromatography (HPLC) with or without UV-VIS, UV-VIS-DAD, and/or mass spectrometry (MS) detectors, electromagnetic radiation spectroscopy such as UV/Vis, NIRF, FTIR, and Raman, and combinations thereof. A washing appliance of the instant invention can also comprise a temperature control device adapted to control the temperature of the wash or spray fluid. The temperature control device can control temperature by conduction, thermoelectric heating, resistance heating, impedance heating, temperature modulation using induction, microwave dielectric heating, or any combination thereof. The washing appliance of the instant invention can further comprise a controller in functional communication with components of the appliance, such as valves and sensors, operable to provide tight control of the operational sequence of the cleaning process and of parameters such as temperature and pH. For instance, a controller can perform one or more of the following functions: switching components of the washing appliance, such as a fluid discharge valve, fluid inlet valve, or drain valve, on or off; controlling system functions such as fluid pressure; and providing monitoring information using data collected by the sensors. The controller can include additional input components that permit input by a user (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, a microphone, etc.). The controller can also include output components that provide output information (e.g., a display, a speaker, one or more light-emitting diodes (LEDs), etc.). In addition to the controller, the appliance can further comprise at least one processor and associated memory adapted to receive the operational and sensor data from the controller. The processor and associated memory can be hard-wired to the system or can be networked in a wired or wireless manner. The processor and associated memory can also communicate with a server or other remote computing device in order to execute specific steps. A non-transitory computer-readable medium programmed to execute the methods can be loaded on the processor and associated memory or be in communication with the system. In some aspects, the processor can be operable to assign one or more event times, wherein each event time indicates the time of a change in the state of a signal received from a component of the system or a sensor. In this aspect, the associated memory can be operable to receive and store the signals and/or outputs of the sensors of the appliance, and the one or more event times. The storage component may store information and/or software related to the operation and use of the controller. The storage component can include a random-access memory (RAM), a read-only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, an optical memory, etc.) that stores information and/or instructions for use by the controller. In some aspects, it is contemplated that the processor can comprise an alarm system that can be activated in response to one or more inputs from a sensor. In these aspects, it is contemplated that the alarm system can comprise a conventional device for selectively generating optical, thermal, vibrational, and/or audible alarm signals.
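A minimal sketch of such a controller is given below, assuming a simple on/off valve interface and threshold-based alarming; the valve names, the sensor interface, and the use of wall-clock time for event times are illustrative assumptions rather than details from the disclosure:

```python
import time

class WashController:
    """Sketch of the controller described above: it switches valves on and
    off, records an event time for every change in signal state, and raises
    an alarm flag when a sensor reading crosses a threshold."""

    def __init__(self) -> None:
        self.valve_states: dict[str, bool] = {}
        self.event_log: list[tuple[float, str]] = []  # (event time, description)
        self.alarm = False

    def set_valve(self, name: str, open_: bool) -> None:
        if self.valve_states.get(name) != open_:
            self.valve_states[name] = open_
            # Assign an event time marking the change in the valve's state.
            state = "open" if open_ else "closed"
            self.event_log.append((time.time(), f"{name} -> {state}"))

    def check_sensor(self, name: str, value: float, limit: float) -> None:
        if value > limit:
            self.alarm = True
            self.event_log.append((time.time(), f"ALARM: {name}={value} exceeds {limit}"))

controller = WashController()
controller.set_valve("fluid_inlet_valve", True)
controller.check_sensor("pressure_bar", 6.2, limit=5.0)  # trips the alarm
controller.set_valve("drain_valve", True)
for event_time, event in controller.event_log:
    print(event_time, event)
```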
(f) Aspects of a Washing Appliance Aspects of a washing appliance 100 are shown in FIGS. 1-13. The washing appliance 100 comprises a rinse tank 110, spray assemblies 200, and manifolds 500 for hanging devices to be washed in the washing appliance 100. The rinse tank 110 comprises an interior space 140 defined by a rinse tank top 120, a rinse tank bottom 130, and rinse tank walls 150. The rinse tank 110 also comprises a rinse tank opening 160 in the rinse tank walls 150, and access doors 170 for reversibly sealing the rinse tank opening 160. A drain 135 in the rinse tank bottom 130 is operable to drain wash and spray fluids from the washing appliance 100 during operation. The spray assemblies 200 each comprise a spray tube 210 disposed in the interior space 140 of the rinse tank. The spray tube 210 extends through a spray tube opening 165 in the rinse tank top 120 into the rinse tank interior space 140 along a vertical longitudinal axis 180 extending from the rinse tank top 120 to the rinse tank bottom 130. The spray tube 210 comprises a spray fluid opening 220 at a spray tube proximal end 215 external to the rinse tank 110 and a spray tube distal end 216 in the rinse tank interior space 140. The spray fluid opening 220 is in fluid communication with a source of spray fluid. The spray assemblies 200 also comprise one or more nozzles 400 attached to the spray tube 210. In an aspect shown in FIGS. 1-3 and 6-8, the spray assemblies 200 each comprise seven pairs of nozzles 410 distributed along the spray assembly 200, wherein a first nozzle 400 in each pair of nozzles 410 is positioned opposite a second nozzle 400 in the pair of nozzles 410. In an aspect shown in FIGS. 1-3, the washing appliance comprises five pairs of spray assemblies 205 and four manifolds 500 disposed alternately between the pairs of spray assemblies 205. In FIG. 8, five pairs of spray assemblies 205 are shown attached to piping 230 to form a pipe assembly 240. In another aspect shown in FIG. 6, the washing appliance comprises three pairs of spray assemblies 205 and two manifolds 500 disposed alternately between the pairs of spray assemblies 205. The manifold 500 of the washing appliance 100 is attached in the rinse tank interior space 140 at a first surface 510 of the manifold 500 to a manifold support structure 520 at the rinse tank top. The manifold 500 further comprises device attachment points 530 at a second manifold surface 540 opposite the first manifold surface 510 (see, e.g., FIG. 9). In the aspect shown in the figures, the manifold 500 comprises forty-eight device attachment points 530. The manifold 500 is operable to suspend an attached device 600 in a volume of space 190 extending along the longitudinal axis 180 below the manifold 500 in the rinse tank interior space 140, and the one or more nozzles 400 are operable to spray the spray fluid into the volume of space 190. The manifold 500 in the aspects shown in the figures comprises protruding edges 550, and the manifold support structure 520 comprises channels operable to engage the protruding edges 550 of the manifold 500 to attach the manifold to the rinse tank top 120. The manifold support structure 520 further comprises a locking mechanism 570 operable to secure the manifold 500 to the manifold support structure 520.
The manifold support structure 520 with a locking mechanism is shown in FIG. 12. FIGS. 1 and 2 show three manifolds 500 attached and secured to manifold support structures 520 using the locking mechanism 570, and one manifold 500 attached to a manifold support structure 520 without a locking mechanism 570. In the aspect shown in FIGS. 9 and 10, the manifold 500 comprises a wash fluid adapter 590 connected to a wash fluid opening 580 in the manifold 500 and extending through a wash fluid adapter opening 595 (shown in FIG. 4) in the rinse tank walls 150. The device attachment points 530 comprise wash fluid delivery openings 596. The wash fluid delivery openings 596 are in fluid communication with a source of wash fluid through a wash fluid flow path extending from the source of wash fluid, through the wash fluid adapter 590 and a manifold channel (not shown) in the manifold 500 extending between the wash fluid opening 580 and the wash fluid delivery openings 596. In the aspect shown in FIGS. 9 and 10, wash fluid adapters 590, here Luer connectors, are connected to the wash fluid delivery openings 596 in the manifold 500. FIG. 13 shows the washing appliance 100 with devices 600 comprising lumens 630 attached to the manifold 500 using Luer connector device attachment accessories 535. The devices 600 comprise fluid delivery connectors 640. Each device 600 is attached to the manifold 500 and is suspended below the manifold 500. II. Method of Washing a Device The instant invention also encompasses a method of cleaning a device using a washing appliance described in Section I herein above. The device can be as described in Section I(a) herein above. The method comprises attaching one or more devices to be washed to device attachment accessories of a manifold. The method can further comprise attaching device attachment accessories appropriate for hanging the device or devices to be washed to the attachment points of the manifold. After the one or more devices are attached to the manifold, the manifold is attached and optionally secured to the manifold support structure in the interior space of the rinse tank. The method further comprises washing the exterior surfaces of the device by connecting the spray fluid opening of the spray assembly to a source of cleaning fluid via piping extending from the spray fluid opening to the source of cleaning fluid, and spraying the exterior surfaces of the device with cleaning fluid to thereby wash the exterior surfaces of the device. In some aspects, the method further comprises drying the exterior surface of the device by spraying the exterior surface with drying fluid. In some aspects, the device is a lumen device. When the device is a lumen device, the method can further comprise cleaning the lumen of the device concurrently or sequentially with cleaning the exterior of the device. Cleaning the lumen of a lumen device can comprise attaching the wash fluid opening of the manifold to a source of cleaning fluid by attaching piping extending from the wash fluid opening of the manifold to the source of cleaning fluid. The method further comprises flushing the lumen of the device with the cleaning fluid at a predetermined flow rate or pressure. As with cleaning the exterior of a device, the lumen of the device can optionally be dried by flushing the lumen of the device with drying fluid. It will be recognized that, in some aspects, only the lumen of the lumen device can be cleaned without also cleaning the exterior of the device. Accordingly, a method of cleaning a lumen device can comprise cleaning the exterior of the lumen device, cleaning the lumen, or both.
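One possible ordering of these method steps can be sketched as follows, reusing the hypothetical WashController from the sketch above. The valve names and the fixed durations are illustrative assumptions; in practice the durations would be set by the endpoint protocols described next:

```python
import time

def wash_cycle(controller: "WashController", lumen_device: bool,
               spray_seconds: float, flush_seconds: float,
               dry_seconds: float) -> None:
    """Spray the exterior, optionally flush the lumen, then dry and drain."""
    controller.set_valve("spray_fluid_valve", True)     # wash exterior surfaces
    time.sleep(spray_seconds)
    controller.set_valve("spray_fluid_valve", False)

    if lumen_device:
        controller.set_valve("wash_fluid_valve", True)  # flush the lumen(s)
        time.sleep(flush_seconds)
        controller.set_valve("wash_fluid_valve", False)

    controller.set_valve("drying_fluid_valve", True)    # dry with a drying fluid
    time.sleep(dry_seconds)
    controller.set_valve("drying_fluid_valve", False)
    controller.set_valve("drain_valve", True)           # drain the rinse tank
```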
The duration during which the device is sprayed or flushed can and will vary depending on the device, the type of contaminants to be washed, the degree of contamination (dirtiness) of the device to be cleaned, the parameters used during cleaning, such as the pressure and volume of cleaning fluid, and the cleaning fluid, among other variables, or any combination thereof. In essence, a device is sprayed with the cleaning fluid until a predetermined cleanliness endpoint is reached. A cleaning endpoint can be determined while using the washing appliance. Alternatively, the cleanliness endpoint can be a predetermined cleanliness endpoint, such as an endpoint determined by regulatory authorities. Non-limiting examples of cleanliness standards for re-usable medical devices established by regulatory authorities include:
ASTM E2314: Standard Test Method for Determination of Effectiveness of Cleaning Processes for Reusable Medical Instruments Using a Microbiologic Method (Simulated Use Test)
ASTM D7225: Standard Guide for Blood Cleaning Efficiency of Detergents and Washer-Disinfectors
ASTM F3208: Standard Guide for Selecting Test Soils for Validation of Cleaning Methods for Reusable Medical Devices
ASTM F3172: Standard Guide for Validating Cleaning Processes Used During the Manufacture of Medical Devices
Guidelines for reprocessing reusable medical devices, such as orthoscopic shavers, endoscopes, and suction tubes, established by the FDA
The methods of the instant disclosure can further comprise developing protocols for cleaning a device to reach a desired cleaning endpoint for that specific device. For instance, the duration of the wash, the pressure and flow of liquids, and the cleaning and drying fluids can all be adjusted to develop a protocol for cleaning the device. The methods of the instant disclosure can further comprise developing protocols for determining when a cleaning endpoint is reached for a specific device. For instance, a protocol for determining when a cleaning endpoint is reached for a bloody device can comprise assaying used or recirculated cleaning fluid for the presence of blood residue, wherein the device is considered clean when a certain concentration of blood residue, the cleaning endpoint, is reached. Established or newly developed test methods can be used. One or more cleaning fluids can be used in a method of cleaning a given device. For instance, the device can first be cleaned with water to hydrate dirt and contaminants on and in the device, followed by cleaning with a cleaning fluid appropriate for the device, the contaminants, or both. The cleaning fluid can and will vary depending on the device, the type of contaminants to be washed, the degree of contamination (dirtiness) of the device to be cleaned, and the parameters used during cleaning, such as the pressure and volume of cleaning fluid, among other variables, or any combination thereof. Appropriate cleaning fluids are known in the art. Non-limiting examples of cleaning fluids appropriate for cleaning devices using a method of the instant disclosure include disinfectants, germicides, sanitizers, soaps, enzymes, alcohols, solvents, or any combination thereof, among others.
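The endpoint-driven protocol described above, assaying used or recirculated cleaning fluid until a residue threshold is met, might be sketched as follows. The assay is passed in as a callable, and the residue units, threshold, and cycle counts are illustrative assumptions, not values from the disclosure:

```python
import time

def spray_until_clean(read_residue_mg_per_l, endpoint_mg_per_l: float,
                      cycle_seconds: float = 30.0, max_cycles: int = 20) -> bool:
    """Run fixed spray/flush cycles, assaying the used or recirculated
    cleaning fluid (e.g., for blood residue) after each cycle, until the
    measured concentration falls to the predetermined cleanliness endpoint."""
    for _ in range(max_cycles):
        time.sleep(cycle_seconds)                      # one spray/flush cycle
        if read_residue_mg_per_l() <= endpoint_mg_per_l:
            return True                                # endpoint reached: clean
    return False                                       # not reached: re-wash or inspect
```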
Similarly, the drying fluid can and will vary depending on the device, the cleaning fluid(s) used, and the parameters used during cleaning, such as the pressure and volume of cleaning fluid, among other variables, or any combination thereof. Appropriate drying fluids are known in the art. Non-limiting examples of drying fluids appropriate for drying devices using a method of the instant disclosure include air (such as ambient air), nitrogen, oxygen, alcohols, solvents, and any combination thereof, among others. Definitions Unless defined otherwise, all technical and scientific terms used herein have the meaning commonly understood by a person skilled in the art to which this invention belongs. The following references provide one of skill with a general definition of many of the terms used in this invention: Singleton et al., Dictionary of Microbiology and Molecular Biology (2nd ed. 1994); The Cambridge Dictionary of Science and Technology (Walker ed., 1988); The Glossary of Genetics, 5th Ed., R. Rieger et al. (eds.), Springer Verlag (1991); and Hale & Marham, The Harper Collins Dictionary of Biology (1991). As used herein, the following terms have the meanings ascribed to them unless specified otherwise. When introducing elements of the present disclosure or the preferred aspect(s) thereof, the articles "a", "an", "the" and "said" are intended to mean that there are one or more of the elements. The terms "comprising", "including" and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. A "genetically modified" cell refers to a cell in which the nuclear, organellar, or extrachromosomal nucleic acid sequences have been modified, i.e., the cell contains at least one nucleic acid sequence that has been engineered to contain an insertion of at least one nucleotide, a deletion of at least one nucleotide, and/or a substitution of at least one nucleotide. The terms "genome modification" and "genome editing" refer to processes by which a specific nucleic acid sequence in a genome is changed such that the nucleic acid sequence is modified. The nucleic acid sequence may be modified to comprise an insertion of at least one nucleotide, a deletion of at least one nucleotide, and/or a substitution of at least one nucleotide. The modified nucleic acid sequence can be inactivated such that no product is made. Alternatively, the nucleic acid sequence can be modified such that an altered product is made. The term "heterologous" refers to an entity that is not native to the cell or species of interest. The terms "nucleic acid" and "polynucleotide" refer to a deoxyribonucleotide or ribonucleotide polymer, in linear or circular conformation. For the purposes of the present disclosure, these terms are not to be construed as limiting with respect to the length of a polymer. The terms may encompass known analogs of natural nucleotides, as well as nucleotides that are modified in the base, sugar, and/or phosphate moieties. In general, an analog of a particular nucleotide has the same base-pairing specificity, i.e., an analog of A will base-pair with T. The nucleotides of a nucleic acid or polynucleotide may be linked by phosphodiester, phosphothioate, phosphoramidite, phosphorodiamidate bonds, or combinations thereof. The term "nucleotide" refers to deoxyribonucleotides or ribonucleotides. The nucleotides may be standard nucleotides (i.e., adenosine, guanosine, cytidine, thymidine, and uridine) or nucleotide analogs.
A nucleotide analog refers to a nucleotide having a modified purine or pyrimidine base or a modified ribose moiety. A nucleotide analog may be a naturally occurring nucleotide (e.g., inosine) or a non-naturally occurring nucleotide. Non-limiting examples of modifications on the sugar or base moieties of a nucleotide include the addition (or removal) of acetyl groups, amino groups, carboxyl groups, carboxymethyl groups, hydroxyl groups, methyl groups, phosphoryl groups, and thiol groups, as well as the substitution of the carbon and nitrogen atoms of the bases with other atoms (e.g., 7-deaza purines). Nucleotide analogs also include dideoxy nucleotides, 2′-O-methyl nucleotides, locked nucleic acids (LNA), peptide nucleic acids (PNA), and morpholinos. The terms "polypeptide" and "protein" are used interchangeably to refer to a polymer of amino acid residues. As used herein, the terms "target site", "target sequence", or "nucleic acid locus" refer to a nucleic acid sequence that defines a portion of a nucleic acid sequence to be modified or edited and to which a homologous recombination composition is engineered to target. The terms "upstream" and "downstream" refer to locations in a nucleic acid sequence relative to a fixed position. Upstream refers to the region that is 5′ (i.e., near the 5′ end of the strand) to the position, and downstream refers to the region that is 3′ (i.e., near the 3′ end of the strand) to the position. Techniques for determining nucleic acid and amino acid sequence identity are known in the art. Typically, such techniques include determining the nucleotide sequence of the mRNA for a gene and/or determining the amino acid sequence encoded thereby, and comparing these sequences to a second nucleotide or amino acid sequence. Genomic sequences may also be determined and compared in this fashion. In general, identity refers to an exact nucleotide-to-nucleotide or amino acid-to-amino acid correspondence of two polynucleotide or polypeptide sequences, respectively. Two or more sequences (polynucleotide or amino acid) may be compared by determining their percent identity. The percent identity of two sequences, whether nucleic acid or amino acid sequences, is the number of exact matches between two aligned sequences divided by the length of the shorter sequence and multiplied by 100. An approximate alignment for nucleic acid sequences is provided by the local homology algorithm of Smith and Waterman, Advances in Applied Mathematics 2:482-489 (1981). This algorithm may be applied to amino acid sequences by using the scoring matrix developed by Dayhoff, Atlas of Protein Sequences and Structure, M. O. Dayhoff ed., 5 suppl. 3:353-358, National Biomedical Research Foundation, Washington, D.C., USA, and normalized by Gribskov, Nucl. Acids Res. 14(6):6745-6763 (1986). An exemplary implementation of this algorithm to determine percent identity of a sequence is provided by the Genetics Computer Group (Madison, Wis.) in the "BestFit" utility application. Other suitable programs for calculating the percent identity or similarity between sequences are generally known in the art; for example, another alignment program is BLAST, used with default parameters. For example, BLASTN and BLASTP may be used with the following default parameters: genetic code=standard; filter=none; strand=both; cutoff=60; expect=10; Matrix=BLOSUM62; Descriptions=50 sequences; sort by=HIGH SCORE; Databases=non-redundant, GenBank+EMBL+DDBJ+PDB+GenBank CDS translations+Swiss protein+Spupdate+PIR. Details of these programs may be found on the GenBank website.
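The percent-identity definition above (exact matches divided by the length of the shorter sequence, multiplied by 100) is straightforward to compute for two pre-aligned sequences, as in the minimal sketch below. The sketch deliberately omits alignment, gap handling, and scoring matrices, which are the province of tools like Smith-Waterman, BestFit, or BLAST:

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Exact position-by-position matches between two aligned sequences,
    divided by the length of the shorter sequence, multiplied by 100."""
    if not seq_a or not seq_b:
        raise ValueError("both sequences must be non-empty")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return 100.0 * matches / min(len(seq_a), len(seq_b))

print(percent_identity("GATTACA", "GACTACA"))  # 6 of 7 positions match -> ~85.7
```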
With respect to sequences described herein, the range of desired degrees of sequence identity is approximately 80% to 100% and any integer value therebetween. Typically, the percent identity between sequences is at least 70-75%, preferably 80-82%, more preferably 85-90%, even more preferably 92%, still more preferably 95%, and most preferably 98% sequence identity. As various changes could be made in the above-described appliances and methods without departing from the scope of the invention, it is intended that all matter contained in the above description and in the examples given below shall be interpreted as illustrative and not in a limiting sense. | 47,997