US8571277B2 - Image interpolation for medical imaging - Google Patents


Info

Publication number
US8571277B2
Authority
US
United States
Application number
US11/874,778
Other versions
US20090103791A1 (en)
Inventor
Jasjit S. Suri
Dinesh Kumar
Current Assignee
Eigen Health Services LLC
KAZI MANAGEMENT ST CROIX LLC
KAZI MANAGEMENT VI LLC
KAZI ZUBAIR
Eigen LLC
Original Assignee
Eigen LLC
Priority date
Filing date
Publication date
Application filed by Eigen LLC
Priority to US11/874,778
Assigned to EIGEN, LLC (assignment of assignors interest). Assignors: KUMAR, DINESH; SURI, JASJIT S.
Publication of US20090103791A1
Assigned to EIGEN INC. (change of name). Assignor: EIGEN LLC
Assigned to KAZI MANAGEMENT VI, LLC (assignment of assignors interest). Assignor: EIGEN, INC.
Assigned to KAZI, ZUBAIR (assignment of assignors interest). Assignor: KAZI MANAGEMENT VI, LLC
Assigned to KAZI MANAGEMENT ST. CROIX, LLC (assignment of assignors interest). Assignor: KAZI, ZUBAIR
Assigned to IGT, LLC (assignment of assignors interest). Assignor: KAZI MANAGEMENT ST. CROIX, LLC
Publication of US8571277B2
Application granted
Assigned to ZMK MEDICAL TECHNOLOGIES, INC., A NEVADA CORPORATION (asset purchase agreement). Assignor: ZMK MEDICAL TECHNOLOGIES, INC., A DELAWARE CORPORATION
Assigned to ZMK MEDICAL TECHNOLOGIES, INC. (merger). Assignor: IGT, LLC
Assigned to EIGEN HEALTH SERVICES, LLC (merger). Assignor: ZMK MEDICAL TECHNOLOGIES, INC.
Legal status: Expired - Fee Related
Adjusted expiration

Classifications

    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • A61B 8/4245: Details of probe positioning or probe attachment to the patient, involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
    • A61B 8/483: Diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/5238: Devices using data or image processing specially adapted for diagnosis, involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • G06T 3/4007: Interpolation-based scaling of the whole image or part thereof, e.g. bilinear interpolation

Definitions

  • the imaging system 30 is operative to correlate the recorded position of the tracker and a corresponding acquired image. As will be discussed herein, this allows for utilizing non-aligned/arbitrary images for 3-D image reconstruction. That is, the imaging system 30 utilizes the acquired images to populate the 3-D image volume as per their measured locations. In addition to reconstructing the 3-D volume after images are acquired, the method also allows for dynamic refinement at desired regions. That is, a user may acquire additional images at desired locations and the reconstruction method interpolates the additional images into the 3-D volume.
  • a second aspect of the present invention addresses this issue by allowing the user to collect images in a simplified manner (e.g., freehand, just before a procedure).
  • Instead of interpolating the pixel intensity values in the 3-D image volume from the scanned 2-D image planes, the object itself is interpolated using shape priors.
  • Shape priors refer to a set of information 60 collected previously for a particular anatomical object and include the mean shape of the object along with statistical information. The statistical information provides the modes of variation in the shape of the object and represents anatomically meaningful interpretations of the shape variability within the population.
  • While imaging freehand or using any constrained or unconstrained method, the tracker provides full information about the 2-D plane being imaged. A normal to the imaging plane and a reference line passing through a known point in the imaging plane are sufficient to describe the imaged plane fully based on the geometry of the imaging device. For example, as illustrated in FIGS. 4A and 4B, the position of the center of the transducer tip (i.e., provided by a tracker) along with the normal to the image plane 22 and the central axis 24 of the transducer 20 is sufficient to place the 2-D image in a 3-D volume, assuming that the other characteristics such as the depth setting and geometry (rectangular, fan, etc.) of the 2-D imaging plane are known.
  • the tracker output can be measured to provide the location and orientation of the imaging equipment, which in turn provides the normal to the imaging plane, the central axis 24 of the transducer 20 and the tip of the transducer 20. Based on the geometry of the 2-D image observed, its corresponding location in 3-D can be determined using the method presented.
  • FIG. 4B shows a 2-D imaging slice 22 and its associated co-ordinate system.
  • in this coordinate system the third (out-of-plane) coordinate $z_i'$ is always zero and the center of the transducer tip is at the origin.
  • FIG. 4A illustrates the co-ordinate system for the 3-D image, where the origin is selected by the user at the time of initialization of an imaging scan by pointing roughly to the center of the object 28 to be imaged.
  • the 2-D image coordinate system has to be placed in a 3-D image coordinate system.
  • a process 500 is illustrated in FIG. 5 .
  • 2-D images, which may have arbitrary orientations, are obtained ( 502 ).
  • Tracking information for each image is likewise obtained ( 504 ).
  • each pixel in each 2-D image is placed ( 506 ) in a corresponding location in a 3-D volume. This is done first by placing the origin of each 2-D image in a frame of reference of a common 3-D volume and then applying the coordinate transformation.
  • the coordinate information is obtained by tracking the location and orientation of the imaging transducer using an unconstrained tracker such as a magnetic tracker, freehand mechanical tracker, optical tracker, etc.
  • n and t represent the measured normal and tangent unit vectors for the i-th slice, and b represents the unit bi-normal given by the cross-product n × t. This information is provided by the tracker.
  • the rotation matrix R formed from these unit vectors is used to compute the transformation of the coordinate system from the coordinates of the 2-D image into coordinates of the 3-D image.
  • Let O be the origin of the frame of reference of the 3-D image and O′ be the origin of the frame of reference of the 2-D image. The overall transformation of a coordinate point $(x_i', y_i', z_i')$ from the coordinate system of the 2-D image into the frame of reference of the 3-D image is given by

    $$(x_i, y_i, z_i)^T = R \, (x_i', y_i', z_i')^T + (O' - O)$$
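  • As an illustrative sketch of this placement step (not the patent's own code), the following maps pixel coordinates of one tracked slice into the 3-D frame of reference. NumPy, the column convention R = [t | b | n], and an isotropic pixel spacing are assumptions of the sketch.

```python
import numpy as np

def slice_to_volume_coords(px_coords, origin_offset, n, t, spacing):
    """Map 2-D pixel coordinates of one tracked slice into the 3-D frame.

    px_coords     : (N, 2) pixel (column, row) indices within the slice
    origin_offset : (3,) position of the slice origin in the 3-D frame,
                    i.e. the translation O' - O
    n, t          : (3,) measured unit normal and tangent of the plane
    spacing       : physical pixel size (assumed isotropic, e.g. mm/pixel)
    """
    n = n / np.linalg.norm(n)
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)                       # unit bi-normal
    R = np.column_stack([t, b, n])           # slice frame -> volume frame
    # In-plane physical coordinates; the out-of-plane component z' is zero.
    pts = np.column_stack([px_coords[:, 0] * spacing,
                           px_coords[:, 1] * spacing,
                           np.zeros(len(px_coords))])
    return pts @ R.T + origin_offset         # p = R p' + (O' - O)
```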
  • In this manner, the real-valued location of each 2-D pixel is determined ( 508 ) in a common 3-D volume.
  • the transformed coordinates do not, in general, lie on a discrete lattice, and the intensities at the transformed "real" coordinate locations need to be interpolated ( 510 ) onto the neighboring discrete locations.
  • for each transformed coordinate $x_i$, the neighboring discrete locations are computed and variables defined as follows: $x_{li}^d = \lfloor x_i \rfloor$, $x_{hi}^d = \lceil x_i \rceil$, $dx_{li} = x_i - x_{li}^d$ and $dx_{hi} = x_{hi}^d - x_i$; let $y_{li}^d$, $y_{hi}^d$, $dy_{li}$, $dy_{hi}$, $z_{li}^d$, $z_{hi}^d$, $dz_{li}$ and $dz_{hi}$ be defined similarly for $y_i$ and $z_i$. The interpolated intensity at a discrete voxel $(x, y, z)$ is then

    $$I(x, y, z) = \frac{\sum_i w_i \, I_i(x, y, z)}{\sum_i w_i} \qquad \text{(Eq. 3)}$$
  • the summation is over all the frames (the index i in the equation above) that contribute to the voxel (x, y, z). This results in a space interpolation ( 512 ) of 3-D image voxels.
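  • A minimal sketch of this accumulation follows. The patent does not define the weights w_i, so trilinear (distance-based) weights over the eight neighboring lattice sites are assumed; num and den hold the running sums of w_i I_i and w_i, and the reconstructed intensity is num/den wherever den is nonzero.

```python
import numpy as np

def scatter_intensities(points, intensities, num, den):
    """Accumulate transformed slice pixels into the volume per Eq. 3.

    points      : (N, 3) transformed pixel locations in voxel units
    intensities : (N,) pixel intensities I_i
    num, den    : 3-D arrays accumulating sum(w*I) and sum(w)
    """
    lo = np.floor(points).astype(int)        # x_li^d, y_li^d, z_li^d
    frac = points - lo                       # dx_li, dy_li, dz_li
    shape = np.array(num.shape)
    for corner in range(8):                  # the 8 neighboring voxels
        offs = np.array([(corner >> k) & 1 for k in range(3)])
        idx = lo + offs
        # trilinear weight: product of per-axis proximities to this corner
        w = np.prod(np.where(offs == 1, frac, 1.0 - frac), axis=1)
        ok = np.all((idx >= 0) & (idx < shape), axis=1)
        np.add.at(num, tuple(idx[ok].T), w[ok] * intensities[ok])
        np.add.at(den, tuple(idx[ok].T), w[ok])
```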
  • as additional frames contribute to a voxel, the intensity estimate is refined to produce better results.
  • a Gaussian interpolation filter or other appropriate filter ( 514 ) with an appropriate window size (3³ to 7³, based on image resolution) can be used to interpolate ( 516 ) the results to pixels that have not been initialized using Eq. 1 above.
  • the result is that a 3-D image volume may be constructed ( 518 ) from the acquired images. If the user wishes to scan portions of the image in better resolution, they may acquire more images from the area of interest and the method will dynamically update the intensity values in that location as per Eq. (3).
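  • The hole-filling step ( 514 )/( 516 ) can be sketched as a normalized convolution: both the accumulated intensity sum and the weight sum are smoothed and their ratio taken, which leaves well-sampled voxels essentially unchanged while propagating intensities into empty ones. Using a Gaussian sigma in place of an explicit 3³ to 7³ window is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fill_uninitialized(num, den, sigma=1.0):
    """Interpolate intensities to voxels that received no samples."""
    num_s = gaussian_filter(num, sigma)      # smoothed sum of w*I
    den_s = gaussian_filter(den, sigma)      # smoothed sum of w
    eps = 1e-12
    return np.where(den > 0,
                    num / np.maximum(den, eps),      # sampled voxels: Eq. 3
                    num_s / np.maximum(den_s, eps))  # empty voxels: filled in
```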
  • The method thus interpolates a 3-D image from 2-D images that are acquired in arbitrary orientations and thereby permits freehand scanning. This may improve the workflow, as it removes the need for any locking or constrained tracker.
  • the dynamic interpolation allows the user to dynamically improve the resolution by acquiring more images from a particular area of interest. The flexibility in choosing different resolutions for different regions is another key advantage of the method.
  • Shape priors are utilized to perform interpolation of a surface for the 3-D image volume.
  • shape statistics are generated and analyzed from a number of actual data sets from the anatomical object in question (e.g., prostate).
  • a number of images of the anatomical object are collected.
  • the boundaries of the object of interest are then extracted either manually by expert segmentation, or using a semi-automatic or automatic method.
  • the surfaces are then normalized so as to remove the translation, rotation and scaling artifacts.
  • the normalized images are then averaged together by computing the mean position of each vertex in the template chosen for computing the mean shape.
  • the shape obtained is run through the same process until convergence. This provides the mean shape of the object and this shape is then registered with all the other shapes in the sample dataset.
  • the registration provides deformation details at each vertex of the image and the statistics of the same are then used to drive the fitting of the mean shape into the shape of the subject.
  • Previously computed statistics may be used to deform the mean shape to fit the 3-D image volume, which may be sparse due to being generated using very few 2-D image planes and/or due to the 2-D images being obtained in different orientations.
  • Conventional segmentation techniques for extraction of a surface rely heavily on the resolution of images. Although 2-D segmentation may be possible on individual frames, combining these frames to generate a surface in 3-D using a heuristic approach typically produces artifacts in surface interpolation. Further, due to the limited number of 2-D images, boundaries of the 3-D volume based solely on those images may not provide a useful estimation of the actual boundary of the object. See FIG. 3B.
  • FIG. 6 shows the overall scheme for combining the two methods.
  • the process ( 600 ) includes acquiring 2-D images ( 602 ) and tracking information ( 604 ). These elements are utilized to interpolate/reconstruct ( 606 ) a 3-D image volume ( 608 ), as set forth above. Object shape statistics ( 610 ) associated with the object of the 3-D image volume ( 608 ) are utilized to fit ( 612 ) the mean shape to the 3-D image volume. This generates an object surface ( 614 ) that may be output to, for example, a monitor 40 (See FIG. 1 ) and/or utilized for an image guided procedure or therapy.
  • the first step is to generate population statistics, including a mean shape, from a collection of samples from the population of the object of interest. This may be done once and may be performed off-line (i.e., at a time before an imaging procedure).
  • the population may correspond to a specific organ, for example the prostate, in which case a number of images acquired from prostates of different individuals may be used as samples. A larger sample size can capture more variability in the population; hence, a reasonably large sample size should be used to capture the shape statistics.
  • the population may consist of images from a different modality.
  • FIG. 7 shows the computation of mean shape and capturing the shape statistics from the training dataset ( 702 ).
  • the surface of the object is extracted/segmented ( 704 ) for all the images in the sample dataset. This may be done manually, semi-automatically or automatically.
  • one image that best represents the population is chosen ( 708 ) as template image ( 710 ).
  • the surface of the template is aligned/registered ( 714 ) to the current target image ( 712 ) using rotation, translation and anisotropic scaling, and a mean surface is computed ( 716 ).
  • the mean surface is then deformed into every other image in the training set and the mean surface is updated by computing an average again ( 720 ) to generate an updated mean shape ( 722 ).
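  • The align-and-re-average loop of FIG. 7 might be sketched as below. Vertex correspondence across samples is assumed to be already established, and the similarity alignment uses the standard SVD (Kabsch) solution, which the patent does not mandate.

```python
import numpy as np

def procrustes_align(src, dst):
    """Similarity-align corresponded vertex set src onto dst (Kabsch + scale)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))       # determinant correction
    D = np.diag([1.0, 1.0, d])               # avoids reflections
    R = U @ D @ Vt                           # rotation (row-vector convention)
    s = (S * np.diag(D)).sum() / (A ** 2).sum()   # optimal isotropic scale
    return s * (A @ R) + mu_d

def mean_shape(surfaces, n_iter=20, tol=1e-6):
    """Iteratively register surfaces to the current mean and re-average."""
    mean = surfaces[0].copy()                # initial template
    for _ in range(n_iter):
        aligned = [procrustes_align(v, mean) for v in surfaces]
        new_mean = np.mean(aligned, axis=0)
        if np.linalg.norm(new_mean - mean) < tol:   # convergence test
            break
        mean = new_mean
    return mean
```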
  • the mean surface is warped into all the surfaces in the training set such that the vertex locations of the undeformed mean shape and the deformed mean surface are known.
  • Let $V_i$ represent the set of vertices for image i and $\bar{V}$ represent the set of vertices for the mean shape. The eigenvectors of the covariance matrix of the deviations,

    $$C = \frac{1}{N} \sum_{i=1}^{N} (V_i - \bar{V})(V_i - \bar{V})^T,$$

    then represent the modes of variation of the shape within the training data.
  • the eigenvalues can then be arranged in descending order such that the eigenvectors representing the first few modes of variation capture most of the variability in the population.
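  • A compact sketch of this eigen-analysis, assuming each training shape is a corresponded (V, 3) vertex array: shapes are flattened to vectors, the covariance of deviations from the mean is decomposed, and the modes are returned in descending order of eigenvalue.

```python
import numpy as np

def shape_pca(vertex_sets, mean):
    """Modes of variation from corresponded training shapes."""
    X = np.stack([(v - mean).ravel() for v in vertex_sets])   # (N, 3V)
    C = (X.T @ X) / len(vertex_sets)                          # covariance
    evals, evecs = np.linalg.eigh(C)                          # ascending order
    order = np.argsort(evals)[::-1]                           # sort descending
    # Note: for large meshes one would decompose the smaller N x N Gram
    # matrix instead of the 3V x 3V covariance.
    return evals[order], evecs[:, order]
```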
  • a few images are sufficient to perform a good segmentation of the object. This is achieved by first acquiring a few images that capture the object in various orientations and then placing the mean shape 900 into the reconstructed 3-D image 904. See FIG. 3C.
  • the reconstructed 3-D image will be very sparse with mostly zero values.
  • the available 2-D information in various orientations can be used to deform the mean shape 900 into the shape of the scanned object by letting the mean surface deform while following the population deformation characteristics (modes of variation).
  • edges 910 of the object as represented by the acquired image frames may be utilized as constraints for defining the mean shape. These edges may be identified using any appropriate segmentation process. In any case, use of the edges 910 allows for closely fitting the mean shape to the actual object boundaries. See FIG. 3D .
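  • The constrained deformation idea can be illustrated with the heavily simplified sketch below: the surface is expressed as the mean plus a weighted sum of the leading modes, sparse edge points pull the surface toward them, and each mode coefficient is clamped to within three standard deviations so the result stays within the population statistics. The edge-point force, step size and mode count here are assumptions; the patent's actual fit is driven by intensity gradients with hierarchical optimization.

```python
import numpy as np

def fit_shape(mean, evals, evecs, edge_points, n_modes=5, iters=50, step=0.1):
    """Deform the mean shape toward sparsely observed edge points."""
    V = mean.shape[0]
    Phi = evecs[:, :n_modes]                 # leading modes, shape (3V, K)
    limit = 3.0 * np.sqrt(np.maximum(evals[:n_modes], 0.0))
    b = np.zeros(n_modes)                    # mode coefficients
    for _ in range(iters):
        shape = mean + (Phi @ b).reshape(V, 3)
        pull = np.zeros_like(shape)
        for p in edge_points:                # each edge point pulls its
            j = np.argmin(np.linalg.norm(shape - p, axis=1))  # nearest vertex
            pull[j] += p - shape[j]
        b += step * (Phi.T @ pull.ravel())   # project forces onto the modes
        b = np.clip(b, -limit, limit)        # stay within shape statistics
    return mean + (Phi @ b).reshape(V, 3)
```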
  • an image of one modality is scanned in 3-D before the procedure (pre-op) and then a 2-D live image is used to navigate during the procedure.
  • Examples include a pre-op MR scan before a surgery (say, of the brain) and a live 2-D ultrasound image while performing the actual surgery. Due to differences in time between the two images, differences in coordinate systems, differences in image quality and deformation of tissue between the images or during the procedure, it is important to align the 2-D live ultrasound image with the previously acquired 3-D images, which may be stored by an imaging device. See FIG. 1. Any motion of patient, instrument or tissue can complicate the problem.
  • the real-time interpolation from arbitrary image planes easily allows for such a system, where the 2-D live image being acquired is kept in a buffer for the next few frames.
  • if the 2-D live image slice number is n, the previous m images may always be kept in a buffer.
  • m may be a small number such as 5-10.
  • the robustness is much improved and the live images can be placed in the coordinate frame of reference of the 3-D image with more confidence.
  • the sequence of images provides a much better starting point than just one image and captures deformation in more directions than just along the frame of image acquisition.
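  • Such a first-in, first-out buffer is straightforward to sketch; bundling each 2-D image with its tracker pose into a frame record is an assumed structure, not one specified by the text.

```python
from collections import deque

class FrameBuffer:
    """Keep the previous m tracked 2-D frames on a first-in, first-out basis."""

    def __init__(self, m=8):                 # m is typically 5-10 per the text
        self.frames = deque(maxlen=m)        # oldest frame drops automatically

    def push(self, image, pose):
        """Store the live frame together with its tracker pose."""
        self.frames.append((image, pose))

    def history(self):
        """Return the buffered frames, oldest first, for motion correction."""
        return list(self.frames)
```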
  • the image acquisition method discussed above can be used for diagnostic imaging or for biopsy, surgical or image-based drug delivery systems.
  • the interpolation method may be used to acquire images from different modalities such as ultrasound and elastography.
  • the 2-D images acquired from these modalities can be combined together with the tracking information to place the results in 3-D such that the elastography images can be shown overlaid on the ultrasound data, thus providing additional information to the user.
  • the reconstructed elastographic data can be overlaid onto the structural scan acquired earlier.
  • where functional information such as MR spectroscopy is available, the live ultrasound image can be placed in correspondence with the spectroscopy data based on the tracker information and the registration step as discussed above.
  • the registration may be a two-step process: rigid, followed by non-rigid.
  • the non-rigid registration may be based on the segmented surface, where a shape model can be used to segment the images in different modalities and the segmented surfaces registered together.
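  • One possible realization of this two-step scheme (an assumption of this sketch, not the patent's prescribed algorithm): rigidly align the segmented surfaces first, for example with the procrustes_align routine sketched earlier, then smooth the residual per-vertex displacements to obtain a regularized non-rigid deformation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def nonrigid_refine(src, dst, sigma=5.0):
    """Non-rigid refinement after rigid alignment of corresponded surfaces.

    Smoothing along the vertex ordering is a simplification; a real
    implementation would regularize over the surface mesh neighborhood.
    """
    disp = dst - src                         # per-vertex residual displacement
    disp_smooth = gaussian_filter1d(disp, sigma=sigma, axis=0)
    return src + disp_smooth
```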
  • the presented method may also be used to study local tissue deformations over time. Many diseases manifest themselves in abnormal tissue deformation and the local deformation can be used for interpreting tissue conditions.
  • a patient image obtained from a previous visit may be registered with the image obtained from the repeat visit and the local deformation may be used to observe abnormal local volume changes in tissue.
  • the local deformation may be a useful indicator of locations of cancer growth.
  • the registration may be mutual information based, intensity based, surface based, landmark based or a combination of any of these.
  • the registration provides the correspondence between the previous image and the current image.
  • the Jacobian value for the deformation map may then be computed and overlaid on the 3-D prostate volume to look for abnormal localized deformations, and any such locations found must be sampled.
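  • A sketch of that computation for a dense displacement field produced by the registration; the (3, X, Y, Z) field layout and unit spacing default are assumptions. The deformation is phi(x) = x + u(x), so its Jacobian is I + grad(u); determinant values above 1 flag local expansion and values below 1 flag shrinkage.

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of the deformation x -> x + disp(x).

    disp : (3, X, Y, Z) displacement field from the registration
    """
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):                       # displacement component
        grads = np.gradient(disp[i], *spacing)
        for j in range(3):                   # derivative direction
            J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)                  # (X, Y, Z) local volume change map
```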
  • the above-noted utilities provide a number of advantages.
  • One primary advantage is that the utilities can construct 3-D images using 2-D images that are acquired in an unconstrained manner (e.g., freehand) where uniform angular or linear spacing between images is no longer required. Likewise, this may result in better workflow as a user no longer needs to obtain images in a constrained environment.
  • Another advantage is that the utilities allow for flexibility of resolution.
  • the user can scan different regions with different resolutions, thereby providing more flexibility and utility. This also permits fine tuning of an image in real-time. If the user desires better resolution for an acquired image, they may scan more images of the same anatomy and have those images applied to the reconstructed image.

Abstract

Presented are systems and methods that allow for interpolation of a 3-D volume from arbitrarily oriented 2-D medical images. The interpolation of a 3-D volume from arbitrarily oriented 2-D images reduces or eliminates most constraints on image acquisition, thereby allowing for, inter alia, freehand manipulation of an image acquisition device (e.g., an ultrasound transducer). Related utilities involve the use of prior information about a specific object of interest to interpolate a surface (e.g., a 3-D surface) of the object from limited information obtained from very few 2-D images.

Description

FIELD
The present disclosure pertains to the field of medical imaging, and more particularly to the registration of arbitrarily aligned 2-D images to allow for the generation/reconstruction of a 3-D image/volume.
BACKGROUND
Medical imaging modalities, including X-ray, magnetic resonance (MR), computed tomography (CT), ultrasound, and various combinations of these and other image acquisition techniques, are utilized to provide images of internal patient structure for diagnostic purposes as well as for interventional procedures. Often, it is desirable to utilize multiple two-dimensional (i.e., 2-D) images to generate (e.g., reconstruct) a three-dimensional (i.e., 3-D) image of an internal structure of interest.
2-D image to 3-D image reconstruction has been used for a number of image acquisition modalities (such as MRI, CT and ultrasound) and image based/guided procedures. These images may be acquired as a number of parallel 2-D image slices/planes or rotational slices/planes, which are then combined together to reconstruct a 3-D image volume. Generally, the movement of the imaging device has to be constrained such that only a single degree of freedom is allowed (e.g., rotation). This single degree of freedom may be rotation of the equipment or a linear motion. During such a procedure, the presence of any other type of movement will typically cause the registration of 2-D images in 3-D space to be inaccurate. This presents some difficulties in handheld image acquisition, where rigidly constraining movement of an imaging device to a single degree of freedom is difficult if not impossible. Further, constraining an imaging device to a single degree of freedom may also limit the image information that may be acquired. This is true for handheld, automated and semi-automated image acquisition. Depending upon the constraints of the image acquisition methods, this may limit the use or functionality of the acquisition system for 3-D image generation.
Many 3-D reconstruction techniques currently require a significant number of 2-D images in order to achieve reasonably good resolution. This typically results in a slow scan process and/or slow 3-D image reconstruction. The requirement of a large number of 2-D images may also hinder workflow and/or cause patient discomfort. Further, in many imaging situations, the actual region of interest is generally much smaller than the actual image acquired, resulting in unnecessary computational overhead for interpolation at regions outside the object of interest. During a medical procedure such as image-based biopsy or therapy, a user is generally interested in only one organ and not in the background information. Further, in operating room environments the time allocated for imaging procedures and/or image-guided procedures is minimal due to time pressures on surgeons to perform more procedures in the allotted time. Accordingly, it is desirable to perform 3-D image generation in a manner that reduces the constraints on image acquisition and allows for quickly generating a 3-D image, while also providing sufficient resolution to perform a desired procedure.
SUMMARY
The invention presented herein solves a number of problems using novel systems and methods (i.e., utilities). These utilities allow for interpolation of a 3-D volume from arbitrarily oriented 2-D images. The interpolation of a 3-D volume from arbitrarily oriented 2-D images reduces or eliminates most constraints on image acquisition, thereby allowing for, inter alia, freehand manipulation of an image acquisition device (e.g., an ultrasound transducer). In one arrangement, the utilities maintain the relationship between acquired 2-D images and a prospective 3-D image volume via a tracker that tracks the coordinates and orientation of the imaging plane of each 2-D image. This can be done using any type of tracker, including but not limited to optical, magnetic and/or mechanical trackers. It will be appreciated that the interpolation methods are not limited by the method of tracking, imaging modality or type of procedure. The interpolation method is not limited to ultrasound but is applicable to other modalities such as MRI, CT, PET, SPECT and their fusion.
A related utility involves the use of prior information about a specific object of interest to interpolate a surface (e.g., 3-D surface) of the object from limited information obtained from very few 2-D images. In this utility, shape statistics of a structure/object of interest are computed beforehand and stored in a computer. The prior shape statistics may include a mean shape of the object and/or the statistics over a large number of samples of the object of interest. This allows for generation of a shape based deformation model. Deformation of the shape based model may be guided (in real time) and constrained by the actual deformation statistics from the shape priors. The shape model is used to deform the mean shape of the object to the available 2-D images/planes through, for example, hierarchical optimization over the modes of variation of the object. The deformation may be guided by intensity gradients over the available 2-D images of arbitrary orientation. The inventive aspects may be implemented in processing systems that are integrated into medical imaging devices/systems and/or be implemented into stand-alone processing systems that interface with medical imaging devices/systems.
In one aspect, a utility is provided for allowing interpolation and/or reconstruction of a 3-D image based on arbitrarily oriented 2-D image planes obtained from a 2-D imaging device. The method includes obtaining at least first and second 2-D images of an internal object of interest. Each 2-D image includes an image plane and a first 3-D point of reference. Typically, the image planes and 3-D points of reference are different for each image. Pixel information (e.g., intensity information) for each of the 2-D images is translated into a common 3-D volume. Based on the pixel information disposed within the common 3-D volume, a 3-D image of the object of interest is generated.
Translating the pixel information of the 2-D images into a common 3-D volume may include applying a coordinate transform to the 2-D images. As may be appreciated, this may require obtaining or otherwise determining vector information for use with the images. Typically, the 3-D point of reference may be provided by a tracker assembly that provides location information for a medical imaging device that generates the 2-D image. Furthermore, information regarding the depth and orientation of the image in relation to the reference point may also be obtained. Accordingly, a normal may be determined for the plane of the 2-D image. Utilizing this information, a transformation matrix may be applied to the vector information associated with the 2-D images. Accordingly, such transformation may allow for translating pixels within the 2-D images into the common 3-D volume. The method may further include interpolating pixel intensities from the 2-D images to discrete locations within the 3-D volume.
It will be appreciated that the ability to translate arbitrary 2-D images into a common 3-D volume avoids the need for a constrained image acquisition procedure. That is, no mechanical assembly is required for obtaining 2-D images. As a result, an imaging device may be freely maneuvered (e.g., freehand), which may result in better workflow.
Once a 3-D image is generated from the pixel information of the 3-D volume, the user may selectively obtain additional 2-D images. This information may be incorporated into the 3-D image. This may allow a user to focus on specific parts of anatomy by acquiring samples non-uniformly. This provides the user with flexibility to acquire images at better resolution in some regions while acquiring images at lower resolution in others. This not only enhances the resolution at desired locations but also reduces the unnecessary computational overhead of acquiring samples from locations where lower resolution may be adequate. Likewise, this may reduce the time required to adequately scan an object of interest.
In a further aspect, a shape model may be fit to pixel information in a common 3-D volume in order to define a 3-D surface of an internal object of interest. Accordingly, a plurality of arbitrarily aligned 2-D medical images may be obtained for an internal object of interest. Pixel intensity information from each of the 2-D images may be translated into a 3-D volume. Accordingly, a predefined shape model may be fit to the pixel intensity information such that the shape model defines a 3-D surface for the object of interest.
In one arrangement, the predefined shape model may be generated based on an average of a population of a corresponding object. For instance, for prostate imaging, such a shape model may be generated from a training set of prostate images. Likewise, shape statistics for the training set population may be utilized in order to fit the predefined shape model to the pixel intensity information. In a further arrangement, the shape model may be constrained by boundaries identified within the 2-D images. In this regard, segmentation may be performed on the 2-D images in order to identify one or more boundaries therein. These boundaries may then be utilized as constraints for the shape model.
In one arrangement, a principal component analysis is utilized to identify the largest modes of variation for the shape model. Accordingly, by identifying the largest modes of variation, the shape model may be fit to the pixel intensities utilizing fewer variables.
In another aspect, a utility is provided for registering current two-dimensional images with previously stored three-dimensional images. In this regard, during an imaging procedure, as a user manipulates an imaging instrument (e.g., freehand), the previous few image frames may be kept in memory. These frames may then be used for motion correction of the current imaging device location relative to a previously acquired/stored three-dimensional image. Such motion correction may be necessitated by patient movement, motion of anatomical structures due to the procedure, and/or device movement or miscalibration. Initially, a series of two-dimensional images is obtained. The series of two-dimensional images may be obtained in real-time such that the last image is the most current image. Pixel information from each of the two-dimensional images may be translated into a common three-dimensional volume in order to generate a current three-dimensional image using the series of two-dimensional images. This current three-dimensional image of the object of interest may be utilized to align the most current two-dimensional image with the previously stored three-dimensional image.
In one arrangement, use of the current three-dimensional image may include registering the current three-dimensional image with the previous three-dimensional image in order to identify orientation correspondence therebetween. Accordingly, the most current two-dimensional image may be aligned based on this information.
In one arrangement, the two-dimensional images are maintained in a buffer in the computerized imaging device. Such images may be maintained on a first-in, first-out basis. For instance, the buffer may maintain the previous five to ten images, wherein the oldest image is continuously replaced by the most current image.
Generally, the utilities also allow for using more information during navigation. That is, the presented utilities use more information than most conventional 2-D ultrasound based navigation systems. This is done by keeping some of the previous image frames in a buffer. The previous frames together with the current frame provide at least partially 3-D information and thus provide a better and more robust solution when correlating the current 2-D image with an earlier acquired 3-D image.
The utilities also allow for non-rigid motion correction. That is, in addition to correlating a current 2-D image with a 3-D scan, the additional information can also be used to perform non-rigid registration. Since more frames are used, more deformation can be captured by the deformation model.
The utilities also permit shape based boundary interpolation. In this regard, a mean shape model may be fit to the boundaries of an object based on much less information compared to an explicit segmentation method. The mean shape is used as the template for fitting onto the surface. By definition, the mean shape is most similar to the population as a whole and thus is ideal as the deformation template. Further, the deformation statistics of the training samples (i.e., those used to form the shape model) represent actual deformation modes of the object within the population, and thus the results are representative of the population.
Further, the reduced dimensionality of the shape model may allow for faster image generation speeds. In one arrangement, the presented utilities use a principal component analysis (PCA), which identifies and optimizes over a smaller number of parameters while capturing population statistics well. Furthermore, hierarchical optimization over the modes of variation ensures that coefficients for larger modes are optimized first, with smaller modes of variation following. This adds to the robustness and stability of the utilities and also avoids small local minima.
The utilities are also adaptable to image fusion with other imaging modalities. That is, the utilities are easily extensible to include fusion of images (e.g., ultrasound images) with other modalities such as elastography, MR spectroscopy, MRI, etc. The interpolated frames in the 3-D space can be correlated with the images from the other modality and the correspondence computed between them.
The utilities also provide more information for tracking in 3-D. Due to the dynamic nature of the placement of 2-D data in 3-D and the arbitrariness of the orientations, the 2-D frames can be kept in a memory buffer of the computer and called upon for motion compensation in 3-D. Most current techniques use only the live 2-D image to correlate the current field of view, which is an ill-conditioned problem. The extra information present in the form of previous frames makes it better conditioned.
The utilities can be used to compute local deformation at every place inside the object of interest, which can be used for clinical interpretation of the nature and progression of a disease. Many diseases, including cancer, manifest themselves as changes in tissue deformation characteristics or changes in tissue volume over time, which can be easily captured through computation of local deformation.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 illustrates a medical imaging system utilized for ultrasound imaging.
FIGS. 2A and 2B illustrate acquisition of medical images having a single degree of freedom.
FIGS. 3A-3D illustrate arbitrarily aligned 2-D images and the use of those images to identify the boundaries of an internal object of interest.
FIGS. 4A and 4B illustrate a coordinate system for a 2-D imaging plane.
FIG. 5 illustrates a process for reconstructing a 3-D image from arbitrarily oriented 2-D images.
FIG. 6 illustrates a process for use of arbitrarily oriented 2-D images in conjunction with a shape model.
FIG. 7 illustrates generation of a shape model for identifying boundaries of a 3-D object.
DETAILED DESCRIPTION
Reference will now be made to the accompanying drawings, which assist in illustrating the various pertinent features of the present disclosure. Although the present disclosure is described primarily in conjunction with transrectal ultrasound imaging for prostate imaging, it should be expressly understood that aspects of the present invention may be applicable to other medical imaging applications. In this regard, the following description is presented for purposes of illustration and description.
As presented, the invention is directed towards systems and methods for interpolation and reconstruction of a 3-D image from 2-D image planes/frames/slices obtained in arbitrary orientation during, for example, an unconstrained scan procedure. Also included is a method for adaptively improving interpolation based on dynamic addition of more 2-D frames to an imaging buffer. Systems and methods are also provided for using shape priors to interpolate the surface of an internal object of interest using intensity information of a limited number of 2-D image planes that have the object of interest in their field of view. Further, a combination of the above systems and methods allow arbitrary (e.g., freehand) scanning of a few 2-D images and then fitting a surface for the object of interest using intensity information from the scanned images.
The reconstruction method pertains to all types of 2-D image acquisition methods under various modalities, and specifically to 2-D image acquisition methods used while performing an image-guided diagnostic or surgical procedure. It will be appreciated that such procedures include, but are not limited to, ultrasound-guided biopsy of various organs such as the prostate (trans-rectal and trans-perineal), liver, kidney, breast, etc., brachytherapy, ultrasound-guided laparoscopy, ultrasound-guided surgery and image-guided drug delivery procedures.
As noted above, most current methods for reconstructing a 3-D image from 2-D image planes assume some type of uniformity (e.g., constraint) in image acquisition. For example, most previous methods assume (or require) that the 2-D images be obtained as parallel slices or be displaced from each other through an angle while meeting at one fixed axis.
FIG. 1 illustrates a transrectal ultrasound probe 10 that may be utilized to obtain a plurality of two-dimensional ultrasound images of the prostate 12. As shown, the probe 10 may be operative to scan an area of interest. The probe 10 may also include a biopsy gun attached to the probe. Such a biopsy gun may include a spring-driven needle that is operative to obtain a core from a desired area within the prostate and/or deliver medicine (e.g., a brachytherapy seed) to a location within the prostate.
In an automated arrangement, the probe may be affixed to a positioning device (not shown) and a motor may sweep the transducer of the ultrasound probe 10 over a radial area of interest (e.g., around a fixed axis 70 of FIG. 2A). Accordingly, the probe 10 may acquire a plurality of individual images while being rotated through the area of interest. Each of these individual image slices may be represented as a two-dimensional image. Alternately, the probe 10 may be linearly advanced to obtain a plurality of uniformly spaced images, as illustrated in FIG. 2B. In both instances, the resulting 2-D image sets may be registered to generate a three-dimensional image. To generate a highly accurate 3-D reconstruction, previous interpolation techniques have typically depended heavily on tight tolerances on deviation from these assumptions (i.e., that all images are fixed except for a single degree of freedom). However, it is often desirable to utilize a handheld probe to acquire images, for example, just prior to performing a procedure.
Such handheld acquisition, however, often introduces multiple degrees of freedom into the acquired 2-D images. For example, FIG. 3A illustrates a plurality of 2-D images 80 a-n acquired for an object of interest (e.g., prostate) where the images are not aligned to at least one axis. Further, while performing a procedure or delivering a drug to a specific part of an organ (or performing focused surgery concentrated in narrow regions, such as in high-frequency ultrasound methods), a user may not need 3-D information at the same resolution inside and outside the object. Specifically, the procedure may not require the user to go through a tedious and constrained acquisition of 2-D images, which takes longer because it produces unnecessarily high resolution everywhere in the reconstructed 3-D image, or, conversely, yields a uniformly low-resolution image. Instead, higher resolution at the region of interest and lower resolution elsewhere may in some instances be sufficient.
Aspects of the presented systems and methods provide such an option while also allowing for conventional, constrained interpolation and reconstruction strategies. In this method, 2-D images are obtained in an unconstrained fashion (e.g., using handheld imaging devices) while the imaging device is manipulated to scan the object. The user may scan the object in a freehand fashion in various different orientations. See, e.g., FIG. 3A. The orientation and location of the imaging planes are measured using a mechanical or magnetic tracker 14, which may be off-the-shelf or designed specifically for the application. See FIG. 1. The position of the tracker 14 is recorded in relation to a known reference by a reading device 16, which outputs the identified location of the tracker 14 to an imaging system 30 that also receives images from the imaging device 10. The configuration of the reading device may depend upon the type of tracker 14 utilized. For instance, when utilizing a magnetic tracker, the reader may be a magnetic field sensor interfaced to a computer or recording device via an interface box. Mechanical trackers may have a combination of rotary and linear encoders to track the transducer attached to the tracker. Optical trackers have an attachment containing LEDs, and their position in space is computed using images captured from two cameras at different locations.
The imaging system 30 is operative to correlate the recorded position of the tracker with the corresponding acquired image. As will be discussed herein, this allows non-aligned/arbitrary images to be utilized for 3-D image reconstruction. That is, the imaging system 30 utilizes the acquired images to populate the 3-D image volume as per their measured locations. In addition to reconstructing the 3-D volume after images are acquired, the method also allows for dynamic refinement in desired regions. That is, a user may acquire additional images at desired locations and the reconstruction method interpolates the additional images into the 3-D volume.
While a significant number of 2-D images typically needs to be acquired to generate a 3-D image of reasonably good resolution for an object of interest, obtaining enough 2-D images for such resolution can be time consuming. Moreover, it remains a challenge to extract a surface from the images quickly enough to be usable. Without extraction of the surface of the object, some of the key advantages of 3-D images are lost, since the user can no longer visualize the anatomy in 3-D space with the background suppressed. As a result, correlating a 2-D live image during the procedure with the previously acquired 3-D image is rendered useless. To take full advantage of a live image corresponding to a previously imaged 3-D anatomical object, the operator has to 1) acquire a series of 2-D images at a reasonably good resolution and 2) after reconstruction, extract the surface of the anatomical object from the 3-D image. Both processes are time consuming if meaningful results are to be obtained.
A second aspect of the present invention addresses this issue by allowing the user to collect images in a simplified manner (e.g., freehand, just before a procedure). In this aspect, instead of interpolating the pixel intensity values in the 3-D image volume from the scanned 2-D image planes, the object itself is interpolated using shape priors. Shape priors refer to a set of information 60 collected previously for a particular anatomical object, including the mean shape of the object along with statistical information. The statistical information provides the modes of variation in the shape of the object and represents anatomically meaningful interpretations of the shape variability within the population.
Interpolation of 2-D Images in Arbitrary Orientation.
While imaging freehand or using any constrained or unconstrained method, the tracker provides full information about the 2-D plane being imaged. A normal to the imaging plane and a reference line passing through a known point in the imaging plane are sufficient to describe the imaged plane fully, based on the geometry of the imaging device. For example, as illustrated in FIGS. 4A and 4B, which illustrate an end-fire TRUS ultrasound transducer 10, the position of the center of the transducer tip (i.e., provided by a tracker), along with the normal to the image plane 22 and the central axis 24 of the transducer 20, is sufficient to place the 2-D image in a 3-D volume, assuming that other characteristics such as the depth setting and geometry (rectangular, fan, etc.) of the 2-D imaging plane are known. In such a case, the tracker output can be measured to provide the location and orientation of the imaging equipment, which in turn provides the normal to the imaging plane, the central axis 24 of the transducer 20 and the tip of the transducer 20. Based on the geometry of the observed 2-D image, its corresponding location in 3-D can be determined using the method presented.
As illustrated, FIG. 4B shows a 2-D imaging slice 22 and its associated coordinate system. Let $X_i' = [x_{1i}'\ x_{2i}'\ x_{3i}']$ represent the coordinate system of the i-th acquired 2-D image as shown in the figure, where $x_{3i}'$ points out of the plane of the 2-D image slice 22. Then, in the frame of reference of the image coordinates, $x_{3i}'$ is always zero and the center of the transducer tip is at the origin.
FIG. 4A illustrates the coordinate system for the 3-D image, where the origin is selected by the user at the time of initialization of an imaging scan by pointing roughly to the center of the object 28 to be imaged. As shown in the figure, let $X = [x_1\ x_2\ x_3]^t$ represent the coordinates in the frame of reference of the resulting 3-D image.
In order to interpolate the images to fill a 3-D space and thereby construct a 3-D image, the 2-D image coordinate system has to be placed in the 3-D image coordinate system. Such a process 500 is illustrated in FIG. 5. Initially, 2-D images, which may have arbitrary orientations, are obtained (502). Tracking information for each image is likewise obtained (504). Then, each pixel in each 2-D image is placed (506) in a corresponding location in a 3-D volume. This is done by first placing the origin of each 2-D image in the frame of reference of a common 3-D volume and then applying the coordinate transformation. The coordinate information is obtained by tracking the location and orientation of the imaging transducer using an unconstrained tracker such as a magnetic tracker, freehand mechanical tracker, optical tracker, etc. The coordinate transformation simply involves a rotation matrix obtained by solving the following set of linear equations:
$$R\,n_i' = n, \qquad R\,t_i' = t, \qquad R\,b_i' = b. \qquad \text{(Eq. 1)}$$
where $n_i' = [0\ 0\ 1]^t$ is the unit vector in the direction of $x_{3i}'$, $t_i' = [0\ 1\ 0]^t$ is the unit vector in the direction of $x_{2i}'$ and $b_i' = [1\ 0\ 0]^t$ is the unit vector in the direction of $x_{1i}'$, all in the frame of reference of the 2-D image. $n$ and $t$ represent the measured normal and tangent unit vectors for the i-th slice, and $b$ represents the unit bi-normal given by the cross-product $n \times t$. This information is provided by the tracker.
The rotation matrix $R$ computed above is used to transform coordinates of the 2-D image into coordinates of the 3-D image. Let $O$ be the origin of the frame of reference of the 3-D image and $O'$ be the origin of the frame of reference of the 2-D image. Then, the overall transformation of a coordinate point $(x_i', y_i', z_i')$ from the coordinate system of the 2-D image into the frame of reference of the 3-D image is given by:
$$\begin{pmatrix} x_i \\ y_i \\ z_i \\ 1 \end{pmatrix} = \begin{bmatrix} & & & \Delta x_i \\ & R & & \Delta y_i \\ & & & \Delta z_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} x_i' \\ y_i' \\ z_i' \\ 1 \end{pmatrix}, \qquad \text{where } \begin{pmatrix} \Delta x_i \\ \Delta y_i \\ \Delta z_i \end{pmatrix} = O' - O. \qquad \text{(Eq. 2)}$$
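By way of illustration, the placement step of Eqs. 1 and 2 can be sketched in a few lines of Python. This is a non-authoritative sketch, not the disclosed implementation: numpy is assumed, and the function names and array layouts are illustrative only.

```python
import numpy as np

def rotation_from_tracker(n, t):
    """Solve Eq. 1 for the rotation matrix R. Because the image-frame unit
    vectors b', t', n' are the standard basis e1, e2, e3, R is simply the
    matrix whose columns are the measured b, t and n."""
    n = np.asarray(n, dtype=float)
    t = np.asarray(t, dtype=float)
    b = np.cross(n, t)                    # unit bi-normal, b = n x t
    return np.column_stack([b, t, n])     # R e1 = b, R e2 = t, R e3 = n

def place_in_3d(R, origin_2d, origin_3d, pts_2d):
    """Apply the homogeneous transform of Eq. 2 to N x 2 in-plane pixel
    coordinates (x', y'); z' = 0 within the imaging plane."""
    delta = np.asarray(origin_2d) - np.asarray(origin_3d)   # O' - O
    pts = np.column_stack([pts_2d, np.zeros(len(pts_2d))])  # append z' = 0
    return pts @ R.T + delta                                # N x 3 points
```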
At this point, the real-valued locations of the 2-D pixels are known (508) in a common 3-D volume. The transformed coordinates do not, in general, lie on a discrete lattice, so the intensities at the transformed "real" coordinate locations need to be interpolated (510) onto the neighboring discrete locations. First, the neighboring discrete locations are computed and variables defined as follows:
$$x_{li}^d = \lfloor x_i \rfloor = \text{largest integer not greater than } x_i,$$
$$x_{hi}^d = \begin{cases} x_{li}^d + 1, & \text{if } x_{li}^d \neq x_i, \\ x_{li}^d, & \text{if } x_{li}^d = x_i, \end{cases}$$
$$dx_{li} = x_i - x_{li}^d, \qquad dx_{hi} = x_{hi}^d - x_i.$$
Let $y_{li}^d$, $y_{hi}^d$, $dy_{li}$, $dy_{hi}$ and $z_{li}^d$, $z_{hi}^d$, $dz_{li}$, $dz_{hi}$ be defined similarly for $y_i$ and $z_i$, respectively. Then, initialize all the intensities in the reconstructed image to zero and define the following weights on the pixel intensities of 2-D frame i:
$$\begin{aligned}
w_i(x_{li}, y_{li}, z_{li}) &= (1 - dx_{li})(1 - dy_{li})(1 - dz_{li})\\
w_i(x_{li}, y_{li}, z_{hi}) &= (1 - dx_{li})(1 - dy_{li})(1 - dz_{hi})\\
w_i(x_{li}, y_{hi}, z_{li}) &= (1 - dx_{li})(1 - dy_{hi})(1 - dz_{li})\\
w_i(x_{li}, y_{hi}, z_{hi}) &= (1 - dx_{li})(1 - dy_{hi})(1 - dz_{hi})\\
w_i(x_{hi}, y_{li}, z_{li}) &= (1 - dx_{hi})(1 - dy_{li})(1 - dz_{li})\\
w_i(x_{hi}, y_{li}, z_{hi}) &= (1 - dx_{hi})(1 - dy_{li})(1 - dz_{hi})\\
w_i(x_{hi}, y_{hi}, z_{li}) &= (1 - dx_{hi})(1 - dy_{hi})(1 - dz_{li})\\
w_i(x_{hi}, y_{hi}, z_{hi}) &= (1 - dx_{hi})(1 - dy_{hi})(1 - dz_{hi})
\end{aligned}$$
Let $I_i(x, y, z)$ be the intensity of the transformed image at coordinate $(x, y, z)$ based on the i-th 2-D image frame. Then, the intensity at a pixel (voxel) location $(x, y, z)$ can be dynamically computed as:
$$I(x, y, z) = \frac{\sum_i w_i I_i(x, y, z)}{\sum_i w_i} \qquad \text{(Eq. 3)}$$
where the summation is over all frames $i$ that contribute to the voxel $(x, y, z)$. This results in a space interpolation (512) of 3-D image voxels. As more 2-D frames contribute to the same location in the discrete lattice of the 3-D image volume, the intensity is refined to produce better results.
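A minimal sketch of this dynamic accumulation might keep the numerator and denominator of Eq. 3 in separate volumes so that each new frame simply adds its weighted contributions. The Python below is illustrative only; the function name, array arguments and omitted bounds handling are assumptions, not the disclosed implementation.

```python
import numpy as np

def splat_pixel(acc, wsum, xyz, intensity):
    """Distribute one transformed pixel intensity onto the 8 neighboring
    lattice voxels using the trilinear weights defined above. Keeping the
    numerator (acc) and denominator (wsum) of Eq. 3 in separate volumes
    lets later frames refine the same voxels dynamically.
    Bounds checks at the volume edges are omitted for brevity."""
    lo = np.floor(xyz).astype(int)     # (x_li, y_li, z_li)
    d = xyz - lo                       # (dx_li, dy_li, dz_li)
    for cx in (0, 1):
        for cy in (0, 1):
            for cz in (0, 1):
                w = ((d[0] if cx else 1 - d[0]) *
                     (d[1] if cy else 1 - d[1]) *
                     (d[2] if cz else 1 - d[2]))
                acc[lo[0] + cx, lo[1] + cy, lo[2] + cz] += w * intensity
                wsum[lo[0] + cx, lo[1] + cy, lo[2] + cz] += w

# Reconstructed intensity (Eq. 3): I = acc / wsum wherever wsum > 0.
```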
Once the images are acquired, a Gaussian interpolation filter or other appropriate filter (514) with an appropriate window size (3³ to 7³ voxels, based on image resolution) can be used to interpolate (516) the results to voxels that have not been initialized using Eq. (3) above. The result is that a 3-D image volume may be constructed (518) from the acquired images. If the user wishes to scan portions of the image at better resolution, they may acquire more images from the area of interest and the method will dynamically update the intensity values at that location as per Eq. (3).
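One plausible realization of this gap-filling filter step, assuming the separate numerator/denominator volumes from the sketch above, is to smooth both with the same Gaussian kernel and divide (a normalized-convolution style sketch using scipy; the patent does not prescribe this particular formulation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fill_uninitialized(acc, wsum, sigma=1.0):
    """Interpolate intensities into voxels no 2-D frame has touched by
    smoothing the numerator and denominator of Eq. 3 with the same Gaussian
    kernel and dividing. sigma is chosen so the effective window spans
    roughly 3^3 to 7^3 voxels, depending on image resolution."""
    num = gaussian_filter(acc, sigma)
    den = gaussian_filter(wsum, sigma)
    out = np.zeros_like(num)
    np.divide(num, den, out=out, where=den > 1e-12)  # avoid division by zero
    return out
```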
The method thus interpolates a 3-D image from 2-D images acquired in arbitrary orientations and thereby permits freehand scanning. This may improve workflow, as it removes the need for any locking or constrained tracker. In addition, the dynamic interpolation allows the user to improve the resolution on the fly by acquiring more images from a particular area of interest. The flexibility to choose different resolutions for different regions is another key advantage of the method.
Using Shape Information to Interpolate the Object Surface.
Shape priors are utilized to perform interpolation of a surface for the 3-D image volume. In this method, shape statistics are first generated and analyzed from a number of actual data sets of the anatomical object in question (e.g., prostate). To do this, a number of images of the anatomical object are collected. The boundaries of the object of interest are then extracted either manually by expert segmentation, or using a semi-automatic or automatic method. The surfaces are then normalized so as to remove translation, rotation and scaling artifacts. The normalized surfaces are then averaged together by computing the mean position of each vertex in the template chosen for computing the mean shape. The shape obtained is run through the same process until convergence. This provides the mean shape of the object, and this shape is then registered with all the other shapes in the sample dataset. The registration provides deformation details at each vertex, and the statistics of these deformations are then used to drive the fitting of the mean shape to the shape of the subject.
Previously computed statistics may be used to deform the mean shape to fit the 3-D image volume, which may be sparse because it is generated using very few 2-D image planes and/or because the 2-D images were obtained in different orientations. Conventional segmentation techniques for surface extraction rely heavily on the resolution of the images. Although 2-D segmentation may be possible on individual frames, combining these frames to generate a surface in 3-D using a heuristic approach typically produces artifacts in the interpolated surface. Further, due to the limited number of 2-D images, boundaries of the 3-D volume based solely on those images may not provide a useful estimation of the actual boundary of the object. See FIG. 3B. With the presented method, however, the limited image information is supported by prior knowledge about the shape, and the shape prior information is used to fit an entire surface in 3-D such that the surface still represents the boundaries in the sampled 2-D images but is defined everywhere. FIG. 6 shows the overall scheme for combining the two methods.
As shown, the process (600) includes acquiring 2-D images (602) and tracking information (604). These elements are utilized to interpolate/reconstruct (606) a 3-D image volume (608), as set forth above. Object shape statistics (610) associated with the object of the 3-D image volume (608) are utilized to fit (612) the mean shape to the 3-D image volume. This generates an object surface (614) that may be output to, for example, a monitor 40 (See FIG. 1) and/or utilized for an image guided procedure or therapy.
The first step is to generate population statistics, including a mean shape, from a collection of samples from the population of the object of interest. This may be done once and may be performed off-line (i.e., at a time before an imaging procedure). The population may correspond to a specific organ, for example the prostate, in which case a number of images acquired from the prostates of different individuals may be used as samples. A larger sample size can capture more variability in the population, and hence a reasonably large sample size should be used to capture the shape statistics. The population may consist of images from different modalities.
FIG. 7 shows the computation of the mean shape and the capture of shape statistics from the training dataset (702). First, the surface of the object is extracted/segmented (704) for all the images in the sample dataset. This may be done manually, semi-automatically or automatically. After extraction of the surfaces (706) from all the images through segmentation, the image that best represents the population is chosen (708) as the template image (710). The surface of the template is aligned/registered (714) to the current target image (712) using rotation, translation and anisotropic scaling, and a mean surface is computed (716). The mean surface is then deformed into every other image in the training set and updated by computing an average again (720) to generate an updated mean shape (722). This process is repeated until convergence to generate a final mean shape (730). A more complete description of computing a mean shape is set forth in co-pending U.S. patent application Ser. No. 11/740,807, entitled "Improved System and Method for 3-D Biopsy," the contents of which are incorporated herein by reference.
To generate population statistics, the mean surface is warped into all the surfaces in the training set such that the vertex locations of the undeformed mean shape and each deformed mean surface are known. Let $V_i$ represent the set of vertices for image $i$ and $V_\mu$ represent the set of vertices for the mean shape; the population covariance matrix is then computed as $S = (V_i - V_\mu)^t (V_i - V_\mu)$. The eigenvectors of the covariance matrix represent the modes of variation of the shape within the training data. The eigenvalues can be arranged in descending order such that the eigenvectors corresponding to the first few modes of variation represent most of the variability in the population. Specifically, let $Sp = \lambda p$, where $\lambda$ represents the eigenvalues and $p$ the corresponding eigenvectors. Then, the first few eigenvectors may be chosen to represent most of the variability in the population. In addition, this knowledge can be used to generate more objects from the population, e.g., $V' = V_\mu + pc$, where $c$ is a set of weights that determine the deviations from the mean shape along the modes of allowable variation.
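For illustration, the covariance and eigen-analysis described above might be sketched as follows, assuming the training surfaces have already been brought into vertex correspondence and flattened into rows (the names and array layout are illustrative, not the disclosed system):

```python
import numpy as np

def shape_statistics(V_all, n_modes=10):
    """Mean shape and principal modes of variation from registered training
    surfaces. V_all has shape (num_samples, 3 * num_vertices); each row is
    one training surface flattened into vertex coordinates."""
    V_mu = V_all.mean(axis=0)
    D = V_all - V_mu                      # deviations from the mean shape
    S = D.T @ D / (len(V_all) - 1)        # population covariance matrix
    lam, p = np.linalg.eigh(S)            # solves S p = lambda p
    order = np.argsort(lam)[::-1]         # descending eigenvalues
    return V_mu, lam[order][:n_modes], p[:, order][:, :n_modes]

def synthesize(V_mu, p, c):
    """Generate a plausible new shape V' = V_mu + p c from mode weights c."""
    return V_mu + p @ c
```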
While the theory of shape models is well established, the presented method uses it to fit a mean surface to a sparse acquired image set such that the entire object can be represented using just a small number of 2-D images. Generally, performing segmentation directly on the image requires reasonably good image resolution, which means a large number of images must be acquired and the 3-D image space filled before segmentation can be done. In addition, segmentation in 3-D is computationally demanding and may not be completed within a desirable time. As a result, some techniques resort to segmentation in 2-D and then combine the 2-D segmentations into a surface in 3-D. Such techniques, while faster, do not fully use the available information because they neglect the 3-D interconnectivity.
In the presented method, a few images are sufficient to perform a good segmentation of the object. This is achieved by first acquiring a few images that capture the object in various orientations and then placing the mean shape 900 into the reconstructed 3-D image 904. See FIG. 3C. The reconstructed 3-D image will be very sparse, with mostly zero values. However, the available 2-D information in various orientations can be used to deform the mean shape 900 into the shape of the scanned object by letting the mean surface deform while following the population deformation characteristics (modes of variation). The deformation optimizes the value of the coefficients "c" such that $V' = V_\mu + pc$ maximizes the intensity gradient at the boundaries of the surface. For instance, the edges 910 of the object, as represented in the acquired image frames, may be utilized as constraints for fitting the mean shape. These edges may be identified using any appropriate segmentation process. In any case, use of the edges 910 allows the mean shape to be fitted closely to the actual object boundaries. See FIG. 3D.
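A hedged sketch of this coefficient optimization follows, assuming a caller-supplied function that samples the gradient magnitude of the sparse volume at vertex positions; the hierarchical larger-modes-first strategy is approximated here with two stages, which is an illustrative simplification rather than the disclosed scheme:

```python
import numpy as np
from scipy.optimize import minimize

def fit_shape(V_mu, p, grad_mag_at, n_coarse=3):
    """Optimize the mode weights c so that V' = V_mu + p c lies on strong
    intensity gradients of the sparse volume. grad_mag_at(vertices) is
    assumed to sample the gradient magnitude of the reconstructed 3-D image
    at N x 3 vertex positions."""
    c = np.zeros(p.shape[1])
    for k in (n_coarse, p.shape[1]):      # coarse modes first, then refine
        def cost(ck, k=k):
            V = V_mu + p[:, :k] @ ck
            return -np.sum(grad_mag_at(V.reshape(-1, 3)))  # maximize gradient
        c[:k] = minimize(cost, c[:k], method="Powell").x
    return V_mu + p @ c
```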
Motion Correction in 3-D
In a number of procedures, an image of one modality is scanned in 3-D before the procedure (pre-op) and a 2-D live image is then used to navigate during the procedure. Examples include a pre-op MR scan before a surgery (say, on the brain) and a live 2-D ultrasound image during the actual surgery. Due to differences in time between the two images, differences in coordinate systems, differences in image quality, and deformation of tissue between the images or during the procedure, it is important to align the 2-D live ultrasound image with the previously acquired 3-D images, which may be stored by an imaging device. See FIG. 1. Any motion of the patient, instrument or tissue can complicate the problem. Most current techniques align the 2-D live image with a stored 3-D image by extracting the corresponding 2-D slice from the 3-D image, which is equivalent to a rigid registration between the images. Since the registration is based on only a 2-D image slice, there can be many false minima and robustness is an issue. Moreover, any non-rigid registration that follows assumes only in-plane deformation, which captures only part of the actual deformation. In the presented invention, robustness is improved by using a series of 2-D slices for motion correction such that the registration is based on partial 3-D information rather than purely 2-D information.
The real-time interpolation from arbitrary image planes, as discussed above, readily allows for such a system, where the 2-D live image being acquired is kept in a buffer for the next few frames. For instance, if at any time the 2-D live image slice number is n, the previous m images may always be kept in a buffer, where m may be a small number such as 5-10. In this case, there is more than one slice to be placed in 3-D. Since the data refresh rate is typically around 30 frames per second, 5-10 slices means that only the data corresponding to a small fraction of a second (≈0.2 seconds) is kept in the buffer. This immediately preceding data, together with the current frame, provides 5-10 slices in 3-D that can be correlated with a previously acquired 3-D image. Since there are more slices in different orientations, robustness is much improved and the live images can be placed in the coordinate frame of reference of the 3-D image with more confidence. In addition, if non-rigid registration is needed, the sequence of images provides a much better starting point than a single image and captures deformation in more directions than just along the frame of image acquisition.
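The buffering itself is straightforward; a sketch with illustrative names (not the disclosed system) might be:

```python
from collections import deque

class FrameBuffer:
    """Retain the m most recent tracked 2-D frames (m = 5-10, roughly 0.2 s
    of data at ~30 frames per second) so that motion correction can register
    partial 3-D information, rather than a single slice, to the stored
    3-D image."""
    def __init__(self, m=8):
        self.frames = deque(maxlen=m)      # oldest frames drop automatically

    def push(self, image_2d, pose):
        """pose: tracker output (normal, tangent, tip position) per slice."""
        self.frames.append((image_2d, pose))

    def slices_for_registration(self):
        """Current frame plus buffered predecessors, to be placed in 3-D."""
        return list(self.frames)
```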
Acquisition of Different Modalities and Their Fusion
The image acquisition method discussed above can be used for diagnostic imaging or for biopsy, surgical or image-based drug delivery systems. The interpolation method may be used to acquire images from different modalities such as ultrasound and elastography. The 2-D images acquired from these modalities can be combined with the tracking information to place the results in 3-D such that the elastography images can be shown overlaid on the ultrasound data, thus providing additional information to the user. In addition, if a pre-operative MRI scan is available in 3-D, the reconstructed elastographic data can be overlaid onto the structural scan acquired earlier. Likewise, if functional information such as MR spectroscopy is available, the live ultrasound image can be placed in correspondence with the spectroscopy data based on the tracker information and the registration step discussed above. The registration may be a two-step process: rigid, followed by non-rigid. The non-rigid registration may be based on the segmented surface, where a shape model can be used to segment the images in the different modalities and the segmented surfaces registered together.
In addition to registering images from different modalities to perform a fusion, the presented method may also be used to study local tissue deformations over time. Many diseases manifest themselves in abnormal tissue deformation, and the local deformation can be used for interpreting tissue conditions. As an example, in a repeat prostate biopsy case, a patient image obtained from a previous visit may be registered with the image obtained during the repeat visit, and the local deformation may be used to observe abnormal local volume changes in the tissue. The local deformation may be a useful indicator of locations of cancer growth. The registration may be mutual-information based, intensity based, surface based, landmark based or a combination of any of these. The registration provides the correspondence between the previous image and the current image. The Jacobian value of the deformation map may then be computed and overlaid on the 3-D prostate volume to identify abnormal localized deformations, and any such locations found should be sampled.
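As an illustration of the Jacobian computation, given a dense displacement field produced by the registration, the voxel-wise Jacobian determinant can be sketched as follows (a numpy-based sketch; the array layout and names are assumptions):

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of the deformation x -> x + u(x).
    disp is a (3, X, Y, Z) displacement field from the registration;
    determinant values far from 1 flag abnormal local volume change."""
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]  # du_i/dx_j
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j]
    for i in range(3):
        J[..., i, i] += 1.0               # add identity: d(x + u)/dx
    return np.linalg.det(J)
```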
The above-noted utilities provide a number of advantages. One primary advantage is that the utilities can construct 3-D images using 2-D images that are acquired in an unconstrained manner (e.g., freehand) where uniform angular or linear spacing between images is no longer required. Likewise, this may result in better workflow as a user no longer needs to obtain images in a constrained environment.
Another advantage is that the utilities allow for flexibility of resolution. In this regard, the user can scan different regions at different resolutions, providing more flexibility in use. This also permits fine-tuning of an image in real time. If the user desires better resolution for an acquired image, they may scan more images of the same anatomy and have those images applied to the reconstructed image.
The foregoing description of the present invention has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit the invention to the form disclosed herein. Consequently, variations and modifications commensurate with the above teachings, and skill and knowledge of the relevant art, are within the scope of the present invention. The embodiments described hereinabove are further intended to explain best modes known of practicing the invention and to enable others skilled in the art to utilize the invention in such, or other embodiments and with various modifications required by the particular application(s) or use(s) of the present invention. It is intended that the appended claims be construed to include alternative embodiments to the extent permitted by the prior art.

Claims (13)

The invention claimed is:
1. A method for use in medical ultrasound imaging, comprising:
obtaining a first two dimensional ultrasound image of an internal object of interest using a handheld ultrasound device, said first two dimensional ultrasound image having a first image plane and a first three dimensional point of reference obtained from an unconstrained tracker adapted to provide location information in three dimensional space, wherein said first three dimensional point of reference is correlated with said first image plane, and wherein said unconstrained tracker is attached to the handheld ultrasound device;
obtaining a second two dimensional ultrasound image of the internal object of interest using the handheld ultrasound device, said second two dimensional ultrasound image having a second image plane and second three dimensional point of reference obtained from the unconstrained tracker attached to the handheld ultrasound device, wherein said second three dimensional point of reference is correlated with said second image plane, and wherein said first and second three dimensional points of reference are different and said image planes are non-aligned such that said first and second ultrasound images have arbitrary orientations relative to one another;
translating pixel information from said first and second two dimensional ultrasound images into a common three dimensional volume; and
interpolating between information associated with translated pixel information of said two dimensional ultrasound images as disposed within said common three dimensional volume to generate a three dimensional image of the object of interest having different resolution in at least first and second regions of said three dimensional image.
2. The method of claim 1, wherein translating pixel information further comprises:
applying a coordinate transform to said two-dimensional ultrasound images.
3. The method of claim 2, wherein applying a coordinate transform further comprises:
for each two-dimensional ultrasound image determining a normal to said image plane;
defining unit vectors in three directions for said two dimensional image; and
applying a rotation matrix to said vectors.
4. The method of claim 1, wherein translating said pixel intensity further comprises:
interpolating pixel intensities from said two-dimensional ultrasound images to discrete locations in said three-dimensional volume.
5. The method of claim 1, further comprising:
after generating said three dimensional image, obtaining an additional ith two dimensional image having an ith image plane and an ith three dimensional point of reference; and
incorporating pixel information of said ith image into said three-dimensional image.
6. The method of claim 5, wherein obtaining an additional ith two dimensional image further comprises:
selectively obtaining a two-dimensional image for a location of interest of said internal object of interest.
7. The method of claim 1, wherein for each image obtaining comprises:
receiving an image plane from a hand-guided ultrasound imaging device.
8. The method of claim 1, wherein generating a three dimensional image of said object of interest comprises interpolating a surface of said object of interest using statistics associated with said object of interest.
9. The method of claim 8, wherein interpolating said surface comprises:
fitting a predefined shape model to said pixel information translated into said common three dimensional volume.
10. A method for use in medical ultrasound imaging, comprising:
obtaining a plurality of arbitrarily aligned two dimensional ultrasound images using a handheld ultrasound device, wherein said images are of an internal object of interest and at least a portion of the images have non-aligned image planes;
translating pixel intensity information from each of said two dimensional ultrasound images into a common three dimensional volume;
applying a deformable shape model based on an average corresponding object of a training set population to said pixel intensity information of said common three dimensional volume;
deforming a surface of said shape model to match boundaries of said internal object of interest within said two dimensional images, wherein said deformed surface of said shape model defines a three dimensional surface for said object of interest in said three dimensional volume; and
generating, in response to said deforming, a three-dimensional image of the object of interest having different resolution in at least first and second regions of said three dimensional image.
11. The method of claim 10, wherein shape statistics of said training set population are utilized to fit said predefined shape model to said pixel intensity information.
12. The method of claim 10, wherein boundaries identified within said two dimensional images form constraints for deforming said surface of said predefined shape model.
13. The method of claim 10, further comprising:
performing a three dimensional segmentation of said object of interest using said three dimensional surface as an initial surface boundary.
US11/874,778 2007-10-18 2007-10-18 Image interpolation for medical imaging Expired - Fee Related US8571277B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/874,778 US8571277B2 (en) 2007-10-18 2007-10-18 Image interpolation for medical imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/874,778 US8571277B2 (en) 2007-10-18 2007-10-18 Image interpolation for medical imaging

Publications (2)

Publication Number Publication Date
US20090103791A1 US20090103791A1 (en) 2009-04-23
US8571277B2 true US8571277B2 (en) 2013-10-29

Family

ID=40563538

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/874,778 Expired - Fee Related US8571277B2 (en) 2007-10-18 2007-10-18 Image interpolation for medical imaging

Country Status (1)

Country Link
US (1) US8571277B2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130223753A1 (en) * 2012-02-23 2013-08-29 Andrew T. Sornborger Methods and systems for enhancing data
WO2019092167A1 (en) 2017-11-09 2019-05-16 Agfa Healthcare Nv Method of segmenting a 3d object in a medical radiation image
US10340041B2 (en) * 2014-05-09 2019-07-02 Acupath Laboratories, Inc. Biopsy mapping tools
US10716544B2 (en) 2015-10-08 2020-07-21 Zmk Medical Technologies Inc. System for 3D multi-parametric ultrasound imaging
US11854281B2 (en) 2019-08-16 2023-12-26 The Research Foundation For The State University Of New York System, method, and computer-accessible medium for processing brain images and extracting neuronal structures

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8325988B2 (en) * 2008-03-03 2012-12-04 California Institute Of Technology Image reconstruction by position and motion tracking
US20110184684A1 (en) * 2009-07-21 2011-07-28 Eigen, Inc. 3-d self-correcting freehand ultrasound tracking system
BR112012016973A2 (en) * 2010-01-13 2017-09-26 Koninl Philips Electronics Nv surgical navigation system for integrating a plurality of images of an anatomical region of a body, including a digitized preoperative image, a fluoroscopic intraoperative image, and an endoscopic intraoperative image
JP2013517039A (en) * 2010-01-19 2013-05-16 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Imaging device
US20110176715A1 (en) * 2010-01-21 2011-07-21 Foos David H Four-dimensional volume imaging system
US8875139B2 (en) * 2010-07-30 2014-10-28 Mavro Imaging, Llc Method and process for tracking documents by monitoring each document's electronic processing status and physical location
WO2012078280A1 (en) * 2010-11-05 2012-06-14 Sonocine, Inc. Elastography imaging system
US9224240B2 (en) * 2010-11-23 2015-12-29 Siemens Medical Solutions Usa, Inc. Depth-based information layering in medical diagnostic ultrasound
CN103229210B (en) 2010-12-02 2016-07-06 皇家飞利浦电子股份有限公司 Image registration device
CN106709900B (en) * 2015-11-17 2020-12-18 上海联影医疗科技股份有限公司 Registration method of heart perfusion magnetic resonance image
US10346993B2 (en) 2015-11-17 2019-07-09 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image processing in magnetic resonance imaging

Citations (96)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5282472A (en) 1993-05-11 1994-02-01 Companion John A System and process for the detection, evaluation and treatment of prostate and urinary problems
US5320101A (en) 1988-12-22 1994-06-14 Biofield Corp. Discriminant function analysis method and apparatus for disease diagnosis and screening with biopsy needle sensor
US5383454A (en) 1990-10-19 1995-01-24 St. Louis University System for indicating the position of a surgical probe within a head on an image of the head
US5398690A (en) 1994-08-03 1995-03-21 Batten; Bobby G. Slaved biopsy device, analysis apparatus, and process
US5454371A (en) 1993-11-29 1995-10-03 London Health Association Method and system for constructing and displaying three-dimensional images
US5531520A (en) * 1994-09-01 1996-07-02 Massachusetts Institute Of Technology System and method of registration of three-dimensional data sets including anatomical body data
US5562095A (en) 1992-12-24 1996-10-08 Victoria Hospital Corporation Three dimensional ultrasound imaging system
US5611000A (en) 1994-02-22 1997-03-11 Digital Equipment Corporation Spline-based image registration
US5810007A (en) 1995-07-26 1998-09-22 Associates Of The Joint Center For Radiation Therapy, Inc. Ultrasound localization and image fusion for the treatment of prostate cancer
US5842473A (en) 1993-11-29 1998-12-01 Life Imaging Systems Three-dimensional imaging system
WO2000014668A1 (en) 1998-09-08 2000-03-16 Catholic University Of America Method and system for improved detection of prostate cancer
US6047080A (en) * 1996-06-19 2000-04-04 Arch Development Corporation Method and apparatus for three-dimensional reconstruction of coronary vessels from angiographic images
US6092059A (en) 1996-12-27 2000-07-18 Cognex Corporation Automatic classifier for real time inspection and classification
US6171249B1 (en) 1997-10-14 2001-01-09 Circon Corporation Ultrasound guided therapeutic and diagnostic device
US6238342B1 (en) 1998-05-26 2001-05-29 Riverside Research Institute Ultrasonic tissue-type classification and imaging methods and apparatus
US6251072B1 (en) 1999-02-19 2001-06-26 Life Imaging Systems, Inc. Semi-automated segmentation method for 3-dimensional ultrasound
US6261234B1 (en) 1998-05-07 2001-07-17 Diasonics Ultrasound, Inc. Method and apparatus for ultrasound imaging with biplane instrument guidance
US6298148B1 (en) 1999-03-22 2001-10-02 General Electric Company Method of registering surfaces using curvature
US6334847B1 (en) 1996-11-29 2002-01-01 Life Imaging Systems Inc. Enhanced image processing for a three-dimensional imaging system
US6342891B1 (en) 1997-06-25 2002-01-29 Life Imaging Systems Inc. System and method for the dynamic display of three-dimensional image data
US6351547B1 (en) * 1999-04-28 2002-02-26 General Electric Company Method and apparatus for formatting digital images to conform to communications standard
US6351660B1 (en) 2000-04-18 2002-02-26 Litton Systems, Inc. Enhanced visualization of in-vivo breast biopsy location for medical documentation
US6360027B1 (en) 1996-02-29 2002-03-19 Acuson Corporation Multiple ultrasound image registration system, method and transducer
US6385332B1 (en) 1999-02-19 2002-05-07 The John P. Roberts Research Institute Automated segmentation method for 3-dimensional ultrasound
US6423009B1 (en) 1996-11-29 2002-07-23 Life Imaging Systems, Inc. System, employing three-dimensional ultrasonographic imaging, for assisting in guiding and placing medical instruments
US6447477B2 (en) 1996-02-09 2002-09-10 Emx, Inc. Surgical and pharmaceutical site access guide and methods
US6500123B1 (en) 1999-11-05 2002-12-31 Volumetrics Medical Imaging Methods and systems for aligning views of image data
US20030000535A1 (en) 2001-06-27 2003-01-02 Vanderbilt University Method and apparatus for collecting and processing physical space data for use while performing image-guided surgery
US6561980B1 (en) 2000-05-23 2003-05-13 Alpha Intervention Technology, Inc Automatic segmentation of prostate, rectum and urethra in ultrasound imaging
US6567687B2 (en) 1999-02-22 2003-05-20 Yaron Front Method and system for guiding a diagnostic or therapeutic instrument towards a target region inside the patient's body
US20030135115A1 (en) 1997-11-24 2003-07-17 Burdette Everette C. Method and apparatus for spatial registration and mapping of a biopsy needle during a tissue biopsy
US6610013B1 (en) 1999-10-01 2003-08-26 Life Imaging Systems, Inc. 3D ultrasound-guided intraoperative prostate brachytherapy
US6611615B1 (en) 1999-06-25 2003-08-26 University Of Iowa Research Foundation Method and apparatus for generating consistent image registration
US6675211B1 (en) 2000-01-21 2004-01-06 At&T Wireless Services, Inc. System and method for adjusting the traffic carried by a network
US6675032B2 (en) 1994-10-07 2004-01-06 Medical Media Systems Video-based surgical targeting system
US6674916B1 (en) 1999-10-18 2004-01-06 Z-Kat, Inc. Interpolation in transform space for multiple rigid object registration
US6689065B2 (en) 1997-12-17 2004-02-10 Amersham Health As Ultrasonography
US20040081340A1 (en) * 2002-10-28 2004-04-29 Kabushiki Kaisha Toshiba Image processing apparatus and ultrasound diagnosis apparatus
US6778690B1 (en) 1999-08-13 2004-08-17 Hanif M. Ladak Prostate boundary segmentation from 2D and 3D ultrasound images
US20040210133A1 (en) 2003-04-15 2004-10-21 Dror Nir Method and system for selecting and recording biopsy sites in a body organ
US6824516B2 (en) 2002-03-11 2004-11-30 Medsci Technologies, Inc. System for examining, mapping, diagnosing, and treating diseases of the prostate
US6842638B1 (en) 2001-11-13 2005-01-11 Koninklijke Philips Electronics N.V. Angiography method and apparatus
US20050027188A1 (en) * 2002-12-13 2005-02-03 Metaxas Dimitris N. Method and apparatus for automatically detecting breast lesions and tumors in images
US6852081B2 (en) 2003-03-13 2005-02-08 Siemens Medical Solutions Usa, Inc. Volume rendering in the acoustic grid methods and systems for ultrasound diagnostic imaging
US6909792B1 (en) 2000-06-23 2005-06-21 Litton Systems, Inc. Historical comparison of breast tissue by image processing
US20050159676A1 (en) 2003-08-13 2005-07-21 Taylor James D. Targeted biopsy delivery system
US20050190189A1 (en) 2004-02-25 2005-09-01 Christophe Chefd'hotel System and method for GPU-based 3D nonrigid registration
US20050197977A1 (en) 2003-12-09 2005-09-08 Microsoft Corporation Optimizing performance of a graphics processing unit for efficient execution of general matrix operations
US6952211B1 (en) 2002-11-08 2005-10-04 Matrox Graphics Inc. Motion compensation using shared resources of a graphics processor unit
US20050243087A1 (en) 2004-04-30 2005-11-03 Shmuel Aharon GPU-based Finite Element
US20050249398A1 (en) 2004-04-21 2005-11-10 Ali Khamene Rapid and robust 3D/3D registration technique
US20060002601A1 (en) 2004-06-30 2006-01-05 Accuray, Inc. DRR generation using a non-linear attenuation model
US20060002630A1 (en) 2004-06-30 2006-01-05 Accuray, Inc. Fiducial-less tracking with non-rigid image registration
US6985612B2 (en) 2001-10-05 2006-01-10 Mevis - Centrum Fur Medizinische Diagnosesysteme Und Visualisierung Gmbh Computer system and a method for segmentation of a digital image
US20060013482A1 (en) 2004-06-23 2006-01-19 Vanderbilt University System and methods of organ segmentation and applications of same
US20060036162A1 (en) 2004-02-02 2006-02-16 Ramin Shahidi Method and apparatus for guiding a medical instrument to a subsurface target site in a patient
US7004904B2 (en) 2002-08-02 2006-02-28 Diagnostic Ultrasound Corporation Image enhancement and segmentation of structures in 3D ultrasound images for volume measurements
US7008373B2 (en) 2001-11-08 2006-03-07 The Johns Hopkins University System and method for robot targeting under fluoroscopy based on image servoing
US20060074297A1 (en) * 2004-08-24 2006-04-06 Viswanathan Raju R Methods and apparatus for steering medical devices in body lumens
US7039216B2 (en) 2001-11-19 2006-05-02 Microsoft Corporation Automatic sketch generation
US7039239B2 (en) 2002-02-07 2006-05-02 Eastman Kodak Company Method for image region classification using unsupervised and supervised learning
US7043063B1 (en) 1999-08-27 2006-05-09 Mirada Solutions Limited Non-rigid motion image analysis
US7095890B2 (en) 2002-02-01 2006-08-22 Siemens Corporate Research, Inc. Integration of visual information, anatomic constraints and prior shape knowledge for medical segmentations
WO2006089426A1 (en) 2005-02-28 2006-08-31 Robarts Research Institute System and method for performing a biopsy of a target volume and a computing device for planning the same
US20060197837A1 (en) 2005-02-09 2006-09-07 The Regents Of The University Of California. Real-time geo-registration of imagery using cots graphics processors
US7119810B2 (en) 2003-12-05 2006-10-10 Siemens Medical Solutions Usa, Inc. Graphics processing unit for simulation or medical diagnostic imaging
US20060227131A1 (en) 2005-04-12 2006-10-12 Thomas Schiwietz Flat texture volume rendering
US20060258933A1 (en) 2005-05-10 2006-11-16 Advanced Clinical Solutions, Inc. Method of defining a biological target for treatment
US7139601B2 (en) 1993-04-26 2006-11-21 Surgical Navigation Technologies, Inc. Surgical navigation systems including reference and localization frames
US7148895B2 (en) 1999-01-29 2006-12-12 Scale Inc. Time-series data processing device and method
US20060285755A1 (en) * 2005-06-16 2006-12-21 Strider Labs, Inc. System and method for recognition in 2D images using 3D class models
US7155316B2 (en) 2002-08-13 2006-12-26 Microbotics Corporation Microsurgical robot system
US20070014446A1 (en) 2005-06-20 2007-01-18 Siemens Medical Solutions Usa Inc. Surface parameter adaptive ultrasound image processing
US7167760B2 (en) 2003-04-28 2007-01-23 Vanderbilt University Apparatus and methods of optimal placement of deep brain stimulator
US20070040830A1 (en) 2005-08-18 2007-02-22 Pavlos Papageorgiou Volume rendering apparatus and process
US20070081703A1 (en) * 2005-10-12 2007-04-12 Industrial Widget Works Company Methods, devices and systems for multi-modality integrated imaging
US20070116381A1 (en) 2005-10-19 2007-05-24 Ali Khamene Method for deformable registration of images
US20070116339A1 (en) 2005-10-17 2007-05-24 Siemens Corporate Research Inc System and Method For Myocardium Segmentation In Realtime Cardiac MR Data
US7225012B1 (en) 2000-09-18 2007-05-29 The Johns Hopkins University Methods and systems for image-guided surgical interventions
US20070189603A1 (en) 2006-02-06 2007-08-16 Microsoft Corporation Raw image processing
US20070201611A1 (en) 2006-02-24 2007-08-30 Guillem Pratx Method of reconstructing a tomographic image using a graphics processing unit
US20070219448A1 (en) * 2004-05-06 2007-09-20 Focus Surgery, Inc. Method and Apparatus for Selective Treatment of Tissue
US7274811B2 (en) 2003-10-31 2007-09-25 Ge Medical Systems Global Technology Company, Llc Method and apparatus for synchronizing corresponding landmarks among a plurality of images
US7298881B2 (en) * 2004-02-13 2007-11-20 University Of Chicago Method, system, and computer software product for feature-based correlation of lesions from multiple images
US20070270687A1 (en) 2004-01-13 2007-11-22 Gardi Lori A Ultrasound Imaging System and Methods Of Imaging Using the Same
US7302092B1 (en) 1998-03-20 2007-11-27 London Health Sciences Research Inc. Three-dimensional imaging system
US20080002870A1 (en) 2006-06-30 2008-01-03 University Of Louisville Research Foundation, Inc. Automatic detection and monitoring of nodules and shaped targets in image data
US20080123927A1 (en) 2006-11-16 2008-05-29 Vanderbilt University Apparatus and methods of compensating for organ deformation, registration of internal structures to images, and applications of same
WO2008062346A1 (en) 2006-11-21 2008-05-29 Koninklijke Philips Electronics N.V. A system, method, computer-readable medium and use for imaging of tissue in an anatomical structure
US20080123910A1 (en) 2006-09-19 2008-05-29 Bracco Imaging Spa Method and system for providing accuracy evaluation of image guided surgery
US20080170770A1 (en) 2007-01-15 2008-07-17 Suri Jasjit S method for tissue culture extraction
US7403646B2 (en) 2002-10-24 2008-07-22 Canon Kabushiki Kaisha Image processing apparatus, image processing method, program, and recording medium for generating a difference image from a first radiographic image and second radiographic image
US20080247616A1 (en) 2007-04-03 2008-10-09 General Electric Company System and method of navigating an object in an imaged subject
US20080247622A1 (en) * 2004-09-24 2008-10-09 Stephen Aylward Methods, Systems, and Computer Program Products For Hierarchical Registration Between a Blood Vessel and Tissue Surface Model For a Subject and a Blood Vessel and Tissue Surface Image For the Subject
WO2008124138A1 (en) 2007-04-05 2008-10-16 Aureon Laboratories, Inc. Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition
US7519206B2 (en) * 2000-11-22 2009-04-14 Siemens Medical Solutions Usa, Inc. Detection of features in images

US20060002630A1 (en) 2004-06-30 2006-01-05 Accuray, Inc. Fiducial-less tracking with non-rigid image registration
US20060074297A1 (en) * 2004-08-24 2006-04-06 Viswanathan Raju R Methods and apparatus for steering medical devices in body lumens
US20080247622A1 (en) * 2004-09-24 2008-10-09 Stephen Aylward Methods, Systems, and Computer Program Products For Hierarchical Registration Between a Blood Vessel and Tissue Surface Model For a Subject and a Blood Vessel and Tissue Surface Image For the Subject
US20060197837A1 (en) 2005-02-09 2006-09-07 The Regents Of The University Of California Real-time geo-registration of imagery using COTS graphics processors
US20090093715A1 (en) 2005-02-28 2009-04-09 Donal Downey System and Method for Performing a Biopsy of a Target Volume and a Computing Device for Planning the Same
WO2006089426A1 (en) 2005-02-28 2006-08-31 Robarts Research Institute System and method for performing a biopsy of a target volume and a computing device for planning the same
US20060227131A1 (en) 2005-04-12 2006-10-12 Thomas Schiwietz Flat texture volume rendering
US20060258933A1 (en) 2005-05-10 2006-11-16 Advanced Clinical Solutions, Inc. Method of defining a biological target for treatment
US20060285755A1 (en) * 2005-06-16 2006-12-21 Strider Labs, Inc. System and method for recognition in 2D images using 3D class models
US20070014446A1 (en) 2005-06-20 2007-01-18 Siemens Medical Solutions Usa Inc. Surface parameter adaptive ultrasound image processing
US20070040830A1 (en) 2005-08-18 2007-02-22 Pavlos Papageorgiou Volume rendering apparatus and process
US20070081703A1 (en) * 2005-10-12 2007-04-12 Industrial Widget Works Company Methods, devices and systems for multi-modality integrated imaging
US20070116339A1 (en) 2005-10-17 2007-05-24 Siemens Corporate Research Inc System and Method For Myocardium Segmentation In Realtime Cardiac MR Data
US20070116381A1 (en) 2005-10-19 2007-05-24 Ali Khamene Method for deformable registration of images
US20070189603A1 (en) 2006-02-06 2007-08-16 Microsoft Corporation Raw image processing
US20070201611A1 (en) 2006-02-24 2007-08-30 Guillem Pratx Method of reconstructing a tomographic image using a graphics processing unit
US20080002870A1 (en) 2006-06-30 2008-01-03 University Of Louisville Research Foundation, Inc. Automatic detection and monitoring of nodules and shaped targets in image data
US20080123910A1 (en) 2006-09-19 2008-05-29 Bracco Imaging Spa Method and system for providing accuracy evaluation of image guided surgery
US20080123927A1 (en) 2006-11-16 2008-05-29 Vanderbilt University Apparatus and methods of compensating for organ deformation, registration of internal structures to images, and applications of same
WO2008062346A1 (en) 2006-11-21 2008-05-29 Koninklijke Philips Electronics N.V. A system, method, computer-readable medium and use for imaging of tissue in an anatomical structure
US20080170770A1 (en) 2007-01-15 2008-07-17 Suri Jasjit S Method for tissue culture extraction
US20080247616A1 (en) 2007-04-03 2008-10-09 General Electric Company System and method of navigating an object in an imaged subject
WO2008124138A1 (en) 2007-04-05 2008-10-16 Aureon Laboratories, Inc. Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130223753A1 (en) * 2012-02-23 2013-08-29 Andrew T. Sornborger Methods and systems for enhancing data
US9014501B2 (en) * 2012-02-23 2015-04-21 University Of Georgia Research Foundation, Inc. Methods and systems for enhancing data
US10340041B2 (en) * 2014-05-09 2019-07-02 Acupath Laboratories, Inc. Biopsy mapping tools
US10716544B2 (en) 2015-10-08 2020-07-21 Zmk Medical Technologies Inc. System for 3D multi-parametric ultrasound imaging
WO2019092167A1 (en) 2017-11-09 2019-05-16 Agfa Healthcare Nv Method of segmenting a 3d object in a medical radiation image
US11854281B2 (en) 2019-08-16 2023-12-26 The Research Foundation For The State University Of New York System, method, and computer-accessible medium for processing brain images and extracting neuronal structures

Also Published As

Publication number Publication date
US20090103791A1 (en) 2009-04-23

Similar Documents

Publication Title
US8571277B2 (en) Image interpolation for medical imaging
EP2961322B1 (en) Segmentation of large objects from multiple three-dimensional views
JP4576228B2 (en) Non-rigid image registration based on physiological model
US10290076B2 (en) System and method for automated initialization and registration of navigation system
US9349197B2 (en) Left ventricle epicardium estimation in medical diagnostic imaging
US9934579B2 (en) Coupled segmentation in 3D conventional ultrasound and contrast-enhanced ultrasound images
US20110178389A1 (en) Fused image modalities guidance
US20160217560A1 (en) Method and system for automatic deformable registration
US20080161687A1 (en) Repeat biopsy system
US20120089008A1 (en) System and method for passive medical device navigation under real-time mri guidance
US11672505B2 (en) Correcting probe induced deformation in an ultrasound fusing imaging system
US20090048515A1 (en) Biopsy planning system
US20120287131A1 (en) Image processing apparatus and image registration method
US9355454B2 (en) Automatic estimation of anatomical extents
Sivaramakrishna 3D breast image registration—a review
Kadoury et al. Realtime TRUS/MRI fusion targeted-biopsy for prostate cancer: a clinical demonstration of increased positive biopsy rates
CN111166373B (en) Positioning registration method, device and system
zu Berge et al. Ultrasound Decompression for Large Field-of-View Reconstructions.
Ji et al. Coregistered volumetric true 3D ultrasonography in image-guided neurosurgery
Karnik et al. Evaluation of inter-session 3D-TRUS to 3D-TRUS image registration for repeat prostate biopsies
Welch et al. Real-time freehand 3D ultrasound system for clinical applications
Nathawat et al. A Case Analysis on Different Registration Methods on Multi-modal Brain Images
Marami Elastic Registration of Medical Images Using Generic Dynamic Deformation Models
Balci Sub-pixel registration in computational imaging and applications to enhancement of maxillofacial CT data
Williams Multi-modal registration for image-guided therapy

Legal Events

Date Code Title Description
AS Assignment

Owner name: EIGEN, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SURI, JASJIT S.;KUMAR, DINESH;REEL/FRAME:021670/0117

Effective date: 20081009

AS Assignment

Owner name: EIGEN INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:EIGEN LLC;REEL/FRAME:024587/0911

Effective date: 20080401

AS Assignment

Owner name: KAZI MANAGEMENT VI, LLC, VIRGIN ISLANDS, U.S.

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EIGEN, INC.;REEL/FRAME:024652/0493

Effective date: 20100630

AS Assignment

Owner name: KAZI, ZUBAIR, VIRGIN ISLANDS, U.S.

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KAZI MANAGEMENT VI, LLC;REEL/FRAME:024929/0310

Effective date: 20100630

AS Assignment

Owner name: KAZI MANAGEMENT ST. CROIX, LLC, VIRGIN ISLANDS, U.S.

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KAZI, ZUBAIR;REEL/FRAME:025013/0245

Effective date: 20100630

AS Assignment

Owner name: IGT, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KAZI MANAGEMENT ST. CROIX, LLC;REEL/FRAME:025132/0199

Effective date: 20100630

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: ZMK MEDICAL TECHNOLOGIES, INC., CALIFORNIA

Free format text: MERGER;ASSIGNOR:IGT, LLC;REEL/FRAME:044607/0623

Effective date: 20140915

Owner name: ZMK MEDICAL TECHNOLOGIES, INC., A NEVADA CORPORATION

Free format text: ASSET PURCHASE AGREEMENT;ASSIGNOR:ZMK MEDICAL TECHNOLOGIES, INC., A DELAWARE CORPORATION;REEL/FRAME:045054/0925

Effective date: 20150320

AS Assignment

Owner name: EIGEN HEALTH SERVICES, LLC, CALIFORNIA

Free format text: MERGER;ASSIGNOR:ZMK MEDICAL TECHNOLOGIES, INC.;REEL/FRAME:053815/0807

Effective date: 20190101

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20211029