Friday, November 16, 2012

Important Considerations to Review Before Purchasing an MRI System



Before purchasing an MRI system, it is important to take into consideration the following main technical points, in order to make the best decisions possible for your medical facility:
  • The MRI system should be FDA-approved
  • The MRI system should comply with the IEC 60601-2-33 standard (the particular safety standard for MR equipment)
  • The MRI system should comply with the DICOM 3.0 standard for all MRI working modalities
  • Consider the magnetic field strength of the MRI system: 1.5T, 3T or other (1.5T is currently the most commonly accepted MRI magnetic field strength for general use)
  • Decide which bore diameter you are looking for: the standard 60 cm, or the larger 70 cm. Keep in mind that the useful volume of view and the image quality are sometimes lower with the larger bore diameter
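As a quick illustration of what the field-strength choice implies, the scanner's operating (Larmor) frequency scales linearly with the magnetic field; a rough sketch, using the well-known hydrogen gyromagnetic ratio of about 42.58 MHz per tesla:

```python
# Larmor frequency: f = gamma * B0, where gamma for hydrogen (1H)
# is approximately 42.58 MHz per tesla.
GYROMAGNETIC_RATIO_MHZ_PER_T = 42.58

def larmor_frequency_mhz(field_strength_t):
    """Return the proton resonance frequency (MHz) for a given B0 (tesla)."""
    return GYROMAGNETIC_RATIO_MHZ_PER_T * field_strength_t

for b0 in (1.5, 3.0):
    print(f"A {b0} T scanner operates at {larmor_frequency_mhz(b0):.2f} MHz")
```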

Understanding MRI Scanner Receivers




How Advanced Technologies Improve MRI Imaging

Multichannel radiofrequency and parallel imaging technologies are hardware and software implementations, respectively, aimed at improving the coverage, signal resolution and speed of MRI examinations. With multichannel RF technology, the MRI signal used to form an image is collected by an array of separate detectors, or coil elements. Each element relays signal information along a separate channel to an image reconstruction computer. Such arrays of coil elements and receivers can improve imaging coverage and the ratio of signal-to-noise in the image.
The number of elements in the array of detectors and receivers is an important factor in characterizing an MRI scanner. Parallel imaging technology uses complex software algorithms to reconstruct the signals from multiple channels in a way that can reduce imaging times and/or increase image resolution.
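As a minimal sketch of how a reconstruction computer might combine the per-channel signals, here is the common root-sum-of-squares combination; the voxel magnitudes below are hypothetical illustration values:

```python
import math

def sum_of_squares_combine(channel_signals):
    """Combine per-channel magnitudes for one voxel into a composite value.

    channel_signals: one magnitude sample per coil element/receiver channel.
    """
    return math.sqrt(sum(s * s for s in channel_signals))

# Hypothetical magnitudes seen by a 4-element array for a single voxel:
voxel = [3.0, 4.0, 0.0, 0.0]
print(sum_of_squares_combine(voxel))  # 5.0
```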
The Main Sources of MRI Noise
Before we examine the parameters of MRI scanner receivers, it is important to understand what the principal source of noise in MRI scanner signals is:
The magnetic resonance signal is an electromotive force induced in a coil by a rotating magnetic moment of nuclear spins. The MRI scanner signal level must be well above noise levels to produce clinically useful MRI images, and yet this signal is very weak.
Image noise originates in the patient being imaged and is added during the processing of the signal in the receiver chain. In the receiver chain, noise may be generated in the preamplifiers and at the connection between the preamplifier and the RF receive coil. In the RF coil, which is a conductor, thermal noise is produced by the stochastic motion of free electrons. This noise is associated with ohmic losses in the RF coil itself and with eddy current losses in the patient, which are inductively coupled to the RF coil. High conductivity in the receiver coils keeps coil noise low, whereas conductivity in the patient is a source of noise.
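The thermal (Johnson-Nyquist) noise mentioned above can be estimated from the standard formula v_rms = sqrt(4·k·T·R·Δf); the coil resistance and receiver bandwidth below are illustrative assumptions, not values for any particular scanner:

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_rms_volts(resistance_ohm, bandwidth_hz, temperature_k=293.0):
    """RMS Johnson-Nyquist noise voltage of a conductor: sqrt(4*k*T*R*df)."""
    return math.sqrt(4 * BOLTZMANN * temperature_k * resistance_ohm * bandwidth_hz)

# Illustrative values: 1 ohm effective coil resistance,
# 100 kHz receiver bandwidth, room temperature (~293 K).
v = thermal_noise_rms_volts(1.0, 100e3)
print(f"Thermal noise floor: {v * 1e9:.1f} nV")
```

Even tens of nanovolts matter here, because the MR signal itself is only modestly above this floor.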

Introducing MRI Parallel Imaging



How is MRI Parallel Imaging Used?

MRI parallel imaging utilizes the multiple elements of a phased-array coil system. Each element of the coil system is associated with a dedicated radio frequency channel (a special single-channel radio receiver), whose output is processed and combined with the outputs of the other channels (the signals acquired by the other coil elements). This technology improves the signal-to-noise ratio (the signal quality) compared to a standard MRI scanner coil system, while covering the same explored body volume.
The spatial data acquired by the array of coil elements is used to perform part of the phase encoding, which speeds up the acquisition process.
The acceleration factors routinely employed at a magnetic field strength of 1.5T range from 2 to 3. At 3T, this factor can be even higher.
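A back-of-the-envelope sketch of why the acceleration factor matters for scan time (the TR and matrix size below are illustrative assumptions):

```python
def accelerated_scan_time_s(tr_s, phase_encode_lines, acceleration_factor):
    """Approximate 2D scan time: TR * (phase-encode lines / R).

    Parallel imaging acquires only 1/R of the phase-encode lines;
    coil sensitivity information fills in the rest during reconstruction.
    """
    return tr_s * phase_encode_lines / acceleration_factor

full = accelerated_scan_time_s(0.5, 256, 1)         # no acceleration
accelerated = accelerated_scan_time_s(0.5, 256, 2)  # R = 2
print(full, accelerated)  # 128.0 64.0 (seconds)
```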

How Parallel Imaging Improves MRI Scanning
Multi-channel radio frequency and parallel imaging technologies are hardware and software implementations, respectively, aimed at improving the coverage, signal resolution and speed of MRI scanner examinations. With multi-channel technology, the MRI scanner signal used to form an image is collected by an array of separate coil elements. Each element relays signal information along a separate channel to an image reconstruction computer. Such arrays of coil elements can improve imaging coverage and the signal-to-noise ratio in the image. The number of elements in the array of detectors is an important factor in characterizing a parallel imaging system. Parallel imaging technology uses complex software algorithms to reconstruct the signals from multiple channels in a way that can reduce imaging times and/or increase image resolution (without the corresponding increase in imaging times associated with standard MRI scanner imaging).

The Basics of Digital Radiology (DR)



What is Digital Radiology (DR)?
Digital radiology (DR) is a form of x-ray imaging, where digital x-ray sensors are used instead of traditional photographic film. Advantages of digital radiology (DR) include time efficiency, as a result of being able to do without the standard chemical processing, as well as the ability to digitally transfer and enhance images. Also, less radiation can be used to produce an image of similar contrast.
Digital radiology (DR) may represent the greatest technological advancement in medical imaging over the last decade. The use of radiographic films in x-ray imaging will become completely obsolete within a few years. An appropriate analogy that is easy to understand is the replacement of typical film cameras with digital cameras. Images can be immediately acquired, deleted, modified and subsequently sent to a network of computers.
What are the Main Benefits?
The benefits of digital radiography (DR) are enormous as it makes a radiological facility or department filmless. The referring physician can view the requested image on a desktop or a personal computer and often file a report just a few minutes after the examination was performed. The images are no longer held in a single location, but can be seen simultaneously by physicians who are many kilometers/miles apart. In addition, the patient can easily transfer the x-ray images on a compact disk or on a “disk-on-key” to take to another physician or hospital for consultation.
Are There Any Disadvantages?
Although digital radiography (DR) systems have the potential for dose reduction, experience shows that many facilities actually deliver higher doses to patients. The primary reason is that over-exposure goes undetected, unlike with film, where the image turns dark or black. In digital imaging, in contrast, the image becomes better when there is over-exposure. Further, there is a tendency to take more images than necessary. In a study performed in several hospitals, it became obvious that the number of examinations per patient, per day, increased after the transition to digital radiography (DR).
Also, it is very easy to delete images before archiving, and technologists tend to repeat exposure if the positioning is wrong or if there is motion blur. Such repeats normally go unreported. As a result, digital imaging has the potential to increase the number of exposures and therefore, patient dose.
Flat-Panel Detectors
Due to their flat physical structure, these detectors are frequently referred to as flat-panel detectors. In addition to forming part of an integrated digital radiography (DR) system, their shape allows them to be incorporated into retrofit digital bucky assemblies. There are also portable digital cassettes available, which are either sold as part of a system or can be retrofitted to an existing film/screen room.
Types of Digital Radiography (DR) Systems
Mobile radiography systems with portable digital detectors are also available. Portable detectors can be connected to the review workstation by either a wire, or have a wireless communication interface.
Many types of digital detectors will need some level of environmental control. This may be in terms of operating temperature range, rate of change of temperature and/or relative humidity.
All types of detectors may not function optimally if outside the recommended temperature range, but normally recover once the temperature has returned to normal. Certain types of digital radiography (DR) detectors may be irreparably damaged if the temperature remains too high or too low for an appreciable period.
There are 4 main types of flat-panel detectors used in digital radiology (DR):
  1. Indirect conversion detectors first convert the X-ray photons to visible light photons in a scintillator, typically caesium iodide (CsI). The light photons are in turn converted into electrical charge and read out with an amorphous silicon (a-Si) photodiode/thin film transistor (TFT) array bonded to the scintillator. The detector can be fixed or portable.
  2. Direct conversion detectors convert X-rays directly to charge, which is then read out. Most current systems use a layer of amorphous selenium (a-Se) coupled to an active matrix for read out.
  3. Charge coupled devices (CCDs) are sometimes used in digital systems for general radiography. A scintillator (CsI or a rare earth phosphor) is coupled to the CCD with a lens/mirror system. Due to the limited area of a typical CCD, a considerable degree of demagnification is required, which can have an effect on image quality.
  4. Slot scanning systems use a fan beam of X-rays, which scans the area of interest in conjunction with a slot detector. The detector is typically a linear array of CCDs coupled to a CsI scintillator, although other combinations, such as a rare earth phosphor coupled to a linear array of sensors, are in use. This arrangement provides excellent rejection of scattered radiation (and therefore the potential for lower doses).
The sensitivity of the detector is a measure of how efficiently the detector uses the incident X-ray photons and can be described in terms of various technical parameters, such as detective quantum efficiency (DQE). Sensitivity will depend on the technology and design of the detector, and technically similar detectors from different manufacturers may exhibit different sensitivities. Additionally, the sensitivity may vary with the energy of the incident X-rays.
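A minimal sketch of the DQE figure mentioned above, which compares the signal-to-noise ratio at the detector output to the SNR of the incident x-ray quanta; the SNR values here are purely illustrative:

```python
def detective_quantum_efficiency(snr_in, snr_out):
    """DQE = (SNR_out / SNR_in)^2 -- the fraction of incident x-ray quanta
    the detector effectively uses; 1.0 would be an ideal detector."""
    return (snr_out / snr_in) ** 2

# Illustrative numbers only: input SNR set by photon statistics,
# output SNR measured from the detector image.
print(f"DQE = {detective_quantum_efficiency(100.0, 80.0):.2f}")
```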

Source: http://www.medwow.com/articles/

Computed Radiography (CR) in Radiology Applications




What is Computed radiography (CR)?

Computed radiography (CR) is a cost-effective solution to move from analog to digital imaging. With computed radiography (CR) the transition to digital is completed by installing computed radiography (CR) readers and replacing X-ray cassettes (which use X-ray film) with computed radiography (CR) cassettes (which use imaging plates). The imaging plates are exposed and inserted into the computed radiography (CR) reader. The computed radiography (CR) reader scans the plates, digitally displays the image on the workstation, and erases the imaging plate for reuse.
Computed radiography (CR) is a mature technology, which was developed in the middle of the twentieth century. It is currently in use in medical centers around the world. In many cases, it has replaced the process of taking X-rays on film in order to produce digital images. With these, better quality scans are possible, in shorter times, and with wider availability for study. The technology is found not only in medicine and dentistry, but in other areas, such as manufacturing for safety testing and analysis.
Traditional radiography, in use since its invention by W.C. Röntgen in 1895, stores images on a photographic plate. Computed radiography (CR) can use existing X-ray equipment to take pictures, but stores the images on a plate with phosphors that are activated and retained when the image is taken. A laser is used to scan the plate, and the result is converted to digital format. The results are then fed directly into a computer for interpretation. This simplifies the whole process, since no photographic development process is involved, meaning no dark rooms are necessary.
Since the early 1990s, it has become technically possible and economically feasible for digital imaging technologies to challenge film for projection radiography. This was made possible by certain prerequisite technological advances, such as high-luminance and high-resolution display monitors, combined with high-performance computer workstations which, though still costly, are now readily available. Electronic image archives that can efficiently store and retrieve the massive amounts of image data generated by projection radiography are becoming increasingly cost-effective.
High-speed electronic networks with bandwidth adequate to transmit image files wherever and whenever needed are now accepted as an essential infrastructure component in health care.
Until the past few years, storage phosphor-based computed radiography (CR) has been the best alternative for acquiring digital projection radiography images. Computed radiography (CR) has the advantage of being fully compatible with existing x-ray equipment designed for film screen imaging. However, computed radiography (CR) has the disadvantage of requiring readout and processing steps that take about the same time as conventional film to obtain a diagnostic image. More recently, a different technology has entered the medical imaging market, offering a new standard for digital x-ray image capture: digital radiography flat panel, solid state detectors with integrated, thin film transistor readout mechanisms.
Computed radiography (CR) and digital radiography (DR) have many similarities. Both computed radiography (CR) and digital radiography (DR) use a medium to capture x-ray energy and both produce a digital image that can be enhanced for soft copy diagnosis or further review. Both computed radiography (CR) and digital radiography (DR) can also present an image within seconds of exposure. Computed radiography (CR) generally involves the use of a cassette that houses the imaging plate, similar to traditional film screen systems, to record the image; while digital radiography (DR) typically captures the image directly onto a flat panel detector without the use of a cassette. Image processing or enhancement can be applied on digital radiography (DR) images as well as computed radiography (CR) images due to the digital format of each.
Advantages and Disadvantages of Computed Radiography (CR)
Computed radiography (CR) has its unique advantages:
  • Cost-effective solution for upgrading old X-ray equipment.
  • No silver-based film or chemicals are required to process films.
  • Reduced film storage costs, as the images are stored digitally.
  • Image brightness and contrast can be adjusted after the exposure.
  • Image can be processed and enhanced at any time after the exposure.
However, computed radiography (CR) has also some distinct disadvantages:
  • Manual handling of the cassette housing.
  • Imaging plates are expensive and can be easily damaged.
  • Inherent geometric lack of sharpness results in lower spatial resolution, as compared to film images.
  • Low signal-to-noise ratio and sensitivity to scattered radiation.
Moving to Digital with Computed Radiography (CR)
Changing healthcare needs require tomorrow’s diagnostic imaging service provider to rapidly produce the highest quality images, transmit them broadly and display them in alternative ways. Computed radiography (CR) image systems are an important element in this all-digital vision.
With digital image systems, the image data sent to workstations, printers, and archives is always identical to the original.
With improved workflow and increased efficiency, the all-digital radiology department will help hospitals, imaging centers, private practices, and clinics realize the full benefits of a picture archiving and communication system (PACS).

Source: http://www.medwow.com/articles/

Mitigating Liability in Medical Device Recalls



Product recalls are nothing new; they exist in just about every industry. However, in the medical device industry, 2012 saw a significant jump in recalls. According to the ExpertRECALL index, over 123 million device units were recalled in the first quarter of 2012 alone.

Not surprisingly, there has been a noticeable increase in device failure and damage claims over the last few years. It is interesting to note that although the number of unit recalls has increased, the number of affected devices has not risen greatly.
Today’s market boasts many more medical devices that are smaller, less invasive and made of newer and lighter materials. New medical devices are designed to be more effective than previous technologies; with this in mind, why are there so many more product recalls? Industry professionals attribute the cause to the baby boom generation. Representing the largest population segment at the age that requires life-saving medical devices, baby boomers are statistically increasing the magnitude of product recalls.
Medical device categories that are the most prone to recall are orthopedic implants and cardiac devices such as leads and pacemakers. These types of devices are subjected to the most friction and movement, which can wear down materials, leaving patients at risk of substance poisoning, electrocution or complications due to device failure.
It is known that products can fail, and no medical device manufacturer is immune to product recalls. Johnson & Johnson, St. Jude Medical and Covidien are just a few of the companies that have been in the news over the past year for product recalls.
Health product recalls are big news. They increase public awareness of device failure and place pressure on government agencies to take action to protect the public. In the USA, the FDA is developing a device tracking system that will facilitate supply chain monitoring of some devices in order to improve recall efforts. Likewise in the UK, legislation is being drawn that will support greater product transparency and allow patients and doctors to make informed choices about medical products.
Other groups at play in recalls include patients and health insurance companies. Because of mounting costs from exponential increases in patient damage claims, health insurance companies are now holding medical device manufacturers financially responsible.
As a result of various groups passing the liability hot potato on to medical device manufacturers, changes are clearly necessary in order to mitigate damages. An obvious first step is the development of strong and easily deployed recall strategies to protect brands and limit risks to patients. However, it appears that medical device manufacturers face a steep learning curve as they work out issues with newer technologies and adopt procedures geared towards limiting product failure moving forward.
Source: http://www.medwow.com/articles/

Beware: Airport Security Could Damage Diabetes Insulin Devices




A warning published in the journal ‘Diabetes Technology & Therapeutics’ has diabetics and doctors concerned about the dangers of air travel with diabetic medical devices. According to the article, insulin pumps and continuous glucose monitors could be adversely affected by airport security magnetic x-ray. This refers to x-ray machines used to scan luggage and travelers. It is unclear, however, whether this warning includes metal detectors, devices that also create a magnetic field.

Medical professionals already know magnetic and radiation based imaging technology like Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) interfere with the functioning of diabetes devices. However, initial findings from research conducted by Andrew Cornish and H. Peter Chase at the University of Colorado suggest that magnetic x-ray equipment used in airport security can also hinder the operation of insulin pumps and continuous glucose monitor (CGM) devices. The researchers found that magnetic x-ray can cause a malfunction in the devices’ motors.
The number of diabetics that could potentially be affected worldwide is enormous. In 2010, market research valued the global markets for insulin delivery pumps and continuous glucose monitor systems at $7.4 billion and $92.2 million, respectively. Further, these markets are expected to increase steadily because of improved patient access to diabetes care and rising case numbers, particularly among children.
There is no doubt that Cornish and Chase’s research raises a public global health risk that demands further exploration. Little is yet known about how magnetic x-ray affects insulin pumps and CGM. More research is necessary to better understand and develop solutions to the problem.
In the interim, patients are advised to travel with a doctor’s letter that forbids subjecting diabetes equipment to x-ray search. For further updates, contact your local Diabetes Association.
Source: http://www.medwow.com/articles/

Improving the Market Health of CT Scanners




What does the future of computed tomography technology look like according to Frost & Sullivan? They predict that within two years, higher CT slice counts will push low-slice machines with only 1 or 2 slices out of the European market.

The study compares the use of 1, 2, 4, 16, 20, 32, 40 and 64 slice scanners with that of the newer, higher slice counts (128, 256+), and suggests that the benefits of higher slice scanners should render low slice CT scanners obsolete in the European market.
Industry professionals cite the push for higher slice CT development as being a direct result of efforts to increase machine efficiency, reduce radiation dosage and enhance image quality. Why? The danger of radiation exposure from imaging technology has been a hot global news topic for some time now. In recent years the use of radiation based methods of imaging like CT and x-ray has dropped significantly.

Thursday, November 1, 2012

Positron Emission Tomography (PET) Explained



What is Positron Emission Tomography (PET)?
Positron Emission Tomography (PET) is a relatively new medical imaging technique. PET is the most sophisticated nuclear medicine technique, producing 3D images of functional processes in the human organism. The Positron Emission Tomography (PET) system is designed to selectively detect pairs of gamma photons emitted as a result of positron and electron collision. The positrons are emitted by a radionuclide, which is injected into the body on a biologically active molecule.
The Positron Emission Tomography (PET) system produces 3D images of the tracer concentration in various parts of the body. The acquired nuclear data is processed by the PET computing system and used for the construction of the 3D images.
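The gamma photons a PET scanner detects each carry the electron rest energy (E = m·c²), which is why PET detectors are tuned to 511 keV; a quick check from physical constants:

```python
# Each annihilation photon carries the electron rest energy, E = m_e * c^2.
ELECTRON_MASS_KG = 9.1093837015e-31
SPEED_OF_LIGHT_M_S = 299792458.0
EV_PER_JOULE = 1.0 / 1.602176634e-19

photon_energy_kev = ELECTRON_MASS_KG * SPEED_OF_LIGHT_M_S**2 * EV_PER_JOULE / 1000.0
print(f"{photon_energy_kev:.0f} keV")  # 511 keV -- the energy PET detectors look for
```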

How Echocardiography is Used

Echocardiography, also known as cardiac ultrasound, is an ultrasound-based diagnostic imaging technique used for visualizing subcutaneous body structures, including tendons, muscles, joints, vessels and internal organs, for possible pathology or lesions. In physics, “ultrasound” applies to all sound waves with a frequency above the audible range of human hearing, about 20,000 Hz. The frequencies used in diagnostic cardiac ultrasound are typically between 2 and 18 MHz.
The choice of frequency of the cardiac ultrasound is a trade-off between the spatial resolution of the image and imaging depth: lower frequencies produce less resolution but image deeper into the body. Higher frequency sound waves have a smaller wavelength and are therefore capable of reflecting or scattering from smaller structures. Higher frequency waves also have a larger attenuation coefficient and are therefore more readily absorbed in tissue, limiting the depth of penetration of the sound wave into the body.
Cardiac ultrasound is most effective for imaging soft tissues of the body. Superficial structures such as muscles, tendons, testes, breast and the neonatal brain are imaged at a higher frequency, which provides better axial and lateral resolution. Deeper structures such as liver and kidney are imaged at a lower frequency with lower axial and lateral resolution, but greater penetration.
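A short sketch of the resolution side of this trade-off: the wavelength in soft tissue follows λ = c/f, assuming the commonly quoted average sound speed of 1540 m/s:

```python
SOUND_SPEED_SOFT_TISSUE_M_S = 1540.0  # commonly assumed average for soft tissue

def wavelength_mm(frequency_mhz):
    """Wavelength in soft tissue (mm); a smaller wavelength resolves finer detail."""
    return SOUND_SPEED_SOFT_TISSUE_M_S / (frequency_mhz * 1e6) * 1000.0

for f in (2.0, 18.0):  # the ends of the diagnostic range quoted above
    print(f"{f} MHz -> wavelength {wavelength_mm(f):.3f} mm")
```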

The Advantages of Cone-Beam CT Scanners

How Cone-Beam CT is Superior
Computed tomography imaging, also known as computed axial tomography scanning, involves the use of rotating x-ray equipment, combined with a digital computer, to acquire images of the body. Using CT imaging, cross sectional images of body organs and tissue can be produced. Though there are many other imaging techniques, CT imaging has the exceptional capability of offering clear images of different types of tissue. CT imaging can provide views of soft tissue, bone, muscle, and blood vessels, without giving up on precision and clarity. Other imaging techniques are much more limited in the types of images they can provide.
A cone-beam CT scanner is a compact, quicker and safer version of the regular CT. Through the use of a cone-shaped x-ray beam and a flat panel detector, the size of the scanner, the radiation dosage and the time needed for scanning are all dramatically reduced. A typical cone-beam CT scanner can fit easily into any dental (or other medical) practice and is easily accessible by patients. The time needed for a full scan is minimal, and the radiation dosage is up to a hundred times less than that of a standard CT scanner.

All About X-Rays and X-Ray Tubes

The History of X-Ray
X-rays are capable of penetrating some thickness of matter. Medical x-rays are produced by letting a stream of fast electrons come to a sudden stop at a metal plate. It is believed that X-rays emitted by the sun or stars also come from fast electrons.
The images produced by X-rays are due to the different absorption rates of different tissues. Calcium in bones absorbs X-rays the most, so bones look white on a film recording of the X-ray image, which is called a radiograph. Fat and other soft tissues absorb less, and look gray. Air absorbs the least, so lungs look black on a radiograph.
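The differing absorption can be sketched with the Beer-Lambert law, I/I₀ = exp(-μx); the attenuation coefficients below are rough illustrative assumptions, not reference values:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Beer-Lambert attenuation: fraction of x-rays transmitted, I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Rough, illustrative linear attenuation coefficients (per cm):
tissues = {"bone": 0.6, "soft tissue": 0.2, "air": 0.0001}
for name, mu in tissues.items():
    print(f"{name}: {transmitted_fraction(mu, 5.0):.3f} of x-rays pass through 5 cm")
```

Bone transmits the least (appearing white on the radiograph), air the most (appearing black), matching the description above.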
Wilhelm Conrad Röntgen discovered an image cast from his cathode ray generator, projected far beyond possible range of the cathode rays. Further investigation showed that the rays were generated at the point of contact of the cathode ray beam on the interior of the vacuum tube, that they were not deflected by magnetic fields, and they penetrated many kinds of matter.
A week after his discovery, in 1895, Röntgen took an X-ray photograph of his wife’s hand, which clearly revealed her wedding ring and her bones. The photograph electrified the general public and aroused great scientific interest in the new form of radiation. Röntgen named the new form of radiation “X-radiation”, hence the term X-rays.

How Digital Tomosynthesis is Used


What is Digital Tomosynthesis?
Digital tomosynthesis is a technique of generating images of slices through the body using a general radiographic X-ray system with a direct digital radiography detector. This is accomplished by obtaining a large, representative number of low-dose acquisitions across a range of projection angles of the X-ray tube. Currently, tomosynthesis is an optional add-on for suitable direct digital radiographic systems (flat panel detectors). The additional software controls the movement of the X-ray tube and the reconstruction of the images.
The primary interest in tomosynthesis is in breast imaging, as an extension to mammography, where it may offer better detection rates with little extra increase in radiation exposure.
Tomosynthesis can also be used in place of a number of other radiographic imaging techniques. Studies have been undertaken for a range of clinical examinations, including chest, dental and orthopedic imaging, and in the localization of brachytherapy seeds.
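One simple way to picture how slice images emerge from the projections is the classic shift-and-add approach (commercial systems use more sophisticated reconstruction algorithms); a toy sketch:

```python
def shift_and_add(projections, shifts_px):
    """Reconstruct one in-focus plane from tomosynthesis projections.

    Each projection is a 1D row of pixel values; shifts_px gives the lateral
    shift (in pixels) that registers the plane of interest for that projection.
    Structures in the chosen plane reinforce; other depths blur out.
    """
    width = len(projections[0])
    plane = [0.0] * width
    for proj, shift in zip(projections, shifts_px):
        for x in range(width):
            src = x + shift
            if 0 <= src < width:
                plane[x] += proj[src]
    return [v / len(projections) for v in plane]

# Toy example: one bright feature appears shifted by -1, 0, +1 pixels
# across three projection angles; shifting back brings it into focus.
projections = [
    [0, 0, 9, 0, 0],
    [0, 0, 0, 9, 0],
    [0, 0, 0, 0, 9],
]
print(shift_and_add(projections, [-1, 0, 1]))
```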

Extracorporeal Shockwave Lithotripsy Defined

What Extracorporeal Shockwave Lithotripsy is Used For
Extracorporeal shockwave lithotripsy is the most commonly prescribed treatment for kidney stones. The technique uses shockwaves to break up stones, so that they can easily pass through the urinary tract. Most people can resume normal activities within a few days. Complications of extracorporeal shockwave lithotripsy include blood in the urine, bruising, and minor discomfort in the back or abdomen.
In extracorporeal shockwave lithotripsy, shockwaves that are created outside the body travel through the skin and body tissues until they hit the denser kidney stones. After the stones have been hit, they will break down into sand-like particles that are easily passed through the urinary tract in the urine.

Medical Equipment Leasing & Financing Solutions





Purchasing Medical Equipment
The process of purchasing medical equipment is important and should be carefully evaluated. The first step is to identify your precise needs, and then calculate what your medical practice can afford. Then choose the manufacturer and model that best suits those needs, find several models on MedWOW, contact the vendors selling them and compare offers. The next step is usually assessing your medical equipment financing options.
Is Buying New Medical Equipment Cost Effective?
Equipment vendors can be an excellent source when evaluating how much revenue a new, used or refurbished medical equipment system will generate. Based on your patient demographics, they can assist you in evaluating how many of your patients will use the medical equipment on a monthly average. It is important to note how many referrals for specific tests are made to other medical facilities that could be handled in-house if you buy new medical equipment. How long will it take you to break even? How long will it take before you make a profit, and realistically, how many tests can you perform on a daily basis? These kinds of calculations can make all the difference in your long-term projections.
Review Costs and Benefits Before Purchasing Medical Equipment
Be sure to examine all potential costs and benefits associated with buying new or refurbished medical equipment. Some benefits are obvious, while there are many advantages clinic administrators may not have thought of, such as recruiting physicians straight out of residency who have been trained on and prefer to work with the latest medical equipment. Many are very technologically savvy and can make good use out of the latest, cutting-edge medical equipment technologies.
Medical Equipment as a Marketing Tool
Upgraded medical equipment can also work as an effective marketing tool for your practice or hospital department. Targeting your patient population, identifying what medical equipment would most benefit their specific needs, and basing your purchase on that is a solid way to attract new patients. It is important to be as creative as possible when looking at how new medical equipment can augment your medical facility in as many ways as possible.

News About PACS Workstations

What is a PACS Workstation?
The radiological workstation is the most important part of the Picture Archiving and Communication System (PACS), since staff radiologists work with it almost all day long.
The picture archiving and communication system is designed to store, retrieve and transfer digital medical images. A PACS integrates image data from system to system, allowing for transfers within and between healthcare settings. This facilitates the availability of both images and image-related data at the point of care, as and when required. Most PACS handle images from various medical imaging instruments, including: ultrasound, magnetic resonance, positron emission tomography, computed tomography, endoscopy, mammograms, digital radiography, computed radiography, ophthalmology, etc. Additional types of image formats are always being added. Clinical areas beyond radiology: cardiology, oncology, gastroenterology and even the laboratory are creating medical images that can be incorporated into PACS.
What is a PACS Used For?
The main uses of PACS are:
  • Hard copy replacement: PACS replaces hard-copy based means of managing medical images, such as film archives. With the decreasing price of digital storage, PACS provide a growing cost and space advantage over film archives in addition to the instant access to prior images at the same institution. Digital copies are referred to as soft-copy.
  • Remote access: PACS expands on the possibilities of conventional systems by providing capabilities for off-site viewing and reporting (distance education, telediagnosis). It enables practitioners in different physical locations to access the same information simultaneously for teleradiology.
  • Electronic image integration platform: PACS provides the electronic platform for radiology images interfacing with other medical automation systems such as: Hospital Information System (HIS), Electronic Medical Record, Practice Management Software and Radiology Information System (RIS).
  • Radiology workflow management: PACS is used by radiology personnel to manage the workflow of patient exams.
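As a concrete illustration of the image data a PACS handles: most of these images are stored as DICOM files, which begin with a 128-byte preamble followed by the magic bytes "DICM". The short Python sketch below checks for that signature; it is an illustrative helper, not code from any particular PACS product:

```python
def is_dicom(data: bytes) -> bool:
    """Check the DICOM Part 10 file signature: a 128-byte
    preamble followed by the magic bytes b'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# Synthetic example: a zeroed preamble followed by the marker.
sample = bytes(128) + b"DICM"
print(is_dicom(sample))         # True
print(is_dicom(b"plain text"))  # False
```

Real PACS software goes on to parse the data elements that follow the signature, but a cheap check like this is how a viewer can recognize a DICOM object before attempting a full parse.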

Forced-Air Warming Systems Explained
How Can Hypothermia be Prevented in Medical Settings?
Unplanned hypothermia (a core temperature of less than 36 degrees C) can negatively impact patients in many ways. Even mild hypothermia may contribute to complications such as: surgical site infection, altered drug metabolism, impaired blood clotting, cardiovascular ischemia, prolonged recovery following surgery and shivering.
Many professionals in the field maintain that active patient warming helps normalize patient temperature. The literature supports the use of forced-air warming devices for normalizing patient temperature and reducing shivering, and suggests that forced-air warming is associated with reduced time in recovery. It is also generally agreed that both the perioperative maintenance of normothermia and the use of forced-air warming reduce shivering and improve patient comfort and satisfaction. Normothermia should therefore be a goal during emergence and recovery, and, when available, forced-air warming systems should be used to treat hypothermia.
Inadvertent perioperative hypothermia, defined as a core body temperature below 36 degrees C, is a common and preventable complication of surgery and is associated with poor patient outcomes.

Wednesday, October 31, 2012

The Basis of Image Guided Radiotherapy (IGRT)

What is radiotherapy and what does it do?
Radiotherapy, together with conventional surgery, is one of the most common cancer treatment options available. Radiation can shrink a tumor by killing tumor cells or interfering with the tumor’s ability to grow. With conventional radiotherapy the radiation dose needed to destroy the tumor is applied in low doses during many sessions.
An effective radiation delivery method used in radiotherapy is Intensity Modulated Radiation Therapy (IMRT). During IMRT the radiation dose is matched to the three-dimensional shape of a patient’s lesion, focusing higher radiation doses on the tumor while minimizing exposure to healthy tissue surrounding the treatment area. IMRT utilizes multiple radiation beams from more than one direction that constantly adjust to achieve the three-dimensional shape of the tumor.
Radiation therapy stops tumor cells from growing and dividing. In many cases radiation therapy can effectively kill cancer cells, shrinking or eliminating the tumor altogether.
One of the most important steps toward improving radiotherapy was the introduction of computed tomography, with direct applications in treatment planning. This new imaging technique, coupled with improvements in computer processing capability and speed, meant that computer planning systems rapidly developed to allow individualized patient planning in three dimensions. This was followed by the introduction of multi-leaf collimators, which increased the conformality of the dose distribution achievable around the treatment target.
How is image-guided radiotherapy (IGRT) superior to other methods?
More sophisticated methods of planning and beam delivery are now available in the form of intensity modulated radiotherapy (IMRT), in which the intensity of the radiation is varied during beam delivery. This enables better sparing of organs at risk and the possibility of escalating the dose to the target without compromising surrounding healthy tissue. This benefit can only be fully realized if the radiation distribution is assured to be delivered where it is planned in relation to patient structures.
Image-guided radiotherapy (IGRT) uses imaging techniques to improve the accuracy of radiotherapy delivery to the target tumor, allowing more accurate and precise targeting of the treatment volume and avoidance of organs at risk. This may lead to a reduction in the radiation-induced complications and side effects that are caused by irradiation of normal tissues. It may also allow an increased dose to be delivered to the target tissues, thereby maximizing the chances of successful control or eradication of the tumor.
Image-guided radiotherapy (IGRT) improves radiotherapy treatment by making it possible:
  • To visualize the anatomical target and organs at risk in 3D
  • To identify changes in the position, shape and size of the target anatomy relative to that seen when the treatment was planned
  • To quantify the variation in position of the anatomical target between the planned and initial setup treatment images
  • To correct any patient misalignment by changing the relative geometry of the treatment machine before the treatment is delivered
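As a minimal sketch of the last step, assuming the misalignment is a simple rigid translation (real IGRT systems also handle rotations and full 3D image registration), the couch correction is just the planned target position minus the measured position along each axis:

```python
def couch_correction(planned, measured):
    """Translation (in mm) to apply so the measured target position
    matches the planned position; rigid shift along x, y, z only."""
    return tuple(p - m for p, m in zip(planned, measured))

planned = (0.0, 0.0, 0.0)    # planned isocenter position, mm
measured = (2.5, -1.0, 0.4)  # target position seen on the setup image, mm
print(couch_correction(planned, measured))  # (-2.5, 1.0, -0.4)
```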
What is the Image-Guided Radiotherapy (IGRT) Unit Comprised of?
The typical image-guided radiotherapy (IGRT) system is a kV cone-beam CT system integrated onto a precise linear accelerator. The system consists of an X-ray tube and an amorphous silicon flat-panel detector, both mounted with a view direction perpendicular to the treatment beam axis. The tube is deployed for imaging while the detector unfolds from its stored position against the face of the gantry under motorized control. The X-ray source sits 1000 mm from the machine’s isocenter, the same standard distance as that of the therapeutic source to the isocenter.
There is also another, technically different, approach to image-guided radiotherapy (IGRT): Tomotherapy is a new way of delivering radiation treatment for cancer and literally means “slice therapy”. The tomotherapy system can deliver small beamlets of radiation from every point on a spiral, providing exceptional accuracy.
The more angles that a radiation treatment beam can be delivered from, the better the focus on the tumor and the less effect on surrounding tissue.
What makes tomotherapy truly revolutionary, however, is the ability to create a computed tomography (CT) image just prior to radiation treatment. This means that we can now view a full three-dimensional image of a patient’s anatomy and adjust the size, shape and intensity of the radiation beam to the precise location of the patient’s tumor.


Source: http://www.medwow.com/articles/

Tags: Tomotherapy, image-guided radiotherapy (IGRT), Intensity Modulated Radiation Therapy (IMRT), Radiotherapy, imaging equipment, Radiology

Telemedicine and How it Benefits Society

e-Health Defined
The term e-Health has been in use since the year 2000. e-Health includes much of medical information science, but tends to prioritize the delivery of clinical information, care and services over the functions of technologies. No single, all-encompassing consensus definition of e-Health exists; the term tends to be defined by a series of characteristics specified at varying levels of detail and generality. e-Health is considered one of the most important revolutions in healthcare since the advent of modern medicine, or even of public health measures such as sanitation and clean water.
The term e-Health can encompass a range of services or systems at the intersection of medicine/healthcare and information technology, including:
  • Electronic health records: enabling the communication of patient data between different healthcare professionals.
  • Telemedicine: physical and psychological treatments at a distance.
  • Consumer health informatics: use of electronic resources on medical topics by healthy individuals or patients.
  • Health knowledge management: e.g. in an overview of latest medical journals, best practice guidelines or epidemiological tracking.
  • Virtual healthcare teams: consisting of healthcare professionals who collaborate and share information on patients through digital equipment.
  • m-Health: use of mobile devices in collecting aggregate and patient-level health data, providing healthcare information to practitioners, researchers and patients, real-time monitoring of patient vitals, and direct provision of care via mobile telemedicine.
  • Healthcare Information Systems: software solutions for appointment scheduling, patient data management, work schedule management and other administrative tasks surrounding health.
Over time, chronic patients often acquire a high level of knowledge about the processes involved in their own care, and frequently develop a routine in coping with their condition. For these types of routine patients, front-end e-health solutions tend to be relatively easy to implement.
What exactly is e-Mental health?
e-Mental health refers to the delivery of mental health services via the internet through videoconferencing, chat, or email web applications. e-Mental health encompasses online talk therapy, online pharmaceutical therapy, online counseling, computer-based interventions, cyber mental health approaches, and online life coaching. This form of psychological intervention offers a series of benefits, as well as challenges, to providers and clients; most notable among the challenges is online security.
How do Telemedicine and m-Health work?
Telemedicine is the use of medical information exchanged from one site to another via electronic communications to improve patients’ health status or for educational purposes. It includes consultative, diagnostic and treatment services. Mobile health information technology (m-Health) typically refers to portable devices with the capability to create, store, retrieve, and transmit data in real time between end users for the purpose of improving patient safety and quality of care. The flow of mobile health information is characterized by portable hardware coupled with software applications central to patient care and subsequently increases clinicians’ reach, mobility, and ease of information access, regardless of location.
For example, a clinician might use a mobile device to access a patient’s electronic health record, write and transmit prescriptions to a pharmacy, interact with patient treatment plans, communicate public health data, order diagnostic tests, review labs, or access medical references. Data transmission is accomplished by technologies common in everyday life, including Bluetooth, cellular, infrared, Wi-Fi, and wired technologies, all of which operate as part of a network. Mobile devices can be helpful across the healthcare spectrum, transmitting vital information quickly during an acute public health crisis or being used for ongoing needs, such as education and training. When utilized for patient care, mobile devices are credited with improving patient safety by eliminating errors commonly associated with paper-based medical records and by enhancing the continuity of care. In addition to improved patient outcomes, workflow and administrative efficiencies from the use of mobile devices can produce cost savings for the user or user organization.
The future of Telemedicine
Telemedicine applications will play an increasingly important role in healthcare and provide tools that are indispensable for home health care, remote patient monitoring, and disease management. Telemedicine will include not only rural health and battlefield care, but also nursing homes, assisted living facilities, and maritime and aviation applications.
Advances in technology, including wireless connectivity and mobile devices, will give practitioners, medical centers, and hospitals important new tools for managing patient care, electronic records, and medical billing, ultimately enabling patients to have more control over their own well-being.
The benefits of Mobile Health (m-Health)
m-Health, or mobile health, is a term used for the practice of medicine and public health supported by mobile devices. The term most commonly refers to the use of mobile communication devices, such as mobile phones and PDAs, for health services and information. The m-Health field has emerged as a sub-segment of e-Health: the use of information and communication technology, such as computers, mobile phones, communications satellites and patient monitors, for health services and information. m-Health applications include the use of mobile devices in collecting community and clinical health data, delivery of healthcare information to practitioners, researchers and patients, real-time monitoring of patient vital signs, and direct provision of care via mobile telemedicine.
While m-Health certainly is applicable for industrialized nations, the field has emerged in recent years primarily as an application for developing countries, stemming from the rapid rise of mobile phone penetration in low-income nations. The field, then, largely emerges as a means of providing greater access to larger segments of populations in developing countries, as well as improving the capacity of health systems in such countries to provide quality healthcare.
Tags: m-Health (mobile health), Telemedicine, Healthcare Information Systems, healthcare, medical equipment, medical equipment parts, Medical Software, MedWOW




The Benefits of Pulse Oximetry

The Non-Invasive Advantage
There is no doubt that pulse oximetry represents a great advance in patient monitoring. It is a relatively inexpensive and above all, completely non-invasive technique.
Pulse oximetry is a continuous and non-invasive method of measuring the level of arterial oxygen saturation in blood. The measurement is taken by placing a sensor on a patient, usually on the fingertip for adults, and the hand or foot for infants. The sensor is connected to the pulse oximetry instrument with a patient cable. The pulse oximetry sensor collects signal data from the patient and sends it to the instrument. The instrument displays the calculated data in three ways:
  • As a percent value for arterial oxygen saturation (SpO2).
  • As a pulse rate (PR).
  • As a plethysmographic waveform.
The Evolution of Pulse Oximetry
Development of non-invasive spectrophotometric techniques to monitor O2 saturation began during World War II. The development of high-altitude aircraft created a need for pilots to be externally monitored for physiological changes induced by extreme altitude. In response to this need, the first functional non-invasive spectrophotometer was developed in 1942. Its inventor, Glenn Millikan, named this new device the “oximeter”.
Pulse oximeters have evolved from physiologic monitoring curiosities to common patient monitoring devices. New pulse oximetry technology couples spectrophotometry with pulse waveform monitoring and permits clinicians to continuously assess arterial O2 saturation in operating rooms, in intensive care units, during sleep studies (polysomnography), and at the bedside. Portable pulse oximeters and recorders have also become popular monitoring devices during emergency medical transport and outpatient assessment of gas exchange. Advantages to pulse oximeters, other than their non-invasiveness, include their well-documented accuracy, ease-of-application, and good patient tolerance.
Pulse Oximetry’s Abilities

Continuous pulse oximetric monitoring of arterial oxygenation can detect intermittent or chronic disruptions in gas exchange that may not be detected by random arterial blood sampling and analysis. Pulse oximeter measurements of O2 saturation also avoid the risk of morbidity and mortality associated with invasive arterial blood sampling. Another value of continuous monitoring is the ability to quantify the amount of time spent at any given level of arterial O2 saturation. This information can then be used to track the progression of gas exchange impairment or to evaluate the effectiveness of therapeutic interventions. Given the widespread application of pulse oximetry technology, an understanding of its operating principles and practical limitations can aid clinicians. The following section describes the fundamental principles used in pulse oximetry technology, to acquaint clinicians with the environmental and physiological conditions that can affect its use.
The Measurement Process
The measurement process is based on two factors:
  • The heart generates a pulsatile signal in arterial blood that is not present in venous blood or other tissues.
  • Oxyhemoglobin and reduced hemoglobin have different absorption spectra. Also, it is important to note that both spectra are within the optical window of water (and the soft tissue).
Pulse oximeters measure oxygen saturation by means of a sensor attached to the patient’s finger, toe, nose, earlobe or forehead. Typically, the sensor uses two light-emitting diodes (LEDs), at wavelengths of 660 nm (red) and 940 nm (infrared), and a photodetector placed opposite them. The photodetector measures the amount of red and infrared light that passes through the tissue to determine the quantity of light absorbed by oxyhemoglobin and hemoglobin. As the proportion of oxyhemoglobin in the blood increases, absorbance at the red wavelength decreases while absorbance at the infrared wavelength increases. SpO2 is determined by calculating the ratio of red-to-infrared light absorbances and comparing it with values in a look-up table or calibration curve: a standardized curve developed empirically by simultaneous measurement of SaO2 and light absorbances.
SpO2 is physiologically related to arterial oxygen tension (PaO2) according to the O2Hb dissociation curve. Because the O2Hb dissociation curve has a sigmoid shape, oximetry is relatively insensitive in the detection of developing hypoxemia in patients with high baseline PaO2.
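The calculation above can be sketched in a few lines of Python. The pulsatile (AC) part of each wavelength’s signal is normalized by its baseline (DC) part before the two are compared; the linear mapping from the resulting ratio R to SpO2 used here (SpO2 ~ 110 - 25R) is a commonly cited textbook approximation of the empirical calibration curve, not any manufacturer’s actual look-up table:

```python
def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """The 'ratio of ratios' R: each wavelength's pulsatile (AC)
    component normalized by its baseline (DC) component."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_from_r(r):
    """Illustrative linear approximation of the empirical
    calibration curve: SpO2 ~ 110 - 25 * R."""
    return 110.0 - 25.0 * r

r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)
print(r, spo2_from_r(r))  # 0.5 97.5
```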
SpO2 measurements made by a pulse oximeter are defined as accurate if the root-mean-square (RMS) difference is less than or equal to 4.0% SpO2 over the arterial oxygen saturation (SaO2) range of 70% to 100%. SpO2 accuracy should be determined by a clinical study of healthy or sick subjects in which SpO2 measurements are compared with SaO2 measurements.
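The RMS criterion can be written out directly; the sketch below compares a set of hypothetical paired SpO2/SaO2 readings against the 4.0% threshold:

```python
import math

def rms_difference(spo2_readings, sao2_references):
    """Root-mean-square difference between paired pulse oximeter
    readings (SpO2) and reference arterial measurements (SaO2)."""
    pairs = list(zip(spo2_readings, sao2_references))
    return math.sqrt(sum((s - a) ** 2 for s, a in pairs) / len(pairs))

# Hypothetical paired readings over the 70-100% range:
spo2 = [97, 94, 90, 85, 78]
sao2 = [98, 95, 89, 84, 80]
print(rms_difference(spo2, sao2) <= 4.0)  # True: within the criterion
```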
Other Pulse Oximeter Factors

Pulse oximeters can also measure pulse rate. The standard states that pulse rate accuracy should be defined as the RMS difference between paired pulse data recorded with the pulse oximeter and with a reference method.
There are several limitations of pulse oximetry: skin pigmentation, ambient light, intravenous dyes, low perfusion and motion artifact.
As pulse oximetry technology has advanced, manufacturers have attempted to reduce the effect of some of the limitations mentioned above. Particular improvements have been made in the ability of oximeters to deal with low signal-to-noise conditions observed during periods of motion or low perfusion.
Regular functional checks should be carried out on equipment to ensure it is safe to use. This should include visual checks, especially checking for signs of damage.
Functionality of an oximeter can be checked using a pulse oximeter tester or simulator. These simulate the properties of a finger and its pulsatile blood flow. Their purpose is allowing testing of a pulse oximeter and the continuity of probes. They cannot be used to validate the accuracy of a pulse oximeter.
Tags: MedWOW, Pulse Oximeters, pulse oximeter tester, patient monitoring devices, Pulse oximetry
 
 

About Remote Monitoring for Cardiac Pacemaker and Implantable Cardioverter Defibrillator Patients

How Remote Cardiac Monitoring Works
Remote monitoring of implantable active cardiac devices involves the transmission of data stored in a patient’s cardiac implant, automatically or by patient activation, to a receiver in the patient’s home. From the receiver, the information is transmitted via a telephone or other network to a server or service center, where the data is published on a secure, dedicated website viewable by the patient’s clinician. In the case of a significant event requiring urgent treatment, the clinician can be alerted by fax, email or short message service (SMS).
Disorders of the heart’s conduction system may lead to arrhythmias that are associated with reduced quality of life and sudden cardiac death if untreated. Treatments include pharmacological therapy or direct electrical stimulation. Electrical intervention is delivered via implanted cardiac devices that can stimulate the heart, resynchronize contraction, or deliver intracardiac shocks to terminate lethal rhythms. These devices include permanent pacemakers to treat bradyarrhythmias, implantable cardioverter defibrillators (ICDs) to decrease the risk of sudden cardiac death among high risk patients, and cardiac resynchronization therapy pacemakers and ICDs to alleviate symptoms and decrease mortality for patients with severe heart failure associated with dyssynchronous ventricular contraction.
Wireless Communication Options
In the era of communication technology, new options are available for following up patients implanted with cardiac pacemakers and implantable cardioverter defibrillators (ICDs). Most major companies offer devices with wireless capabilities that communicate automatically with home receiver-transmitters, which then relay data to the physician, allowing remote patient follow-up and monitoring. These systems can be widely used for remote follow-up, and their adoption is increasing rapidly.
Remote monitoring systems with minimal patient involvement have been developed by the main manufacturers which supply implantable cardiac devices. These remote monitoring systems have the ability to transmit periodic messages and, in some cases, patient-activated messages, via landline or mobile telephone networks. Devices transmit data to a secure server or service center at a scheduled time. This may be daily or at a different regular interval specified by the clinician. Data is viewable by clinicians on secure websites. If a significant event that requires urgent patient treatment is detected, an alert can be sent to the clinician by email, SMS or fax.
The Main Components of Remote Pacemaker and ICD Systems
The main components of remote pacemaker and ICD monitoring systems are:
  • The implanted cardiac devices capable of storing and transmitting data about the device’s functions.
  • All monitoring systems include a remote communication device, usually located in the patient’s home. Its function is to receive data from the implant and to transmit the data via a landline or mobile telephone network to a secure server or remote monitoring service center. Some data transfer systems are automated, while others require patient initiation. Data transmission typically lasts a few minutes, but can take as little as 10-15 seconds.
  • Facilities for clinicians to access patient data or to receive alerts are also required. Generally, patient data can be accessed anywhere and at any time via a secure website and alerts can be sent via e-mail, fax or short-message service (SMS).
Broad Information for Optimal Patient Care
The specially dedicated network enables patients to transmit data from their implantable device automatically or as instructed by their physician, using the communication device that is connected to the cell phone or the standard telephone line. Within minutes, the patient’s physician and nurses can view the data on a secure Internet Web site. Available information includes arrhythmia episode reports and stored electrograms along with device integrity information, which is comparable to the information provided during an in-clinic device follow-up visit, and provides the physician with a view of how the device and patient’s heart are operating. The system provides an efficient, safe and convenient way for specialty physicians to optimize patient care by remotely monitoring the condition of their patients and, if needed, make adjustments to medication or prescribe additional therapy.
Messages received at the secure server or service center are translated into a report, which can be accessed by clinicians using the internet. Data transferred by email or the internet is encrypted before dispatch, to safeguard patient confidentiality. New data is added to a database as it is received. Some cardiac pacemaker and implantable cardioverters defibrillator manufacturers also have a dedicated secure website for patients to access personalized information about their device and condition. Data can also be sent directly to a hospital’s electronic health records system and merged with patients’ health records.
Some manufacturers’ cardiac implants and pacemakers have the capability to detect a problem, such as atrial fibrillation or a device integrity issue, and (when in range) automatically establish wireless communication with the remote sensor/transmitter in the patient’s home. This in turn automatically sends a message to the secure server or service center and the clinician receives a notification via e-mail, fax or SMS.
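The alert-routing behavior described above can be sketched as a small Python function. The event names and channel list here are hypothetical illustrations, not any manufacturer’s actual event taxonomy:

```python
# Hypothetical set of events considered urgent enough to alert a clinician.
URGENT_EVENTS = {"atrial_fibrillation", "device_integrity_issue"}

def route_transmission(events, clinician_channels):
    """Decide which notification channels to trigger for one
    transmission: routine data is archived silently, while urgent
    events notify the clinician on every configured channel."""
    if URGENT_EVENTS.intersection(events):
        return list(clinician_channels)
    return []

print(route_transmission({"routine_followup"}, ["email", "sms", "fax"]))     # []
print(route_transmission({"atrial_fibrillation"}, ["email", "sms", "fax"]))  # ['email', 'sms', 'fax']
```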
Informed Purpose and Limitations
The patient needs to be informed of the purpose and limitations of remote monitoring, for example that it does not replace an emergency service and that alert events may not be dealt with outside office hours. Before initiating remote monitoring and follow-up, the patient may be asked to sign a written informed consent stating these points and authorizing transmission of personal data to third parties. Respect for privacy and the confidentiality of patient data by device companies should be subject to strict rules, described in contracts.
 


Tags: Remote monitoring of implantable active cardiac devices, Remote monitoring systems, devices with wireless capabilities, implanted cardiac devices, Cardiac Ultrasound, ICD, Pacemakers