US9678864B2 - Data reallocation upon detection of errors - Google Patents

Data reallocation upon detection of errors

Info

Publication number
US9678864B2
Authority
US
United States
Prior art keywords
location
physical
data
soft
physical location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/559,327
Other versions
US20160162208A1 (en)
Inventor
Jun Cai
AndiSumaryo Sutiawan
Jeetandra Kella
ChuanPeng Ong
Mark Allen Gaertner
Brian T. Edgar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seagate Technology LLC
Original Assignee
Seagate Technology LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seagate Technology LLC
Priority to US14/559,327
Assigned to SEAGATE TECHNOLOGY LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SUTIAWAN, ANDISUMARYO; CAI, JUN; EDGAR, BRIAN T.; GAERTNER, MARK ALLEN; KELLA, JEETANDRA; ONG, CHUANPENG
Publication of US20160162208A1
Application granted
Publication of US9678864B2
Legal status: Active
Expiration: Adjusted

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08 Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1008 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
    • G06F 11/1012 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices using codes or arrangements adapted for a specific type of error
    • G06F 11/1016 Error in accessing a memory location, i.e. addressing error
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08 Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1008 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
    • G06F 11/1072 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices in multilevel memories
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/10527 Audio or video recording; Data buffering arrangements
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/12 Formatting, e.g. arrangement of data block or words on the record carriers
    • G11B 20/1217 Formatting, e.g. arrangement of data block or words on the record carriers on discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/18 Error detection or correction; Testing, e.g. of drop-outs
    • G11B 20/1883 Methods for assignment of alternate areas for defective areas
    • G11B 20/1889 Methods for assignment of alternate areas for defective areas with discs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2206/00 Indexing scheme related to dedicated interfaces for computers
    • G06F 2206/10 Indexing scheme related to storage interfaces for computers, indexing schema related to group G06F3/06
    • G06F 2206/1014 One time programmable [OTP] memory, e.g. PROM, WORM
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72 Details relating to flash memory management
    • G06F 2212/7201 Logical to physical mapping or translation of blocks or pages


Abstract

A device includes one or more data storage media having a main storage area, and includes a non-volatile cache memory and a controller. The controller stores a plurality of data packets into a plurality of physical locations in the main storage area. Each of the data packets is associated with a different logical block address (LBA), and each of the physical locations is associated with a different physical location address (PLA). The controller generates mapping information that links the different LBAs to the different PLAs. Upon detecting a soft error when reading a data packet stored in a physical location, the controller relocates the data packet associated with the soft error to a physical location of the non-volatile cache memory. The controller also marks the physical location as a suspect location. The controller updates the mapping information to reflect the relocation of the data packet associated with the soft error.

Description

BACKGROUND
Data storage devices are used to access digital data in a fast and efficient manner. At a host level, user data are often structured in terms of variable length files, which can be constituted from one or more fixed-sized logical blocks (such as logical block addresses (LBAs)).
To store or retrieve user data with an associated data storage device, host commands are generally issued to the device using a logical block convention. The device links LBAs associated with host write commands to physical locations or blocks of media on which the data are to be stored. The device may also utilize LBAs to locate physical blocks from which the data are to be retrieved.
When the data storage device is characterized as a disc drive, a controller may be used to execute a seek command to move a data transducer or head adjacent a rotating recording disc and carry out the data transfer (i.e., read/write) operation with the associated physical block(s). Other types of data storage devices (for example, solid state data storage devices that have no moving parts) generally carry out other types of access operations to transfer the associated data.
Disc drives, for example, may encounter a read error during an attempt to read data from a location of the recording disc. Read errors may occur, for example, due to poor head placement during read operations, adjacent track interference/noise during read operations, poorly written data in the location of interest, foreign matter on the disc surface, a damaged or degraded region of the disc or the like. Upon detecting the read error, the disc drive may implement some sort of data and location repair procedure. Current repair procedures that target individual storage locations with defective data may be more suitable for disc drives that employ conventional recording techniques (for example, utilize non-overlapping tracks and permit sector/location-level random data updates) than for drives that utilize, for example, bands of partially overlapping tracks where individual locations of a band may not be randomly updated. Thus, there is a need for improvements in data error handling/management procedures. It is to these and other improvements that the present embodiments are generally directed.
SUMMARY
In a particular embodiment, a method is disclosed that includes storing a plurality of data packets into a plurality of physical locations in a main storage area of one or more data storage media. Each of the plurality of data packets is associated with a different logical block address (LBA), and each of the plurality of physical locations is associated with a different physical location address. The method also includes generating mapping information that links the different LBAs associated with the different data packets to the different physical location addresses associated with the different physical locations. Additionally, upon detecting a soft error when reading at least one data packet of the plurality of data packets stored in at least one physical location of the plurality of physical locations, the at least one data packet associated with the soft error is relocated to at least one physical location of a non-volatile cache memory. An indication is made that the at least one physical location of the plurality of physical locations is a suspect location. The mapping information is updated to reflect the relocation of the at least one data packet associated with the soft error to the at least one physical location in the non-volatile cache memory.
In another particular embodiment, a method is disclosed that includes detecting an error when reading a data packet stored in a non-volatile location of a data storage medium. The method also includes recovering the data packet from the non-volatile location and marking the non-volatile location as suspect. The method further includes relocating the data packet to a different non-volatile location, and updating mapping information to reflect the relocation.
In yet another particular embodiment, a device includes one or more data storage media having a main storage area. The device also includes a non-volatile cache memory and a controller. The controller stores a plurality of data packets into a plurality of physical locations in the main storage area of the one or more data storage media. Each of the plurality of data packets is associated with a different logical block address (LBA), and each of the plurality of physical locations is associated with a different physical location address. The controller generates mapping information that links the different LBAs associated with the different data packets to the different physical location addresses associated with the different physical locations. Upon detecting a soft error when reading at least one data packet of the plurality of data packets stored in at least one physical location of the plurality of physical locations, the controller relocates the at least one data packet associated with the soft error to at least one physical location of a non-volatile cache memory. The controller also makes an indication that the at least one physical location of the plurality of physical locations is a suspect location. The controller updates the mapping information to reflect the relocation of the at least one data packet associated with the soft error to the at least one physical location in the non-volatile cache memory.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a data storage device in accordance with one embodiment.
FIGS. 2A-2J are diagrammatic illustrations that include an example of soft error/defect management in accordance with an embodiment.
FIG. 3 is a flow diagram of an error/defect management method in accordance with one embodiment.
FIG. 4 is an isometric view of a solid-state drive that employs a soft error/defect management system in accordance with one embodiment.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
The disclosure is generally related to data error handling/management in data storage devices such as hard drives, hybrid drives, and solid state drives.
A data storage device such as a disc drive may encounter a read error during an attempt to read data from, for example, a defective location of a recording disc. Errors/defects may be categorized as soft and hard errors/defects. Soft defects include errors where data may still be read correctly, but the quality of the signal representing the data may be below nominal values. Soft defects may be caused by phenomena such as side track erasure (STE), adjacent track interference, transient weak write, etc. In disc drives that employ conventional recording techniques (for example, utilize non-overlapping tracks for recording data), a soft defect may be repaired by scrubbing or refresh, which involves rewriting the data in the same location. However, in disc drives that employ, for example, discs on which data tracks are written in a partially-overlapping shingled pattern, with each successive track overwriting a portion of the previous track, rewriting data into a particular location may corrupt data in downstream tracks. Thus, to carry out a scrubbing operation in a disc drive in which such a shingled recording technique is used, an entire band of data tracks/locations may have to be rewritten. This may be relatively time-consuming and therefore inefficient. Thus, in one or more of the embodiments described in detail further below, instead of carrying out a band-level scrubbing operation, data from the defective location is relocated to a cache memory location, and mapping information is updated to reflect the relocation of that data. It should be noted that, in such embodiments, the location at which the soft error occurred is not marked/flagged as damaged and therefore data from subsequent write commands received in the data storage device may be stored at that location.
As noted above, in addition to soft errors/defects, certain other errors/defects may be categorized as hard errors/defects. A hard defect may be caused by effects such as a damaged or degraded region of the medium. Repairing a hard defect may involve moving/reallocating the data associated with defective media regions to a new location that may not be a cache memory location, but may be a location that is part of a main storage area of the device. Upon successful relocation, the defective region(s) may be marked/flagged so that no further data is allowed to be stored at the region.
FIG. 1 shows a block diagram of the disc drive 100 that employs a soft error/defect management system in accordance with one embodiment. As will be described in detail further below, disc drive 100 employs one or more discs on which multiple data tracks may be written in a partially-overlapping shingled pattern, with each successive track overwriting a portion of the previous track.
Disc drive 100 is shown in FIG. 1 to be operably connected to a host computer 102 in which disc drive 100 may be mounted. Control communication paths are provided between host computer 102 and a disc drive microprocessor 104, the microprocessor 104 generally providing top level communication and control for disc drive 100 in conjunction with programming for microprocessor 104 stored in microprocessor memory (MEM) 106. Disc drive 100 may communicate with host computer 102 using a bus 108. Bus 108 may not be a single physical object, but rather a collection of cabling/wiring, for example, that, taken together, make up a communication channel between host computer 102 and disc drive 100. As such, bus 108 carries the cables/wires used to transfer data between a disc drive interface 110 and host computer 102 as well as the cables/wires used to transfer data between microprocessor 104 and host computer 102.
MEM 106 can include random access memory (RAM), read only memory (ROM), and other sources of resident memory for microprocessor 104. Disc drive 100 includes one or more data storage discs 112. Discs 112 are rotated at a substantially constant high speed by a spindle control circuit 114. One or more heads 116 communicate with the surface(s) of discs 112 to carry out data read/write operations. The radial position of heads 116 is controlled through the application of current to a coil in an actuator assembly 117. A servo control system 120 provides such control.
As noted above, tracks may be written on one or more storage discs 112 in a partially-overlaying relationship. The overlaying of tracks is shown in close-up view of area 115 of disc(s) 112. In area 115, a corner of head 116A is shown writing a track portion 124. Different shading within the track portion 124 represents different magnetic orientations that correspond to different values of stored binary data. The track portion 124 is overlaid over part of track portion 125. Similarly, track portion 125 is overlaid over part of portion 126, portion 126 is overlaid over portion 127, etc.
The portions 124-127 may be part of what is referred to herein as a “band” which may include hundreds or thousands of similarly overlapping, concentric portions 124-127. Gaps are created between bands so that each band can be updated independently of other bands. The overlaying of successive track portions within a band in shingled magnetic recording (SMR) means that individual parts of the band may not be randomly updated on their own. This is because spacings between centers of track portions 124, 125, 126, 127, for example, are smaller than a width of a write pole (not separately shown) of head 116. However, a width of a reader (not separately shown) of head 116 may be small enough to read individual track portions 124, 125, 126, 127, thereby enabling random reads of data to be carried out.
In some embodiments, a portion of the media disc(s) 112 may be reserved for use as a media cache 130, or locations for media cache 130 may be dynamically allocated from a pool of available locations on disc(s) 112. Thus, although media cache 130 is shown in FIG. 1 as being located proximate to an outer diameter 134 of disc(s) 112, in some embodiments, media cache 130 may include a variable number of disc storage locations that can be selected from any suitable area(s) of the disc(s) 112. It should be noted that both main storage locations 135 and media cache locations may be selected/dynamically allocated from a pool of available locations on disc(s) 112. In some embodiments, media cache 130 may comprise shingled bands. In other embodiments, media cache 130 may be non-shingled (i.e., element 130 may include tracks that are each of a sufficiently large width relative to the width of the write pole of head 116 to allow the write pole to write data to individual ones of the tracks without overwriting data in any adjacent tracks).
Data is transferred between host computer 102 and disc drive 100 by way of disc drive interface 110, which may include a buffer 118 to facilitate high speed data transfer between host computer 102 and disc drive 100. In one embodiment, buffer 118 is constructed from solid-state components. While buffer 118 is depicted in FIG. 1 as being physically co-located with interface 110, one skilled in the art should appreciate that buffer 118 may be electrically connected, yet physically separated from, interface 110. Also, although buffer 118 is shown as a single memory unit in FIG. 1, it can comprise multiple sections or even multiple memory chips, with individual ones of the memory chips being of different memory types. For example, buffer memory 118 can include a first section or first memory chip, which may be a non-volatile memory (NVM) 121, and a second section or second memory chip that may be a volatile memory (VM) 123. By employing NVM 121, which does not lose its data upon power loss, drive 100 need not immediately transfer data received from the sending interface to data storage medium 112. In general, disc drive 100 uses buffer 118 and media cache 130 to manage data transfers to and/or from main storage locations 135 on disc(s) 112.
Data to be written to disc drive 100 are passed from host computer 102 to buffer 118 and then to a read/write channel 122, which encodes and serializes the data and provides the requisite write current signals to heads 116. To retrieve data that have been previously stored by disc drive 100, read signals are generated by the heads 116 and provided to read/write channel 122. Interface 110 performs read signal decoding, error detection, and error correction operations. Interface 110 then outputs the retrieved data to buffer 118 for subsequent transfer to the host computer 102.
Disc drive 100 may encounter a read error during an attempt to read data from, for example, a defective one of main storage locations 135. With the help of microprocessor/controller 104, disc drive 100 may carry out a data recovery procedure, which may include determining whether the defect at the location is a soft defect or a hard defect. If the determination is that the error is due to a hard defect, the location at which the error occurred may be marked/flagged as damaged and may no longer be used for storing data. Any logical block address (LBA) mapping associated with that location is removed. However, if a determination is made that the error is due to a soft defect, the location is not marked/flagged as damaged and data recovered from that location may be relocated to a cache memory location. In one embodiment, the recovered data may be relocated to a location within media cache 130. In another embodiment, the recovered data may be relocated to a location within non-volatile memory 121. LBA mapping information that may be stored in non-volatile memory 121, media cache 130 and/or main storage 135 is updated to reflect the relocation of data from the particular one of main storage locations 135 at which the defect was detected to the location in media cache 130 or non-volatile memory 121. As indicated above, the particular one of main storage locations 135 at which the soft error occurred is not marked/flagged as damaged and therefore data from subsequent write commands received in disc drive 100 may be stored at that location. An example of error management in a disc drive such as disc drive 100 is provided below in connection with FIGS. 2A-2J.
FIGS. 2A-2J are diagrammatic illustrations that show error management in accordance with some embodiments. FIG. 2A shows a host command 200 that includes data packets P1, P2 and P3, which are associated with host LBAs 0, 1 and 2, respectively. LBAs 0, 1 and 2 are collectively denoted by reference numeral 202. Packets P1, P2 and P3 may be stored in a main storage area 135 in a manner described above in connection with FIG. 1. In the particular example shown in FIG. 2B, packets P1, P2, P3 are stored in three physical locations 204 denoted by physical block addresses (PBAs) 0, 1 and 2, respectively. As shown in FIG. 2C, mapping information for packets P1, P2 and P3 of command 200 may include a host start LBA (LBA 0 for this example), a command length or transfer length (i.e., the number of LBAs in the command), which is 3 for this example, and the starting PBA (PBA 0 for this example). Alternatively, more than one LBA and/or PBA associated with the stored command may be included in the mapping information. If all LBAs and/or all PBAs associated with the stored packets (for example, P1, P2 and P3) are included in the mapping information, then the transfer length may be excluded.
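For illustration, the extent-style mapping information of FIG. 2C (host start LBA, transfer length, starting PBA) can be modeled as in the following Python sketch; the MapEntry structure and lookup helper are illustrative assumptions, not an actual drive firmware format.

```python
from dataclasses import dataclass

@dataclass
class MapEntry:
    start_lba: int  # host start LBA of the extent
    length: int     # transfer length (number of LBAs in the command)
    start_pba: int  # starting physical block address

def lookup(mapping, lba):
    """Resolve an LBA to the PBA that currently holds its data."""
    for e in mapping:
        if e.start_lba <= lba < e.start_lba + e.length:
            return e.start_pba + (lba - e.start_lba)
    raise KeyError(f"LBA {lba} is unmapped")

# FIG. 2C: packets P1, P2 and P3 (LBAs 0-2) stored at PBAs 0-2.
mapping = [MapEntry(start_lba=0, length=3, start_pba=0)]
assert lookup(mapping, 1) == 1  # P2 resolves to PBA 1
```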
If, during a read operation from PBAs 0, 1 and 2, a soft error is detected, for example, at PBA 1, then P2 is relocated to a location within a media cache such as 130 or any other non-volatile cache/buffer storage location. FIG. 2D shows packet P2 being relocated to a media cache location denoted by PBA X. FIG. 2E includes mapping information for relocated packet P2. In FIG. 2E, the host start LBA is LBA 1, the transfer length is 1 and the starting PBA is PBA X. Mapping of packets P1 and P3 may also change due to the break in continuity of LBAs/PBAs used for storage of the command 200. An example of a change in mapping for packets P1 and P3 is shown in FIG. 2F.
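Continuing the sketch above, relocating P2 splits the single three-LBA extent into three entries, as in FIGS. 2D-2F; here PBA X is represented by a hypothetical media cache address.

```python
def relocate(mapping, lba, new_pba):
    """Split the extent containing `lba` so that this one LBA maps to
    `new_pba` (e.g., a media cache location), leaving the remainders of
    the original extent mapped as before (FIGS. 2D-2F)."""
    updated = []
    for e in mapping:
        if not (e.start_lba <= lba < e.start_lba + e.length):
            updated.append(e)
            continue
        offset = lba - e.start_lba
        if offset > 0:  # leading remainder (P1 in the example)
            updated.append(MapEntry(e.start_lba, offset, e.start_pba))
        updated.append(MapEntry(lba, 1, new_pba))  # relocated packet (P2)
        trailing = e.length - offset - 1
        if trailing > 0:  # trailing remainder (P3 in the example)
            updated.append(MapEntry(lba + 1, trailing, e.start_pba + offset + 1))
    return updated

PBA_X = 1000  # hypothetical media cache PBA
mapping = relocate(mapping, lba=1, new_pba=PBA_X)
assert [lookup(mapping, lba) for lba in (0, 1, 2)] == [0, PBA_X, 2]
```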
As indicated above, data from subsequent write commands received in the storage device may be stored at PBA 1. If, in the example shown in FIG. 2A-2F, the relocation of packet P2 from PBA 1 as a result of a soft error being detected at that location is a first relocation from that PBA, then a relocation count (or soft reallocation count) associated with PBA 1 may be set to 1. The soft reallocation count for PBA 1 is incremented by 1 for each subsequent relocation of data from that location due to a soft error (i.e., for each subsequent soft reallocation). In some embodiments, if a soft reallocation count for a particular physical location crosses a predetermined soft reallocation count threshold, then that physical location is marked/flagged as damaged and no longer used for data storage.
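A per-location soft reallocation count as described above might be tracked as follows; the threshold value is an assumption, since the disclosure leaves it device-specific.

```python
from collections import defaultdict

SOFT_REALLOC_THRESHOLD = 3  # assumed value; the disclosure does not fix it
soft_realloc_count = defaultdict(int)
damaged = set()  # PBAs flagged as damaged and retired from use

def record_soft_reallocation(pba):
    """Increment the soft reallocation count for a PBA; once the count
    crosses the threshold, flag the location as damaged."""
    soft_realloc_count[pba] += 1
    if soft_realloc_count[pba] > SOFT_REALLOC_THRESHOLD:
        damaged.add(pba)

record_soft_reallocation(1)  # first soft reallocation from PBA 1
assert soft_realloc_count[1] == 1 and 1 not in damaged
```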
In some embodiments, an outer codeword may be generated to protect a plurality of packets received in the data storage device. In general, an outer codeword is a collection of packets that is resistant to damage. In some embodiments, an outer codeword is a collection of packets and redundancy that protects packets even in the face of losing one or more entire packets in the outer codeword. In such embodiments, an outer codeword redundancy, which is a part of the outer codeword, includes one or more packets or information that are not LBA data. The outer codeword redundancy may exist solely as redundancy to help the overall outer codeword survive damage. For example, an outer codeword or multi-packet codeword may be generated to protect packets P1, P2 and P3. FIG. 2G shows such an example in which packets P1, P2 and P3 are stored in PBAs 0, 1, 2, respectively, and the outer codeword redundancy, which is denoted by RP (redundancy packet), is stored in PBA 3. In a particular embodiment, P1, P2, P3 and RP may constitute the outer codeword. It should be noted that RP may be computed for a set of packets in which P1, P2 and P3 are a subset. In FIGS. 2G-2J, optional box 206 is used to denote any suitable number of additional packets that may also be under protection in the codeword. In the interest of simplification, the description below in connection with FIGS. 2G-2J does not address any additional packets 206.
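The disclosure does not specify a particular outer code; as a minimal stand-in, a redundancy packet RP can be sketched as the bytewise XOR of the protected packets, which survives the loss of any single packet in the codeword.

```python
PACKET_SIZE = 4096  # assumed packet size

def xor_parity(packets):
    """Compute a redundancy packet as the bytewise XOR of the given packets.
    XOR parity is a stand-in; the disclosure does not specify the code."""
    rp = bytearray(PACKET_SIZE)
    for packet in packets:
        for i, b in enumerate(packet):
            rp[i] ^= b
    return bytes(rp)

# FIG. 2G: P1, P2, P3 at PBAs 0-2, with RP at PBA 3.
p1, p2, p3 = (bytes([k]) * PACKET_SIZE for k in (1, 2, 3))
rp = xor_parity([p1, p2, p3])

# Losing any one packet, say P2, is survivable: XOR the survivors with RP.
assert xor_parity([p1, p3, rp]) == p2
```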
As described above, in a particular example, a read error is detected at PBA 1 during an attempt to retrieve packet P2. To protect any remaining valid (i.e., current version) data (for example, packets P1 and P3) protected by the outer codeword redundancy (for example, RP), all the valid data (for example, P1, P2 and P3) under protection is recovered and migrated (i.e., written) to one or more different non-volatile locations. FIG. 2H shows migration of packets P1, P2 and P3 to physical locations denoted by PBA W, PBA X and PBA Y, respectively, in media cache. As part of the migration, new outer codeword redundancy is optionally generated and employed at the new non-volatile packet locations. In FIG. 2H, the optional new codeword redundancy is denoted by RP′ and is stored at a media cache location denoted by PBA Z. It should be noted that RP′ may be an outer codeword redundancy that is generated to protect only P1, P2, and P3, or generated to protect any suitable number of packets that may include P1, P2 and P3. It should also be noted that the migration of the packets P1, P2, P3 may be a soft reallocation (i.e., PBAs 0, 1 and 2 may be used to store data from subsequent write commands received in the data storage device). Also, since RP no longer serves as an outer codeword redundancy for P1, P2 and P3, data from subsequent write commands may be stored at PBA 3.
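A FIG. 2H-style migration, in which all valid packets are rewritten to newly allocated media cache locations and fresh redundancy RP′ is generated for them, might look like the following; the free-PBA allocator is hypothetical.

```python
import itertools

_free_cache_pbas = itertools.count(2000)  # hypothetical pool of free media cache PBAs

def migrate_codeword(packets):
    """Write all valid packets to newly allocated media cache PBAs and
    generate fresh redundancy RP' for the new locations (FIG. 2H)."""
    new_locations = {next(_free_cache_pbas): p for p in packets}
    rp_prime = xor_parity(packets)
    new_locations[next(_free_cache_pbas)] = rp_prime  # RP' (PBA Z in FIG. 2H)
    return new_locations

locations = migrate_codeword([p1, p2, p3])
assert len(locations) == 4  # PBA W, X, Y for the packets plus PBA Z for RP'
```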
It should be noted that, in general, an error in one of multiple packets (for example, P2 of packets P1, P2 and P3) protected by an outer codeword redundancy (such as RP) may result in the entire multi-packet codeword becoming suspect. In the example provided earlier in connection with FIGS. 2G and 2H, the condition of RP is disregarded because all valid data under protection (i.e., P1, P2 and P3) is migrated without RP, and RP′ is generated and utilized at the new locations PBA W, PBA X and PBA Y. However, in some embodiments, to protect any remaining valid data (for example, packets P1 and P3) protected by the outer codeword redundancy, the entire codeword with its redundancy (for example, packets P1, P2, P3 and RP) is recovered and migrated to a different set of non-volatile locations. The migration of packets P1, P2, P3 and RP (i.e., migration of the entire codeword) in accordance with this example is shown in FIG. 2I.
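A minimal sketch of this alternative (assumed names; string stand-ins replace actual packet data) moves the entire codeword, redundancy included, as a unit:

    # FIG. 2I style migration: P1, P2, P3 and the unchanged RP all move.
    media_cache = {}
    for pba, item in zip(["W", "X", "Y", "Z"], ["P1", "P2", "P3", "RP"]):
        media_cache[pba] = item   # whole codeword relocated together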
In an alternate embodiment, only the one or more suspect packets (for example, P2) of the multiple packets (for example, P1, P2 and P3) protected by the outer codeword redundancy (for example, RP) are migrated to one or more different non-volatile locations. To restore a prior level of confidence in the outer codeword redundancy (for example, RP), the new non-volatile location for the recovered packet (for example, P2) is chosen and managed such that the recovered data (for example, P2) in its new location retains its former participation in the multi-packet outer codeword despite its physical migration. FIG. 2J shows P2 being migrated to physical location PBA X in media cache without any changes being made to RP. Although the examples shown in FIGS. 2A-2J involve addressing a single suspect packet of multiple packets, the embodiments described above may apply to any number of suspect packets.
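A sketch of this third approach (assumed structures, for illustration only) updates just the membership record for the suspect packet, leaving RP untouched:

    # FIG. 2J style migration: only the suspect packet moves; its role in
    # the outer codeword is preserved by the membership table.
    codeword_members = {"P1": 0, "P2": 1, "P3": 2, "RP": 3}   # name -> PBA
    codeword_members["P2"] = "X"   # P2 now resides at media cache PBA X
    # RP still protects (P1, P2, P3); no redundancy is recomputed.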
FIG. 3 is a flow diagram 300 of an error management method in accordance with one embodiment. The method includes, at step 302, storing data packets associated with logical block addresses into physical locations in a main storage area of one or more data storage media. At step 304, mapping information, which links the logical block addresses associated with the data packets to addresses of the physical locations, is generated. At step 306, a read operation is initiated to retrieve the data packets. At step 308, if no defects are encountered at any of the physical locations, control passes to step 310 at which the read operation is completed. If a defect is encountered at step 308, control passes to step 312 at which a determination is made as to whether the defect is a hard or soft defect. If the encountered defect is a hard defect, at step 314, error correction algorithms are used to attempt to recover the data, and the location at which the defect was encountered is marked/flagged as damaged. Control may then pass to step 310 at which the read operation is completed. If, at step 312, the encountered defect is a soft defect, control passes to step 316. At step 316, a determination is made as to whether a soft reallocation count (described above) for the physical location at which the soft error is detected is greater than a predetermined threshold. If the soft reallocation count is greater than the threshold, control moves to step 318 at which data is recovered and the location is marked/flagged as damaged and no longer used for data storage. Control may then pass to step 310 at which the read operation is completed. If, at step 316, the soft reallocation count is determined to be less than or equal to the threshold, control passes to step 320 at which the data packet associated with the soft error is relocated to a physical location of a non-volatile cache (for example, a media cache). Also, at step 320, the soft reallocation count for the physical location at which the soft error was detected is incremented by 1, for example. For a particular physical location, a soft reallocation count value between 1 and the soft reallocation count threshold, both inclusive, may indicate that the particular physical storage location is suspect, but not damaged. At step 322, the mapping information is updated to reflect relocation of the data packet associated with the soft error to the physical location in the non-volatile cache. The read operation is then completed at step 310. It should be noted that, although the above description in connection with FIG. 3 provides details regarding relocation of data from a single location in response to detecting a soft error/defect at that location, the same procedure may be repeated for any additional locations if soft errors/defects are encountered at multiple locations.
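The decision logic of steps 312-322 may be rendered compactly as follows. This is a sketch only: handle_defect, is_hard, recover, media_cache_write, packet.lba and the threshold value are all names assumed for illustration, not elements of the disclosure.

    SOFT_REALLOC_THRESHOLD = 3   # assumed predetermined threshold

    def handle_defect(loc, packet, mapping, recover, media_cache_write, is_hard):
        """Condensed, illustrative rendering of FIG. 3 steps 312-322."""
        data = recover(packet)                  # error recovery (314/318/320)
        if is_hard(loc):                        # step 312: hard defect
            loc.damaged = True                  # step 314: mark damaged
        elif loc.soft_realloc_count > SOFT_REALLOC_THRESHOLD:   # step 316
            loc.damaged = True                  # step 318: mark damaged
        else:                                   # step 320: soft reallocation
            mapping[packet.lba] = media_cache_write(data)   # step 322
            loc.soft_realloc_count += 1
        return data                             # read completes (step 310)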
In FIG. 3, the section of the flowchart that includes steps 312-322 is shown using dashed lines because the collective functionality provided by those steps may be implemented in a data storage device such as a disc drive using any suitable technique/process. In general, when an error is encountered while reading a data packet at a physical storage location (or, generally, a non-volatile location), a retrieval operation, which may require error recovery measures, may be carried out to successfully read the packet. The process may identify the physical storage location as suspect (i.e., possibly damaged) and mark it as such, with the intent that the physical storage location will later be tested to determine whether it is actually damaged. The test may be carried out before that physical storage location is again relied upon to store information. Furthermore, the data packet that was read from the physical storage location is written to a different non-volatile location to protect it from the possibility that its former physical location is damaged. The migration of the packet to the non-volatile location may be temporary or permanent. In some embodiments, the migration is temporary when a determination is made that the physical location marked suspect is not damaged; otherwise, the suspect location may be marked damaged.
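A sketch of the deferred damage test (assumed names; verify stands in for any suitable test, such as a write/read-back pattern check, which the disclosure does not prescribe) might look like:

    def test_suspect_location(loc, verify):
        # Run the damage test only when the location is about to be
        # relied upon again; until then, whether the earlier migration
        # is temporary or permanent remains undecided.
        loc.damaged = not verify(loc)
        loc.suspect = False
        # True -> location is usable and the earlier migration may be
        # treated as temporary; False -> location stays retired.
        return not loc.damaged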
As indicated earlier, in one embodiment, for a particular physical storage location, a soft reallocation count value between 1 and the soft reallocation count threshold, both inclusive, may indicate that the particular physical storage location is suspect, but not damaged. As noted above, in such an embodiment, if the soft reallocation count is greater than the soft reallocation count threshold, the particular physical location is marked/flagged as damaged. In different embodiments, different techniques may be used to mark a physical location as suspect and any suitable method may be utilized to determine whether the suspect location is damaged.
As noted above, when the suspect packet is part of a multi-packet outer codeword, in one embodiment, all valid data under protection by the outer codeword is recovered and migrated to one or more different non-volatile locations. In this embodiment, as part of the migration, new outer codeword redundancy is optionally generated and used at the new non-volatile locations. In another embodiment, the entire outer codeword with its redundancy is recovered and migrated to a different set of non-volatile packet locations. In yet another embodiment, to restore a prior level of confidence in the outer codeword, a new non-volatile location for the recovered packet is chosen such that the recovered data in its new location retains its former participation in the multi-packet outer codeword despite its physical migration.
The above-described embodiments may also be used to relocate data from one media cache location, at which a defect is detected when reading the data, to another media cache location. In general, in different applications, the above-described embodiments may be applicable for use with storage locations of different types of storage media.
FIG. 4 illustrates an oblique view of a solid state drive (SSD) 400 in which the methods described above (for example, in connection with FIG. 3) are useful. SSD 400 includes one or more circuit card assemblies 402 and typically includes a protective, supportive housing 404, a top cover (not shown), and one or more interface connectors 406. SSD 400 further includes a controller ASIC 408, one or more non-volatile memory devices 410, and power regulation circuitry 412. The memory devices 410 are essentially the SSD's data storage media and non-volatile cache. In some applications, SSD 400 further includes a power-backup energy storage device, such as a super-capacitor 414.
In accordance with certain aspects, the SSD 400 includes a circuit card assembly 402 that includes a connector 406 for connection to a host computer. In accordance with certain aspects, the connector 406 includes a SAS, FC-AL, SCSI, PCI-E, IDE, AT, ATA, SATA, IEEE-1394, USB or other interface connector adapted for connection to a host.
If, as shown in FIG. 4, more than one non-volatile memory device 410 is included in SSD 400, then one of the non-volatile memory devices 410 may be used as the non-volatile buffer/cache. Physical storage locations (for example, erasure blocks) in the other one or more non-volatile memory devices 410 may be utilized as media cache and as main storage locations. In other embodiments, physical storage locations in the one or more non-volatile memory devices 410 may serve as a pool of memory locations for assignment to main storage locations and to media cache locations.
As indicated above, instead of the physical storage locations being bands of the type described in connection with FIG. 1, SSD 400 includes erasure blocks that each comprise multiple pages. In other respects, the data management method used in SSD 400 may be similar to that described in connection with FIG. 1. Also, the read error management methods used in SSD 400 may be similar to those described in connection with FIGS. 2A-2J and 3. Therefore, to avoid repetition, a detailed description of read error management methods in SSD 400 is not provided.
In accordance with various embodiments, the methods described herein may be implemented as one or more software programs running on one or more computer processors or controllers, such as those included in devices 100 and 400. Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays and other hardware devices can likewise be constructed to implement the methods described herein.
The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be reduced. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description. It should be noted that at least some of the above-described embodiments may be employed in disc drives that include non-shingled main storage locations.
The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b) and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments.
The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.

Claims (20)

What is claimed is:
1. A method comprising:
storing a plurality of data packets into a plurality of physical locations in a main storage area of one or more data storage media, wherein each of the plurality of data packets is associated with a different logical block address (LBA), and wherein each of the plurality of physical locations is associated with a different physical location address;
generating mapping information that links the different LBAs associated with the different data packets to the different physical location addresses associated with the different physical locations;
upon detecting a soft error when reading at least one data packet of the plurality of data packets stored in at least one physical location of the plurality of physical locations, relocating the at least one data packet associated with the soft error to at least one physical location of a non-volatile memory;
when the soft error is detected, marking the at least one physical location of the plurality of physical locations as a suspect location, wherein the suspect location marking is indicative of an intent to carry out later-in-time damage testing of the at least one physical location, and wherein any determination as to whether the relocation of the at least one data packet to the at least one physical location in the non-volatile memory is temporary or permanent is deferred until after the later-in-time damage testing of the at least one physical location with the suspect location marking is carried out; and
updating the mapping information to reflect the relocation of the at least one data packet associated with the soft error to the at least one physical location in the non-volatile memory.
2. The method of claim 1 and further comprising, upon detecting the soft error, incrementing a soft reallocation count of the at least one physical location of the plurality of physical locations, wherein the soft reallocation count is indicative of a number of data reallocations from the at least one physical location of the plurality of physical locations resulting from soft errors being detected at that physical location.
3. The method of claim 2 and further comprising flagging the at least one physical location of the plurality of physical locations as damaged if the soft reallocation count of the at least one physical location of the plurality of physical locations is greater than a predetermined soft reallocation count threshold.
4. The method of claim 1 and wherein the non-volatile memory is a media cache included in the one or more data storage media or a non-volatile memory that is of a different type than the one or more data storage media.
5. The method of claim 1 and wherein storing the plurality of data packets into the plurality of physical locations in the main storage area is carried out using a shingled magnetic recording technique.
6. The method of claim 1 and wherein the plurality of data packets is a part of a first outer codeword, and wherein the first outer codeword comprises the plurality of data packets and a first outer codeword redundancy.
7. The method of claim 6 and wherein the first outer codeword redundancy is stored in a physical location, in the main storage area, which is different from the plurality of physical locations at which the plurality of data packets is stored.
8. The method of claim 7 and further comprising, upon detecting a soft error when reading at least one data packet of the plurality of data packets stored in at least one physical location of the plurality of physical locations, relocating the plurality of data packets without the first outer codeword redundancy to a plurality of physical locations of the non-volatile memory.
9. The method of claim 8 and further comprising generating a second outer codeword redundancy for the plurality of data packets relocated to the non-volatile memory, the second outer codeword redundancy and the plurality of data packets comprising a second outer codeword.
10. The method of claim 7 and further comprising, upon detecting a soft error when reading at least one data packet of the plurality of data packets stored in at least one physical location of the plurality of physical locations, relocating the plurality of data packets and the first outer codeword redundancy to a plurality of physical locations of the non-volatile memory.
11. A method comprising:
detecting a soft error when reading a data packet stored in a non-volatile location of a data storage medium;
recovering the data packet from the non-volatile location;
when the soft error is detected, marking the non-volatile location as suspect, wherein the suspect location marking is indicative of an intent to carry out later-in-time damage testing of the non-volatile location;
when the soft error is detected, relocating the data packet to a different non-volatile location, wherein any determination as to whether the relocation of the data packet to the different non-volatile location is temporary or permanent is deferred until after the later-in-time damage testing of the non-volatile location with the suspect location marking is carried out; and
updating mapping information to reflect the relocation.
12. The method of claim 11 and further comprising carrying out the later-in-time damage testing of the non-volatile location marked as suspect to determine whether the suspect non-volatile location is damaged.
13. The method of claim 12 and further comprising storing a received data packet in the suspect non-volatile location if a determination is made that the suspect non-volatile location is not damaged.
14. The method of claim 11 and further comprising incrementing a soft reallocation count of the suspect non-volatile location, wherein the soft reallocation count is indicative of a number of data reallocations from the suspect non-volatile location resulting from soft errors being detected at that location.
15. The method of claim 11 wherein the data storage medium comprises a main storage area having a plurality of main storage physical locations including the suspect non-volatile location, and wherein the different non-volatile location is one of a plurality of media cache locations, and wherein the data packet is one of a plurality of data packets protected by an outer codeword redundancy, and wherein the plurality of data packets and the outer codeword redundancy are stored in the plurality of main storage physical locations.
16. The method of claim 15 and further comprising relocating the plurality of data packets to at least some of the plurality of media cache locations.
17. The method of claim 15 and further comprising relocating only the data packet associated with the soft error to the different non-volatile location without relocating the remaining ones of the plurality of data packets and the outer codeword redundancy to the media cache such that a participation of the relocated data packet associated with the soft error in an outer codeword that comprises the plurality of data packets and the outer codeword redundancy is retained.
18. A device comprising:
one or more data storage media having a main storage area;
a non-volatile cache memory; and
a controller configured to:
store a plurality of data packets into a plurality of physical locations in the main storage area of the one or more data storage media, wherein each of the plurality of data packets is associated with a different logical block address (LBA), and wherein each of the plurality of physical locations is associated with a different physical location address;
generate mapping information that links the different LBAs associated with the different data packets to the different physical location addresses associated with the different physical locations;
upon detecting a soft error when reading at least one data packet of the plurality of data packets stored in at least one physical location of the plurality of physical locations, relocate the at least one data packet associated with the soft error to at least one physical location of a non-volatile memory;
when the soft error is detected, mark the at least one physical location of the plurality of physical locations as a suspect location, wherein the suspect location marking is indicative of an intent to carry out later-in-time damage testing of the at least one physical location;
update the mapping information to reflect the relocation of the at least one data packet associated with the soft error to the at least one physical location in the non-volatile memory; and
determine as to whether the relocation of the at least one data packet to the at least one physical location in the non-volatile memory is temporary or permanent after the later-in-time damage testing of the at least one physical location with the suspect location marking is carried out.
19. The device of claim 18 and wherein the controller is further configured to, upon detecting the soft error, increment a soft reallocation count of the at least one physical location of the plurality of physical locations, wherein the soft reallocation count is indicative of a number of data reallocations from the at least one physical location of the plurality of physical locations resulting from soft errors being detected at that physical location.
20. The device of claim 19 and wherein the controller is further configured to flag the at least one physical location of the plurality of physical locations as damaged if the soft reallocation count of the at least one physical location of the plurality of physical locations is greater than a predetermined soft reallocation count threshold.