Peakdetection

functions for peak detection and related tasks

heartpy.peakdetection.make_windows(data, sample_rate, windowsize=120, overlap=0, min_size=20)[source]

slices data into windows

Function that slices data into windows for concurrent analysis. Used by the process_segmentwise wrapper function.

Parameters:
  • data (1-d array) – array containing heart rate sensor data
  • sample_rate (int or float) – sample rate of the data stream in ‘data’
  • windowsize (int) – size of the window that is sliced in seconds
  • overlap (float) – fraction of overlap between two adjacent windows: 0 <= float < 1.0
  • min_size (int) – the minimum size for the last (partial) window to be included. Very short windows might not be stable for peak fitting, especially when significant noise is present. Slightly longer windows are likely stable but don’t make much sense from a signal analysis perspective.
Returns:

out – tuples of window indices

Return type:

array

Examples

Assuming a given example data file:

>>> import heartpy as hp
>>> data, _ = hp.load_exampledata(1)

We can split the data into windows:

>>> indices = make_windows(data, 100.0, windowsize = 30, overlap = 0.5, min_size = 20)
>>> indices.shape
(9, 2)

Specifying min_size = -1 will include the last window no matter what:

>>> indices = make_windows(data, 100.0, windowsize = 30, overlap = 0.5, min_size = -1)
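
The returned index pairs are (start, end) positions into the data, so they can be used directly to slice out the corresponding segments, for example:

>>> segments = [data[int(start):int(end)] for start, end in indices]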
heartpy.peakdetection.append_dict(dict_obj, measure_key, measure_value)[source]

appends data to keyed dict.

Function that appends a value to the specified key in a continuous dict, creating the key if it doesn’t exist.

Parameters:
  • dict_obj (dict) – dictionary object that contains continuous output measures
  • measure_key (str) – key for the measure to be stored in continuous_dict
  • measure_value (any data container) – value to be appended to dictionary
Returns:

dict_obj – dictionary object passed to function, with specified data container appended

Return type:

dict

Examples

Given a dict object ‘example’ with some data in it:

>>> example = {}
>>> example['call'] = ['hello']

We can use the function to append it:

>>> example = append_dict(example, 'call', 'world')
>>> example['call']
['hello', 'world']

A new key will be created if it doesn’t exist:

>>> example = append_dict(example, 'different_key', 'hello there!')
>>> sorted(example.keys())
['call', 'different_key']
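
In segmentwise analysis the same pattern collects one value per analysed segment into the continuous output dict; a minimal sketch (the 'bpm' key is purely illustrative):

>>> continuous = {}
>>> for value in [61.2, 59.8, 60.5]:
...     continuous = append_dict(continuous, 'bpm', value)
>>> continuous['bpm']
[61.2, 59.8, 60.5]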
heartpy.peakdetection.detect_peaks(hrdata, rol_mean, ma_perc, sample_rate, update_dict=True, working_data={})[source]

detect peaks in signal

Function that detects heartrate peaks in the given dataset.

Parameters:
  • hrdata (1-d numpy array or list) – array or list containing the heart rate data
  • rol_mean (1-d numpy array) – array containing the rolling mean of the heart rate signal
  • ma_perc (int or float) – the percentage with which to raise the rolling mean, used for fitting detection solutions to data
  • sample_rate (int or float) – the sample rate of the provided data set
  • update_dict (bool) – whether to update the peak information in the module’s data structure. Settable to False to allow this function to be re-used, for example by the breath analysis module. default : True
  • working_data (dict) – dictionary object that contains all heartpy’s working data (temp) objects. will be created if not passed to function
Returns:

working_data – dictionary object that contains all heartpy’s working data (temp) objects, now including the detected peak positions

Return type:

dict

Examples

Normally part of the peak detection pipeline. Given the first example data it would work like this:

>>> import heartpy as hp
>>> from heartpy.datautils import rolling_mean
>>> data, _ = hp.load_exampledata(0)
>>> rol_mean = rolling_mean(data, windowsize = 0.75, sample_rate = 100.0)
>>> wd = detect_peaks(data, rol_mean, ma_perc = 20, sample_rate = 100.0)

Now the peaklist has been appended to the working data dict. Let’s look at the first five peak positions:

>>> wd['peaklist'][0:5]
array([ 63, 165, 264, 360, 460], dtype=int64)
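
Conceptually, detection works by raising the rolling mean by ma_perc percent and marking where the signal exceeds that threshold; a simplified sketch of the idea (not the library’s exact implementation):

>>> import numpy as np
>>> raised = np.array(rol_mean) + (np.mean(rol_mean) / 100) * 20   # rolling mean raised by ma_perc = 20 percent (simplified)
>>> above = np.where(np.array(data) > raised)[0]                   # samples above the raised threshold

Roughly speaking, each consecutive run of such samples then yields one detected peak at its maximum.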
heartpy.peakdetection.fit_peaks(hrdata, rol_mean, sample_rate, bpmmin=40, bpmmax=180, working_data={})[source]

optimize for best peak detection

Function that runs fitting with varying peak detection thresholds given a heart rate signal.

Parameters:
  • hrdata (1d array or list) – array or list containing the heart rate data
  • rol_mean (1-d array) – array containing the rolling mean of the heart rate signal
  • sample_rate (int or float) – the sample rate of the data set
  • bpmmin (int) – minimum value of bpm to see as likely default : 40
  • bpmmax (int) – maximum value of bpm to see as likely default : 180
  • working_data (dict) – dictionary object that contains all heartpy’s working data (temp) objects. will be created if not passed to function
Returns:

working_data – dictionary object that contains all heartpy’s working data (temp) objects. will be created if not passed to function

Return type:

dict

Examples

Part of peak detection pipeline. Uses the moving average as a peak detection threshold and raises it stepwise. Determines the best fit by minimising the standard deviation of peak-peak distances while keeping the bpm within the expected range.

Given the included example data, let’s show how this works:

>>> import heartpy as hp
>>> from heartpy.datautils import rolling_mean
>>> data, _ = hp.load_exampledata(0)
>>> rol_mean = rolling_mean(data, windowsize = 0.75, sample_rate = 100.0)

We can then call this function and let the optimizer do its work:

>>> wd = fit_peaks(data, rol_mean, sample_rate = 100.0)

Now the wd dict contains the best fit parameter(s):

>>> wd['best']
20

This indicates the best fit can be obtained by raising the moving average by 20%.

The results of the peak detection using these parameters are included too. To illustrate, these are the first five detected peaks:

>>> wd['peaklist'][0:5]
array([ 63, 165, 264, 360, 460], dtype=int64)

and the corresponding peak-peak intervals:

>>> wd['RR_list'][0:4]
array([1020.,  990.,  960., 1000.])
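
The optimisation described above can be pictured with a sketch like the following (illustrative only; the candidate percentages and the bpm estimate are simplifications, not the library’s exact code):

>>> import numpy as np
>>> best_fit, lowest_sd = None, np.inf
>>> for ma_perc in [5, 10, 15, 20, 25, 30]:              # illustrative candidate thresholds
...     wd = detect_peaks(data, rol_mean, ma_perc = ma_perc, sample_rate = 100.0)
...     rr = np.diff(wd['peaklist']) / 100.0 * 1000.0    # peak-peak intervals in ms
...     bpm = 60000.0 / np.mean(rr)                      # rough bpm estimate from the mean interval
...     if 40 <= bpm <= 180 and np.std(rr) < lowest_sd:  # within bpm range and lowest RR spread so far
...         best_fit, lowest_sd = ma_perc, np.std(rr)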
heartpy.peakdetection.check_peaks(rr_arr, peaklist, ybeat, reject_segmentwise=False, working_data={})[source]

find anomalous peaks.

Function that checks peaks for outliers based on anomalous peak-peak distances and corrects by excluding them from further analysis.

Parameters:
  • rr_arr (1d array or list) – list or array containing peak-peak intervals
  • peaklist (1d array or list) – list or array containing detected peak positions
  • ybeat (1d array or list) – list or array containing corresponding signal values at detected peak positions. Used for plotting functionality later on.
  • reject_segmentwise (bool) – if set, checks the signal in segments of 10 detected peaks. Marks a segment as rejected if more than 30% of its peaks are rejected. default : False
  • working_data (dict) – dictionary object that contains all heartpy’s working data (temp) objects. will be created if not passed to function
Returns:

working_data – working_data dictionary object containing all of heartpy’s temp objects

Return type:

dict

Examples

Part of the peak detection pipeline; no standalone examples exist in the library. See the docstring of the hp.process() function for more info. An illustrative call is sketched below.
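
As a sketch only, it could be called directly on the working data produced by the fit_peaks example above, assuming that dict also carries the peak amplitudes under 'ybeat':

>>> wd = check_peaks(wd['RR_list'], wd['peaklist'], wd['ybeat'], working_data = wd)

The returned working_data then has the anomalous peaks excluded from further analysis.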

heartpy.peakdetection.check_binary_quality(peaklist, binary_peaklist, maxrejects=3, working_data={})[source]

checks signal in chunks of 10 beats.

Function that checks the signal in chunks of 10 beats. It zeros out a chunk if the number of rejected peaks > maxrejects. Also marks rejected segment coordinates as tuples (x[0], x[1]) in working_data['rejected_segments'].

Parameters:
  • peaklist (1d array or list) – list or array containing detected peak positions
  • binary_peaklist (1d array or list) – list or array containing mask for peaklist, coding which peaks are rejected
  • maxrejects (int) – maximum number of rejected peaks per 10-beat window default : 3
  • working_data (dict) – dictionary object that contains all heartpy’s working data (temp) objects. will be created if not passed to function
Returns:

working_data – working_data dictionary object containing all of heartpy’s temp objects

Return type:

dict

Examples

Part of the peak detection pipeline. See the docstring of the hp.process() function for more info.

Given some peaklist and binary mask:

>>> peaklist = [30, 60, 90, 110, 130, 140, 160, 170, 200, 220]
>>> binary_peaklist = [0, 1, 1, 0, 0, 1, 0, 1, 0, 0]
>>> wd = check_binary_quality(peaklist, binary_peaklist)
>>> wd['rejected_segments']
[(30, 220)]

The whole segment is rejected as it contains more than the specified 3 rejections per 10 beats.
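
The underlying rule can be sketched as a simple count per 10-beat chunk (an illustration of the rule, not the library’s code):

>>> rejects = sum(1 for flag in binary_peaklist[0:10] if flag == 0)   # rejected peaks in this 10-beat chunk
>>> rejects > 3    # more than maxrejects, so the whole chunk is zeroed out and marked as rejected
True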

heartpy.peakdetection.interpolate_peaks(data, peaks, sample_rate, desired_sample_rate=1000.0, working_data={})[source]

interpolate detected peak positions and surrounding data points

Function that enables high-precision mode by taking the estimated peak position, then upsampling the peak position +/- 100ms to the specified sampling rate, subsequently estimating the peak position with higher accuracy.

Parameters:
  • data (1d list or array) – list or array containing heart rate data
  • peaks (1d list or array) – list or array containing x-positions of peaks in signal
  • sample_rate (int or float) – the sample rate of the signal (in Hz)
  • desired_sample_rate (int or float) – the sample rate to which to upsample. Must be sample_rate < desired_sample_rate
  • working_data (dict) – dictionary object that contains all heartpy’s working data (temp) objects. will be created if not passed to function
Returns:

working_data – working_data dictionary object containing all of heartpy’s temp objects

Return type:

dict

Examples

Given the output of a normal analysis and the first five peak-peak intervals:

>>> import heartpy as hp
>>> data, _ = hp.load_exampledata(0)
>>> wd, m = hp.process(data, 100.0)
>>> wd['peaklist'][0:5]
array([ 63, 165, 264, 360, 460], dtype=int64)

Now, the resolution is at most 10ms as that’s the distance between data points. We can use the high precision mode to approximate a more precise position, for example as if we had recorded at 1000Hz:

>>> wd = interpolate_peaks(data = data, peaks = wd['peaklist'],
... sample_rate = 100.0, desired_sample_rate = 1000.0, working_data = wd)
>>> wd['peaklist'][0:5]
array([ 63.5, 165.4, 263.6, 360.4, 460.2])

As you can see the accuracy of peak positions has increased. Note that you cannot magically upsample nothing into something. Be reasonable.
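
The high-precision estimate described above boils down to upsampling a short window around each peak and re-locating its maximum; a simplified sketch of that idea for the first peak (not the library’s exact implementation):

>>> import numpy as np
>>> from scipy.signal import resample
>>> peak = 63                                          # first detected peak from the original 100 Hz analysis
>>> window = data[peak - 10 : peak + 10]               # +/- 100 ms around the peak at 100 Hz
>>> upsampled = resample(window, len(window) * 10)     # upsample the window to an effective 1000 Hz
>>> refined = peak - 10 + np.argmax(upsampled) / 10    # refined peak position, in original 100 Hz sample units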