heartpy (main)
Main functions

heartpy.process(hrdata, sample_rate, windowsize=0.75, report_time=False, calc_freq=False, freq_method='welch', freq_square=True, interp_clipping=False, clipping_scale=False, interp_threshold=1020, hampel_correct=False, bpmmin=40, bpmmax=180, reject_segmentwise=False, high_precision=False, high_precision_fs=1000.0, breathing_method='fft', clean_rr=False, clean_rr_method='quotientfilter', measures={}, working_data={})

Processes the passed heart rate data. Returns the measures{} dict containing results.
Parameters: hrdata (1d array or list) – array or list containing heart rate data to be analysed
sample_rate (int or float) – the sample rate with which the heart rate data is sampled
windowsize (int or float) – the window size in seconds to use in the calculation of the moving average. Calculated as windowsize * sample_rate. default : 0.75
report_time (bool) – whether to report total processing time of the algorithm. default : False
calc_freq (bool) – whether to compute frequency domain measures. default : False
freq_method (str) – method used to extract the frequency spectrum. Available: 'fft' (Fourier analysis), 'periodogram', and 'welch' (Welch's method). default : 'welch'
freq_square (bool) – whether to square the power spectrum returned when computing frequency measures. default : True
interp_clipping (bool) – whether to detect and interpolate clipping segments of the signal. default : False
clipping_scale (bool) – whether to scale the data prior to clipping detection. Can correct errors if signal amplitude has been affected after digitization (for example through filtering). Not recommended by default. default : False
interp_threshold (int or float) – threshold to use to detect clipping segments. Recommended to be a few datapoints below the sensor or ADC's maximum value (to account for slight data line noise). default : 1020, 4 below the max of 1024 for a 10-bit ADC
hampel_correct (bool) – whether to reduce noisy segments using a large median filter. Disabled by default due to computational complexity and (small) distortions induced in output measures. Generally it is not necessary. default : False
bpmmin (int or float) – minimum value to see as likely for BPM when fitting peaks. default : 40
bpmmax (int or float) – maximum value to see as likely for BPM when fitting peaks. default : 180
reject_segmentwise (bool) – whether to reject segments with more than 30% rejected beats. By default looks at segments of 10 beats at a time. default : False
high_precision (bool) – whether to estimate peak positions by upsampling the signal to the sample rate specified in high_precision_fs. default : False
high_precision_fs (int or float) – the sample rate to which to upsample for more accurate peak position estimation. default : 1000 Hz
breathing_method (str) – method to use for estimating breathing rate, should be 'welch' or 'fft'. default : 'fft'
clean_rr (bool) – if True, the RR_list is further cleaned with an outlier rejection pass. default : False
clean_rr_method (str) – how to find and reject outliers. Available methods are 'quotientfilter', 'iqr' (interquartile range), and 'zscore'. default : 'quotientfilter'
measures (dict) – dictionary object used by heartpy to store computed measures. Will be created if not passed to function.
working_data (dict) – dictionary object that contains all heartpy's working data (temp) objects. Will be created if not passed to function.
Returns:  working_data (dict) – dictionary object used to store temporary values.
 measures (dict) – dictionary object used by heartpy to store computed measures.
Examples
There's example data included in HeartPy to help you get up to speed. Two examples of how to approach heart rate analysis are provided here.
The first example contains noisy sections and comes with a timer column that counts milliseconds since the start of the recording.
>>> import heartpy as hp
>>> data, timer = hp.load_exampledata(1)
>>> sample_rate = hp.get_samplerate_mstimer(timer)
>>> '%.3f' %sample_rate
'116.996'
The sample rate is one of the most important characteristics during the heart rate analysis, as all measures are relative to this.
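As an illustration of what this determination amounts to, the sample rate follows from the mean interval between timer ticks. A minimal sketch (the timer values below are hypothetical, and get_samplerate_mstimer's internals may differ):

```python
import numpy as np

# hypothetical ms-based timer ticking roughly every 8.55 ms
timer = np.array([0.0, 8.55, 17.1, 25.64, 34.19])

# 1000 ms divided by the mean tick interval gives samples per second
sample_rate = 1000.0 / np.mean(np.diff(timer))  # roughly 117 Hz
```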
With all data loaded and the sample rate determined, analysis is now easy:
>>> wd, m = hp.process(data, sample_rate = sample_rate)
The measures ('m') dictionary returned contains all determined measures:
>>> '%.3f' %m['bpm']
'62.376'
>>> '%.3f' %m['rmssd']
'57.070'
Using a slightly longer example:
>>> data, timer = hp.load_exampledata(2)
>>> print(timer[0])
20161124 13:58:58.081000
As you can see something is going on here: we have a datetime-based timer. HeartPy can accommodate this and determine the sample rate nonetheless:
>>> sample_rate = hp.get_samplerate_datetime(timer, timeformat = '%Y%m%d %H:%M:%S.%f')
>>> '%.3f' %sample_rate
'100.420'
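The datetime-based determination boils down to parsing the timestamps with the given format string and dividing the sample count by elapsed time. A sketch with two hypothetical timestamps (not the library's internals):

```python
from datetime import datetime

fmt = '%Y%m%d %H:%M:%S.%f'  # same format string as passed above
t_first = datetime.strptime('20161124 13:58:58.081000', fmt)
t_last = datetime.strptime('20161124 13:59:08.081000', fmt)

# if 1000 samples span these 10 seconds, the sample rate is 100 Hz
elapsed = (t_last - t_first).total_seconds()
sample_rate = 1000 / elapsed
```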
Now analysis can proceed. Let’s also compute frequency domain data and interpolate clipping. In this segment the clipping is visible around amplitude 980 so let’s set that as well:
>>> wd, m = hp.process(data, sample_rate = sample_rate, calc_freq = True,
... interp_clipping = True, interp_threshold = 975)
>>> '%.3f' %m['bpm']
'97.270'
>>> '%.3f' %m['rmssd']
'34.743'
>>> '%.3f' %m['lf/hf']
'4.960'
High precision mode will upsample 200 ms of data surrounding the detected peak and attempt to estimate the peak's real position with higher accuracy. Use high_precision_fs to set the virtual sample rate to which the peak will be upsampled (e.g. 1000 Hz gives an estimated 1 ms accuracy).
>>> wd, m = hp.process(data, sample_rate = sample_rate, calc_freq = True,
... high_precision = True, high_precision_fs = 1000.0)
Finally, setting reject_segmentwise will reject segments with more than 30% rejected beats. See check_binary_quality in the peakdetection.py module.
>>> wd, m = hp.process(data, sample_rate = sample_rate, calc_freq = True,
... reject_segmentwise = True)
Finally, let's turn on all the bells and whistles that haven't been demonstrated yet:
>>> wd, m = hp.process(data, sample_rate = 100.0, calc_freq = True,
... interp_clipping = True, clipping_scale = True, hampel_correct = True,
... reject_segmentwise = True, clean_rr = True)

heartpy.process_segmentwise(hrdata, sample_rate, segment_width=120, segment_overlap=0, segment_min_size=20, replace_outliers=False, outlier_method='iqr', mode='full', **kwargs)

Processes passed heart rate data with a windowed function.
Analyses a long heart rate data array by running a moving window over the data, computing measures in each iteration. Both the window width and the overlap with the previous window location are settable.
Parameters:  hrdata (1d array or list) – array or list containing heart rate data to be analysed
 sample_rate (int or float) – the sample rate with which the heart rate data is sampled
 segment_width (int or float) – width of segments in seconds default : 120
 segment_overlap (float) – overlap fraction of adjacent segments. Needs to be 0 <= segment_overlap < 1. default : 0 (no overlap)
segment_min_size (int) – often a tail end of the data remains after segmenting. segment_min_size indicates the minimum length (in seconds) the tail end needs to be in order to be included in analysis. It is discarded if it's shorter. default : 20
replace_outliers (bool) – whether to detect and replace outliers in the segments. Will iterate over all computed measures and evaluate each. default : False
outlier_method (str) – what method to use to detect outliers. Available are 'iqr', which uses the interquartile range, and 'zscore', which uses the modified z-score approach. default : 'iqr'
mode (str) – 'full' or 'fast'. default : 'full'
Returns:  working_data (dict) – dictionary object used to store temporary values.
 measures (dict) – dictionary object used by heartpy to store computed measures.
Examples
Given one of the included example datasets we can demonstrate this function:
>>> import heartpy as hp
>>> data, timer = hp.load_exampledata(2)
>>> sample_rate = hp.get_samplerate_datetime(timer, timeformat = '%Y%m%d %H:%M:%S.%f')
>>> wd, m = hp.process_segmentwise(data, sample_rate, segment_width=120, segment_overlap=0.5)
>>> len(m['bpm'])
11
The function has split the data into 11 segments and analysed each one. Every key in the measures (m) dict now contains a list of that measure for each segment.
>>> [round(x, 1) for x in m['bpm']]
[100.0, 96.8, 97.2, 97.9, 96.7, 96.8, 96.8, 95.0, 92.9, 96.7, 99.2]
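The segment count follows directly from the width and overlap. A back-of-the-envelope sketch (the recording length here is hypothetical):

```python
# with segment_overlap = 0.5, consecutive segments start half a width apart
length = 700.0           # hypothetical recording length in seconds
segment_width = 120
segment_overlap = 0.5

step = segment_width * (1 - segment_overlap)            # 60 s between segment starts
n_segments = int((length - segment_width) // step) + 1  # full-width segments
```

A remaining tail shorter than segment_width is analysed as an extra segment only if it is at least segment_min_size seconds long.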
Specifying mode = 'fast' will run peak detection once and use the detections to compute measures over each segment. Useful for speed-ups, but the full mode typically gives better results.
>>> wd, m = hp.process_segmentwise(data, sample_rate, segment_width=120, segment_overlap=0.5,
... mode = 'fast', replace_outliers = True)
You can specify the outlier detection method ('iqr' for the interquartile range, or 'zscore' for the modified z-score approach).
>>> wd, m = hp.process_segmentwise(data, sample_rate, segment_width=120, segment_overlap=0.5,
... mode = 'fast', replace_outliers = True, outlier_method = 'zscore')
Visualisation

heartpy.plotter(working_data, measures, show=True, title='Heart Rate Signal Peak Detection', moving_average=False)

Plots the analysis results.
Function that uses calculated measures and data stored in the working_data{} and measures{} dict objects to visualise the fitted peak detection solution.
Parameters: working_data (dict) – dictionary object that contains all heartpy's working data (temp) objects. Will be created if not passed to function.
 measures (dict) – dictionary object used by heartpy to store computed measures. Will be created if not passed to function
 show (bool) – when False, function will return a plot object rather than display the results. default : True
 title (string) – title for the plot. default : “Heart Rate Signal Peak Detection”
 moving_average (bool) – whether to display the moving average on the plot. The moving average is used for peak fitting. default: False
Returns: out – only returned if show == False.
Return type: matplotlib plot object
Examples
First let’s load and analyse some data to visualise
>>> import heartpy as hp
>>> data, _ = hp.load_exampledata(0)
>>> wd, m = hp.process(data, 100.0)
Then we can visualise:
>>> plot_object = hp.plotter(wd, m, show=False, title='some awesome title')
This returns a matplotlib plot object, which can be further processed, saved to a file, or displayed. See the matplotlib API for more information on how to do this.
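A minimal sketch of saving such a plot to disk. The return value of hp.plotter exposes the usual matplotlib interface (an assumption based on the stated return type), so plain matplotlib stands in for it here:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so no display is required
import matplotlib.pyplot as plt

# stand-in for plot_object = hp.plotter(wd, m, show=False)
plt.plot([530, 518, 506, 494, 483])
plt.title('Heart Rate Signal Peak Detection')
plt.savefig('analysis.png', dpi=150)  # write the figure to a file
plt.close()
```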

heartpy.segment_plotter(working_data, measures, title='Heart Rate Signal Peak Detection', figsize=(6, 6), path='', start=0, end=None, step=1)

Plots analysis results.
Function that plots the results of segmentwise processing of heart rate signal and writes all results to separate files at the path provided.
Parameters: working_data (dict) – dictionary object that contains all heartpy's working data (temp) objects. Will be created if not passed to function.
 measures (dict) – dictionary object used by heartpy to store computed measures. Will be created if not passed to function
 title (str) – the title used in the plot
 figsize (tuple) – figsize tuple to be passed to matplotlib
 path (str) – the path where the files will be stored, folder must exist.
 start (int) – what segment to start plotting with default : 0
 end (int) – last segment to plot. Must be smaller than total number of segments default : None, will plot until end
step (int) – step size used when iterating over plots; every step'th segment will be plotted. default : 1
Return type: None
Examples
This function has no examples. See documentation of heartpy for more info.
Preprocessing functions

heartpy.enhance_peaks(hrdata, iterations=2)

Enhances peak amplitude relative to the rest of the signal.
Function that attempts to enhance the signal-to-noise ratio by accentuating the highest peaks. Note: denoise the signal first.
Parameters:  hrdata (1d numpy array or list) – sequence containing heart rate data
 iterations (int) – the number of scaling steps to perform default : 2
Returns: out – array containing enhanced peaks
Return type: 1d numpy array
Examples
Given an array of data, the peaks can be enhanced using the function
>>> x = [200, 300, 500, 900, 500, 300, 200]
>>> enhance_peaks(x)
array([   0.        ,    4.31776016,   76.16528926, 1024.        ,
         76.16528926,    4.31776016,    0.        ])

heartpy.enhance_ecg_peaks(hrdata, sample_rate, iterations=4, aggregation='mean', notch_filter=True)

Enhances ECG peaks.
Function that convolves synthetic QRS templates with the signal, leading to a strong increase in the signal-to-noise ratio. The function ends with an optional notch filter step (default : True) to reduce noise from the iterating convolution steps.
Parameters:  hrdata (1d numpy array or list) – sequence containing heart rate data
 sample_rate (int or float) – sample rate with which the data is sampled
iterations (int) – how many convolutional iterations should be run. More will result in stronger peak enhancement, but past a certain point (usually between 12-16) overtones start appearing in the signal. Only increase this if the peaks aren't amplified enough. default : 4
aggregation (str) – how the data from the different convolutions should be aggregated. Can be either 'mean' or 'median'. default : 'mean'
notch_filter (bool) – whether to apply a notch filter after the last convolution to get rid of remaining low frequency noise. default : True
Returns: output – The array containing the filtered data with enhanced peaks
Return type: 1d array
Examples
First let’s import the module and load the data
>>> import heartpy as hp
>>> data, timer = hp.load_exampledata(1)
>>> sample_rate = hp.get_samplerate_mstimer(timer)
After loading the data we call the function like so:
>>> filtered_data = hp.enhance_ecg_peaks(data, sample_rate, iterations = 3)
By default the module uses the mean to aggregate convolutional outputs. It is also possible to use the median.
>>> filtered_data = hp.enhance_ecg_peaks(data, sample_rate, iterations = 3,
... aggregation = 'median', notch_filter = False)
In the last example we also disabled the notch filter.

heartpy.flip_signal(data, enhancepeaks=False, keep_range=True)

Inverts signal waveforms.
Function that flips raw signal with negative mV peaks to normal ECG. Required for proper peak finding in case peaks are expressed as negative dips.
Parameters:  data (1d list or numpy array) – data section to be evaluated
enhancepeaks (bool) – whether to apply peak accentuation. default : False
keep_range (bool) – whether to scale the inverted data so that the original range is maintained. default : True
Returns: out
Return type: 1d array
Examples
Given an array of data
>>> x = [200, 300, 500, 900, 500, 300, 200]
We can call the function. If keep_range is False, the signal will be inverted relative to its mean.
>>> flip_signal(x, keep_range=False)
array([628.57142857, 528.57142857, 328.57142857,  71.42857143,
       328.57142857, 528.57142857, 628.57142857])
However, by specifying keep_range, the inverted signal will be put ‘back in place’ in its original range.
>>> flip_signal(x, keep_range=True)
array([900., 800., 600., 200., 600., 800., 900.])
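The two behaviours can be reconstructed from the outputs above. A sketch of the underlying arithmetic (an inference from the examples, not the library source):

```python
import numpy as np

x = np.array([200, 300, 500, 900, 500, 300, 200], dtype=float)

# keep_range=False: reflect the signal about its mean
flipped = 2 * x.mean() - x

# keep_range=True: reflect about the midpoint of the original range,
# so the minimum and maximum simply swap places
flipped_in_range = x.max() + x.min() - x
```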
It's also possible to apply peak enhancement via the enhancepeaks argument:
>>> flip_signal(x, enhancepeaks=True)
array([1024.        ,  621.75746332,  176.85545623,    0.        ,
        176.85545623,  621.75746332, 1024.        ])

heartpy.remove_baseline_wander(data, sample_rate, cutoff=0.05)

Removes baseline wander.
Function that uses a Notch filter to remove baseline wander from (especially) ECG signals
Parameters: data (1-dimensional numpy array or list) – sequence containing the data to be filtered
sample_rate (int or float) – the sample rate with which the passed data sequence was sampled
cutoff (int or float) – the cutoff frequency of the Notch filter. We recommend 0.05 Hz. default : 0.05
Returns: out – 1d array containing the filtered data
Return type: 1d array
Examples
>>> import heartpy as hp
>>> data, _ = hp.load_exampledata(0)
Baseline wander is removed by calling the function and specifying the data and sample rate:
>>> filtered = hp.remove_baseline_wander(data, 100.0)

heartpy.scale_data(data, lower=0, upper=1024)

Scales passed sequence between thresholds.
Function that scales passed data so that it has specified lower and upper bounds.
Parameters:  data (1d array or list) – Sequence to be scaled
 lower (int or float) – lower threshold for scaling default : 0
 upper (int or float) – upper threshold for scaling default : 1024
Returns: out – contains scaled data
Return type: 1d array
Examples
Passing data to the function without further arguments means it is scaled between 0 and 1024:
>>> x = [2, 3, 4, 5]
>>> scale_data(x)
array([   0.        ,  341.33333333,  682.66666667, 1024.        ])
Or you can specify a range:
>>> scale_data(x, lower = 50, upper = 124)
array([ 50.        ,  74.66666667,  99.33333333, 124.        ])
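The scaling itself is plain min-max normalisation. A sketch reproducing the outputs above (a reconstruction from the examples, not the library source):

```python
import numpy as np

def scale_sketch(data, lower=0, upper=1024):
    # map the sequence's minimum to `lower` and its maximum to `upper`
    data = np.asarray(data, dtype=float)
    return lower + (upper - lower) * (data - data.min()) / (data.max() - data.min())
```

For example, scale_sketch([2, 3, 4, 5], 50, 124) reproduces the second example above.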

heartpy.scale_sections(data, sample_rate, windowsize=2.5, lower=0, upper=1024)

Scales data using a sliding window approach.
Function that scales the data within the defined sliding window between the defined lower and upper bounds.
Parameters:  data (1d array or list) – Sequence to be scaled
 sample_rate (int or float) – Sample rate of the passed signal
 windowsize (int or float) – size of the window within which signal is scaled, in seconds default : 2.5
 lower (int or float) – lower threshold for scaling. Passed to scale_data. default : 0
 upper (int or float) – upper threshold for scaling. Passed to scale_data. default : 1024
Returns: out – contains scaled data
Return type: 1d array
Examples
>>> x = [20, 30, 20, 30, 70, 80, 20, 30, 20, 30]
>>> scale_sections(x, sample_rate=1, windowsize=2, lower=20, upper=30)
array([20., 30., 20., 30., 20., 30., 20., 30., 20., 30.])
Utilities

heartpy.get_data(filename, delim=',', column_name='None', encoding=None, ignore_extension=False)

Loads data from file.
Function to load data from a .CSV or .MAT file into a numpy array. The file can be accessed from local disk or a URL.
Parameters:  filename (string) – absolute or relative path to the file object to read
 delim (string) – the delimiter used if CSV file passed default : ‘,’
column_name (string) – for CSV files with a header: specifies the column that contains the data; for matlab files: specifies the table name that contains the data. default : 'None'
ignore_extension (bool) – if True, the extension is not tested; use for example for files where the extension is not .csv or .txt but the data is formatted as if it is. default : False
Returns: out – array containing the data from the requested column of the specified file
Return type: 1d numpy array
Examples
As an example, let's load two of the example data files included in the package. For this we use pkg_resources for automated testing purposes; you don't need this when using the function.
>>> from pkg_resources import resource_filename
>>> filepath = resource_filename(__name__, 'data/data.csv')
So, assuming your file lives at ‘filepath’, you open it as such:
>>> get_data(filepath)
array([530., 518., 506., ..., 492., 493., 494.])
Files with multiple columns can be opened by specifying the ‘column_name’ where the data resides:
>>> filepath = resource_filename(__name__, 'data/data2.csv')
Again you don’t need the above. It is there for automated testing.
>>> get_data(filepath, column_name='timer')
array([0.00000000e+00, 8.54790319e+00, 1.70958064e+01, ...,
       1.28192904e+05, 1.28201452e+05, 1.28210000e+05])
You can open matlab files in much the same way by specifying the column where the data lives:
>>> filepath = resource_filename(__name__, 'data/data2.mat')
Again you don’t need the above. It is there for automated testing. Open matlab file by specifying the column name as well:
>>> get_data(filepath, column_name='hr')
array([515., 514., 514., ..., 492., 494., 496.])
You can open any CSV-formatted text file, no matter the extension, if you set ignore_extension to True:
>>> filepath = resource_filename(__name__, 'data/data.log')
>>> get_data(filepath, ignore_extension = True)
array([530., 518., 506., ..., 492., 493., 494.])
You can specify column names in the same way when using ignore_extension:
>>> filepath = resource_filename(__name__, 'data/data2.log')
>>> data = get_data(filepath, column_name = 'hr', ignore_extension = True)

heartpy.load_exampledata(example=0)

Loads example data.
Function to load one of the example datasets included in HeartPy and used in the documentation.
Parameters: example (int (0, 1, 2)) – selects the example data used in the docs out of three datafiles. Available (see github repo for the source of the files): 0 : data.csv, 1 : data2.csv, 2 : data3.csv. default : 0
Returns: out – Contains the data and timer column. If no timer data is available, such as in example 0, an empty second array is returned.
Return type: tuple of two arrays
Examples
This function can load one of the three example data files provided with HeartPy. It returns both the data and a timer if that is present
For example:
>>> data, _ = load_exampledata(0)
>>> data[0:5]
array([530., 518., 506., 494., 483.])
And another example:
>>> data, timer = load_exampledata(1)
>>> [round(x, 2) for x in timer[0:5]]
[0.0, 8.55, 17.1, 25.64, 34.19]

heartpy.get_samplerate_mstimer(timerdata)

Determines sample rate based on an ms timer.
Function to determine the sample rate of data from an ms-based timer list or array.
Parameters: timerdata (1d numpy array or list) – sequence containing values of a timer, in ms
Returns: out – the sample rate as determined from the timer sequence provided
Return type: float
Examples
First we load a provided example dataset:
>>> data, timer = load_exampledata(example = 1)
Since it's a timer that counts milliseconds, we use this function. Let's also round to three decimals:
>>> round(get_samplerate_mstimer(timer), 3)
116.996
Of course, if another time unit is used, converting it to an ms-based timer should be trivial.
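For instance, a timer recorded in seconds (hypothetical values below) only needs a multiplication before being passed to get_samplerate_mstimer:

```python
import numpy as np

timer_seconds = np.array([0.0, 0.00855, 0.0171, 0.02564])  # hypothetical seconds-based timer
timer_ms = timer_seconds * 1000.0  # now ms-based, ready for get_samplerate_mstimer
```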

heartpy.get_samplerate_datetime(datetimedata, timeformat='%H:%M:%S.%f')

Determines sample rate based on datetime.
Function to determine the sample rate of data from a datetime-based timer list or array.
Parameters: datetimedata (1d numpy array or list) – sequence containing datetime strings
timeformat (string) – the format of the datetime strings in datetimedata. default : '%H:%M:%S.%f' (24-hour based time including ms, e.g. 21:43:12.569)
Returns: out – the sample rate as determined from the timer sequence provided
Return type: float
Examples
We load the data like before
>>> data, timer = load_exampledata(example = 2)
>>> timer[0]
'20161124 13:58:58.081000'
Note that we need to specify the timeformat used so that datetime understands what it’s working with:
>>> round(get_samplerate_datetime(timer, timeformat = '%Y%m%d %H:%M:%S.%f'), 3)
100.42
Filtering

heartpy.filter_signal(data, cutoff, sample_rate, order=2, filtertype='lowpass', return_top=False)

Applies the specified filter.
Function that applies the specified lowpass, highpass, bandpass or notch filter to the provided dataset.
Parameters: data (1-dimensional numpy array or list) – sequence containing the data to be filtered
cutoff (int, float or tuple) – the cutoff frequency of the filter. Expects a float for the lowpass and highpass types; for a bandpass filter expects a list or array of format [lower_bound, higher_bound]
sample_rate (int or float) – the sample rate with which the passed data sequence was sampled
order (int) – the filter order. default : 2
filtertype (str) – the type of filter to use. Available:
lowpass : a lowpass butterworth filter
highpass : a highpass butterworth filter
bandpass : a bandpass butterworth filter
notch : a notch filter around a specified frequency range
Both the highpass and notch filter are useful for removing baseline wander. The notch filter is especially useful for removing baseline wander in ECG signals.
Returns: out – 1d array containing the filtered data
Return type: 1d array
Examples
>>> import numpy as np
>>> import heartpy as hp
Using the standard data provided:
>>> data, _ = hp.load_exampledata(0)
We can filter the signal, for example with a lowpass filter cutting out all frequencies of 5 Hz and greater (with a sloping frequency cutoff):
>>> filtered = hp.filter_signal(data, cutoff = 5, sample_rate = 100.0, order = 3, filtertype='lowpass')
>>> print(np.around(filtered[0:6], 3))
[530.175 517.893 505.768 494.002 482.789 472.315]
Or we can cut out all frequencies below 0.75 Hz with a highpass filter:
>>> filtered = hp.filter_signal(data, cutoff = 0.75, sample_rate = 100.0, order = 3, filtertype='highpass')
>>> print(np.around(filtered[0:6], 3))
[17.975 28.271 38.609 48.992 58.422 67.902]
Or specify a range (here: 0.75 - 3.5 Hz), outside of which all frequencies are cut out:
>>> filtered = hp.filter_signal(data, cutoff = [0.75, 3.5], sample_rate = 100.0,
... order = 3, filtertype='bandpass')
>>> print(np.around(filtered[0:6], 3))
[12.012 23.159 34.261 45.12  55.541 65.336]
A ‘Notch’ filtertype is also available (see remove_baseline_wander).
>>> filtered = hp.filter_signal(data, cutoff = 0.05, sample_rate = 100.0, filtertype='notch')
Finally, we can use the return_top flag to only return the filter response that has amplitude above zero. We're only interested in the peaks, and sometimes this can improve peak prediction:
>>> filtered = hp.filter_signal(data, cutoff = [0.75, 3.5], sample_rate = 100.0,
... order = 3, filtertype='bandpass', return_top = True)
>>> print(np.around(filtered[48:53], 3))
[ 0.     0.     0.409 17.088 35.673]

heartpy.hampel_filter(data, filtsize=6)

Detects outliers based on a hampel filter.
Function that detects outliers based on a hampel filter. The filter takes a datapoint and the six surrounding samples. Outliers are detected when a datapoint lies more than 3 standard deviations from the window mean. See: https://www.mathworks.com/help/signal/ref/hampel.html
Parameters:  data (1d list or array) – list or array containing the data to be filtered
filtsize (int) – the filter size, expressed as the number of datapoints taken surrounding the analysed datapoint. A filtsize of 6 means three datapoints on each side are taken. The total filter size is thus filtsize + 1 (the datapoint evaluated). default : 6
Returns: out
Return type: array containing filtered data
Examples
>>> from .datautils import get_data, load_exampledata
>>> data, _ = load_exampledata(0)
>>> filtered = hampel_filter(data, filtsize = 6)
>>> print('%i, %i' %(data[1232], filtered[1232]))
497, 496
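A minimal sketch of the hampel idea: replace a point that deviates strongly from its local window. This variant uses the window median and MAD, a common robust choice; the docstring above describes the mean and standard deviation, so treat this as an approximation rather than the library's exact algorithm:

```python
import numpy as np

def hampel_sketch(data, filtsize=6, n_sigmas=3):
    data = np.asarray(data, dtype=float)
    out = data.copy()
    half = filtsize // 2  # three datapoints on each side for filtsize=6
    for i in range(half, len(data) - half):
        window = data[i - half:i + half + 1]
        med = np.median(window)
        mad = 1.4826 * np.median(np.abs(window - med))  # robust spread estimate
        if abs(data[i] - med) > n_sigmas * mad:
            out[i] = med  # replace the outlier with the local median
    return out
```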

heartpy.hampel_correcter(data, sample_rate)

Applies an altered version of the hampel filter to suppress noise.
Function that returns the difference between the data and a 1-second windowed hampel median filter. Results in strong noise suppression characteristics, but is relatively expensive to compute.
The effect on output measures is present but generally not large. Use sparingly, and only when other means have been exhausted.
Parameters:  data (1d numpy array) – array containing the data to be filtered
 sample_rate (int or float) – sample rate with which data was recorded
Returns: out – array containing filtered data
Return type: 1d numpy array
Examples
>>> from .datautils import get_data, load_exampledata
>>> data, _ = load_exampledata(1)
>>> filtered = hampel_correcter(data, sample_rate = 116.995)