Input-output & lazy-loading#
Pynapple provides loaders for the NWB format. Each pynapple object can be saved as an npz file with a special structure and loaded back the same way. In addition, the Folder class helps you walk through a set of nested folders to load and save npz/nwb files.
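To illustrate what the Folder class automates, here is a minimal sketch using only the standard library: it builds a nested session layout (the directory and file names are hypothetical) and collects every npz/nwb file in the tree.

```python
import pathlib
import tempfile

# Build a small nested session layout (names are hypothetical).
root = pathlib.Path(tempfile.mkdtemp())
(root / "MouseA" / "Session1").mkdir(parents=True)
(root / "MouseA" / "Session1" / "spikes.npz").touch()
(root / "MouseA" / "Session2").mkdir(parents=True)
(root / "MouseA" / "Session2" / "recording.nwb").touch()

# Collect every npz/nwb file in the tree, which is essentially what
# the Folder class automates (together with loading and saving them).
found = sorted(
    p.relative_to(root).as_posix()
    for p in root.rglob("*")
    if p.suffix in (".npz", ".nwb")
)
print(found)
# → ['MouseA/Session1/spikes.npz', 'MouseA/Session2/recording.nwb']
```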
NWB#
When loading an NWB file, pynapple will walk through it and test the compatibility of each data structure with pynapple objects. If a data structure is incompatible, pynapple will ignore it. The class that deals with reading NWB files is nap.NWBFile. You can pass the path to an NWB file or directly an opened NWB file. Alternatively, you can use the function nap.load_file.
Note
Creating the NWB file is outside the scope of pynapple. The NWB files used here were created beforehand. Multiple tools exist to create NWB files automatically; you can check neuroconv, NWBGuide or even NWBmatic.
import numpy as np
import pynapple as nap
import os
import requests, math
import tqdm
nwb_path = 'A2929-200711.nwb'
if nwb_path not in os.listdir("."):
    r = requests.get("https://osf.io/fqht6/download", stream=True)
    block_size = 1024*1024
    with open(nwb_path, 'wb') as f:
        for data in tqdm.tqdm(r.iter_content(block_size), unit='MB', unit_scale=True,
                              total=math.ceil(int(r.headers.get('content-length', 0))//block_size)):
            f.write(data)
data = nap.load_file(nwb_path)
print(data)
A2929-200711
┍━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━┑
│ Keys │ Type │
┝━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━┥
│ units │ TsGroup │
│ position_time_support │ IntervalSet │
│ epochs │ IntervalSet │
│ z │ Tsd │
│ y │ Tsd │
│ x │ Tsd │
│ rz │ Tsd │
│ ry │ Tsd │
│ rx │ Tsd │
┕━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━┙
Pynapple will give you a table with all the entries of the NWB file that are compatible with a pynapple object.
When parsing the NWB file, nothing is loaded. The NWBFile class keeps track of the position of each data item within the NWB file with a key. You can see the mapping with the attribute key_to_id.
data.key_to_id
{'units': 'de078912-a093-4e6b-b23c-8647885a7188',
'position_time_support': '69ab7512-2d0a-45ba-af61-fae750c37f53',
'epochs': '2cca3914-ede4-41b8-b500-502465cbed95',
'z': 'e2142cf8-59d9-4637-b55d-796871c38e47',
'y': 'c5a8bba7-2699-4450-aae1-6f5a05309828',
'x': '2ba4ba5e-c6dc-4b9c-b69c-d0763240df46',
'rz': '682f8edf-91cb-4c15-bdf2-5cc5327803f0',
'ry': '57874425-e60e-4fe1-8bd3-7cb06d2adbfc',
'rx': '48597919-cce7-4724-9f72-db6e164c3c3f'}
Accessing an entry makes pynapple read the corresponding data.
z = data['z']
print(data['z'])
Time (s)
---------- ---------
670.6407 -0.195725
670.649 -0.19511
670.65735 -0.194674
670.66565 -0.194342
670.674 -0.194059
670.68235 -0.193886
670.69065 -0.193676
...
1199.94495 0.000398
1199.95325 -0.000552
1199.9616 -0.001479
1199.96995 -0.00237
1199.97825 -0.003156
1199.9866 -0.003821
1199.99495 -0.004435
dtype: float64, shape: (63527,)
Internally, the NWBFile class has replaced the pointer to the data with the actual object. While it looks like pynapple has loaded the data, in fact it has not. By default, the values of the returned object are still an HDF5 dataset.
print(type(z.values))
<class 'h5py._hl.dataset.Dataset'>
Notice that the time array is always loaded.
print(type(z.index.values))
<class 'numpy.ndarray'>
This is very useful for large datasets that do not fit in memory. You can then retrieve a chunk of the data, and only that chunk will actually be loaded.
z_chunk = z.get(670, 680) # getting 10s of data.
print(z_chunk)
Time (s)
---------- ---------
670.6407 -0.195725
670.649 -0.19511
670.65735 -0.194674
670.66565 -0.194342
670.674 -0.194059
670.68235 -0.193886
670.69065 -0.193676
...
679.9485 0.062836
679.95685 0.062831
679.96515 0.062789
679.9735 0.062756
679.98185 0.06277
679.99015 0.062819
679.9985 0.062878
dtype: float64, shape: (1124,)
Data are now loaded.
print(type(z_chunk.values))
<class 'numpy.ndarray'>
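Conceptually, get(start, end) only needs the always-in-memory time index: a binary search locates the slice boundaries, and only that slice of the values is then read from disk. A rough sketch with plain numpy (the actual internals of pynapple may differ):

```python
import numpy as np

# The time index is always in memory; the values could live on disk.
t = np.linspace(670.0, 1200.0, 63527)

# get(670, 680) reduces to two binary searches on the time index...
start = np.searchsorted(t, 670.0)
end = np.searchsorted(t, 680.0, side="right")

# ...followed by a single contiguous read of values[start:end].
print(start, end)
```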
You can still apply any high-level pynapple function. For example, here we compute some tuning curves without preloading the dataset.
tc = nap.compute_1d_tuning_curves(data['units'], data['y'], 10)
Warning
Caution should still be exercised when calling any pynapple function on a memory map. Pynapple does not implement any batching internally. Calling a high-level pynapple function on a dataset that does not fit in memory will likely cause a memory error.
To change this behavior, you can pass lazy_loading=False when instantiating the NWBFile class.
data = nap.NWBFile(nwb_path, lazy_loading=False)
z = data['z']
print(type(z.d))
<class 'numpy.ndarray'>
Saving as NPZ#
Pynapple objects have save
methods to save them as npz files.
tsd = nap.Tsd(t=np.arange(10), d=np.arange(10))
tsd.save("my_tsd.npz")
print(nap.load_file("my_tsd.npz"))
Time (s)
---------- --
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
dtype: int64, shape: (10,)
To load an NPZ file into pynapple, it must contain a particular set of keys.
print(np.load("my_tsd.npz"))
NpzFile 'my_tsd.npz' with keys: t, d, start, end, type
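This layout can be reproduced with plain numpy. The field semantics below (t for timestamps, d for values, start/end for the time support, type for the pynapple class) are inferred from the output above, so treat the exact dtypes as an assumption:

```python
import os
import tempfile

import numpy as np

# Rebuild the npz layout shown above without pynapple.
path = os.path.join(tempfile.mkdtemp(), "my_tsd.npz")
np.savez(
    path,
    t=np.arange(10.0),        # timestamps (seconds)
    d=np.arange(10),          # data values
    start=np.array([0.0]),    # time support start(s)
    end=np.array([9.0]),      # time support end(s)
    type=np.array(["Tsd"]),   # which pynapple class to rebuild
)
keys = sorted(np.load(path).files)
print(keys)
# → ['d', 'end', 'start', 't', 'type']
```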
When a pynapple object has metadata, the metadata are added to the NPZ file.
tsgroup = nap.TsGroup({
    0: nap.Ts(t=[0, 1, 2]),
    1: nap.Ts(t=[0, 1, 2])
}, my_label=["a", "b"])
tsgroup.save("group")
print(np.load("group.npz"))
print(np.load("group.npz")["my_label"])
NpzFile 'group.npz' with keys: type, rate, my_label, t, index...
['a' 'b']
Memory map#
Numpy memory map#
Pynapple can work with numpy.memmap
.
data = np.memmap("memmap.dat", dtype='float32', mode='w+', shape = (10, 3))
data[:] = np.random.randn(10, 3).astype('float32')
timestep = np.arange(10)
print(type(data))
<class 'numpy.memmap'>
Instantiating a pynapple TsdFrame will keep the data as a memory map.
eeg = nap.TsdFrame(t=timestep, d=data)
print(eeg)
Time (s) 0 1 2
---------- ---------- --------- ----------
0 0.753904 0.53078 -0.720423
1 0.249622 1.46707 -0.341852
2 -0.273789 1.36487 2.95381
3 0.0596479 0.218011 -1.29375
4 0.892972 0.549492 0.194503
5 1.27229 -1.95401 0.56592
6 1.29459 0.39336 -1.44489
7 -0.527273 -0.501504 -0.244778
8 1.83092 -2.69087 -0.0872698
9 -1.06828 1.85095 0.371089
dtype: float32, shape: (10, 3)
We can check the type of eeg.values
.
print(type(eeg.values))
<class 'numpy.memmap'>
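A practical detail when producing such files yourself: a raw .dat file stores neither shape nor dtype, so a memmap must be flushed after writing and reopened with the same parameters. A minimal sketch:

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "memmap.dat")

# Write-mode memmap: creates the file on disk with the given shape.
data = np.memmap(path, dtype="float32", mode="w+", shape=(10, 3))
data[:] = 1.0
data.flush()  # make sure the bytes reach the disk

# Read-only reopen: shape and dtype must be supplied again, since the
# raw file stores neither; nothing is copied into RAM up front.
ro = np.memmap(path, dtype="float32", mode="r", shape=(10, 3))
print(ro.shape, float(ro[0, 0]))
# → (10, 3) 1.0
```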
Zarr#
Higher-level libraries like Zarr can also be used, although not as directly.
import zarr
zarr_array = zarr.zeros((10000, 5), chunks=(1000, 5), dtype='i4')
timestep = np.arange(len(zarr_array))
tsdframe = nap.TsdFrame(t=timestep, d=zarr_array)
/home/runner/.local/lib/python3.10/site-packages/pynapple/core/utils.py:196: UserWarning: Converting 'd' to numpy.array. The provided array was of type 'Array'.
warnings.warn(
As the warning suggests, zarr_array was converted to a numpy array.
print(type(tsdframe.d))
<class 'numpy.ndarray'>
To keep the underlying Zarr array, you can set the argument load_array to False.
tsdframe = nap.TsdFrame(t=timestep, d=zarr_array, load_array=False)
print(type(tsdframe.d))
<class 'zarr.core.Array'>
Within pynapple, numpy memory maps are recognized as numpy arrays, while Zarr arrays are not.
print(type(data), "Is np.ndarray? ", isinstance(data, np.ndarray))
print(type(zarr_array), "Is np.ndarray? ", isinstance(zarr_array, np.ndarray))
<class 'numpy.memmap'> Is np.ndarray? True
<class 'zarr.core.Array'> Is np.ndarray? False
As with numpy memory maps, you can use pynapple functions directly.
ep = nap.IntervalSet(0, 10)
tsdframe.restrict(ep)
Time (s) 0 1 2 3 4
---------- --- --- --- --- ---
0 0 0 0 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 0 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 0 0 0 0 0
8 0 0 0 0 0
9 0 0 0 0 0
10 0 0 0 0 0
dtype: int32, shape: (11, 5)
group = nap.TsGroup({0:nap.Ts(t=[10, 20, 30])})
sta = nap.compute_event_trigger_average(group, tsdframe, 1, (-2, 3))
print(type(tsdframe.values))
print("\n")
print(sta)
<class 'zarr.core.Array'>
Time (s)
---------- -----------------
-2 [[0. ... 0.] ...]
-1 [[0. ... 0.] ...]
0 [[0. ... 0.] ...]
1 [[0. ... 0.] ...]
2 [[0. ... 0.] ...]
3 [[0. ... 0.] ...]
dtype: float64, shape: (6, 1, 5)