# Source: XiaoMutt/palpable — src/palpable/procedures/procedure.py

def run(self, messenger):
    """
    This method will be called by the Worker to execute in a process.

    Override this method.
    Use __init__ to set any params needed for this call.
    The messenger parameter is a Messenger instance.

    Use messenger.debug/info/warning/error to send logs.
    Use messenger.submit_tasks to submit sub-tasks to the server.
    Use messenger.query_results to query for results of the submitted sub-tasks.

    If you call predefined functions in this method, to catch possible `print`
    calls inside them, do:
        predefined_function.__globals__["print"] = messenger.print  # inject messenger.print as print
    See the RunFunction procedure as an example.

    ATTENTION: do not use multiprocessing in this method.

    :param messenger: Messenger
    :return: the data if the task is successful. The data will be wrapped in a
        successful TaskResult by the TaskWorker.
    :raise: TaskFailed with the failure data if the task is unsuccessful, e.g.
        raise TaskFailed("ID not found"); "ID not found" will be wrapped in a
        failed TaskResult. Other exceptions will be caught by the Worker and
        wrapped in an unsuccessful TaskResult using the Exception instance as
        data.
    """
    raise NotImplementedError
# Source: Rohde-Schwarz/examples —
# VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py

def comprep():
    """Preparation of the communication (termination, etc.)."""
    print(f'VISA Manufacturer: {Instrument.visa_manufacturer}')
    Instrument.visa_timeout = 5000
    Instrument.opc_timeout = 5000
    Instrument.instrument_status_checking = True
    Instrument.clear_status()
def close():
    """Close the VISA session."""
    Instrument.close()
def comcheck():
    """Check communication with the device."""
    idnResponse = Instrument.query_str('*IDN?')
    sleep(1)
    print('Hello, I am ' + idnResponse)
def meassetup():
    """Prepare measurement setup and define the cal kit."""
    Instrument.write_str_with_opc('SYSTEM:DISPLAY:UPDATE ON')
    Instrument.write_str_with_opc('SENSe1:FREQuency:Start 1e9')
    Instrument.write_str_with_opc('SENSe1:FREQuency:Stop 2e9')
    Instrument.write_str_with_opc('SENSe1:SWEep:POINts 501')
    Instrument.write_str_with_opc('CALCulate1:PARameter:MEAsure "Trc1", "S11"')
    Instrument.write_str_with_opc('SENSe1:CORRection:CKIT:PC292:SELect "ZN-Z229"')
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:CONN PC292MALE')
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:METHod:DEFine "NewCal", FOPort, 1')
    Instrument.write_str_with_opc('SENSe:CORRection:COLLect:ACQuire:RSAVe:DEFault OFF')
def calopen():
    """Perform calibration with the OPEN element."""
    print()
    print('Please connect OPEN to port 1 and confirm')
    _ = input()
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:ACQuire:SELected OPEN, 1')
def calshort():
    """Perform calibration with the SHORT element."""
    print('Please connect SHORT to port 1 and confirm')
    _ = input()
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:ACQuire:SELected SHORT, 1')
def calmatch():
    """Perform calibration with the MATCH element."""
    print('Please connect MATCH to port 1 and confirm')
    _ = input()
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:ACQuire:SELected MATCH, 1')
def applycal():
    """Apply the calibration after it is finished and save the cal data."""
    sleep(2)
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:SAVE:SELected')
def savecal():
    """Save the calibration file to the pool."""
    print('Now saving the calibration to the pool')
    Instrument.write('MMEMory:STORE:CORRection 1,"P1_OSM_1-2GHz"')
def loadprep():
    """Reset the instrument and set up three channels with identical sweep settings."""
    print()
    print('Resetting the instrument, assign three channels with adequate settings')
    Instrument.write_str_with_opc('*RST')
    Instrument.write_str_with_opc('SENSe1:FREQuency:Start 1e9')
    Instrument.write_str_with_opc('SENSe1:FREQuency:Stop 2e9')
    Instrument.write_str_with_opc('SENSe1:SWEep:POINts 501')
    Instrument.write_str_with_opc("CALCULATE2:PARAMETER:SDEFINE 'Trc2', 'S11'")
    Instrument.write_str_with_opc("CALCULATE2:PARAMETER:SELECT 'Trc2'")
    Instrument.write_str_with_opc('DISPLAY:WINDOW2:STATE ON')
    Instrument.write_str_with_opc("DISPLAY:WINDOW2:TRACE1:FEED 'Trc2'")
    Instrument.write_str_with_opc('SENSe2:FREQuency:Start 1e9')
    Instrument.write_str_with_opc('SENSe2:FREQuency:Stop 2e9')
    Instrument.write_str_with_opc('SENSe2:SWEep:POINts 501')
    Instrument.write_str_with_opc("CALCULATE3:PARAMETER:SDEFINE 'Trc3', 'S11'")
    Instrument.write_str_with_opc("CALCULATE3:PARAMETER:SELECT 'Trc3'")
    Instrument.write_str_with_opc('DISPLAY:WINDOW3:STATE ON')
    Instrument.write_str_with_opc("DISPLAY:WINDOW3:TRACE1:FEED 'Trc3'")
    Instrument.write_str_with_opc('SENSe3:FREQuency:Start 1e9')
    Instrument.write_str_with_opc('SENSe3:FREQuency:Stop 2e9')
    Instrument.write_str_with_opc('SENSe3:SWEep:POINts 501')
def loadcal():
    """Load the calibration file into each of the three channels."""
    print()
    print('Load the calibration to all three channels')
    Instrument.write('MMEMory:LOAD:CORRection 1,"P1_OSM_1-2GHz"')
    Instrument.write('MMEMory:LOAD:CORRection 2,"P1_OSM_1-2GHz"')
    Instrument.write('MMEMory:LOAD:CORRection 3,"P1_OSM_1-2GHz"')
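The functions above run in a fixed order: define the cal method, acquire the OPEN, SHORT, and MATCH standards, apply the correction, then store it. The dry-run sketch below exercises that command order without hardware; `FakeInstrument` is a hypothetical stand-in that records SCPI strings instead of sending them through the RsInstrument session.

```python
class FakeInstrument:
    """Records SCPI commands instead of talking to a ZNB (illustration only)."""
    def __init__(self):
        self.log = []
    def write(self, cmd):
        self.log.append(cmd)
    def write_str_with_opc(self, cmd):
        self.log.append(cmd)

instr = FakeInstrument()

# Define the one-port cal, acquire the three standards, then apply and store.
instr.write_str_with_opc('SENSe1:CORRection:COLLect:METHod:DEFine "NewCal", FOPort, 1')
for standard in ('OPEN', 'SHORT', 'MATCH'):
    instr.write_str_with_opc(f'SENSe1:CORRection:COLLect:ACQuire:SELected {standard}, 1')
instr.write_str_with_opc('SENSe1:CORRection:COLLect:SAVE:SELected')
instr.write('MMEMory:STORE:CORRection 1,"P1_OSM_1-2GHz"')

print(len(instr.log))  # 6
```

Swapping `FakeInstrument` for the real session leaves the command sequence unchanged, which is why the script separates acquisition (`calopen`/`calshort`/`calmatch`) from apply-and-store (`applycal`/`savecal`).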
# Source: RyanWei/milvus — tests/milvus_python_test/collection/test_collection_count.py

def test_collection_count(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection and add vectors in it,
            assert the value returned by count_entities method is equal to length of vectors
    expected: the count is equal to the length of vectors
    """
    entities = gen_entities(insert_count)
    res = connect.insert(collection, entities)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count
def test_collection_count_partition(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection, create partition and add vectors in it,
            assert the value returned by count_entities method is equal to length of vectors
    expected: the count is equal to the length of vectors
    """
    entities = gen_entities(insert_count)
    connect.create_partition(collection, tag)
    res_ids = connect.insert(collection, entities, partition_tag=tag)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count
def test_collection_count_multi_partitions_A(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection, create partitions and add entities in it,
            assert the value returned by count_entities method is equal to length of entities
    expected: the count is equal to the length of entities
    """
    new_tag = 'new_tag'
    entities = gen_entities(insert_count)
    connect.create_partition(collection, tag)
    connect.create_partition(collection, new_tag)
    res_ids = connect.insert(collection, entities)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count
def test_collection_count_multi_partitions_B(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection, create partitions and add entities in one of the partitions,
            assert the value returned by count_entities method is equal to length of entities
    expected: the count is equal to the length of entities
    """
    new_tag = 'new_tag'
    entities = gen_entities(insert_count)
    connect.create_partition(collection, tag)
    connect.create_partition(collection, new_tag)
    res_ids = connect.insert(collection, entities, partition_tag=tag)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count
def test_collection_count_multi_partitions_C(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection, create partitions and add entities in the default
            partition and in one named partition,
            assert the value returned by count_entities method is equal to length of entities
    expected: the count is equal to the total length of inserted vectors
    """
    new_tag = 'new_tag'
    entities = gen_entities(insert_count)
    connect.create_partition(collection, tag)
    connect.create_partition(collection, new_tag)
    res_ids = connect.insert(collection, entities)
    res_ids_2 = connect.insert(collection, entities, partition_tag=tag)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count * 2
44e5a31f5f9477a111a82c0e28bd987d7fb6d1f958d33c358c33d8fe0a7fdc92 | def test_collection_count_multi_partitions_D(self, connect, collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create two partitions, insert the same entities into each partition,\n assert the value returned by the count_entities method equals twice the number of entities inserted\n expected: the collection count is equal to twice the length of entities\n '
new_tag = 'new_tag'
entities = gen_entities(insert_count)
connect.create_partition(collection, tag)
connect.create_partition(collection, new_tag)
res_ids = connect.insert(collection, entities, partition_tag=tag)
res_ids2 = connect.insert(collection, entities, partition_tag=new_tag)
connect.flush([collection])
res = connect.count_entities(collection)
assert (res == (insert_count * 2)) | target: test collection rows_count is correct or not
method: create collection, create two partitions, insert the same entities into each partition,
assert the value returned by the count_entities method equals twice the number of entities inserted
expected: the collection count is equal to twice the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_D | RyanWei/milvus | 3 | python
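The multi-partition variants above all hinge on one semantic: `count_entities` is collection-wide, summed over every partition (including the default one), so inserting the same batch twice, whether into default plus a tag or into two tags, yields twice the batch size. A minimal in-memory sketch of that rule (a toy stand-in, not the real pymilvus client API; the class and field names here are illustrative only):

```python
from collections import defaultdict

# Toy stand-in for a Milvus collection: each partition is a plain list keyed
# by tag, and count_entities sums over all partitions. Not the real client.
class FakeCollection:
    def __init__(self):
        self._partitions = defaultdict(list)  # tag -> stored entities

    def create_partition(self, tag):
        self._partitions[tag]  # materialize an empty partition

    def insert(self, entities, partition_tag="_default"):
        self._partitions[partition_tag].extend(entities)
        return list(range(len(entities)))  # fake ids

    def count_entities(self):
        # collection-wide count: spans the default partition and every tag
        return sum(len(p) for p in self._partitions.values())

coll = FakeCollection()
coll.create_partition("tag")
coll.create_partition("new_tag")
entities = [{"float_vector": [0.0] * 8} for _ in range(100)]
coll.insert(entities)                        # default partition
coll.insert(entities, partition_tag="tag")   # same batch again
assert coll.count_entities() == 200          # count spans all partitions
```

The same model covers cases A and B: a single insert, into default or into one tag, leaves the collection count at exactly `insert_count`.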
2cdb648a12c07332d4e0b0fd5801410c88a58b429306bae2303396c9dff41a46 | def _test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
'\n target: test count_entities after index has been created\n method: add entities, create an index, then call count_entities with correct params\n expected: count_entities returns the correct row count\n '
entities = gen_entities(insert_count)
res = connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
res = connect.count_entities(collection)
assert (res == insert_count) | target: test count_entities after index has been created
method: add entities, create an index, then call count_entities with correct params
expected: count_entities returns the correct row count | tests/milvus_python_test/collection/test_collection_count.py | _test_collection_count_after_index_created | RyanWei/milvus | 3 | python
fad252f5927a259eb49255e0d517edd6b375822afd6bbed1958140078b404508 | def test_count_without_connection(self, collection, dis_connect):
'\n target: test count_entities without connection\n method: call count_entities with correct params on a disconnected instance\n expected: count_entities raises an exception\n '
with pytest.raises(Exception) as e:
dis_connect.count_entities(collection) | target: test count_entities, without connection
method: call count_entities with correct params on a disconnected instance
expected: count_entities raises an exception | tests/milvus_python_test/collection/test_collection_count.py | test_count_without_connection | RyanWei/milvus | 3 | python
8cd24880cb7b6ccf597167cc40e7109ac350ea8a7985433ac892171c95548b38 | def test_collection_count_no_vectors(self, connect, collection):
'\n target: test collection rows_count is correct or not, if collection is empty\n method: create collection and no vectors in it,\n assert the value returned by count_entities method is equal to 0\n expected: the count is equal to 0\n '
res = connect.count_entities(collection)
assert (res == 0) | target: test collection rows_count is correct or not, if collection is empty
method: create collection and no vectors in it,
assert the value returned by count_entities method is equal to 0
expected: the count is equal to 0 | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_no_vectors | RyanWei/milvus | 3 | python
bf479c69039350542da1a3dbda5ad214bfc1765674ee1f4680e4879aa849e55b | def _test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
'\n target: test count_entities after index has been created\n method: add entities, create an index, then call count_entities with correct params\n expected: count_entities returns the correct row count\n '
entities = gen_entities(insert_count)
res = connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, field_name, get_simple_index)
res = connect.count_entities(collection)
assert (res == insert_count) | target: test count_entities after index has been created
method: add entities, create an index, then call count_entities with correct params
expected: count_entities returns the correct row count | tests/milvus_python_test/collection/test_collection_count.py | _test_collection_count_after_index_created | RyanWei/milvus | 3 | python
d7915203feac9e3756bf67cb58d280b08395db2ff74b94246e7d41377c474472 | def test_collection_count(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
logging.getLogger().info(len(res))
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test collection rows_count is correct or not
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count | RyanWei/milvus | 3 | python
5b4e0c2f85689b6a90d941758c5cc3ad6fa65a0a5677ce991414c5600d7aedd7 | def test_collection_count_partition(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partition and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test collection rows_count is correct or not
method: create collection, create partition and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_partition | RyanWei/milvus | 3 | python
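The binary variants differ from the float ones only in the entity generator; the counting behavior under test is identical. A plausible sketch of the tuple shape a `gen_binary_entities`-style helper returns (assumed for illustration; this is not the suite's actual helper, and the field name `binary_vector` is hypothetical):

```python
import os

# Sketch: each binary vector of dimension `dim` bits is packed into
# dim // 8 raw bytes; the helper returns (raw_vectors, entities).
def gen_binary_entities(count, dim=64):
    raw_vectors = [os.urandom(dim // 8) for _ in range(count)]
    entities = [{"binary_vector": v} for v in raw_vectors]
    return raw_vectors, entities

raw, ents = gen_binary_entities(100)
assert len(ents) == 100
assert all(len(v) == 8 for v in raw)  # 64 bits -> 8 bytes per vector
```

Only the tuple shape matters to these tests: `len(entities)` is what drives every `count_entities` assertion.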
9a8d4fa1c99ba833d2357767eb22394f9b78c9a3f54e8fba8742c0079b10bc64 | @pytest.mark.level(2)
def test_collection_count_multi_partitions_A(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_A | RyanWei/milvus | 3 | python
a141514a7c944e8afc7fd4cf215584a7e3fd272263f44d9679e438737da77bbf | @pytest.mark.level(2)
def test_collection_count_multi_partitions_B(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_B | RyanWei/milvus | 3 | python
49b11d7d45cd31fee07a8dce5ac73b2193bffe1e5b58ea2120532b9768e65f8b | def test_collection_count_multi_partitions_C(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions, insert the same binary entities into the default partition and into one partition,\n assert the value returned by the count_entities method equals twice the number of entities inserted\n expected: the count is equal to twice the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities)
res_ids_2 = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == (insert_count * 2)) | target: test collection rows_count is correct or not
method: create collection, create partitions, insert the same binary entities into the default partition and into one partition,
assert the value returned by the count_entities method equals twice the number of entities inserted
expected: the count is equal to twice the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_C | RyanWei/milvus | 3 | python
058a27c1ade68860abe2630065d10cbb5c2c32e412e2282b77480b3130e51b73 | @pytest.mark.level(2)
def test_collection_count_multi_partitions_D(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create two partitions, insert the same binary entities into each partition,\n assert the value returned by the count_entities method equals twice the number of entities inserted\n expected: the collection count is equal to twice the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
res_ids2 = connect.insert(binary_collection, entities, partition_tag=new_tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == (insert_count * 2)) | target: test collection rows_count is correct or not
method: create collection, create two partitions, insert the same binary entities into each partition,
assert the value returned by the count_entities method equals twice the number of entities inserted
expected: the collection count is equal to twice the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_D | RyanWei/milvus | 3 | python
3d5a047123c96bac76431ad3bab6352d3980074083d2451a60078418aa4c247e | def _test_collection_count_after_index_created(self, connect, binary_collection, get_jaccard_index, insert_count):
'\n target: test count_entities after index has been created\n method: add binary entities, create an index, then call count_entities with correct params\n expected: count_entities returns the correct row count\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, field_name, get_jaccard_index)
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test count_entities after index has been created
method: add binary entities, create an index, then call count_entities with correct params
expected: count_entities returns the correct row count | tests/milvus_python_test/collection/test_collection_count.py | _test_collection_count_after_index_created | RyanWei/milvus | 3 | python
4c0e83a981d71177f638fb5a4d4b3a825862d8ffd089741416ac16cebe3e10de | def _test_collection_count_after_index_created(self, connect, binary_collection, get_hamming_index, insert_count):
'\n target: test count_entities, after index have been created\n method: add vectors in db, and create index, then calling count_entities with correct params \n expected: count_entities raise exception\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, field_name, get_hamming_index)
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test count_entities, after index have been created
method: add vectors in db, and create index, then calling count_entities with correct params
expected: count_entities raise exception | tests/milvus_python_test/collection/test_collection_count.py | _test_collection_count_after_index_created | RyanWei/milvus | 3 | python
a24a30860e0f364d690372fb1f4c159f1ed2570ec92578cd4bbe59b481126796 | def test_collection_count_no_entities(self, connect, binary_collection):
'\n target: test collection rows_count is correct or not, if collection is empty\n method: create collection and no vectors in it,\n assert the value returned by count_entities method is equal to 0\n expected: the count is equal to 0\n '
res = connect.count_entities(binary_collection)
assert (res == 0) | target: test collection rows_count is correct or not, if collection is empty
method: create collection and no vectors in it,
assert the value returned by count_entities method is equal to 0
expected: the count is equal to 0 | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_no_entities | RyanWei/milvus | 3 | python
ad73705c4510671dba0039868bb12878060608b113ec70984d48560d03ce46b7 | def test_collection_count_multi_collections_l2(self, connect, insert_count):
'\n target: test collection rows_count is correct or not with multiple collections of L2\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
entities = gen_entities(insert_count)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_fields)
res = connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == insert_count) | target: test collection rows_count is correct or not with multiple collections of L2
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_collections_l2 | RyanWei/milvus | 3 | python
811428e6bf687da6052e5ba2e4799f26d7adc67f3cad04cd312e8417f3d50986 | @pytest.mark.level(2)
def test_collection_count_multi_collections_binary(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not with multiple collections of JACCARD\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_binary_fields)
res = connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == insert_count) | target: test collection rows_count is correct or not with multiple collections of JACCARD
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_collections_binary | RyanWei/milvus | 3 | python
b613c502350e3d6226c854117fbe6ad761cb98ef1873e21e6998896e4eab2c06 | @pytest.mark.level(2)
def test_collection_count_multi_collections_mix(self, connect):
'\n target: test collection rows_count is correct or not with multiple collections of JACCARD\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
collection_list = []
collection_num = 20
for i in range(0, int((collection_num / 2))):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_fields)
res = connect.insert(collection_name, default_entities)
for i in range(int((collection_num / 2)), collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_binary_fields)
res = connect.insert(collection_name, default_binary_entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == default_nb) | target: test collection rows_count is correct or not with multiple collections of JACCARD
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_collections_mix | RyanWei/milvus | 3 | python
1451e621da424d5884a0589e168784a348aa8f2e4ab972e04705217f4be8b5b3 | def extractKuronochandesuyoWordpressCom(item):
"\n\tParser for 'kuronochandesuyo.wordpress.com'\n\t"
(vol, chp, frag, postfix) = extractVolChapterFragmentPostfix(item['title'])
if ((not (chp or vol)) or ('preview' in item['title'].lower())):
return None
if ('Since I reincarnated・・・・' in item['tags']):
return buildReleaseMessageWithType(item, 'Since I reincarnated・・・・', vol, chp, frag=frag, postfix=postfix)
return False | Parser for 'kuronochandesuyo.wordpress.com' | WebMirror/management/rss_parser_funcs/feed_parse_extractKuronochandesuyoWordpressCom.py | extractKuronochandesuyoWordpressCom | fake-name/ReadableWebProxy | 193 | python
2d02ed9b9acee1e940a18bbe08bd3a25919a069a553f38ed3b60c2bd4a5a8ae6 | def fetch_videos(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_video_directory='./videos', video_filename_extension='mp4', download_workers=4):
"\n Downloads videos that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_video_metadata() and\n download_video_files(). See documentation of those functions for details.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_video_directory (str): Base of local video tree (default is './videos')\n video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for videos with local path information appended\n "
logger.info('Fetching metadata for videos that match specified parameters')
video_metadata = fetch_video_metadata(start=start, end=end, video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading video files')
video_metadata_with_local_paths = download_video_files(video_metadata=video_metadata, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension, download_workers=download_workers)
return video_metadata_with_local_paths | Downloads videos that match search parameters and returns their metadata.
This function simply combines the operations of fetch_video_metadata() and
download_video_files(). See documentation of those functions for details.
Args:
start (datetime): Start of time period to fetch (default is None)
end (datetime): End of time period to fetch (default is None)
video_timestamps (list of datetime): List of video start times to fetch (default is None)
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
local_video_directory (str): Base of local video tree (default is './videos')
video_filename_extension (str): Filename extension for video files (default is 'mp4')
Returns:
(list of dict): Metadata for videos with local path information appended | video_io/core.py | fetch_videos | optimuspaul/wf-video-io | 0 | python
def fetch_images(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_image_directory='./images', image_filename_extension='png', local_video_directory='./videos', video_filename_extension='mp4'):
"\n Downloads images that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_image_metadata() and\n download_image_files(). See documentation of those functions for details.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_image_directory (str): Base of local image file tree (default is './images')\n image_filename_extension (str): Filename extension for image files (default is 'png')\n local_video_directory (str): Base of local video file tree (default is './videos')\n 
video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for images with local path information appended\n "
logger.info('Fetching metadata for images that match specified parameters')
image_metadata = fetch_image_metadata(image_timestamps=image_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading image files')
image_metadata_with_local_paths = download_image_files(image_metadata=image_metadata, local_image_directory=local_image_directory, image_filename_extension=image_filename_extension, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension)
return image_metadata_with_local_paths | Downloads images that match search parameters and returns their metadata.
This function simply combines the operations of fetch_image_metadata() and
download_image_files(). See documentation of those functions for details.
Args:
image_timestamps (list of datetime): List of image timestamps to fetch
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
local_image_directory (str): Base of local image file tree (default is './images')
image_filename_extension (str): Filename extension for image files (default is 'png')
local_video_directory (str): Base of local video file tree (default is './videos')
video_filename_extension (str): Filename extension for video files (default is 'mp4')
Returns:
(list of dict): Metadata for images with local path information appended | video_io/core.py | fetch_images | optimuspaul/wf-video-io | 0 | python | def fetch_images(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_image_directory='./images', image_filename_extension='png', local_video_directory='./videos', video_filename_extension='mp4'):
"\n Downloads images that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_image_metadata() and\n download_image_files(). See documentation of those functions for details.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_image_directory (str): Base of local image file tree (default is './images')\n image_filename_extension (str): Filename extension for image files (default is 'png')\n local_video_directory (str): Base of local video file tree (default is './videos')\n 
video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for images with local path information appended\n "
logger.info('Fetching metadata for images that match specified parameters')
image_metadata = fetch_image_metadata(image_timestamps=image_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading image files')
image_metadata_with_local_paths = download_image_files(image_metadata=image_metadata, local_image_directory=local_image_directory, image_filename_extension=image_filename_extension, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension)
return image_metadata_with_local_paths | def fetch_images(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_image_directory='./images', image_filename_extension='png', local_video_directory='./videos', video_filename_extension='mp4'):
"\n Downloads images that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_image_metadata() and\n download_image_files(). See documentation of those functions for details.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_image_directory (str): Base of local image file tree (default is './images')\n image_filename_extension (str): Filename extension for image files (default is 'png')\n local_video_directory (str): Base of local video file tree (default is './videos')\n 
video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for images with local path information appended\n "
logger.info('Fetching metadata for images that match specified parameters')
image_metadata = fetch_image_metadata(image_timestamps=image_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading image files')
image_metadata_with_local_paths = download_image_files(image_metadata=image_metadata, local_image_directory=local_image_directory, image_filename_extension=image_filename_extension, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension)
return image_metadata_with_local_paths<|docstring|>Downloads images that match search parameters and returns their metadata.
This function simply combines the operations of fetch_image_metadata() and
download_image_files(). See documentation of those functions for details.
Args:
image_timestamps (list of datetime): List of image timestamps to fetch
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
local_image_directory (str): Base of local image file tree (default is './images')
image_filename_extension (str): Filename extension for image files (default is 'png')
local_video_directory (str): Base of local video file tree (default is './videos')
video_filename_extension (str): Filename extension for video files (default is 'mp4')
Returns:
(list of dict): Metadata for images with local path information appended<|endoftext|> |
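fetch_images() is purely a two-step pipeline: fetch metadata, then download files and append local path information. The sketch below mirrors that shape with stub stand-ins; the stubs and the path format are illustrative only, not the real Honeycomb-backed implementations:

```python
def fetch_image_metadata_stub(image_timestamps):
    # Stand-in for fetch_image_metadata(): one metadata record per timestamp.
    return [{'image_timestamp': ts} for ts in image_timestamps]

def download_image_files_stub(image_metadata, local_image_directory='./images', image_filename_extension='png'):
    # Stand-in for download_image_files(): append local path information.
    for record in image_metadata:
        record['image_local_path'] = '{}/{}.{}'.format(
            local_image_directory, record['image_timestamp'], image_filename_extension)
    return image_metadata

def fetch_images_sketch(image_timestamps, local_image_directory='./images'):
    # Same two-step pipeline as fetch_images(): fetch metadata, then download.
    image_metadata = fetch_image_metadata_stub(image_timestamps)
    return download_image_files_stub(image_metadata, local_image_directory)
```

The design point the real function makes is the same as here: every record keeps its Honeycomb metadata and gains a local path, so callers can work from one list of dicts.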
def fetch_video_metadata(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos that match specified search parameters and\n returns their metadata.\n\n Videos must match all specified search parameters (i.e., the function\n performs a logical AND of all of the queries). If camera information is not\n specified, returns results for all devices that have one of the specified\n camera device types ('PI3WITHCAMERA' and 'PIZEROWITHCAMERA' by default).\n Redundant combinations of search terms will generate an error (e.g., user\n cannot specify environment name and environment ID, camera assignment IDs\n and camera device IDs, etc.)\n\n If start and end are specified, returns all videos that overlap with\n specified start and end (e.g., if start is 10:32:56 and end is 10:33:20,\n returns videos starting at 10:32:50, 10:33:00 and 10:33:10).\n\n Returned metadata is a list of dictionaries, one for each video. Each\n dictionary has the following fields: data_id, video_timestamp,\n environment_id, assignment_id, device_id, bucket, key.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing 
Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for videos that match search parameters\n "
if (((start is not None) or (end is not None)) and (video_timestamps is not None)):
raise ValueError('Cannot specify start/end and list of video timestamps')
if ((video_timestamps is None) and ((start is None) or (end is None))):
raise ValueError('If not specifying specific timestamps, must specify both start and end times')
if ((camera_assignment_ids is not None) and ((environment_id is not None) or (environment_name is not None))):
raise ValueError('Cannot specify camera assignment IDs and environment')
if ((camera_assignment_ids is not None) and ((camera_device_ids is not None) or (camera_part_numbers is not None) or (camera_names is not None) or (camera_serial_numbers is not None))):
raise ValueError('Cannot specify camera assignment IDs and camera device properties')
if ((environment_id is not None) and (environment_name is not None)):
raise ValueError('Cannot specify environment ID and environment name')
if (video_timestamps is not None):
video_timestamps_utc = [video_timestamp.astimezone(datetime.timezone.utc) for video_timestamp in video_timestamps]
video_timestamp_min_utc = min(video_timestamps_utc)
video_timestamp_max_utc = max(video_timestamps_utc)
start_utc = video_timestamp_min_utc
end_utc = (video_timestamp_max_utc + VIDEO_DURATION)
video_timestamps_utc_honeycomb = [honeycomb_io.to_honeycomb_datetime(video_timestamp_utc) for video_timestamp_utc in video_timestamps_utc]
else:
start_utc = start.astimezone(datetime.timezone.utc)
end_utc = end.astimezone(datetime.timezone.utc)
video_timestamp_min_utc = video_timestamp_min(start_utc)
video_timestamp_max_utc = video_timestamp_max(end_utc)
start_utc_honeycomb = honeycomb_io.to_honeycomb_datetime(start_utc)
end_utc_honeycomb = honeycomb_io.to_honeycomb_datetime(end_utc)
if (environment_name is not None):
environment_id = honeycomb_io.fetch_environment_id(environment_name=environment_name, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
camera_assignment_ids_from_environment = honeycomb_io.fetch_camera_assignment_ids_from_environment(start=start_utc, end=end_utc, environment_id=environment_id, camera_device_types=camera_device_types, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
camera_assignment_ids_from_camera_properties = honeycomb_io.fetch_camera_assignment_ids_from_camera_properties(start=start_utc, end=end_utc, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Building query list for video metadata search')
query_list = list()
if (start is not None):
query_list.append({'field': 'timestamp', 'operator': 'GTE', 'value': honeycomb_io.to_honeycomb_datetime(video_timestamp_min_utc)})
if (end is not None):
query_list.append({'field': 'timestamp', 'operator': 'LTE', 'value': honeycomb_io.to_honeycomb_datetime(video_timestamp_max_utc)})
if (video_timestamps is not None):
query_list.append({'field': 'timestamp', 'operator': 'IN', 'values': video_timestamps_utc_honeycomb})
if (camera_assignment_ids is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids})
if (camera_assignment_ids_from_environment is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids_from_environment})
if (camera_assignment_ids_from_camera_properties is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids_from_camera_properties})
return_data = ['data_id', 'timestamp', {'source': [{'... on Assignment': [{'environment': ['environment_id']}, 'assignment_id', {'assigned': [{'... on Device': ['device_id']}]}]}]}, {'file': ['bucketName', 'key']}]
result = honeycomb_io.search_datapoints(query_list=query_list, return_data=return_data, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
video_metadata = list()
logger.info('Parsing {} returned camera datapoints'.format(len(result)))
for datum in result:
source = (datum.get('source') if (datum.get('source') is not None) else {})
file = (datum.get('file') if (datum.get('file') is not None) else {})
video_metadata.append({'data_id': datum.get('data_id'), 'video_timestamp': honeycomb_io.from_honeycomb_datetime(datum.get('timestamp')), 'environment_id': (source.get('environment') if (source.get('environment') is not None) else {}).get('environment_id'), 'assignment_id': source.get('assignment_id'), 'device_id': (source.get('assigned') if (source.get('assigned') is not None) else {}).get('device_id'), 'bucket': file.get('bucketName'), 'key': file.get('key')})
return video_metadata

# Source: video_io/core.py (fetch_video_metadata), repository optimuspaul/wf-video-io
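The overlap example in the docstring (start 10:32:56 and end 10:33:20 yield videos starting at 10:32:50, 10:33:00 and 10:33:10) implies fixed 10-second video chunks aligned to clock-time boundaries. The helpers below are a hedged reconstruction of what video_timestamp_min()/video_timestamp_max() might do under that assumption; they are not the library's actual implementations:

```python
import datetime

# Assumed from the docstring example, not stated explicitly in the source.
VIDEO_DURATION = datetime.timedelta(seconds=10)

def video_timestamp_min_sketch(start):
    # Floor start to the previous 10-second boundary: the earliest video
    # overlapping [start, end) can begin up to 10 seconds before start.
    return start.replace(second=(start.second // 10) * 10, microsecond=0)

def video_timestamp_max_sketch(end):
    # The last overlapping video must start strictly before end, so floor
    # (end minus one microsecond) to a 10-second boundary.
    adjusted = end - datetime.timedelta(microseconds=1)
    return adjusted.replace(second=(adjusted.second // 10) * 10, microsecond=0)
```

With the docstring's own numbers these reproduce 10:32:50 for the first video and 10:33:10 for the last one.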
"\n Searches Honeycomb for videos that match specified search parameters and\n returns their metadata.\n\n Videos must match all specified search parameters (i.e., the function\n performs a logical AND of all of the queries). If camera information is not\n specified, returns results for all devices that have one of the specified\n camera device types ('PI3WITHCAMERA' and 'PIZEROWITHCAMERA' by default).\n Redundant combinations of search terms will generate an error (e.g., user\n cannot specify environment name and environment ID, camera assignment IDs\n and camera device IDs, etc.)\n\n If start and end are specified, returns all videos that overlap with\n specified start and end (e.g., if start is 10:32:56 and end is 10:33:20,\n returns videos starting at 10:32:50, 10:33:00 and 10:33:10).\n\n Returned metadata is a list of dictionaries, one for each video. Each\n dictionary has the following fields: data_id, video_timestamp,\n environment_id, assignment_id, device_id, bucket, key.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing 
Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for videos that match search parameters\n "
if (((start is not None) or (end is not None)) and (video_timestamps is not None)):
raise ValueError('Cannot specify start/end and list of video timestamps')
if ((video_timestamps is None) and ((start is None) or (end is None))):
raise ValueError('If not specifying specific timestamps, must specify both start and end times')
if ((camera_assignment_ids is not None) and ((environment_id is not None) or (environment_name is not None))):
raise ValueError('Cannot specify camera assignment IDs and environment')
if ((camera_assignment_ids is not None) and ((camera_device_ids is not None) or (camera_part_numbers is not None) or (camera_names is not None) or (camera_serial_numbers is not None))):
raise ValueError('Cannot specify camera assignment IDs and camera device properties')
if ((environment_id is not None) and (environment_name is not None)):
raise ValueError('Cannot specify environment ID and environment name')
if (video_timestamps is not None):
video_timestamps_utc = [video_timestamp.astimezone(datetime.timezone.utc) for video_timestamp in video_timestamps]
video_timestamp_min_utc = min(video_timestamps)
video_timestamp_max_utc = max(video_timestamps)
def fetch_video_metadata(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos that match specified search parameters and\n returns their metadata.\n\n Videos must match all specified search parameters (i.e., the function\n performs a logical AND of all of the queries). If camera information is not\n specified, returns results for all devices that have one of the specified\n camera device types ('PI3WITHCAMERA' and 'PIZEROWITHCAMERA' by default).\n Redundant combinations of search terms will generate an error (e.g., user\n cannot specify environment name and environment ID, camera assignment IDs\n and camera device IDs, etc.)\n\n If start and end are specified, returns all videos that overlap with\n specified start and end (e.g., if start is 10:32:56 and end is 10:33:20,\n returns videos starting at 10:32:50, 10:33:00 and 10:33:10).\n\n Returned metadata is a list of dictionaries, one for each video. Each\n dictionary has the following fields: data_id, video_timestamp,\n environment_id, assignment_id, device_id, bucket, key.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing 
Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for videos that match search parameters\n "
if (((start is not None) or (end is not None)) and (video_timestamps is not None)):
raise ValueError('Cannot specify start/end and list of video timestamps')
if ((video_timestamps is None) and ((start is None) or (end is None))):
raise ValueError('If not specifying specific timestamps, must specify both start and end times')
if ((camera_assignment_ids is not None) and ((environment_id is not None) or (environment_name is not None))):
raise ValueError('Cannot specify camera assignment IDs and environment')
if ((camera_assignment_ids is not None) and ((camera_device_ids is not None) or (camera_part_numbers is not None) or (camera_names is not None) or (camera_serial_numbers is not None))):
raise ValueError('Cannot specify camera assignment IDs and camera device properties')
if ((environment_id is not None) and (environment_name is not None)):
raise ValueError('Cannot specify environment ID and environment name')
if (video_timestamps is not None):
video_timestamps_utc = [video_timestamp.astimezone(datetime.timezone.utc) for video_timestamp in video_timestamps]
        video_timestamp_min_utc = min(video_timestamps_utc)
        video_timestamp_max_utc = max(video_timestamps_utc)
start_utc = video_timestamp_min_utc
end_utc = (video_timestamp_max_utc + VIDEO_DURATION)
video_timestamps_utc_honeycomb = [honeycomb_io.to_honeycomb_datetime(video_timestamp_utc) for video_timestamp_utc in video_timestamps_utc]
else:
start_utc = start.astimezone(datetime.timezone.utc)
end_utc = end.astimezone(datetime.timezone.utc)
video_timestamp_min_utc = video_timestamp_min(start_utc)
video_timestamp_max_utc = video_timestamp_max(end_utc)
start_utc_honeycomb = honeycomb_io.to_honeycomb_datetime(start_utc)
end_utc_honeycomb = honeycomb_io.to_honeycomb_datetime(end_utc)
if (environment_name is not None):
environment_id = honeycomb_io.fetch_environment_id(environment_name=environment_name, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
camera_assignment_ids_from_environment = honeycomb_io.fetch_camera_assignment_ids_from_environment(start=start_utc, end=end_utc, environment_id=environment_id, camera_device_types=camera_device_types, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
    camera_assignment_ids_from_camera_properties = honeycomb_io.fetch_camera_assignment_ids_from_camera_properties(start=start_utc, end=end_utc, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Building query list for video metadata search')
query_list = list()
if (start is not None):
query_list.append({'field': 'timestamp', 'operator': 'GTE', 'value': video_timestamp_min_utc})
if (end is not None):
query_list.append({'field': 'timestamp', 'operator': 'LTE', 'value': video_timestamp_max_utc})
if (video_timestamps is not None):
query_list.append({'field': 'timestamp', 'operator': 'IN', 'values': video_timestamps_utc_honeycomb})
if (camera_assignment_ids is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids})
if (camera_assignment_ids_from_environment is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids_from_environment})
if (camera_assignment_ids_from_camera_properties is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids_from_camera_properties})
return_data = ['data_id', 'timestamp', {'source': [{'... on Assignment': [{'environment': ['environment_id']}, 'assignment_id', {'assigned': [{'... on Device': ['device_id']}]}]}]}, {'file': ['bucketName', 'key']}]
    result = honeycomb_io.search_datapoints(query_list=query_list, return_data=return_data, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
video_metadata = list()
logger.info('Parsing {} returned camera datapoints'.format(len(result)))
for datum in result:
source = (datum.get('source') if (datum.get('source') is not None) else {})
file = (datum.get('file') if (datum.get('file') is not None) else {})
video_metadata.append({'data_id': datum.get('data_id'), 'video_timestamp': honeycomb_io.from_honeycomb_datetime(datum.get('timestamp')), 'environment_id': (source.get('environment') if (source.get('environment') is not None) else {}).get('environment_id'), 'assignment_id': source.get('assignment_id'), 'device_id': (source.get('assigned') if (source.get('assigned') is not None) else {}).get('device_id'), 'bucket': file.get('bucketName'), 'key': file.get('key')})
    return video_metadata
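The helpers `video_timestamp_min()` and `video_timestamp_max()` used above are not defined in this chunk. The sketch below is a hypothetical reconstruction of the 10-second video binning implied by the docstring (clips start on 10 s boundaries; `VIDEO_DURATION` is assumed to be 10 seconds); it illustrates the overlap example from the docstring, not the library's actual implementation.

```python
import datetime

VIDEO_DURATION = datetime.timedelta(seconds=10)

def video_timestamp_min(start_utc):
    # Floor the start time to the 10-second boundary of the clip that contains it
    floor_minute = start_utc.replace(second=0, microsecond=0)
    return floor_minute + (start_utc - floor_minute) // VIDEO_DURATION * VIDEO_DURATION

def video_timestamp_max(end_utc):
    # Start time of the last clip that overlaps the interval; an end time that
    # falls exactly on a 10 s boundary belongs to the preceding clip
    return video_timestamp_min(end_utc - datetime.timedelta(microseconds=1))

start = datetime.datetime(2021, 3, 1, 10, 32, 56, tzinfo=datetime.timezone.utc)
end = datetime.datetime(2021, 3, 1, 10, 33, 20, tzinfo=datetime.timezone.utc)

first = video_timestamp_min(start)
last = video_timestamp_max(end)
timestamps = []
t = first
while t <= last:
    timestamps.append(t)
    t += VIDEO_DURATION
# timestamps -> 10:32:50, 10:33:00, 10:33:10, matching the docstring example
```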
def download_video_files(video_metadata, local_video_directory='./videos', video_filename_extension='mp4', download_workers=4):
"\n Downloads videos from S3 to local directory tree and returns metadata with\n local path information added.\n\n Videos are specified as a list of dictionaries, as returned by the function\n fetch_video_metadata(). Each dictionary is assumed to have the following\n fields: data_id, video_timestamp, environment_id, assignment_id, device_id,\n bucket, and key (though only a subset of these are currently used).\n\n Structure of resulting tree is [base directory]/[environment ID]/[camera\n assignment ID]/[year]/[month]/[day]. Filenames are in the form\n [hour]-[minute]-[second].[filename extension]. Videos are only downloaded if\n they don't already exist in the local directory tree. Directories are\n created as necessary.\n\n Function returns the metadata with local path information appended to each\n record (in the field video_local_path).\n\n Args:\n video_metadata (list of dict): Metadata in the format output by fetch_video_metadata()\n local_video_directory (str): Base of local video file tree (default is './videos')\n video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for videos with local path information appended\n "
    video_metadata_with_local_paths = []
    # Use the executor as a context manager so worker processes are shut down
    # even if a download raises
    with ProcessPoolExecutor(max_workers=download_workers) as executor:
        futures = [
            executor.submit(_download_video, video, local_video_directory, video_filename_extension)
            for video in video_metadata
        ]
        for future in as_completed(futures):
            video_metadata_with_local_paths.append(future.result())
    return video_metadata_with_local_paths

# Source: video_io/core.py in repository optimuspaul/wf-video-io
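The helper `_download_video()` is not shown in this chunk. The sketch below is a hypothetical illustration of the local path layout described in the docstring ([base directory]/[environment ID]/[assignment ID]/[year]/[month]/[day]/[hour]-[minute]-[second].[extension]); the function name `local_video_path` and the field names are assumptions, not the library's actual helper.

```python
import datetime
import os

def local_video_path(video, local_video_directory='./videos', video_filename_extension='mp4'):
    # Build the local path for one video record following the documented tree layout
    ts = video['video_timestamp']
    return os.path.join(
        local_video_directory,
        video['environment_id'],
        video['assignment_id'],
        '{:04d}'.format(ts.year),
        '{:02d}'.format(ts.month),
        '{:02d}'.format(ts.day),
        '{:02d}-{:02d}-{:02d}.{}'.format(ts.hour, ts.minute, ts.second, video_filename_extension),
    )

video = {
    'environment_id': 'env-123',
    'assignment_id': 'assign-456',
    'video_timestamp': datetime.datetime(2021, 3, 1, 10, 32, 50, tzinfo=datetime.timezone.utc),
}
path = local_video_path(video)
# On POSIX systems: ./videos/env-123/assign-456/2021/03/01/10-32-50.mp4
```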
def fetch_image_metadata(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos containing images that match specified search\n parameters and returns video/image metadata.\n\n Image timestamps are rounded to the nearest tenth of a second to synchronize\n with video frames. Videos containing these images must match all specified\n search parameters (i.e., the function performs a logical AND of all of the\n queries). If camera information is not specified, returns results for all\n devices that have one of the specified camera device types ('PI3WITHCAMERA'\n and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms\n will generate an error (e.g., user cannot specify environment name and\n environment ID, camera assignment IDs and camera device IDs, etc.)\n\n Returned metadata is a list of dictionaries, one for each image. Each\n dictionary contains information both about the image and the video that\n contains the image: data_id, video_timestamp, environment_id, assignment_id,\n device_id, bucket, key, and image_timestamp, and frame_number.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of 
HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for images that match search parameters\n "
image_metadata_by_video_timestamp = dict()
for image_timestamp in image_timestamps:
image_timestamp = image_timestamp.astimezone(datetime.timezone.utc)
timestamp_floor = image_timestamp.replace(second=0, microsecond=0)
video_timestamp = (timestamp_floor + (math.floor(((image_timestamp - timestamp_floor) / datetime.timedelta(seconds=10))) * datetime.timedelta(seconds=10)))
frame_number = round(((image_timestamp - video_timestamp) / datetime.timedelta(milliseconds=100)))
if (video_timestamp not in image_metadata_by_video_timestamp.keys()):
image_metadata_by_video_timestamp[video_timestamp] = list()
image_metadata_by_video_timestamp[video_timestamp].append({'image_timestamp': image_timestamp, 'frame_number': frame_number})
video_timestamps = list(image_metadata_by_video_timestamp.keys())
video_metadata = fetch_video_metadata(video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
image_metadata = list()
for video in video_metadata:
for image in image_metadata_by_video_timestamp[video['video_timestamp']]:
image_metadata.append({**video, **image})
return image_metadata | Searches Honeycomb for videos containing images that match specified search
parameters and returns video/image metadata.
Image timestamps are rounded to the nearest tenth of a second to synchronize
with video frames. Videos containing these images must match all specified
search parameters (i.e., the function performs a logical AND of all of the
queries). If camera information is not specified, returns results for all
devices that have one of the specified camera device types ('PI3WITHCAMERA'
and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms
will generate an error (e.g., user cannot specify environment name and
environment ID, camera assignment IDs and camera device IDs, etc.)
Returned metadata is a list of dictionaries, one for each image. Each
dictionary contains information both about the image and the video that
contains the image: data_id, video_timestamp, environment_id, assignment_id,
device_id, bucket, key, and image_timestamp, and frame_number.
Args:
image_timestamps (list of datetime): List of image timestamps to fetch
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
Returns:
(list of dict): Metadata for images that match search parameters | video_io/core.py | fetch_image_metadata | optimuspaul/wf-video-io | 0 | python | def fetch_image_metadata(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos containing images that match specified search\n parameters and returns video/image metadata.\n\n Image timestamps are rounded to the nearest tenth of a second to synchronize\n with video frames. Videos containing these images must match all specified\n search parameters (i.e., the function performs a logical AND of all of the\n queries). If camera information is not specified, returns results for all\n devices that have one of the specified camera device types ('PI3WITHCAMERA'\n and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms\n will generate an error (e.g., user cannot specify environment name and\n environment ID, camera assignment IDs and camera device IDs, etc.)\n\n Returned metadata is a list of dictionaries, one for each image. Each\n dictionary contains information both about the image and the video that\n contains the image: data_id, video_timestamp, environment_id, assignment_id,\n device_id, bucket, key, and image_timestamp, and frame_number.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of 
HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for images that match search parameters\n "
image_metadata_by_video_timestamp = dict()
for image_timestamp in image_timestamps:
image_timestamp = image_timestamp.astimezone(datetime.timezone.utc)
timestamp_floor = image_timestamp.replace(second=0, microsecond=0)
video_timestamp = (timestamp_floor + (math.floor(((image_timestamp - timestamp_floor) / datetime.timedelta(seconds=10))) * datetime.timedelta(seconds=10)))
frame_number = round(((image_timestamp - video_timestamp) / datetime.timedelta(milliseconds=100)))
if (video_timestamp not in image_metadata_by_video_timestamp.keys()):
image_metadata_by_video_timestamp[video_timestamp] = list()
image_metadata_by_video_timestamp[video_timestamp].append({'image_timestamp': image_timestamp, 'frame_number': frame_number})
video_timestamps = list(image_metadata_by_video_timestamp.keys())
video_metadata = fetch_video_metadata(video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
image_metadata = list()
for video in video_metadata:
for image in image_metadata_by_video_timestamp[video['video_timestamp']]:
image_metadata.append({**video, **image})
return image_metadata | def fetch_image_metadata(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos containing images that match specified search\n parameters and returns video/image metadata.\n\n Image timestamps are rounded to the nearest tenth of a second to synchronize\n with video frames. Videos containing these images must match all specified\n search parameters (i.e., the function performs a logical AND of all of the\n queries). If camera information is not specified, returns results for all\n devices that have one of the specified camera device types ('PI3WITHCAMERA'\n and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms\n will generate an error (e.g., user cannot specify environment name and\n environment ID, camera assignment IDs and camera device IDs, etc.)\n\n Returned metadata is a list of dictionaries, one for each image. Each\n dictionary contains information both about the image and the video that\n contains the image: data_id, video_timestamp, environment_id, assignment_id,\n device_id, bucket, key, image_timestamp, and frame_number.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for images that match search parameters\n "
image_metadata_by_video_timestamp = dict()
for image_timestamp in image_timestamps:
image_timestamp = image_timestamp.astimezone(datetime.timezone.utc)
timestamp_floor = image_timestamp.replace(second=0, microsecond=0)
video_timestamp = (timestamp_floor + (math.floor(((image_timestamp - timestamp_floor) / datetime.timedelta(seconds=10))) * datetime.timedelta(seconds=10)))
frame_number = round(((image_timestamp - video_timestamp) / datetime.timedelta(milliseconds=100)))
if (video_timestamp not in image_metadata_by_video_timestamp.keys()):
image_metadata_by_video_timestamp[video_timestamp] = list()
image_metadata_by_video_timestamp[video_timestamp].append({'image_timestamp': image_timestamp, 'frame_number': frame_number})
video_timestamps = list(image_metadata_by_video_timestamp.keys())
video_metadata = fetch_video_metadata(video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
image_metadata = list()
for video in video_metadata:
for image in image_metadata_by_video_timestamp[video['video_timestamp']]:
image_metadata.append({**video, **image})
return image_metadata<|docstring|>Searches Honeycomb for videos containing images that match specified search
parameters and returns video/image metadata.
Image timestamps are rounded to the nearest tenth of a second to synchronize
with video frames. Videos containing these images must match all specified
search parameters (i.e., the function performs a logical AND of all of the
queries). If camera information is not specified, returns results for all
devices that have one of the specified camera device types ('PI3WITHCAMERA'
and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms
will generate an error (e.g., user cannot specify environment name and
environment ID, camera assignment IDs and camera device IDs, etc.)
Returned metadata is a list of dictionaries, one for each image. Each
dictionary contains information both about the image and the video that
contains the image: data_id, video_timestamp, environment_id, assignment_id,
device_id, bucket, key, image_timestamp, and frame_number.
Args:
image_timestamps (list of datetime): List of image timestamps to fetch
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
Returns:
(list of dict): Metadata for images that match search parameters<|endoftext|> |
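The 10-second chunk and 100 ms frame arithmetic that `fetch_image_metadata` applies to each timestamp can be sketched in isolation. The helper name below is illustrative only and is not part of the library:

```python
import datetime
import math

def video_chunk_and_frame(image_timestamp):
    # Normalize to UTC, as fetch_image_metadata does for each timestamp
    image_timestamp = image_timestamp.astimezone(datetime.timezone.utc)
    # Floor to the start of the minute, then down to the 10-second video chunk
    minute_floor = image_timestamp.replace(second=0, microsecond=0)
    chunk_index = math.floor((image_timestamp - minute_floor) / datetime.timedelta(seconds=10))
    video_timestamp = minute_floor + chunk_index * datetime.timedelta(seconds=10)
    # Frame index within the chunk at 10 fps (one frame every 100 ms)
    frame_number = round((image_timestamp - video_timestamp) / datetime.timedelta(milliseconds=100))
    return video_timestamp, frame_number

ts = datetime.datetime(2021, 5, 1, 12, 0, 17, 340000, tzinfo=datetime.timezone.utc)
video_ts, frame = video_chunk_and_frame(ts)
# 12:00:17.34 falls in the video chunk starting at 12:00:10, frame 73
```

This is why images taken within the same 10-second window collapse onto a single `video_timestamp` key in `image_metadata_by_video_timestamp`, with per-image `frame_number` values distinguishing them.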