| column | dtype | min | max |
|---|---|---|---|
| complexity | int64 | 1 | 139 |
| fun_name | stringlengths | 1 | 80 |
| code | stringlengths | 101 | 62.2k |
| commit_id | stringlengths | 40 | 40 |
| ast_errors | stringlengths | 0 | 3.11k |
| ast_levels | int64 | 6 | 36 |
| file_name | stringlengths | 5 | 79 |
| n_ast_nodes | int64 | 17 | 19.2k |
| commit_message | stringlengths | 3 | 15.3k |
| d_id | int64 | 12 | 121k |
| n_ast_errors | int64 | 0 | 9 |
| n_whitespaces | int64 | 4 | 10.8k |
| token_counts | int64 | 5 | 3.06k |
| vocab_size | int64 | 4 | 1.11k |
| id | int64 | 20 | 338k |
| n_words | int64 | 4 | 4.82k |
| repo | stringlengths | 3 | 22 |
| n_identifiers | int64 | 2 | 176 |
| path | stringlengths | 7 | 134 |
| language | stringclasses | 1 value | |
| nloc | int64 | 1 | 413 |
| documentation | dict | | |
| url | stringlengths | 31 | 59 |
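The rows below follow this schema, one record per function. As a minimal sketch (not the dataset's documented usage), assuming the records are published as a Hugging Face dataset, they could be loaded and inspected roughly as follows; the dataset identifier is a hypothetical placeholder.

```python
# Minimal sketch, assuming the rows are available as a Hugging Face dataset.
# "org/code-complexity-corpus" is a hypothetical placeholder identifier,
# not the real dataset name.
from datasets import load_dataset

ds = load_dataset("org/code-complexity-corpus", split="train")

row = ds[0]
# Each record pairs a function's source with its size/AST metrics.
print(row["repo"], row["path"], row["fun_name"])
print("complexity:", row["complexity"], "| nloc:", row["nloc"], "| AST nodes:", row["n_ast_nodes"])
print(row["documentation"]["docstring"])
print(row["code"])
```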
1
refresh_access_token
def refresh_access_token(self) -> Tuple[str, int]:
    token, expires_at = super().refresh_access_token()
    expires_in = pendulum.parse(expires_at) - pendulum.now()
    return token, expires_in.seconds
6b502d8c326bcde791128f5711a21cc03bde19f3
10
source.py
73
🎉 square: added oauth support (#6842) * fixed test which check incorrect cred config * Added oauth2 authentication * Added oauth creds * fixed formatting * added oauth2 spec section, added missing type hints * Added java part of Square OAuth * fixed checkstyle * removed commented code * added support for old format of spec.json files, updated change logs docs * renamed spec property 'authentication' to default 'credentials'. fixed changes in java part * recovered empty files * updated OAuthImplementationFactory.java * fixed issue with autheticator for sub streams, added config catalog with all streams, updated docs * use advanced_auth * added advanced_auth * moved scopes to private property * updated source version * Revert "updated source version" This reverts commit ce3d06165c4bbbe1592e22203d6b6c545deec9a9. * updated source version * added new version for airbyte index Co-authored-by: ievgeniit <[email protected]>
466
0
45
44
15
3,384
17
airbyte
13
airbyte-integrations/connectors/source-square/source_square/source.py
Python
8
{ "docstring": "Handle differences in expiration attr:\n from API: \"expires_at\": \"2021-11-05T14:26:57Z\"\n expected: \"expires_in\": number of seconds\n ", "language": "en", "n_whitespaces": 35, "n_words": 14, "vocab_size": 14 }
https://github.com/airbytehq/airbyte.git
5
start_object
def start_object(self, obj):
    if not hasattr(obj, "_meta"):
        raise base.SerializationError(
            "Non-model object (%s) encountered during serialization" % type(obj)
        )
    self.indent(1)
    attrs = {"model": str(obj._meta)}
    if not self.use_natural_primary_keys or not hasattr(obj, "natural_key"):
        obj_pk = obj.pk
        if obj_pk is not None:
            attrs["pk"] = str(obj_pk)
    self.xml.startElement("object", attrs)
9c19aff7c7561e3a82978a272ecdaad40dda5c00
12
xml_serializer.py
157
Refs #33476 -- Reformatted code with Black.
50,880
0
159
91
34
204,767
43
django
16
django/core/serializers/xml_serializer.py
Python
12
{ "docstring": "\n Called as each object is handled.\n ", "language": "en", "n_whitespaces": 21, "n_words": 6, "vocab_size": 6 }
https://github.com/django/django.git
1
test_syncer_callback_dead_node_log_error
def test_syncer_callback_dead_node_log_error(caplog, ray_start_2_cpus, temp_data_dirs):
    caplog.set_level(logging.ERROR, logger="ray.tune.syncer")
    tmp_source, tmp_target = temp_data_dirs
    syncer_callback = TestSyncerCallback(
        sync_period=0,
        local_logdir_override=tmp_target,
    )
    trial1 = MockTrial(trial_id="a", logdir=tmp_source, on_dead_node=True)
    syncer_callback.on_trial_result(iteration=1, trials=[], trial=trial1, result={})
    assert (
        "An error occurred when trying to get the node ip where this trial is running"
        in caplog.text
    )
fc9f8e458c4dad7a51e0d781917b1a003cb55cd7
10
test_syncer_callback.py
135
[Tune] Catch SyncerCallback failure with dead node (#29438) ### Context This issue was uncovered by this long running test: `long_running_distributed_pytorch_pbt_failure`. This test randomly kills nodes via `FailureInjectorCallback`, and the test failure happens when: 1. A trial result comes in and is processed 2. The node this trial is running on is requested to be killed by the failure injector 3. The driver's syncer callback runs on the on_trial_result event 4. The node dies 5. The driver is in the middle of syncing, trying to access the node ip, which errors ### What's in this PR? 1. Gracefully handle this race condition by catching the error thrown by the sync operation on a dead node 2. Log an error to the user 3. Adds a test for this sync with dead node scenario Signed-off-by: Justin Yu <[email protected]>
30,200
0
100
86
42
134,124
45
ray
25
python/ray/tune/tests/test_syncer_callback.py
Python
13
{ "docstring": "Check that we catch + log errors when trying syncing with a dead remote node.", "language": "en", "n_whitespaces": 14, "n_words": 15, "vocab_size": 15 }
https://github.com/ray-project/ray.git
1
test_array_vs_scalar_strict
def test_array_vs_scalar_strict(self):
    a = np.array([1., 1., 1.])
    b = 1.
    with pytest.raises(AssertionError):
        assert_array_equal(a, b, strict=True)
cafec60a5e28af98fb8798049edd7942720d2d74
10
test_utils.py
67
ENH: Add strict parameter to assert_array_equal. (#21595) Fixes #9542 Co-authored-by: Bas van Beek <[email protected]>
38,755
0
54
45
14
160,836
15
numpy
11
numpy/testing/tests/test_utils.py
Python
5
{ "docstring": "Test comparing an array with a scalar with strict option.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 9 }
https://github.com/numpy/numpy.git
2
__getitem__
def __getitem__(self, key):
    if key != -1:
        raise NotImplementedError("Support only `-1` (for `reset_parameters`).")
    return self.out[key]
b617a87ee40ab384767a27335313c2c65ee094ec
10
subsampling.py
45
Init ppg extractor and ppg2mel (#375) * Init ppg extractor and ppg2mel * add preprocess and training * FIx known issues * Update __init__.py Allow to gen audio * Fix length issue * Fix bug of preparing fid * Fix sample issues * Add UI usage of PPG-vc
38,921
0
47
26
15
161,110
15
MockingBird
5
ppg_extractor/encoder/subsampling.py
Python
4
{ "docstring": "Subsample x.\n\n When reset_parameters() is called, if use_scaled_pos_enc is used,\n return the positioning encoding.\n\n ", "language": "en", "n_whitespaces": 39, "n_words": 14, "vocab_size": 13 }
https://github.com/babysor/MockingBird.git
22
__mul__
def __mul__(self, other):
    if other is Ellipsis:
        other = (0, None)
    elif isinstance(other, tuple) and other[:1] == (Ellipsis,):
        other = ((0, ) + other[1:] + (None,))[:2]

    if isinstance(other, int):
        minElements, optElements = other, 0
    elif isinstance(other, tuple):
        other = tuple(o if o is not Ellipsis else None for o in other)
        other = (other + (None, None))[:2]
        if other[0] is None:
            other = (0, other[1])
        if isinstance(other[0], int) and other[1] is None:
            if other[0] == 0:
                return ZeroOrMore(self)
            if other[0] == 1:
                return OneOrMore(self)
            else:
                return self * other[0] + ZeroOrMore(self)
        elif isinstance(other[0], int) and isinstance(other[1], int):
            minElements, optElements = other
            optElements -= minElements
        else:
            raise TypeError("cannot multiply 'ParserElement' and ('%s', '%s') objects", type(other[0]), type(other[1]))
    else:
        raise TypeError("cannot multiply 'ParserElement' and '%s' objects", type(other))

    if minElements < 0:
        raise ValueError("cannot multiply ParserElement by negative value")
    if optElements < 0:
        raise ValueError("second tuple value must be greater or equal to first tuple value")
    if minElements == optElements == 0:
        raise ValueError("cannot multiply ParserElement by 0 or (0, 0)")

    if optElements:
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
16
pyparsing.py
451
upd; format
13,262
0
544
359
86
63,347
169
transferlearning
15
.venv/lib/python3.8/site-packages/pip/_vendor/pyparsing.py
Python
47
{ "docstring": "\n Implementation of * operator, allows use of ``expr * 3`` in place of\n ``expr + expr + expr``. Expressions may also me multiplied by a 2-integer\n tuple, similar to ``{min, max}`` multipliers in regular expressions. Tuples\n may also include ``None`` as in:\n - ``expr*(n, None)`` or ``expr*(n, )`` is equivalent\n to ``expr*n + ZeroOrMore(expr)``\n (read as \"at least n instances of ``expr``\")\n - ``expr*(None, n)`` is equivalent to ``expr*(0, n)``\n (read as \"0 to n instances of ``expr``\")\n - ``expr*(None, None)`` is equivalent to ``ZeroOrMore(expr)``\n - ``expr*(1, None)`` is equivalent to ``OneOrMore(expr)``\n\n Note that ``expr*(None, n)`` does not raise an exception if\n more than n exprs exist in the input stream; that is,\n ``expr*(None, n)`` does not enforce a maximum number of expr\n occurrences. If this behavior is desired, then write\n ``expr*(None, n) + ~expr``\n ", "language": "en", "n_whitespaces": 280, "n_words": 135, "vocab_size": 84 }
https://github.com/jindongwang/transferlearning.git
2
_calc_impute
def _calc_impute(self, dist_pot_donors, n_neighbors, fit_X_col, mask_fit_X_col):
    # Get donors
    donors_idx = np.argpartition(dist_pot_donors, n_neighbors - 1, axis=1)[
        :, :n_neighbors
    ]

    # Get weight matrix from distance matrix
    donors_dist = dist_pot_donors[
        np.arange(donors_idx.shape[0])[:, None], donors_idx
    ]

    weight_matrix = _get_weights(donors_dist, self.weights)

    # fill nans with zeros
    if weight_matrix is not None:
        weight_matrix[np.isnan(weight_matrix)] = 0.0

    # Retrieve donor values and calculate kNN average
    donors = fit_X_col.take(donors_idx)
    donors_mask = mask_fit_X_col.take(donors_idx)
    donors = np.ma.array(donors, mask=donors_mask)

    return np.ma.average(donors, axis=1, weights=weight_matrix).data
2e3abc2e32eefbfea78f15bcc767ca9bb4911568
12
_knn.py
205
MAINT Fix some typos (#23251)
75,917
0
210
137
56
259,779
72
scikit-learn
25
sklearn/impute/_knn.py
Python
14
{ "docstring": "Helper function to impute a single column.\n\n Parameters\n ----------\n dist_pot_donors : ndarray of shape (n_receivers, n_potential_donors)\n Distance matrix between the receivers and potential donors from\n training set. There must be at least one non-nan distance between\n a receiver and a potential donor.\n\n n_neighbors : int\n Number of neighbors to consider.\n\n fit_X_col : ndarray of shape (n_potential_donors,)\n Column of potential donors from training set.\n\n mask_fit_X_col : ndarray of shape (n_potential_donors,)\n Missing mask for fit_X_col.\n\n Returns\n -------\n imputed_values: ndarray of shape (n_receivers,)\n Imputed values for receiver.\n ", "language": "en", "n_whitespaces": 231, "n_words": 84, "vocab_size": 57 }
https://github.com/scikit-learn/scikit-learn.git
7
add_row
def add_row(self, *args):
    NumRows = len(self.Rows)  # number of existing rows is our row number
    CurrentRowNumber = NumRows  # this row's number
    CurrentRow = []  # start with a blank row and build up
    # -------------------------  Add the elements to a row  ------------------------- #
    for i, element in enumerate(args):  # Loop through list of elements and add them to the row
        if type(element) == list:
            popup_error_with_traceback('Error creating Tab layout',
                                       'Layout has a LIST instead of an ELEMENT',
                                       'This means you have a badly placed ]',
                                       'The offensive list is:',
                                       element,
                                       'This list will be stripped from your layout')
            continue
        elif callable(element) and not isinstance(element, Element):
            popup_error_with_traceback('Error creating Tab layout',
                                       'Layout has a FUNCTION instead of an ELEMENT',
                                       'This likely means you are missing () from your layout',
                                       'The offensive list is:',
                                       element,
                                       'This item will be stripped from your layout')
            continue
        if element.ParentContainer is not None:
            warnings.warn(
                '*** YOU ARE ATTEMPTING TO RESUSE AN ELEMENT IN YOUR LAYOUT! Once placed in a layout, an element cannot be used in another layout. ***',
                UserWarning)
            popup_error_with_traceback('Error creating Tab layout',
                                       'The layout specified has already been used',
                                       'You MUST start witha "clean", unused layout every time you create a window',
                                       'The offensive Element = ', element,
                                       'and has a key = ', element.Key,
                                       'This item will be stripped from your layout',
                                       'Hint - try printing your layout and matching the IDs "print(layout)"')
            continue
        element.Position = (CurrentRowNumber, i)
        element.ParentContainer = self
        CurrentRow.append(element)
        if element.Key is not None:
            self.UseDictionary = True
    # -------------------------  Append the row to list of Rows  ------------------------- #
    self.Rows.append(CurrentRow)
85d664925ad7042896b76b32b518d778aae024e1
13
PySimpleGUI.py
288
Changed all Tab errors to the nicer traceback error popup. Removed Output Element from the Pack function (that makes the change as real as it gets)
53,532
0
999
166
143
212,951
258
PySimpleGUI
25
PySimpleGUI.py
Python
40
{ "docstring": "\n Not recommended use call. Used to add rows of Elements to the Frame Element.\n\n :param *args: The list of elements for this row\n :type *args: List[Element]\n ", "language": "en", "n_whitespaces": 57, "n_words": 26, "vocab_size": 23 }
https://github.com/PySimpleGUI/PySimpleGUI.git
11
build_system_components
def build_system_components(device_type, os_id, navigator_id):
    if os_id == "win":
        platform_version = randomizer.choice(OS_PLATFORM["win"])
        cpu = randomizer.choice(OS_CPU["win"])
        if cpu:
            platform = f"{platform_version}; {cpu}"
        else:
            platform = platform_version
        res = {
            "platform_version": platform_version,
            "platform": platform,
            "ua_platform": platform,
            "oscpu": platform,
        }
    elif os_id == "linux":
        cpu = randomizer.choice(OS_CPU["linux"])
        platform_version = randomizer.choice(OS_PLATFORM["linux"])
        platform = f"{platform_version} {cpu}"
        res = {
            "platform_version": platform_version,
            "platform": platform,
            "ua_platform": platform,
            "oscpu": "Linux %s" % cpu,
        }
    elif os_id == "mac":
        cpu = randomizer.choice(OS_CPU["mac"])
        platform_version = randomizer.choice(OS_PLATFORM["mac"])
        platform = platform_version
        if navigator_id == "chrome":
            platform = fix_chrome_mac_platform(platform)
        res = {
            "platform_version": platform_version,
            "platform": "MacIntel",
            "ua_platform": platform,
            "oscpu": "Intel Mac OS X %s" % platform.split(" ")[-1],
        }
    elif os_id == "android":
        assert navigator_id in ("firefox", "chrome")
        assert device_type in ("smartphone", "tablet")
        platform_version = randomizer.choice(OS_PLATFORM["android"])
        if navigator_id == "firefox":
            if device_type == "smartphone":
                ua_platform = "%s; Mobile" % platform_version
            elif device_type == "tablet":
                ua_platform = "%s; Tablet" % platform_version
        elif navigator_id == "chrome":
            device_id = randomizer.choice(SMARTPHONE_DEV_IDS)
            ua_platform = f"Linux; {platform_version}; {device_id}"
        oscpu = "Linux %s" % randomizer.choice(OS_CPU["android"])
        res = {
            "platform_version": platform_version,
            "ua_platform": ua_platform,
            "platform": oscpu,
            "oscpu": oscpu,
        }
    return res
ab4de1dd70fba866930150e440a03e461a6ca6a8
16
base.py
572
Create a packaged app bundle with Pyinstaller (#1525) * Add dashboard widget assets * Add ipywidgets and ipyflex to project * Add currencies dashboard notebook * Update docs and docstrings * Add pyinstaller to project deps * Add pyinstaller artifacts to gitignore * Fix linter errors in terminal.py * Update cspell hook and action with a pyinstaller specific word * Add pyinstaller specfile and artifacts * Add splashscreen image * Add app icon * adding splash screen support to terminal.spec and terminal.py * Restore the conda env build files * Sync deps * Add border to the splashscreen image * Clean up terminal launcher * Add support for default feature flags in packages apps * Fix types and linting * Add splashscreen management to app bootup * Check prediction feature flag when entering crypto/pred * Update pyinstaller spec file * fix .spec file to work for splash and icon - removed the ".." * Allows to export when using installer (#1568) * fix export for packaged apps * fix filename * Git : replace commit_hash when it is set in config_terminal * Add update of the git commit hash in gtff default during build * Add packaged app name and feature flag to logs * Add platform specific icon assignment * Add macOS build assets * Add tensorflow to hidden imports * Move LOGGING_COMMIT_HASH to gtff * Adding files/folders needed to .spec and pyinstaller folder. This will make certain commands work again. * Linting * Workflow : ignore ./build/pyinstaller from codespell * Workflow : exclude ./build/pyinstaller from flake8 * Poetry + Workflow : add types-six * Pyinstaller : remove property_cached, user_agent and vaderSentiment * Revert "Pyinstaller : remove property_cached, user_agent and vaderSentiment" This reverts commit dbb3e2b81086f97819ebd21457148c7160a4d703. * Clean up local paths in specfile * Validate deps have correct Jinja version (they do) * Fix logging commit hash to be set correctly for the logger to see it Co-authored-by: Andrew <[email protected]> Co-authored-by: didierlopes.eth <[email protected]> Co-authored-by: Chavithra PARANA <[email protected]>
84,466
0
653
303
75
283,202
177
OpenBBTerminal
18
build/pyinstaller/user_agent/base.py
Python
56
{ "docstring": "\n For given os_id build random platform and oscpu\n components\n\n Returns dict {platform_version, platform, ua_platform, oscpu}\n\n platform_version is OS name used in different places\n ua_platform goes to navigator.platform\n platform is used in building navigator.userAgent\n oscpu goes to navigator.oscpu\n ", "language": "en", "n_whitespaces": 62, "n_words": 37, "vocab_size": 30 }
https://github.com/OpenBB-finance/OpenBBTerminal.git
3
update_pea_args
def update_pea_args(self):
    if isinstance(self.args, Dict):
        # This is used when a Pod is created in a remote context, where peas & their connections are already given.
        self.peas_args = self.args
    else:
        self.peas_args = self._parse_args(self.args)

    if self.is_sandbox:
        host, port = HubIO.deploy_public_sandbox(getattr(self.args, 'uses', ''))
        self.first_pea_args.host = host
        self.first_pea_args.port_in = port
        self.peas_args['head'].host = host
        self.peas_args['head'].port_in = port
eea04c36350e86b3b0f16217cd37e630bfb81b57
13
__init__.py
159
feat: support jinahub+sandbox (#4130)
1,886
0
169
95
40
10,637
53
jina
15
jina/peapods/pods/__init__.py
Python
11
{ "docstring": " Update args of all its peas based on Pod args. Including head/tail", "language": "en", "n_whitespaces": 12, "n_words": 12, "vocab_size": 12 }
https://github.com/jina-ai/jina.git
5
voice_conversion
def voice_conversion(self, y, y_lengths, speaker_cond_src, speaker_cond_tgt):
    assert self.num_speakers > 0, "num_speakers have to be larger than 0."

    # speaker embedding
    if self.args.use_speaker_embedding and not self.args.use_d_vector_file:
        g_src = self.emb_g(speaker_cond_src).unsqueeze(-1)
        g_tgt = self.emb_g(speaker_cond_tgt).unsqueeze(-1)
    elif not self.args.use_speaker_embedding and self.args.use_d_vector_file:
        g_src = F.normalize(speaker_cond_src).unsqueeze(-1)
        g_tgt = F.normalize(speaker_cond_tgt).unsqueeze(-1)
    else:
        raise RuntimeError(" [!] Voice conversion is only supported on multi-speaker models.")

    z, _, _, y_mask = self.posterior_encoder(y.transpose(1, 2), y_lengths, g=g_src)
    z_p = self.flow(z, y_mask, g=g_src)
    z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
    o_hat = self.waveform_decoder(z_hat * y_mask, g=g_tgt)
    return o_hat, y_mask, (z, z_p, z_hat)
dbe9da7f15544b83043f481a99e5bcb23e002dc9
13
vits.py
303
Add Voice conversion inference support (#1337) * Add support for voice conversion inference * Cache d_vectors_by_speaker for fast inference using a bigger speakers.json * Rebase bug fix * Use the average d-vector for inference
77,189
0
218
198
67
262,333
86
TTS
29
TTS/tts/models/vits.py
Python
15
{ "docstring": "Forward pass for voice conversion\n\n TODO: create an end-point for voice conversion\n\n Args:\n y (Tensor): Reference spectrograms. Tensor of shape [B, T, C]\n y_lengths (Tensor): Length of each reference spectrogram. Tensor of shape [B]\n speaker_cond_src (Tensor): Reference speaker ID. Tensor of shape [B,]\n speaker_cond_tgt (Tensor): Target speaker ID. Tensor of shape [B,]\n ", "language": "en", "n_whitespaces": 117, "n_words": 52, "vocab_size": 32 }
https://github.com/coqui-ai/TTS.git
6
upgrade
def upgrade():
    conn = op.get_bind()
    if conn.dialect.name == 'sqlite':
        op.execute('PRAGMA foreign_keys=OFF')
        with op.batch_alter_table('ab_view_menu', schema=None) as batch_op:
            batch_op.create_unique_constraint(batch_op.f('ab_view_menu_name_uq'), ['name'])
        op.execute('PRAGMA foreign_keys=ON')
    elif conn.dialect.name == 'mysql':
        with op.batch_alter_table('ab_register_user', schema=None) as batch_op:
            batch_op.alter_column('username', existing_type=sa.String(256), nullable=False)
            batch_op.alter_column('email', existing_type=sa.String(256), nullable=False)
        with op.batch_alter_table('ab_user', schema=None) as batch_op:
            batch_op.alter_column('username', existing_type=sa.String(256), nullable=False)
            batch_op.alter_column('email', existing_type=sa.String(256), nullable=False)
    elif conn.dialect.name == 'mssql':
        with op.batch_alter_table('ab_register_user') as batch_op:
            # Drop the unique constraint on username and email
            constraints = get_mssql_table_constraints(conn, 'ab_register_user')
            for k, _ in constraints.get('UNIQUE').items():
                batch_op.drop_constraint(k, type_='unique')
            batch_op.alter_column('username', existing_type=sa.String(256), nullable=False)
            batch_op.create_unique_constraint(None, ['username'])
            batch_op.alter_column('email', existing_type=sa.String(256), nullable=False)
        with op.batch_alter_table('ab_user') as batch_op:
            # Drop the unique constraint on username and email
            constraints = get_mssql_table_constraints(conn, 'ab_user')
            for k, _ in constraints.get('UNIQUE').items():
                batch_op.drop_constraint(k, type_='unique')
            batch_op.alter_column('username', existing_type=sa.String(256), nullable=False)
            batch_op.create_unique_constraint(None, ['username'])
            batch_op.alter_column('email', existing_type=sa.String(256), nullable=False)
            batch_op.create_unique_constraint(None, ['email'])
2f5a567977e1219cab16c2548825a1b9eba07ab3
16
0106_909884dea523_update_migration_for_fab_tables_to_add_missing_constraints.py
652
Use Airflow.Base.metadata in FAB models (#22353) Since FAB models are now in airflow, it makes sense to monitor changes in them. Therefore we use Airflow.models.base.Base.metadata for FAB models
8,919
0
408
378
53
46,541
116
airflow
25
airflow/migrations/versions/0106_909884dea523_update_migration_for_fab_tables_to_add_missing_constraints.py
Python
30
{ "docstring": "Apply Update migration for FAB tables to add missing constraints", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
https://github.com/apache/airflow.git
6
_iter_all_mapped_downstreams
def _iter_all_mapped_downstreams(self) -> Iterator["MappedOperator"]:
    from airflow.models.mappedoperator import MappedOperator
    from airflow.utils.task_group import TaskGroup
197cff3194e855b9207c3c0da8ae093a0d5dda55
6
taskmixin.py
44
Ensure TaskMap only checks "relevant" dependencies (#23053) When looking for "mapped dependants" of a task, we only want a task if it not only is a direct downstream of the task, but also it actually "uses" the task's pushed XCom for task mapping. So we need to peek into the mapped downstream task's expansion kwargs, and only count it as a mapped dependant if the upstream is referenced there.
9,247
0
33
82
10
47,758
12
airflow
10
airflow/models/taskmixin.py
Python
28
{ "docstring": "Return mapped nodes that are direct dependencies of the current task.\n\n For now, this walks the entire DAG to find mapped nodes that has this\n current task as an upstream. We cannot use ``downstream_list`` since it\n only contains operators, not task groups. In the future, we should\n provide a way to record an DAG node's all downstream nodes instead.\n\n Note that this does not guarantee the returned tasks actually use the\n current task for task mapping, but only checks those task are mapped\n operators, and are downstreams of the current task.\n\n To get a list of tasks that uses the current task for task mapping, use\n :meth:`iter_mapped_dependants` instead.\n ", "language": "en", "n_whitespaces": 178, "n_words": 108, "vocab_size": 65 }
https://github.com/apache/airflow.git
34
_rescale_dataset_split_sizes
def _rescale_dataset_split_sizes(left_size,right_size,total_length):
    left_size_type = type(left_size)
    right_size_type = type(right_size)

    if ((left_size is not None and left_size_type not in [int,float]) and
        (right_size is not None and right_size_type not in [int,float])):
        raise TypeError('Invalid `left_size` and `right_size` Types. '
                        'Expected: integer or float or None. '
                        f' Received: {left_size_type} and {right_size_type}')

    if left_size is not None and left_size_type not in [int,float]:
        raise TypeError(f'Invalid `left_size` Type. Received: {left_size_type}. '
                        ' Expected: int or float or None')

    if right_size is not None and right_size_type not in [int,float]:
        raise TypeError(f'Invalid `right_size` Type. Received: {right_size_type}.'
                        ' Expected: int or float or None')

    if left_size == 0 and right_size == 0:
        raise ValueError('Invalid `left_size` and `right_size` values. '
                         'You must specify either `left_size` or `right_size` with '
                         f'value greater than 0 and less than {total_length} '
                         'or a float within range [0,1] to split the dataset'
                         f'Received: `left_size`={left_size}, '
                         f'`right_size`={right_size}')

    if (left_size_type == int and (left_size <= 0 or left_size>= total_length)
        or left_size_type == float and (left_size <= 0 or left_size>= 1) ):
        raise ValueError('`left_size` should be either a positive integer '
                         f'and smaller than {total_length} or a float '
                         'within the range `[0, 1]`. Received: left_size='
                         f'{left_size}')

    if (right_size_type == int and (right_size <= 0 or right_size>= total_length)
        or right_size_type == float and (right_size <= 0 or right_size>= 1)):
        raise ValueError('`right_size` should be either a positive integer '
                         f'and smaller than {total_length} or '
                         'a float within the range `[0, 1]`. Received: right_size='
                         f'{right_size}')

    if right_size_type == left_size_type == float and right_size + left_size > 1:
        raise ValueError('sum of `left_size` and `right_size`'
                         ' should be within `[0,1]`.'
                         f'Received: {right_size + left_size} ,'
                         'reduce the `left_size` or `right_size`')

    if left_size_type == float:
        left_size = round(left_size*total_length)
    elif left_size_type == int:
        left_size = float(left_size)

    if right_size_type == float:
        right_size = round(right_size*total_length)
    elif right_size_type == int:
        right_size = float(right_size)

    if left_size is None:
        left_size = total_length - right_size
    elif right_size is None:
        right_size = total_length - left_size

    if left_size + right_size > total_length:
        raise ValueError('The sum of `left_size` and `right_size`'
                         f' should be smaller than the samples {total_length} '
                         ' reduce `left_size` or `right_size` ')

    for split,side in [(left_size,'left'),(right_size,'right')]:
        if split == 0:
            raise ValueError(f'with dataset of length={total_length} '
                             '`left_size`={left_size} and `right_size`={right_size}, '
                             f'resulting {side} dataset split will be empty. '
                             'Adjust any of the aforementioned parameters')

    left_size,right_size = int(left_size) ,int(right_size)
    return left_size,right_size
56b75d030cd9c3e3fde0ddf1f908436e6a5be3d6
14
dataset_utils.py
671
adds test case for fractional size with tuple of numpy arrays in different shape
79,956
0
951
369
142
269,225
382
keras
14
keras/utils/dataset_utils.py
Python
66
{ "docstring": "Helper function to rescale left_size/right_size args relative\n to dataset's size\n ", "language": "en", "n_whitespaces": 13, "n_words": 10, "vocab_size": 9 }
https://github.com/keras-team/keras.git
1
best_checkpoint_path
def best_checkpoint_path(self) -> Optional[Path]:
    return self.checkpoint_manager.best_checkpoint_path
dc7ed086a5038775e378b32cb31fb4a79f418dd9
7
trainer.py
29
[AIR] More checkpoint configurability, `Result` extension (#25943) This PR: * Allows the user to set `keep_checkpoints_num` and `checkpoint_score_attr` in `RunConfig` using the `CheckpointStrategy` dataclass * Adds two new fields to the `Result` object - `best_checkpoints` - a list of saved best checkpoints as determined by `CheckpointingConfig`.
32,968
0
20
17
6
143,364
6
ray
5
python/ray/train/trainer.py
Python
11
{ "docstring": "Path to the best persisted checkpoint from the latest run.\n\n \"Best\" is defined by the input ``CheckpointConfig``.\n Default behavior is to return the most recent checkpoint.\n\n Returns ``None`` if ``run()`` has not been called or if\n ``train.save_checkpoint()`` has not been called from ``train_func``\n within the most recent call to ``run``.\n ", "language": "en", "n_whitespaces": 92, "n_words": 50, "vocab_size": 35 }
https://github.com/ray-project/ray.git
4
clone
def clone(self, **kw):
    newpolicy = self.__class__.__new__(self.__class__)
    for attr, value in self.__dict__.items():
        object.__setattr__(newpolicy, attr, value)
    for attr, value in kw.items():
        if not hasattr(self, attr):
            raise TypeError(
                "{!r} is an invalid keyword argument for {}".format(
                    attr, self.__class__.__name__))
        object.__setattr__(newpolicy, attr, value)
    return newpolicy
8198943edd73a363c266633e1aa5b2a9e9c9f526
15
_policybase.py
144
add python 3.10.4 for windows
57,018
0
165
92
29
223,628
40
XX-Net
16
python3.10.4/Lib/email/_policybase.py
Python
11
{ "docstring": "Return a new instance with specified attributes changed.\n\n The new instance has the same attribute values as the current object,\n except for the changes passed in as keyword arguments.\n\n ", "language": "en", "n_whitespaces": 50, "n_words": 29, "vocab_size": 24 }
https://github.com/XX-net/XX-Net.git
2
call_deploy
def call_deploy(cls, fname, col_partitions, **kwargs):
    return np.array(
        [
            cls.deploy(
                cls.parse,
                num_returns=NPartitions.get() + 2,
                fname=fname,
                columns=cols,
                num_splits=NPartitions.get(),
                **kwargs,
            )
            for cols in col_partitions
        ]
    ).T
97769988a6f19e4b76f34238c97bf159ee7626a5
15
column_store_dispatcher.py
96
REFACTOR-#3853: interacting with Dask interface through 'DaskWrapper' class (#3854) Co-authored-by: Devin Petersohn <[email protected]> Co-authored-by: Dmitry Chigarev <[email protected]> Co-authored-by: Yaroslav Igoshev <[email protected]> Signed-off-by: Anatoly Myachev <[email protected]>
35,432
0
226
65
24
153,543
24
modin
16
modin/core/io/column_stores/column_store_dispatcher.py
Python
14
{ "docstring": "\n Deploy remote tasks to the workers with passed parameters.\n\n Parameters\n ----------\n fname : str, path object or file-like object\n Name of the file to read.\n col_partitions : list\n List of arrays with columns names that should be read\n by each partition.\n **kwargs : dict\n Parameters of deploying read_* function.\n\n Returns\n -------\n np.ndarray\n Array with references to the task deploy result for each partition.\n ", "language": "en", "n_whitespaces": 189, "n_words": 63, "vocab_size": 49 }
https://github.com/modin-project/modin.git
1
test_nowrapfunc
def test_nowrapfunc(capfd, hello_world_f90, monkeypatch):
    ipath = Path(hello_world_f90)
    mname = "blah"
    monkeypatch.setattr(sys, "argv", f'f2py -m {mname} {ipath} --no-wrap-functions'.split())

    with util.switchdir(ipath.parent):
        f2pycli()
        out, _ = capfd.readouterr()
        assert r"Fortran 77 wrappers are saved to" not in out
729ad4f92420231e2a7009b3223c6c7620b8b808
11
test_f2py2e.py
115
TST: Initialize f2py2e tests of the F2PY CLI (#20668) Increases F2PY coverage by around 15 percent. For the CLI itself it covers the major features (around 70 percent), with the exception of mostly numpy.distutils stuff. More importantly, sets the groundwork for #20056, in that passing the same testsuite should indicate feature parity.
38,508
0
93
62
32
160,136
34
numpy
17
numpy/f2py/tests/test_f2py2e.py
Python
9
{ "docstring": "Ensures that fortran subroutine wrappers for F77 can be disabled\n\n CLI :: --no-wrap-functions\n ", "language": "en", "n_whitespaces": 19, "n_words": 13, "vocab_size": 13 }
https://github.com/numpy/numpy.git
3
get_queryset
def get_queryset(self, request):
    queryset = SavedFilter.objects.all()
    user = request.user
    if user.is_superuser:
        return queryset
    if user.is_anonymous:
        return queryset.filter(shared=True)
    return queryset.filter(
        Q(shared=True) | Q(user=user)
    )
484efdaf75f267a43f9321b938fda1bc967b9e53
11
views.py
101
Closes #9623: Implement saved filters (#10801) * Initial work on saved filters * Return only enabled/shared filters * Add tests * Clean up filtering of usable SavedFilters
78,254
0
105
62
18
265,983
23
netbox
13
netbox/extras/views.py
Python
10
{ "docstring": "\n Return only shared SavedFilters, or those owned by the current user, unless\n this is a superuser.\n ", "language": "en", "n_whitespaces": 38, "n_words": 16, "vocab_size": 16 }
https://github.com/netbox-community/netbox.git
1
to_bag
def to_bag(self, index=False, format="tuple"):
    from .io import to_bag

    return to_bag(self, index, format)
8a6f6a7b95762df4e44bc4d82ce33a7c388a0676
7
core.py
47
Move Bag.map_partitions to Blockwise (#8646) 1. Adds `format="frame"` option to `dataframe.io.to_bag` (effectively returning a zero-copy view of the same dask graph, that no-longer tracks meta/divisions) 2. Revises `Bag.map_partitions` to use `blockwise` (and to support the `token=` option) 3. Modifies the ACA code path to use a `Bag.map_partitions` for any blockwise operations where partitions may loose "dataframe-like" properties. This represents an alternative to using `map_partitions` incorrectly in ACA. It is also an alternative to the low-level `blockwise` API.
36,500
0
33
28
11
155,995
12
dask
5
dask/dataframe/core.py
Python
3
{ "docstring": "Convert to a dask Bag of tuples of each row.\n\n Parameters\n ----------\n index : bool, optional\n If True, the index is included as the first element of each tuple.\n Default is False.\n format : {\"tuple\", \"dict\", \"frame\"}, optional\n Whether to return a bag of tuples, dictionaries, or\n dataframe-like objects. Default is \"tuple\". If \"frame\",\n the original partitions of ``df`` will not be transformed\n in any way.\n ", "language": "en", "n_whitespaces": 167, "n_words": 66, "vocab_size": 50 }
https://github.com/dask/dask.git
2
accuracy
def accuracy(self, Y_hat, Y, averaged=True):
    Y_hat = d2l.reshape(Y_hat, (-1, Y_hat.shape[-1]))
    preds = d2l.astype(d2l.argmax(Y_hat, axis=1), Y.dtype)
    compare = d2l.astype(preds == d2l.reshape(Y, -1), d2l.float32)
    return d2l.reduce_mean(compare) if averaged else compare
19aba1f059efad45e1466d47954b2cf54d45b106
12
mxnet.py
133
simplify d2l lib
74,149
0
63
89
25
253,602
28
d2l-en
16
d2l/mxnet.py
Python
5
{ "docstring": "Compute the number of correct predictions.\n\n Defined in :numref:`sec_classification`", "language": "en", "n_whitespaces": 15, "n_words": 9, "vocab_size": 9 }
https://github.com/d2l-ai/d2l-en.git
1
_reset_replica_iterator
def _reset_replica_iterator(self):
    replicas = list(self.in_flight_queries.keys())
    random.shuffle(replicas)
    self.replica_iterator = itertools.cycle(replicas)
545c51609f0f55b41cf99cec95a9c21bee6846de
11
router.py
59
[Serve] ServeHandle detects ActorError and drop replicas from target group (#26685)
28,152
0
37
34
8
126,370
9
ray
11
python/ray/serve/_private/router.py
Python
4
{ "docstring": "Reset the iterator used to load balance replicas.\n\n This call is expected to be called after the replica membership has\n been updated. It will shuffle the replicas randomly to avoid multiple\n handle sending requests in the same order.\n ", "language": "en", "n_whitespaces": 66, "n_words": 38, "vocab_size": 33 }
https://github.com/ray-project/ray.git
3
test_connection
def test_connection(self):
    status, message = False, ''
    try:
        if self.get_first("select 1 from dual"):
            status = True
            message = 'Connection successfully tested'
    except Exception as e:
        status = False
        message = str(e)
    return status, message
900bad1c67654252196bb095a2a150a23ae5fc9a
11
oracle.py
87
Fix oracle test connection (#21699)
8,538
0
132
47
25
45,285
34
airflow
8
airflow/providers/oracle/hooks/oracle.py
Python
10
{ "docstring": "Tests the connection by executing a select 1 from dual query", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
https://github.com/apache/airflow.git
3
task_instance_link
def task_instance_link(attr):
    dag_id = attr.get('dag_id')
    task_id = attr.get('task_id')
    execution_date = attr.get('dag_run.execution_date') or attr.get('execution_date') or timezone.utcnow()
    url = url_for(
        'Airflow.task',
        dag_id=dag_id,
        task_id=task_id,
        execution_date=execution_date.isoformat(),
        map_index=attr.get('map_index', -1),
    )
    url_root = url_for(
        'Airflow.graph', dag_id=dag_id, root=task_id, execution_date=execution_date.isoformat()
    )
    return Markup(
    ).format(url=url, task_id=task_id, url_root=url_root)
3182ba2f402656bfc7d7777f1678161ec5a9cf79
12
utils.py
200
Add map_index support to all task instance-related views (#22272) Co-authored-by: Ash Berlin-Taylor <[email protected]> Co-authored-by: Brent Bovenzi <[email protected]>
8,847
0
118
120
29
46,296
38
airflow
16
airflow/www/utils.py
Python
25
{ "docstring": "Generates a URL to the Graph view for a TaskInstance.\n <span style=\"white-space: nowrap;\">\n <a href=\"{url}\">{task_id}</a>\n <a href=\"{url_root}\" title=\"Filter on this task and upstream\">\n <span class=\"material-icons\" style=\"margin-left:0;\"\n aria-hidden=\"true\">filter_alt</span>\n </a>\n </span>\n ", "language": "en", "n_whitespaces": 89, "n_words": 29, "vocab_size": 26 }
https://github.com/apache/airflow.git
1
downgrade
def downgrade():
    op.create_table("__airflow_tmp_xcom", *_get_old_xcom_columns())

    xcom = Table("xcom", metadata, *_get_new_xcom_columns())
    query = select(
        [
            xcom.c.key,
            xcom.c.value,
            xcom.c.timestamp,
            xcom.c.task_id,
            xcom.c.dag_id,
            dagrun.c.execution_date,
        ],
    ).select_from(
        xcom.join(
            dagrun,
            xcom.c.dag_id == dagrun.c.dag_id,
            xcom.c.run_id == dagrun.c.run_id,
        ),
    )
    op.execute(f"INSERT INTO __airflow_tmp_xcom {query.selectable.compile(op.get_bind())}")

    op.drop_table("xcom")
    op.rename_table("__airflow_tmp_xcom", "xcom")
    op.create_primary_key("xcom_pkey", "xcom", ["dag_id", "task_id", "execution_date", "key"])
0ebd6428e6b484790bfbbe1b8687ef4e6cae10e9
13
c306b5b5ae4a_switch_xcom_table_to_use_run_id.py
264
Switch XCom implementation to use run_id (#20975)
8,429
0
201
148
42
44,969
44
airflow
28
airflow/migrations/versions/c306b5b5ae4a_switch_xcom_table_to_use_run_id.py
Python
23
{ "docstring": "Switch XCom table back to use execution_date.\n\n Basically an inverse operation.\n ", "language": "en", "n_whitespaces": 17, "n_words": 11, "vocab_size": 11 }
https://github.com/apache/airflow.git
6
remap_palette
def remap_palette(self, dest_map, source_palette=None):
    from . import ImagePalette

    if self.mode not in ("L", "P"):
        raise ValueError("illegal image mode")

    if source_palette is None:
        if self.mode == "P":
            self.load()
            source_palette = self.im.getpalette("RGB")[:768]
        else:  # L-mode
            source_palette = bytearray(i // 3 for i in range(768))

    palette_bytes = b""
    new_positions = [0] * 256

    # pick only the used colors from the palette
    for i, oldPosition in enumerate(dest_map):
        palette_bytes += source_palette[oldPosition * 3 : oldPosition * 3 + 3]
        new_positions[oldPosition] = i

    # replace the palette color id of all pixel with the new id

    # Palette images are [0..255], mapped through a 1 or 3
    # byte/color map.  We need to remap the whole image
    # from palette 1 to palette 2. New_positions is
    # an array of indexes into palette 1.  Palette 2 is
    # palette 1 with any holes removed.

    # We're going to leverage the convert mechanism to use the
    # C code to remap the image from palette 1 to palette 2,
    # by forcing the source image into 'L' mode and adding a
    # mapping 'L' mode palette, then converting back to 'L'
    # sans palette thus converting the image bytes, then
    # assigning the optimized RGB palette.

    # perf reference, 9500x4000 gif, w/~135 colors
    # 14 sec prepatch, 1 sec postpatch with optimization forced.

    mapping_palette = bytearray(new_positions)

    m_im = self.copy()
    m_im.mode = "P"

    m_im.palette = ImagePalette.ImagePalette("RGB", palette=mapping_palette * 3)
    # possibly set palette dirty, then
    # m_im.putpalette(mapping_palette, 'L')  # converts to 'P'
    # or just force it.
    # UNDONE -- this is part of the general issue with palettes
    m_im.im.putpalette("RGB;L", m_im.palette.tobytes())

    m_im = m_im.convert("L")

    # Internally, we require 768 bytes for a palette.
    new_palette_bytes = palette_bytes + (768 - len(palette_bytes)) * b"\x00"
    m_im.putpalette(new_palette_bytes)
    m_im.palette = ImagePalette.ImagePalette("RGB", palette=palette_bytes)

    if "transparency" in self.info:
        m_im.info["transparency"] = new_positions[self.info["transparency"]]

    return m_im
46a80d144a16836af304a7aaa8e620962d91ac23
16
Image.py
425
Update transparency when remapping the palette
69,947
0
680
231
177
242,978
299
Pillow
27
src/PIL/Image.py
Python
27
{ "docstring": "\n Rewrites the image to reorder the palette.\n\n :param dest_map: A list of indexes into the original palette.\n e.g. ``[1,0]`` would swap a two item palette, and ``list(range(256))``\n is the identity transform.\n :param source_palette: Bytes or None.\n :returns: An :py:class:`~PIL.Image.Image` object.\n\n ", "language": "en", "n_whitespaces": 97, "n_words": 40, "vocab_size": 35 }
https://github.com/python-pillow/Pillow.git
1
_update_tasks_counters_and_is_labeled
def _update_tasks_counters_and_is_labeled(self, queryset, from_scratch=True):
    queryset = make_queryset_from_iterable(queryset)
    objs = self._update_tasks_counters(queryset, from_scratch)
    bulk_update_stats_project_tasks(queryset, self)
    return objs
aa36f4f70f7a6290f74059a6e13fd89dfd3e6ef8
8
models.py
57
fix: DEV-3798: Improve performance for _rearrange_overlap_cohort (#3271) * fix: DEV-3798: Improve performance for _rearrange_overlap_cohort
42,640
0
50
36
13
178,264
15
label-studio
8
label_studio/projects/models.py
Python
5
{ "docstring": "\n Update tasks counters and is_labeled in a single operation\n :param queryset: Tasks to update queryset\n :param from_scratch: Skip calculated tasks\n :return: Count of updated tasks\n ", "language": "en", "n_whitespaces": 61, "n_words": 25, "vocab_size": 22 }
https://github.com/heartexlabs/label-studio.git
3
async_enable
async def async_enable(self) -> None:
    if self._is_enabled:
        return
    self._is_enabled = True

    # HomeAssistant is starting up
    if self.hass.state != CoreState.not_running:
        self._async_detach_triggers = await self._async_attach_triggers(False)
        self.async_write_ha_state()
        return
5e338d21665cb04f66fcebd9376cdda389c30c01
11
__init__.py
82
Improve type hints in automation (#78368) * Improve type hints in automation * Apply suggestion * Apply suggestion * Apply suggestion * Add Protocol for IfAction * Use ConfigType for IfAction * Rename variable
106,440
0
105
67
23
307,672
26
core
10
homeassistant/components/automation/__init__.py
Python
17
{ "docstring": "Enable this automation entity.\n\n This method is a coroutine.\n ", "language": "en", "n_whitespaces": 23, "n_words": 9, "vocab_size": 9 }
https://github.com/home-assistant/core.git
1
test_calibration_without_sample_weight_base_estimator
def test_calibration_without_sample_weight_base_estimator(data):
    X, y = data
    sample_weight = np.ones_like(y)
effdd6e215c67f2ae8ed1e378ea1661e936059a4
8
test_calibration.py
34
API Rename base_estimator in CalibratedClassifierCV (#22054) Co-authored-by: Kevin Roice <[email protected]> Co-authored-by: Guillaume Lemaitre <[email protected]> Co-authored-by: Thomas J. Fan <[email protected]>
76,059
0
18
58
8
260,080
9
scikit-learn
7
sklearn/tests/test_calibration.py
Python
9
{ "docstring": "Check that even if the estimator doesn't support\n sample_weight, fitting with sample_weight still works.\n\n There should be a warning, since the sample_weight is not passed\n on to the estimator.\n ", "language": "en", "n_whitespaces": 41, "n_words": 29, "vocab_size": 26 }
https://github.com/scikit-learn/scikit-learn.git
2
_load_dependencies
def _load_dependencies(self):
    deps = []
    for role_include in self._metadata.dependencies:
        r = Role.load(role_include, play=self._play, parent_role=self)
        deps.append(r)
    return deps

# other functions
1998521e2d5b89bc53d00639bad178330ebb98df
12
__init__.py
73
Always create new role (#78661) Don't use role cache for determining whether to create a new instance of role
79,654
0
73
45
18
268,761
20
ansible
13
lib/ansible/playbook/role/__init__.py
Python
6
{ "docstring": "\n Recursively loads role dependencies from the metadata list of\n dependencies, if it exists\n ", "language": "en", "n_whitespaces": 35, "n_words": 13, "vocab_size": 13 }
https://github.com/ansible/ansible.git
3
test_timesteps_unit
def test_timesteps_unit(self):
    self.batch_id = 0
    batch_size = 5
    buffer_size = 15

    buffer = ReplayBuffer(capacity=buffer_size)

    # Test add/sample
    self._add_data_to_buffer(buffer, batch_size=batch_size, num_batches=1)
    self._add_data_to_buffer(buffer, batch_size=batch_size, num_batches=2)

    # Sampling from it now should yield our first batch 1/3 of the time
    num_sampled_dict = {_id: 0 for _id in range(self.batch_id)}
    num_samples = 200
    for i in range(num_samples):
        _id = buffer.sample(1)["batch_id"][0]
        num_sampled_dict[_id] += 1
    assert np.allclose(
        np.array(list(num_sampled_dict.values())) / num_samples,
        len(num_sampled_dict) * [1 / 3],
        atol=0.1,
    )

    # Test set/get state
    state = buffer.get_state()
    other_buffer = ReplayBuffer(capacity=buffer_size)
    self._add_data_to_buffer(other_buffer, 1)
    other_buffer.set_state(state)
    assert other_buffer._storage == buffer._storage
    assert other_buffer._next_idx == buffer._next_idx
    assert other_buffer._num_timesteps_added == buffer._num_timesteps_added
    assert (
        other_buffer._num_timesteps_added_wrap == buffer._num_timesteps_added_wrap
    )
    assert other_buffer._num_timesteps_sampled == buffer._num_timesteps_sampled
    assert other_buffer._eviction_started == buffer._eviction_started
    assert other_buffer._est_size_bytes == buffer._est_size_bytes
    assert len(other_buffer) == len(other_buffer)
e57ce7efd6ea1d0e4f6942fcf6f526287340e63d
14
test_replay_buffer.py
366
[RLlib] Replay Buffer API and Training Iteration Fn for DQN. (#23420)
34,174
0
379
236
80
148,107
117
ray
34
rllib/utils/replay_buffers/tests/test_replay_buffer.py
Python
31
{ "docstring": "Tests adding, sampling, get-/set state, and eviction with\n experiences stored by timesteps.\n ", "language": "en", "n_whitespaces": 26, "n_words": 12, "vocab_size": 12 }
https://github.com/ray-project/ray.git
2
_unschedule_refresh
def _unschedule_refresh(self) -> None:
    if self._unsub_refresh:
        self._unsub_refresh()
        self._unsub_refresh = None
8910d265d6cf15fed4e6e98b4344031019c1016d
9
update_coordinator.py
41
Keep track of a context for each listener (#72702) * Remove async_remove_listener This avoids the ambuigity as to what happens if same callback is added multiple times. * Keep track of a context for each listener This allow a update coordinator to adapt what data to request on update from the backing service based on which entities are enabled. * Clone list before calling callbacks The callbacks can end up unregistering and modifying the dict while iterating. * Only yield actual values * Add a test for update context * Factor out iteration of _listeners to helper * Verify context is passed to coordinator * Switch to Any as type instead of object * Remove function which use was dropped earliers The use was removed in 8bee25c938a123f0da7569b4e2753598d478b900
102,039
0
46
23
10
303,211
10
core
3
homeassistant/helpers/update_coordinator.py
Python
5
{ "docstring": "Unschedule any pending refresh since there is no longer any listeners.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 10 }
https://github.com/home-assistant/core.git
6
split_sections
def split_sections(s):
    section = None
    content = []
    for line in yield_lines(s):
        if line.startswith("["):
            if line.endswith("]"):
                if section or content:
                    yield section, content
                section = line[1:-1].strip()
                content = []
            else:
                raise ValueError("Invalid section heading", line)
        else:
            content.append(line)

    # wrap up last segment
    yield section, content
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
17
__init__.py
148
upd; format
13,164
0
189
84
30
63,142
45
transferlearning
11
.venv/lib/python3.8/site-packages/pip/_vendor/pkg_resources/__init__.py
Python
15
{ "docstring": "Split a string or iterable thereof into (section, content) pairs\n\n Each ``section`` is a stripped version of the section header (\"[section]\")\n and each ``content`` is a list of stripped lines excluding blank lines and\n comment-only lines. If there are any such lines before the first section\n header, they're returned in a first ``section`` of ``None``.\n ", "language": "en", "n_whitespaces": 71, "n_words": 55, "vocab_size": 41 }
https://github.com/jindongwang/transferlearning.git
3
close
def close(self) -> None:
    if self.is_wrapped:
        assert isinstance(self.handle, TextIOWrapper)
        self.handle.flush()
        self.handle.detach()
        self.created_handles.remove(self.handle)
    for handle in self.created_handles:
        handle.close()
    self.created_handles = []
    self.is_wrapped = False
96ba36ddcbedda4bfe4e70ea0261a3194724f768
10
common.py
114
BUG: do not suppress errors when closing file handles (#47165)
39,862
0
113
69
22
166,849
23
pandas
10
pandas/io/common.py
Python
16
{ "docstring": "\n Close all created buffers.\n\n Note: If a TextIOWrapper was inserted, it is flushed and detached to\n avoid closing the potentially user-created buffer.\n ", "language": "en", "n_whitespaces": 51, "n_words": 22, "vocab_size": 22 }
https://github.com/pandas-dev/pandas.git
1
robots
def robots(self):
    return send_from_directory(get_airflow_app().static_folder, 'robots.txt')
e2f19505bf3622935480e80bee55bf5b6d80097b
10
views.py
32
Upgrade FAB to 4.1.1 (#24399) * Upgrade FAB to 4.1.1 The Flask Application Builder have been updated recently to support a number of newer dependencies. This PR is the attempt to migrate FAB to newer version. This includes: * update setup.py and setup.cfg upper and lower bounds to account for proper version of dependencies that FAB < 4.0.0 was blocking from upgrade * added typed Flask application retrieval with a custom application fields available for MyPy typing checks. * fix typing to account for typing hints added in multiple upgraded libraries optional values and content of request returned as Mapping * switch to PyJWT 2.* by using non-deprecated "required" claim as list rather than separate fields * add possibiliyt to install providers without constraints so that we could avoid errors on conflicting constraints when upgrade-to-newer-dependencies is used * add pre-commit to check that 2.4+ only get_airflow_app is not used in providers * avoid Bad Request in case the request sent to Flask 2.0 is not JSon content type * switch imports of internal classes to direct packages where classes are available rather than from "airflow.models" to satisfy MyPY * synchronize changes of FAB Security Manager 4.1.1 with our copy of the Security Manager. * add error handling for a few "None" cases detected by MyPY * corrected test cases that were broken by immutability of Flask 2 objects and better escaping done by Flask 2 * updated test cases to account for redirection to "path" rather than full URL by Flask2 Fixes: #22397 * fixup! Upgrade FAB to 4.1.1
7,945
0
19
17
5
43,388
5
airflow
5
airflow/www/views.py
Python
2
{ "docstring": "\n Returns a robots.txt file for blocking certain search engine crawlers. This mitigates some\n of the risk associated with exposing Airflow to the public internet, however it does not\n address the real security risks associated with such a deployment.\n ", "language": "en", "n_whitespaces": 67, "n_words": 38, "vocab_size": 33 }
https://github.com/apache/airflow.git
2
geom_output
def geom_output(func, argtypes, offset=None):
    # Setting the argument types
    func.argtypes = argtypes

    if not offset:
        # When a geometry pointer is directly returned.
        func.restype = c_void_p
        func.errcheck = check_geom
    else:
        # Error code returned, geometry is returned by-reference.
        func.restype = c_int
9c19aff7c7561e3a82978a272ecdaad40dda5c00
10
generation.py
66
Refs #33476 -- Reformatted code with Black.
50,602
0
91
47
33
203,997
41
django
9
django/contrib/gis/gdal/prototypes/generation.py
Python
10
{ "docstring": "\n Generate a function that returns a Geometry either by reference\n or directly (if the return_geom keyword is set to True).\n ", "language": "en", "n_whitespaces": 30, "n_words": 20, "vocab_size": 19 }
https://github.com/django/django.git
1
query
def query(self, expr, **kwargs):
    return self.map_partitions(M.query, expr, **kwargs)
e30471041ab9be7126a14f412fd17a1e2df8a7f5
8
core.py
39
Update `DataFrame.query` docstring (#8890)
36,619
0
22
25
7
156,252
8
dask
6
dask/dataframe/core.py
Python
2
{ "docstring": "Filter dataframe with complex expression\n\n Blocked version of pd.DataFrame.query\n\n Parameters\n ----------\n expr: str\n The query string to evaluate.\n You can refer to column names that are not valid Python variable names\n by surrounding them in backticks.\n Dask does not fully support referring to variables using the '@' character,\n use f-strings or the ``local_dict`` keyword argument instead.\n\n Notes\n -----\n This is like the sequential version except that this will also happen\n in many threads. This may conflict with ``numexpr`` which will use\n multiple threads itself. We recommend that you set ``numexpr`` to use a\n single thread:\n\n .. code-block:: python\n\n import numexpr\n numexpr.set_num_threads(1)\n\n See also\n --------\n pandas.DataFrame.query\n pandas.eval\n\n Examples\n --------\n >>> import pandas as pd\n >>> import dask.dataframe as dd\n >>> df = pd.DataFrame({'x': [1, 2, 1, 2],\n ... 'y': [1, 2, 3, 4],\n ... 'z z': [4, 3, 2, 1]})\n >>> ddf = dd.from_pandas(df, npartitions=2)\n\n Refer to column names directly:\n\n >>> ddf.query('y > x').compute()\n x y z z\n 2 1 3 2\n 3 2 4 1\n\n Refer to column name using backticks:\n\n >>> ddf.query('`z z` > x').compute()\n x y z z\n 0 1 1 4\n 1 2 2 3\n 2 1 3 2\n\n Refer to variable name using f-strings:\n\n >>> value = 1\n >>> ddf.query(f'x == {value}').compute()\n x y z z\n 0 1 1 4\n 2 1 3 2\n\n Refer to variable name using ``local_dict``:\n\n >>> ddf.query('x == @value', local_dict={\"value\": value}).compute()\n x y z z\n 0 1 1 4\n 2 1 3 2\n ", "language": "en", "n_whitespaces": 746, "n_words": 242, "vocab_size": 140 }
https://github.com/dask/dask.git
9
_check_gpu_tensors
def _check_gpu_tensors(tensors):
    if not tensors or not isinstance(tensors, list):
        raise RuntimeError("'tensors' must be a nonempty list.")
    if len(tensors) > nccl_util.get_num_gpus():
        raise RuntimeError(
            "Tensor list cannot be larger than the number"
            "of available GPUs. Got {} > {}.".format(
                len(tensors), nccl_util.get_num_gpus()
            )
        )
    t0 = tensors[0]
    dt = nccl_util.get_nccl_tensor_dtype(t0)
    s = nccl_util.get_tensor_shape(t0)
    d = nccl_util.get_tensor_device(t0)
    for i, t in enumerate(tensors):
        if i == 0:
            continue
        # We need to check the following:
        # (1) tensor is cuda (already checked during API)
        # (2) tensor dtype
        # (3) tensor shape match
        # (4) each tensor is on a different GPU
        dtype = nccl_util.get_nccl_tensor_dtype(t)
        if dt != dtype:
            raise RuntimeError(
                "Tensors must have identical dtype. Got: '{}'.".format(dtype)
            )
        shape = nccl_util.get_tensor_shape(t)
        if s != shape:
            raise RuntimeError(
                "Tensor must have identical shape. Got: '{}'.".format(shape)
            )
        device = nccl_util.get_tensor_device(t)
        if device == d:
            raise RuntimeError("Tensor must be on distinct GPUs.")
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
14
nccl_collective_group.py
285
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
29,907
0
418
165
96
132,999
145
ray
22
python/ray/util/collective/collective_group/nccl_collective_group.py
Python
30
{ "docstring": "Check all tensors are distributed on different GPUs.", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
https://github.com/ray-project/ray.git
1
test_map_xcom_arg
def test_map_xcom_arg():
    with DAG("test-dag", start_date=DEFAULT_DATE):
        task1 = BaseOperator(task_id="op1")
        mapped = MockOperator.partial(task_id='task_2').expand(arg2=XComArg(task1))
        finish = MockOperator(task_id="finish")

        mapped >> finish

    assert task1.downstream_list == [mapped]
70b41e46b46e65c0446a40ab91624cb2291a5039
14
test_mappedoperator.py
112
Move MappedOperator tests to mirror code location (#23884) At some point during the development of AIP-42 we moved the code for MappedOperator out of baseoperator.py to mappedoperator.py, but we didn't move the tests at the same time
7,687
0
58
63
17
42,678
21
airflow
15
tests/models/test_mappedoperator.py
Python
7
{ "docstring": "Test that dependencies are correct when mapping with an XComArg", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
https://github.com/apache/airflow.git
5
testClusterAutoscaling
def testClusterAutoscaling(self):
    self.cluster.update_config(
        {
            "provider": {"head_resources": {"CPU": 4, "GPU": 0}},
        }
    )
    self.cluster.start()
    self.cluster.connect(client=True, timeout=120)

    self.assertGreater(ray.cluster_resources().get("CPU", 0), 0)

    # Trigger autoscaling
    pg = ray.util.placement_group([{"CPU": 1, "GPU": 1}] * 2)
    timeout = time.monotonic() + 120
    while ray.cluster_resources().get("GPU", 0) < 2:
        if time.monotonic() > timeout:
            raise RuntimeError("Autoscaling failed or too slow.")
        time.sleep(1)

    # Schedule task with resources
    self.assertEquals(
        5,
        ray.get(
            remote_task.options(
                num_cpus=1,
                num_gpus=1,
                scheduling_strategy=PlacementGroupSchedulingStrategy(
                    placement_group=pg
                ),
            ).remote(5)
        ),
    )

    print("Autoscaling worked")
    ray.util.remove_placement_group(pg)
    time.sleep(2)  # Give some time so nodes.json is updated

    self.cluster.kill_node(num=2)
    print("Killed GPU node.")

    pg = ray.util.placement_group([{"CPU": 1, "GPU": 1}] * 2)

    table = ray.util.placement_group_table(pg)
    assert table["state"] == "PENDING"

    timeout = time.monotonic() + 180
    while table["state"] != "CREATED":
        if time.monotonic() > timeout:
            raise RuntimeError("Re-starting killed node failed or too slow.")
        time.sleep(1)
        table = ray.util.placement_group_table(pg)

    print("Node was restarted.")
57cdbb1769a9c32972ba0ec9e7e857eeea961869
17
test_multinode_sync.py
513
Migrate the deprecated placement_group option to PlacementGroupSchedulingStrategy (#28437) placement_group option is deprecated, use PlacementGroupSchedulingStrategy instead.
28,467
0
579
300
90
127,553
126
ray
33
python/ray/tune/tests/test_multinode_sync.py
Python
42
{ "docstring": "Sanity check that multinode tests with autoscaling are working", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
https://github.com/ray-project/ray.git
4
string_width_in_pixels
def string_width_in_pixels(cls, font, string): # if no windows have been created (there is no hidden master root to rely on) then temporarily make a window so the measurement can happen if Window.NumOpenWindows == 0: root = tk.Tk() else: root = None size = 0 try: size = tkinter.font.Font(font=font).measure(string) # string's width except Exception as e: _error_popup_with_traceback('Exception retrieving string width in pixels', e) if root is not None: root.destroy() return size
acaae54a1ade24b2e55f7274ae4db747160a38db
13
PySimpleGUI.py
128
Enable Text class methods to be called prior to any windows being created: string_width_in_pixels, char_height_in_pixels, char_width_in_pixels. Removed destruction of hidden master root from popup_get_file & popup_get_folder (was old code)
53,306
0
190
75
56
212,643
70
PySimpleGUI
17
PySimpleGUI.py
Python
13
{ "docstring": "\n Get the with of the supplied string in pixels for the font being passed in.\n If an error occurs, 0 will be returned\n :param font: specifies the font family, size, etc. Tuple or Single string format 'name size styles'. Styles: italic * roman bold normal underline overstrike, to be measured\n :type font: (str or (str, int[, str]) or None)\n :param string: the string to measure\n :type string: str\n :return: Width in pixels of string\n :rtype: (int)\n ", "language": "en", "n_whitespaces": 160, "n_words": 76, "vocab_size": 57 }
https://github.com/PySimpleGUI/PySimpleGUI.git
3
_should_use_osx_framework_prefix
def _should_use_osx_framework_prefix() -> bool: return ( "osx_framework_library" in _AVAILABLE_SCHEMES and not running_under_virtualenv() and is_osx_framework() )
7e33fcae4384563b4c927fd44318c29dd524a097
11
_sysconfig.py
42
Vendor in pip 21.2.4 release (from pip 21.2.2 prior). (#5009) * Vendor in pip 21.2.4 release (from pip 21.2.2 prior). * Add news fragment for pip 21.2.4 vendor update. * Add potentially missing LICENSE files
2,987
0
45
22
14
19,471
15
pipenv
5
pipenv/patched/notpip/_internal/locations/_sysconfig.py
Python
24
{ "docstring": "Check for Apple's ``osx_framework_library`` scheme.\n\n Python distributed by Apple's Command Line Tools has this special scheme\n that's used when:\n\n * This is a framework build.\n * We are installing into the system prefix.\n\n This does not account for ``pip install --prefix`` (also means we're not\n installing to the system prefix), which should use ``posix_prefix``, but\n logic here means ``_infer_prefix()`` outputs ``osx_framework_library``. But\n since ``prefix`` is not available for ``sysconfig.get_default_scheme()``,\n which is the stdlib replacement for ``_infer_prefix()``, presumably Apple\n wouldn't be able to magically switch between ``osx_framework_library`` and\n ``posix_prefix``. ``_infer_prefix()`` returning ``osx_framework_library``\n means its behavior is consistent whether we use the stdlib implementation\n or our own, and we deal with this special case in ``get_scheme()`` instead.\n ", "language": "en", "n_whitespaces": 157, "n_words": 115, "vocab_size": 86 }
https://github.com/pypa/pipenv.git
4
set_focus
def set_focus(self, force=False): if not self._widget_was_created(): # if widget hasn't been created yet, then don't allow return try: if force: self.Widget.focus_force() else: self.Widget.focus_set() except Exception as e: _error_popup_with_traceback("Exception blocking focus. Check your element's Widget", e)
9b814f003b0685757d76ce56ee9c98eae114d346
13
PySimpleGUI.py
92
Added key and widget Element properties, new focus methods Element.get_next_focus, Element.get_previous_focus. New Window method Window.widget_to_element
53,427
0
138
51
33
212,817
35
PySimpleGUI
10
PySimpleGUI.py
Python
10
{ "docstring": "\n Sets the current focus to be on this element\n\n :param force: if True will call focus_force otherwise calls focus_set\n :type force: bool\n ", "language": "en", "n_whitespaces": 52, "n_words": 22, "vocab_size": 21 }
https://github.com/PySimpleGUI/PySimpleGUI.git
2
announce
def announce(): current_version, _, _ = parse_version_from_module() tag_name = f"v{current_version}" click.echo( f ) if "rc" in tag_name: click.echo( ) else: click.echo( )
12d1f82db213603972d60be3f46f6a36c3c2330f
11
release.py
106
Generate announcement links in release script (#12242)
71,896
0
102
44
17
247,751
22
synapse
7
scripts-dev/release.py
Python
28
{ "docstring": "Generate markdown to announce the release.\nHi everyone. Synapse {current_version} has just been released.\n\n[notes](https://github.com/matrix-org/synapse/releases/tag/{tag_name}) |\\\n[docker](https://hub.docker.com/r/matrixdotorg/synapse/tags?name={tag_name}) | \\\n[debs](https://packages.matrix.org/debian/) | \\\n[pypi](https://pypi.org/project/matrix-synapse/{current_version}/)\nAnnounce the RC in\n- #homeowners:matrix.org (Synapse Announcements)\n- #synapse-dev:matrix.org\nAnnounce the release in\n- #homeowners:matrix.org (Synapse Announcements), bumping the version in the topic\n- #synapse:matrix.org (Synapse Admins), bumping the version in the topic\n- #synapse-dev:matrix.org\n- #synapse-package-maintainers:matrix.org", "language": "en", "n_whitespaces": 47, "n_words": 61, "vocab_size": 37 }
https://github.com/matrix-org/synapse.git
5
workflow_in_progress
def workflow_in_progress(self): if not getattr(settings, "WAGTAIL_WORKFLOW_ENABLED", True): return False # `_current_workflow_states` may be populated by `prefetch_workflow_states` on `PageQuerySet` as a # performance optimisation if hasattr(self, "_current_workflow_states"): for state in self._current_workflow_states: if state.status == WorkflowState.STATUS_IN_PROGRESS: return True return False return WorkflowState.objects.filter( page=self, status=WorkflowState.STATUS_IN_PROGRESS ).exists()
d10f15e55806c6944827d801cd9c2d53f5da4186
11
__init__.py
112
Reformat with black
16,127
0
170
68
36
73,819
43
wagtail
14
wagtail/core/models/__init__.py
Python
11
{ "docstring": "Returns True if a workflow is in progress on the current page, otherwise False", "language": "en", "n_whitespaces": 13, "n_words": 14, "vocab_size": 14 }
https://github.com/wagtail/wagtail.git
3
numpy_to_pil
def numpy_to_pil(images): if images.ndim == 3: images = images[None, ...] images = (images * 255).round().astype("uint8") pil_images = [Image.fromarray(image) for image in images] return pil_images
1b42732ced07861b810f77ecf3fc8ce63ce465e8
12
pipeline_utils.py
87
PIL-ify the pipeline outputs (#111)
120,858
0
70
53
20
336,164
24
diffusers
9
src/diffusers/pipeline_utils.py
Python
6
{ "docstring": "\n Convert a numpy image or a batch of images to a PIL image.\n ", "language": "en", "n_whitespaces": 28, "n_words": 13, "vocab_size": 11 }
https://github.com/huggingface/diffusers.git
1
make_authors_file_lines
def make_authors_file_lines(git_people): # define new lines for the file header = filldedent().lstrip() header_extra = "There are a total of %d authors." % len(git_people) lines = header.splitlines() lines.append('') lines.append(header_extra) lines.append('') lines.extend(git_people) return lines
e8bf22b0eb76ecb6aec12dd45549649c490e1354
11
mailmap_check.py
103
mailmap documents python2 I believe this to be useful for at least two reasons. Current documentation either state to use `python bin/mailmap_check.py` or just `bin/mailmap_check.py` which itself calles `python`. In both case, on ubuntu this will run python2. Worse, instead of printing "This script requires Python 3.8 or newer" it prints an error message that python don't know what to do with the f-string. If the f-string is removed, that it does not know pathlib. So, I ensure that the script still work as before on 3.8 and prints a more useful message on python2. I admit I'm not fan of removing a f-string, however, since there was a single f-string, I assume it was relatively acceptable. I suppose most sympy contributor are at ease with those subtilities of python2/3. However, this will at least be useful for people, like me, who only wanted to contribute to improving documentation and not have to deal with complexity of system administration.
49,232
0
59
56
27
199,311
32
sympy
11
bin/mailmap_check.py
Python
15
{ "docstring": "\n All people who contributed to SymPy by sending at least a patch or\n more (in the order of the date of their first contribution), except\n those who explicitly didn't want to be mentioned. People with a * next\n to their names are not found in the metadata of the git history. This\n file is generated automatically by running `./bin/authors_update.py`.\n ", "language": "en", "n_whitespaces": 102, "n_words": 59, "vocab_size": 48 }
https://github.com/sympy/sympy.git
6
avatar_url
def avatar_url(user, size=50, gravatar_only=False): if ( not gravatar_only and hasattr(user, "wagtail_userprofile") and user.wagtail_userprofile.avatar ): return user.wagtail_userprofile.avatar.url if hasattr(user, "email"): gravatar_url = get_gravatar_url(user.email, size=size) if gravatar_url is not None: return gravatar_url return versioned_static_func("wagtailadmin/images/default-user-avatar.png") @register.simple_tag
d10f15e55806c6944827d801cd9c2d53f5da4186
@register.simple_tag
11
wagtailadmin_tags.py
127
Reformat with black
15,646
1
100
74
24
71,246
33
wagtail
14
wagtail/admin/templatetags/wagtailadmin_tags.py
Python
12
{ "docstring": "\n A template tag that receives a user and size and return\n the appropriate avatar url for that user.\n Example usage: {% avatar_url request.user 50 %}\n ", "language": "en", "n_whitespaces": 38, "n_words": 25, "vocab_size": 23 }
https://github.com/wagtail/wagtail.git
1
test_failing_open
def test_failing_open(self, tmp_path): qf = QFile(str(tmp_path)) dev = qtutils.PyQIODevice(qf) with pytest.raises(qtutils.QtOSError) as excinfo: dev.open(QIODevice.OpenModeFlag.WriteOnly) assert excinfo.value.qt_errno == QFileDevice.FileError.OpenError assert dev.closed
0877fb0d78635692e481c8bde224fac5ad0dd430
11
test_qtutils.py
105
Run scripts/dev/rewrite_enums.py
117,728
0
73
63
18
321,446
20
qutebrowser
23
tests/unit/utils/test_qtutils.py
Python
7
{ "docstring": "Test open() which fails (because it's an existent directory).", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
https://github.com/qutebrowser/qutebrowser.git
14
deep_deconstruct
def deep_deconstruct(self, obj): if isinstance(obj, list): return [self.deep_deconstruct(value) for value in obj] elif isinstance(obj, tuple): return tuple(self.deep_deconstruct(value) for value in obj) elif isinstance(obj, dict): return {key: self.deep_deconstruct(value) for key, value in obj.items()} elif isinstance(obj, functools.partial): return ( obj.func, self.deep_deconstruct(obj.args), self.deep_deconstruct(obj.keywords), ) elif isinstance(obj, COMPILED_REGEX_TYPE): return RegexObject(obj) elif isinstance(obj, type): # If this is a type that implements 'deconstruct' as an instance method, # avoid treating this as being deconstructible itself - see #22951 return obj elif hasattr(obj, "deconstruct"): deconstructed = obj.deconstruct() if isinstance(obj, models.Field): # we have a field which also returns a name deconstructed = deconstructed[1:] path, args, kwargs = deconstructed return ( path, [self.deep_deconstruct(value) for value in args], {key: self.deep_deconstruct(value) for key, value in kwargs.items()}, ) else: return obj
9c19aff7c7561e3a82978a272ecdaad40dda5c00
13
autodetector.py
337
Refs #33476 -- Reformatted code with Black.
51,047
0
469
220
72
205,257
121
django
25
django/db/migrations/autodetector.py
Python
29
{ "docstring": "\n Recursive deconstruction for a field and its arguments.\n Used for full comparison for rename/alter; sometimes a single-level\n deconstruction will not compare correctly.\n ", "language": "en", "n_whitespaces": 51, "n_words": 22, "vocab_size": 18 }
https://github.com/django/django.git
1
test_reordering_concurrently
def test_reordering_concurrently(dummy_attribute, assert_num_queries): qs = SortedModel.objects attribute = dummy_attribute entries = list( qs.bulk_create( [ SortedModel(attribute=attribute, slug="1", name="1", sort_order=0), SortedModel(attribute=attribute, slug="2", name="2", sort_order=1), ] ) ) operations = {entries[0].pk: +1} with assert_num_queries(2) as ctx: perform_reordering(qs, operations) assert ctx[0]["sql"] == ( 'SELECT "attribute_attributevalue"."id", ' '"attribute_attributevalue"."sort_order" ' 'FROM "attribute_attributevalue" ' "ORDER BY " '"attribute_attributevalue"."sort_order" ASC NULLS LAST, ' '"attribute_attributevalue"."id" ASC FOR UPDATE' ) assert ctx[1]["sql"] == ( 'UPDATE "attribute_attributevalue" ' 'SET "sort_order" = ' f'(CASE WHEN ("attribute_attributevalue"."id" = {entries[0].pk}) ' f'THEN 1 WHEN ("attribute_attributevalue"."id" = {entries[1].pk}) ' "THEN 0 ELSE NULL END)::integer " 'WHERE "attribute_attributevalue"."id" ' f"IN ({entries[0].pk}, {entries[1].pk})" )
9e2ea5b2f647f7d19c3a54347ccdcbf787f2ca0b
15
test_core_reordering.py
272
Upgrade Saleor to Django version 4.0 (#10518) Lots of typing fixes uncovered while I attempted to upgrade to the latest version of django-stubs. Didn't pull it here as it has a bug that causes it to report problems related to its own type cloning code.
5,174
0
294
131
70
28,584
97
saleor
17
saleor/graphql/core/tests/test_core_reordering.py
Python
31
{ "docstring": "\n Ensures users cannot concurrently reorder, they need to wait for the other one\n to achieve.\n\n This must be the first thing done before doing anything. For that, we ensure\n the first SQL query is acquiring the lock.\n ", "language": "en", "n_whitespaces": 53, "n_words": 37, "vocab_size": 32 }
https://github.com/saleor/saleor.git
1
enhanced_current_hue
def enhanced_current_hue(self) -> int | None: return self.cluster.get("enhanced_current_hue")
04c6b9c51963418ffebddc7753939700fbea7e42
8
lighting.py
35
ZHA light entity cleanup (#75573) * use base class attributes * initial hue and saturation support * spec is 65536 not 65535 * fixes * enhanced current hue * fix comparison * clean up * fix channel test * oops * report enhanced current hue
116,067
0
22
19
8
317,500
8
core
5
homeassistant/components/zha/core/channels/lighting.py
Python
3
{ "docstring": "Return cached value of the enhanced_current_hue attribute.", "language": "en", "n_whitespaces": 6, "n_words": 7, "vocab_size": 7 }
https://github.com/home-assistant/core.git
1
test_stream_slices_with_state
def test_stream_slices_with_state(self, api, async_manager_mock, start_date): end_date = start_date + duration(days=10) cursor_value = start_date + duration(days=5) state = {AdsInsights.cursor_field: cursor_value.date().isoformat()} stream = AdsInsights(api=api, start_date=start_date, end_date=end_date) async_manager_mock.completed_jobs.return_value = [1, 2, 3] slices = list(stream.stream_slices(stream_state=state, sync_mode=SyncMode.incremental)) assert slices == [{"insight_job": 1}, {"insight_job": 2}, {"insight_job": 3}] async_manager_mock.assert_called_once() args, kwargs = async_manager_mock.call_args generated_jobs = list(kwargs["jobs"]) assert len(generated_jobs) == (end_date - cursor_value).days assert generated_jobs[0].interval.start == cursor_value.date() + duration(days=1) assert generated_jobs[1].interval.start == cursor_value.date() + duration(days=2)
a3aae8017a0a40ff2006e2567f71dccb04c997a5
12
test_base_insight_streams.py
310
🎉 🎉 Source FB Marketing: performance and reliability fixes (#9805) * Facebook Marketing performance improvement * add comments and little refactoring * fix integration tests with the new config * improve job status handling, limit concurrency to 10 * fix campaign jobs, refactor manager * big refactoring of async jobs, support random order of slices * update source _read_incremental to hook new state logic * fix issues with timeout * remove debugging and clean up, improve retry logic * merge changes from #8234 * fix call super _read_increment * generalize batch execution, add use_batch flag * improve coverage, do some refactoring of spec * update test, remove overrides of source * add split by AdSet * add smaller insights * fix end_date < start_date case * add account_id to PK * add notes * fix new streams * fix reversed incremental stream * update spec.json for SAT * upgrade CDK and bump version Co-authored-by: Dmytro Rezchykov <[email protected]> Co-authored-by: Eugene Kulak <[email protected]>
575
0
166
197
48
3,835
68
airbyte
32
airbyte-integrations/connectors/source-facebook-marketing/unit_tests/test_base_insight_streams.py
Python
14
{ "docstring": "Stream will use cursor_value from state when there is state", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 9 }
https://github.com/airbytehq/airbyte.git
4
detect_rnet
def detect_rnet(self, images, rectangle_batch, height, width): ret = [] # TODO: batching for idx, rectangles in enumerate(rectangle_batch): if not rectangles: ret.append([]) continue image = images[idx] crop_number = 0 predict_24_batch = [] for rect in rectangles: crop_img = image[int(rect[1]):int(rect[3]), int(rect[0]):int(rect[2])] scale_img = cv2.resize(crop_img, (24, 24)) predict_24_batch.append(scale_img) crop_number += 1 predict_24_batch = np.array(predict_24_batch) output = self.rnet.predict(predict_24_batch, batch_size=128) cls_prob = output[0] cls_prob = np.array(cls_prob) roi_prob = output[1] roi_prob = np.array(roi_prob) ret.append(filter_face_24net( cls_prob, roi_prob, rectangles, width, height, self.threshold[1] )) return ret
aa39234538a8f83e6aa2b60b8275a570e8876ac2
15
mtcnn.py
294
Update all Keras Imports to be conditional (#1214) * Remove custom keras importer * first round keras imports fix * launcher.py: Remove KerasFinder references * 2nd round keras imports update (lib and extract) * 3rd round keras imports update (train) * remove KerasFinder from tests * 4th round keras imports update (tests)
19,919
0
360
193
56
100,441
77
faceswap
30
plugins/extract/detect/mtcnn.py
Python
24
{ "docstring": " second stage - refinement of face candidates with r-net ", "language": "en", "n_whitespaces": 10, "n_words": 9, "vocab_size": 9 }
https://github.com/deepfakes/faceswap.git
2
_update_selection_poly
def _update_selection_poly(self, vmin, vmax): # The vertices are positioned # 1 ------ 2 # | | # 0, 4 ---- 3 verts = self.poly.xy if self.orientation == "vertical": verts[0] = verts[4] = .25, vmin verts[1] = .25, vmax verts[2] = .75, vmax verts[3] = .75, vmin else: verts[0] = verts[4] = vmin, .25 verts[1] = vmin, .75 verts[2] = vmax, .75 verts[3] = vmax, .25
bed236261a74045a0e4a85fc930bc0ad0da786bb
11
widgets.py
159
Fix RangeSlider for same init values Closes #22686. Co-authored-by: Nickolaos Giannatos <[email protected]>
23,167
0
218
108
37
108,380
65
matplotlib
8
lib/matplotlib/widgets.py
Python
12
{ "docstring": "\n Update the vertices of the *self.poly* slider in-place\n to cover the data range *vmin*, *vmax*.\n ", "language": "en", "n_whitespaces": 37, "n_words": 15, "vocab_size": 13 }
https://github.com/matplotlib/matplotlib.git
2
clear_region_to_control_producer
def clear_region_to_control_producer(): global _publisher if _publisher: _publisher.close() _publisher = None
941184cd24186324fd9f7f304b7f713041834726
9
producer.py
34
chore(hybrid-cloud): AuditLogEntry is a control silo model now (#39890) In the control silo, creating an audit log entry writes to the db directly, whilst in region silo mode creating an audit log entry will instead push to a new kafka producer that consumes into the control silo asynchronously.
18,180
0
33
18
9
86,877
10
sentry
3
src/sentry/region_to_control/producer.py
Python
5
{ "docstring": "\n In tests, it is necessary to close the publisher after test failures or success for the pytest runner to continue.\n The atexit handler does not handle this case gracefully, so instead we use a test fixture and call this method to\n ensure, that the producer is always closed.\n ", "language": "en", "n_whitespaces": 61, "n_words": 48, "vocab_size": 41 }
https://github.com/getsentry/sentry.git
4
find_loader
def find_loader(self, fullname): warnings.warn("PathEntryFinder.find_loader() is deprecated since Python " "3.4 in favor of PathEntryFinder.find_spec() " "(available since 3.4)", DeprecationWarning, stacklevel=2) if not hasattr(self, 'find_spec'): return None, [] found = self.find_spec(fullname) if found is not None: if not found.submodule_search_locations: portions = [] else: portions = found.submodule_search_locations return found.loader, portions else: return None, [] find_module = _bootstrap_external._find_module_shim
8198943edd73a363c266633e1aa5b2a9e9c9f526
12
abc.py
145
add python 3.10.4 for windows
55,187
0
269
80
36
218,187
55
XX-Net
16
python3.10.4/Lib/importlib/abc.py
Python
17
{ "docstring": "Return (loader, namespace portion) for the path entry.\n\n The fullname is a str. The namespace portion is a sequence of\n path entries contributing to part of a namespace package. The\n sequence may be empty. If loader is not None, the portion will\n be ignored.\n\n The portion will be discarded if another path entry finder\n locates the module as a normal module or package.\n\n This method is deprecated since Python 3.4 in favor of\n finder.find_spec(). If find_spec() is provided than backwards-compatible\n functionality is provided.\n ", "language": "en", "n_whitespaces": 155, "n_words": 83, "vocab_size": 55 }
https://github.com/XX-net/XX-Net.git
4
_api_all
def _api_all(self): response.content_type = 'application/json; charset=utf-8' if self.args.debug: fname = os.path.join(tempfile.gettempdir(), 'glances-debug.json') try: with open(fname) as f: return f.read() except IOError: logger.debug("Debug file (%s) not found" % fname) # Update the stat self.__update__() try: # Get the JSON value of the stat ID statval = json_dumps(self.stats.getAllAsDict()) except Exception as e: abort(404, "Cannot get stats (%s)" % str(e)) return statval
a9ee2aa09c00b6adc396c666d6e970e2b58918e6
14
glances_bottle.py
177
Replace json by ujson #2201
15,471
0
230
98
47
70,258
59
glances
26
glances/outputs/glances_bottle.py
Python
15
{ "docstring": "Glances API RESTful implementation.\n\n Return the JSON representation of all the plugins\n HTTP/200 if OK\n HTTP/400 if plugin is not found\n HTTP/404 if others error\n ", "language": "en", "n_whitespaces": 60, "n_words": 25, "vocab_size": 22 }
https://github.com/nicolargo/glances.git
2
pretrained_cfg_for_features
def pretrained_cfg_for_features(pretrained_cfg): pretrained_cfg = deepcopy(pretrained_cfg) # remove default pretrained cfg fields that don't have much relevance for feature backbone to_remove = ('num_classes', 'crop_pct', 'classifier', 'global_pool') # add default final pool size? for tr in to_remove: pretrained_cfg.pop(tr, None) return pretrained_cfg # def overlay_external_pretrained_cfg(pretrained_cfg, kwargs): # # external_pretrained_cfg = kwargs.pop('external_pretrained_cfg', None) # if external_pretrained_cfg: # pretrained_cfg.pop('url', None) # url should come from external cfg # pretrained_cfg.pop('hf_hub', None) # hf hub id should come from external cfg # pretrained_cfg.update(external_pretrained_cfg)
abc9ba254430ef971ea3dbd12f2b4f1969da55be
9
helpers.py
73
Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks.
119,875
0
130
37
51
331,622
76
pytorch-image-models
6
timm/models/helpers.py
Python
6
{ "docstring": " Overlay 'external_pretrained_cfg' in kwargs on top of pretrained_cfg arg.\n# ", "language": "en", "n_whitespaces": 14, "n_words": 10, "vocab_size": 10 }
https://github.com/huggingface/pytorch-image-models.git
1
test_get_cert_serial_valid
def test_get_cert_serial_valid(certutil, cert_file): serial = certutil.get_cert_serial(str(cert_file)) assert serial == "5be1cc5d51b78dbd49a0b7c00d44806d"
a8d2d1e1397cdc79b2c5f1ad7f6e3b729dcf8857
10
test_win_certutil.py
41
Add tests, fix state module
54,237
0
19
23
9
215,903
10
salt
6
tests/pytests/functional/modules/test_win_certutil.py
Python
3
{ "docstring": "\n Test get_cert_serial with a known valid certificate\n ", "language": "en", "n_whitespaces": 14, "n_words": 7, "vocab_size": 7 }
https://github.com/saltstack/salt.git
2
check_string
def check_string(result, func, cargs): if not result: raise GEOSException( 'Error encountered checking string return value in GEOS C function "%s".' % func.__name__ ) # Getting the string value at the pointer address. s = string_at(result) # Freeing the memory allocated within GEOS free(result) return s
9c19aff7c7561e3a82978a272ecdaad40dda5c00
11
errcheck.py
62
Refs #33476 -- Reformatted code with Black.
50,633
0
102
35
37
204,095
45
django
9
django/contrib/gis/geos/prototypes/errcheck.py
Python
9
{ "docstring": "\n Error checking for routines that return strings.\n\n This frees the memory allocated by GEOS at the result pointer.\n ", "language": "en", "n_whitespaces": 28, "n_words": 18, "vocab_size": 17 }
https://github.com/django/django.git
8
set_data
def set_data(self, x, y, A): x = np.array(x, np.float32) y = np.array(y, np.float32) A = cbook.safe_masked_invalid(A, copy=True) if not (x.ndim == y.ndim == 1 and A.shape[0:2] == y.shape + x.shape): raise TypeError("Axes don't match array shape") if A.ndim not in [2, 3]: raise TypeError("Can only plot 2D or 3D data") if A.ndim == 3 and A.shape[2] not in [1, 3, 4]: raise TypeError("3D arrays must have three (RGB) " "or four (RGBA) color components") if A.ndim == 3 and A.shape[2] == 1: A = A.squeeze(axis=-1) self._A = A self._Ax = x self._Ay = y self._imcache = None self.stale = True
f16da868d016363c4cd734b2abd6535230b094df
12
image.py
284
[Doc] Fix ndarray-links for arguments
24,326
0
262
182
68
110,847
100
matplotlib
21
lib/matplotlib/image.py
Python
18
{ "docstring": "\n Set the grid for the pixel centers, and the pixel values.\n\n Parameters\n ----------\n x, y : 1D array-like\n Monotonic arrays of shapes (N,) and (M,), respectively, specifying\n pixel centers.\n A : array-like\n (M, N) `~numpy.ndarray` or masked array of values to be\n colormapped, or (M, N, 3) RGB array, or (M, N, 4) RGBA array.\n ", "language": "en", "n_whitespaces": 142, "n_words": 55, "vocab_size": 42 }
https://github.com/matplotlib/matplotlib.git
6
tf_shard_checkpoint
def tf_shard_checkpoint(weights, max_shard_size="10GB"): max_shard_size = convert_file_size_to_int(max_shard_size) sharded_state_dicts = [] current_block = [] current_block_size = 0 total_size = 0 for item in weights: weight_size = item.numpy().size * dtype_byte_size(item.dtype) # If this weight is going to tip up over the maximal size, we split. if current_block_size + weight_size > max_shard_size: sharded_state_dicts.append(current_block) current_block = [] current_block_size = 0 current_block.append(item) current_block_size += weight_size total_size += weight_size # Add the last block sharded_state_dicts.append(current_block) # If we only have one shard, we return it if len(sharded_state_dicts) == 1: return {TF2_WEIGHTS_NAME: sharded_state_dicts[0]}, None # Otherwise, let's build the index weight_map = {} shards = {} for idx, shard in enumerate(sharded_state_dicts): shard_file = TF2_WEIGHTS_NAME.replace(".h5", f"-{idx+1:05d}-of-{len(sharded_state_dicts):05d}.h5") shards[shard_file] = shard for weight in shard: weight_name = weight.name weight_map[weight_name] = shard_file # Add the metadata metadata = {"total_size": total_size} index = {"metadata": metadata, "weight_map": weight_map} return shards, index
7cced021fa8ddc59f0f77384300760d34545394e
14
modeling_tf_utils.py
324
TF Sharded (#17713) * initial commit * update modeeling tf utils * quality * clean and update args * update * remove potential bug * code quality * update * update max shard * update tests for sharding from pretrained * fix remaining test * make style * h5py if tf available * update and fix test * fix test * style * modified push to hub to support shard for TF * quick fix * update code * merge branch main and style * Apply suggestions from code review Co-authored-by: Joao Gante <[email protected]> Co-authored-by: Patrick von Platen <[email protected]> * update based on reviews * update doc * update and style * Apply suggestions from code review Co-authored-by: Sylvain Gugger <[email protected]> * Update based on reviews * fix typo * style Co-authored-by: Joao Gante <[email protected]> Co-authored-by: Patrick von Platen <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]>
5,748
0
319
181
83
31,457
137
transformers
29
src/transformers/modeling_tf_utils.py
Python
29
{ "docstring": "\n Splits a model state dictionary in sub-checkpoints so that the final size of each sub-checkpoint does not exceed a\n given size.\n\n The sub-checkpoints are determined by iterating through the `state_dict` in the order of its keys, so there is no\n optimization made to make each sub-checkpoint as close as possible to the maximum size passed. For example, if the\n limit is 10GB and we have weights of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB],\n [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].\n\n <Tip warning={true}>\n\n If one of the model's weight is bigger that `max_shard_size`, it will end up in its own sub-checkpoint which will\n have a size greater than `max_shard_size`.\n\n </Tip>\n\n Args:\n weights (`Dict[str, tf.RessourceVariable]`): The list of tf.RessourceVariable of a model to save.\n max_shard_size (`int` or `str`, *optional*, defaults to `\"10GB\"`):\n The maximum size of each sub-checkpoint. If expressed as a string, needs to be digits followed by a unit\n (like `\"5MB\"`).\n ", "language": "en", "n_whitespaces": 231, "n_words": 158, "vocab_size": 105 }
https://github.com/huggingface/transformers.git
6
start
def start(self): if self._build_level.value < FlowBuildLevel.GRAPH.value: self.build(copy_flow=False) # set env only before the Deployment get started if self.args.env: for k, v in self.args.env.items(): os.environ[k] = str(v) for k, v in self: if not v.external: self.enter_context(v) self._wait_until_all_ready() self._build_level = FlowBuildLevel.RUNNING return self
13edc16d806fb5d77a6849551178ccc75937f25f
12
base.py
151
refactor: rename pod to deployment (#4230) * refactor: rename pod to deployment * style: fix overload and cli autocomplete * fix: undo daemon mistake * refactor: leftover cleanup * fix: more test fixes * fix: more fixes * fix: more fixes * fix: more fixes * fix: more tests * fix: fix more tests * refactor: fix more tests * refactor: more tests fixes * refactor: rename pea to pod * refactor: adjust docs * refactor: complete pea renaming * refactor: more fixes * fix: pea_type in k8s yamls * fix: adjust pod args name * refactor: rename peapods parser folder * fix: da init Co-authored-by: Jina Dev Bot <[email protected]>
1,964
0
160
93
34
10,879
41
jina
20
jina/orchestrate/flow/base.py
Python
12
{ "docstring": "Start to run all Deployments in this Flow.\n\n Remember to close the Flow with :meth:`close`.\n\n Note that this method has a timeout of ``timeout_ready`` set in CLI,\n which is inherited all the way from :class:`jina.orchestrate.pods.Pod`\n\n\n .. # noqa: DAR401\n\n :return: this instance\n ", "language": "en", "n_whitespaces": 84, "n_words": 42, "vocab_size": 36 }
https://github.com/jina-ai/jina.git
1
get_sal_struct
def get_sal_struct(company, currency, salary_slip_based_on_timesheet, condition): return frappe.db.sql_list( .format( condition=condition ), { "company": company, "currency": currency, "salary_slip_based_on_timesheet": salary_slip_based_on_timesheet, }, )
494bd9ef78313436f0424b918f200dab8fc7c20b
10
payroll_entry.py
68
style: format code with black
14,378
0
8
43
17
66,913
19
erpnext
9
erpnext/payroll/doctype/payroll_entry/payroll_entry.py
Python
20
{ "docstring": "\n\t\tselect\n\t\t\tname from `tabSalary Structure`\n\t\twhere\n\t\t\tdocstatus = 1 and\n\t\t\tis_active = 'Yes'\n\t\t\tand company = %(company)s\n\t\t\tand currency = %(currency)s and\n\t\t\tifnull(salary_slip_based_on_timesheet,0) = %(salary_slip_based_on_timesheet)s\n\t\t\t{condition}", "language": "en", "n_whitespaces": 17, "n_words": 26, "vocab_size": 19 }
https://github.com/frappe/erpnext.git
2
test_track_task_functions
async def test_track_task_functions(event_loop): hass = ha.HomeAssistant() try: assert hass._track_task hass.async_stop_track_tasks() assert not hass._track_task hass.async_track_tasks() assert hass._track_task finally: await hass.async_stop()
c576a68d336bc91fd82c299d9b3e5dfdc1c14960
11
test_core.py
83
Upgrade pytest-aiohttp (#82475) * Upgrade pytest-aiohttp * Make sure executors, tasks and timers are closed Some test will trigger warnings on garbage collect, these warnings spills over into next test. Some test trigger tasks that raise errors on shutdown, these spill over into next test. This is to mimic older pytest-aiohttp and it's behaviour on test cleanup. Discussions on similar changes for pytest-aiohttp are here: https://github.com/pytest-dev/pytest-asyncio/pull/309 * Replace loop with event_loop * Make sure time is frozen for tests * Make sure the ConditionType is not async /home-assistant/homeassistant/helpers/template.py:2082: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited def wrapper(*args, **kwargs): Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. * Increase litejet press tests with a factor 10 The times are simulated anyway, and we can't stop the normal event from occuring. * Use async handlers for aiohttp tests/components/motioneye/test_camera.py::test_get_still_image_from_camera tests/components/motioneye/test_camera.py::test_get_still_image_from_camera tests/components/motioneye/test_camera.py::test_get_stream_from_camera tests/components/motioneye/test_camera.py::test_get_stream_from_camera tests/components/motioneye/test_camera.py::test_camera_option_stream_url_template tests/components/motioneye/test_camera.py::test_camera_option_stream_url_template /Users/joakim/src/hass/home-assistant/venv/lib/python3.9/site-packages/aiohttp/web_urldispatcher.py:189: DeprecationWarning: Bare functions are deprecated, use async ones warnings.warn( * Switch to freezegun in modbus tests The tests allowed clock to tick in between steps * Make sure skybell object are fully mocked Old tests would trigger attempts to post to could services: ``` DEBUG:aioskybell:HTTP post https://cloud.myskybell.com/api/v3/login/ Request with headers: {'content-type': 'application/json', 'accept': '*/*', 'x-skybell-app-id': 'd2b542c7-a7e4-4e1e-b77d-2b76911c7c46', 'x-skybell-client-id': '1f36a3c0-6dee-4997-a6db-4e1c67338e57'} ``` * Fix sorting that broke after rebase
90,842
0
73
46
15
291,738
19
core
9
tests/test_core.py
Python
10
{ "docstring": "Test function to start/stop track task and initial state.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
https://github.com/home-assistant/core.git
8
async_test_still
async def async_test_still(hass, info) -> tuple[dict[str, str], str | None]: fmt = None if not (url := info.get(CONF_STILL_IMAGE_URL)): return {}, None if not isinstance(url, template_helper.Template) and url: url = cv.template(url) url.hass = hass try: url = url.async_render(parse_result=False) except TemplateError as err: _LOGGER.error("Error parsing template %s: %s", url, err) return {CONF_STILL_IMAGE_URL: "template_error"}, None verify_ssl = info.get(CONF_VERIFY_SSL) auth = generate_auth(info) try: async_client = get_async_client(hass, verify_ssl=verify_ssl)
c1a2be72fc8b76b55cfde1823c5688100e397369
async def async_test_still(hass, info) -> tuple[dict[str, str], str | None]: """Verify that the still image is valid before we create an entity.""" fmt = None if not (url := info.get(CONF_STILL_IMAGE_URL)): return {}, None if not isinstance(url, template_helper.Template) and url: url = cv.template(url) url.hass = hass try: url = url.async_render(parse_result=False) except TemplateError as err: _LOGGER.error("Error parsing template %s: %s", url, err) return {CONF_STILL_IMAGE_URL: "template_error"}, None verify_ssl = info.get(CONF_VERIFY_SSL) auth = generate_auth(info) try: async_client = get_async_client(hass, verify_ssl=verify_ssl)
11
config_flow.py
208
Generic IP Camera configflow 2 (#52360) Co-authored-by: J. Nick Koston <[email protected]>
93,650
1
139
253
50
294,616
63
core
27
homeassistant/components/generic/config_flow.py
Python
40
{ "docstring": "Verify that the still image is valid before we create an entity.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
https://github.com/home-assistant/core.git
2
split_list
def split_list(v, n): k, m = divmod(len(v), n) return (v[i * k + min(i, m) : (i + 1) * k + min(i + 1, m)] for i in range(n))
f60f0e8fc697d98d9f3a4b9ea851329321e64be9
12
captum_ray.py
93
Adds Ray implementation of IntegratedGradientsExplainer that distributes across cluster resources (#2697)
1,457
0
39
61
25
8,560
30
ludwig
10
ludwig/explain/captum_ray.py
Python
3
{ "docstring": "Splits a list into n roughly equal sub-lists.\n\n Source: https://stackoverflow.com/a/2135920\n ", "language": "en", "n_whitespaces": 16, "n_words": 10, "vocab_size": 10 }
https://github.com/ludwig-ai/ludwig.git
1
test_bad_inputs
def test_bad_inputs() -> None: chain = FakeChain() with pytest.raises(ValueError): chain({"foobar": "baz"})
18aeb720126a68201c7e3b5a617139c27c779496
12
test_base.py
56
initial commit
46,487
0
27
28
11
191,349
11
langchain
6
tests/unit_tests/chains/test_base.py
Python
5
{ "docstring": "Test errors are raised if input keys are not found.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 9 }
https://github.com/hwchase17/langchain.git
2
prefix
def prefix(self) -> str: if self.collection: return self.collection.prefix return ''
3eb0485dd92c88cc92152d3656d94492db44b183
9
__init__.py
38
ansible-test - Use more native type hints. (#78435) * ansible-test - Use more native type hints. Simple search and replace to switch from comments to native type hints for return types of functions with no arguments. * ansible-test - Use more native type hints. Conversion of simple single-line function annotation type comments to native type hints. * ansible-test - Use more native type hints. Conversion of single-line function annotation type comments with default values to native type hints. * ansible-test - Use more native type hints. Manual conversion of type annotation comments for functions which have pylint directives.
79,305
0
42
21
9
268,031
10
ansible
4
test/lib/ansible_test/_internal/provider/layout/__init__.py
Python
5
{ "docstring": "Return the collection prefix or an empty string if not a collection.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
https://github.com/ansible/ansible.git
3
object_inspect_mime
def object_inspect_mime(self, oname, detail_level=0, omit_sections=()): with self.builtin_trap: info = self._object_find(oname) if info.found: docformat = sphinxify(self.object_inspect(oname)) if self.sphinxify_docstring else None return self.inspector._get_info( info.obj, oname, info=info, detail_level=detail_level, formatter=docformat, omit_sections=omit_sections, ) else: raise KeyError(oname) #------------------------------------------------------------------------- # Things related to history management #-------------------------------------------------------------------------
d55a692f46402f397ab38e6c4c9fb6423a85b54f
15
interactiveshell.py
139
Update sphinxify usage
52,340
0
269
89
35
208,476
39
ipython
18
IPython/core/interactiveshell.py
Python
20
{ "docstring": "Get object info as a mimebundle of formatted representations.\n\n A mimebundle is a dictionary, keyed by mime-type.\n It must always have the key `'text/plain'`.\n ", "language": "en", "n_whitespaces": 45, "n_words": 24, "vocab_size": 22 }
https://github.com/ipython/ipython.git
4
get_admin_urls_for_registration
def get_admin_urls_for_registration(self): urls = ( re_path( self.url_helper.get_action_url_pattern("index"), self.index_view, name=self.url_helper.get_action_url_name("index"), ), re_path( self.url_helper.get_action_url_pattern("create"), self.create_view, name=self.url_helper.get_action_url_name("create"), ), re_path( self.url_helper.get_action_url_pattern("edit"), self.edit_view, name=self.url_helper.get_action_url_name("edit"), ), re_path( self.url_helper.get_action_url_pattern("delete"), self.delete_view, name=self.url_helper.get_action_url_name("delete"), ), ) if self.inspect_view_enabled: urls = urls + ( re_path( self.url_helper.get_action_url_pattern("inspect"), self.inspect_view, name=self.url_helper.get_action_url_name("inspect"), ), ) if self.history_view_enabled: urls = urls + ( re_path( self.url_helper.get_action_url_pattern("history"), self.history_view, name=self.url_helper.get_action_url_name("history"), ), ) if self.is_pagemodel: urls = urls + ( re_path( self.url_helper.get_action_url_pattern("choose_parent"), self.choose_parent_view, name=self.url_helper.get_action_url_name("choose_parent"), ), ) return urls
d10f15e55806c6944827d801cd9c2d53f5da4186
16
options.py
385
Reformat with black
15,969
0
711
241
35
73,172
67
wagtail
18
wagtail/contrib/modeladmin/options.py
Python
48
{ "docstring": "\n Utilised by Wagtail's 'register_admin_urls' hook to register urls for\n our the views that class offers.\n ", "language": "en", "n_whitespaces": 37, "n_words": 15, "vocab_size": 15 }
https://github.com/wagtail/wagtail.git
1
async_added_to_hass
async def async_added_to_hass(self) -> None: self._table.add_listener(self.async_write_ha_state)
0c767bd0d37a41af37728b1d8b4eae8dceb7e188
8
media_player.py
33
Improve entity type hints [s] (part 1/2) (#77881)
105,262
0
20
18
6
306,478
6
core
5
homeassistant/components/sisyphus/media_player.py
Python
3
{ "docstring": "Add listeners after this object has been initialized.", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
https://github.com/home-assistant/core.git
1
unified_job_template_table
def unified_job_template_table(since, full_path, **kwargs): unified_job_template_query = return _copy_table(table='unified_job_template', query=unified_job_template_query, path=full_path) @register('workflow_job_node_table', '1.0', format='csv', description=_('Data on workflow runs'), expensive=four_hour_slicing)
17756f0e725fb3a87862ac8234a5974c67b0f6e2
@register('workflow_job_node_table', '1.0', format='csv', description=_('Data on workflow runs'), expensive=four_hour_slicing)
10
collectors.py
84
Add job execution environment image to analytics data (#11835) * Add job execution environment image to analytics data * Add EE image to UJT analytics data * Bump the unified job templates table
17,093
1
23
28
18
80,703
18
awx
15
awx/main/analytics/collectors.py
Python
22
{ "docstring": "COPY (SELECT main_unifiedjobtemplate.id,\n main_unifiedjobtemplate.polymorphic_ctype_id,\n django_content_type.model,\n main_executionenvironment.image as execution_environment_image,\n main_unifiedjobtemplate.created,\n main_unifiedjobtemplate.modified,\n main_unifiedjobtemplate.created_by_id,\n main_unifiedjobtemplate.modified_by_id,\n main_unifiedjobtemplate.name,\n main_unifiedjobtemplate.current_job_id,\n main_unifiedjobtemplate.last_job_id,\n main_unifiedjobtemplate.last_job_failed,\n main_unifiedjobtemplate.last_job_run,\n main_unifiedjobtemplate.next_job_run,\n main_unifiedjobtemplate.next_schedule_id,\n main_unifiedjobtemplate.status\n FROM main_unifiedjobtemplate\n LEFT JOIN main_executionenvironment ON main_executionenvironment.id = main_unifiedjobtemplate.execution_environment_id, django_content_type\n WHERE main_unifiedjobtemplate.polymorphic_ctype_id = django_content_type.id\n ORDER BY main_unifiedjobtemplate.id ASC) TO STDOUT WITH CSV HEADER", "language": "en", "n_whitespaces": 650, "n_words": 43, "vocab_size": 42 }
https://github.com/ansible/awx.git
1
get_base_rev_args
def get_base_rev_args(rev): # type: (str) -> List[str] raise NotImplementedError
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
6
versioncontrol.py
17
upd; format
12,572
0
30
8
9
61,433
9
transferlearning
3
.venv/lib/python3.8/site-packages/pip/_internal/vcs/versioncontrol.py
Python
2
{ "docstring": "\n Return the base revision arguments for a vcs command.\n\n Args:\n rev: the name of a revision to install. Cannot be None.\n ", "language": "en", "n_whitespaces": 53, "n_words": 21, "vocab_size": 18 }
https://github.com/jindongwang/transferlearning.git
1
stored_mask
def stored_mask(self) -> np.ndarray: assert self._mask is not None dims = (self.stored_size, self.stored_size, 1) mask = np.frombuffer(decompress(self._mask), dtype="uint8").reshape(dims) logger.trace("stored mask shape: %s", mask.shape) # type: ignore return mask
5e73437be47f2410439a3c6716de96354e6a0c94
13
detected_face.py
104
lib.align updates: - alignments.py - Add typed dicts for imported alignments - Explicitly check for presence of thumb value in alignments dict - linting - detected_face.py - Typing - Linting - Legacy support for pre-aligned face - Update dependencies to new property names
20,657
0
71
64
25
101,237
28
faceswap
15
lib/align/detected_face.py
Python
8
{ "docstring": " :class:`numpy.ndarray`: The mask at the size of :attr:`stored_size` as it is stored\n (i.e. with no blurring/centering applied). ", "language": "en", "n_whitespaces": 25, "n_words": 17, "vocab_size": 17 }
https://github.com/deepfakes/faceswap.git
1
test_smaller_request_deduplicated
def test_smaller_request_deduplicated(self) -> None: req1 = ensureDeferred( self.state_datastore._get_state_for_group_using_inflight_cache( 42, StateFilter.from_types((("test.type", None),)) ) ) self.pump(by=0.1) # This should have gone to the database self.assertEqual(len(self.get_state_group_calls), 1) self.assertFalse(req1.called) req2 = ensureDeferred( self.state_datastore._get_state_for_group_using_inflight_cache( 42, StateFilter.from_types((("test.type", "b"),)) ) ) self.pump(by=0.1) # No more calls should have gone to the database, because the second # request was already in the in-flight cache! self.assertEqual(len(self.get_state_group_calls), 1) self.assertFalse(req1.called) self.assertFalse(req2.called) groups, sf, d = self.get_state_group_calls[0] self.assertEqual(groups, (42,)) # The state filter is expanded internally for increased cache hit rate, # so we the database sees a wider state filter than requested. self.assertEqual(sf, ALL_NON_MEMBERS_STATE_FILTER) # Now we can complete the request self._complete_request_fake(groups, sf, d) self.assertEqual( self.get_success(req1), {("test.type", "a"): "AAA", ("test.type", "b"): "BBB"}, ) self.assertEqual(self.get_success(req2), {("test.type", "b"): "BBB"})
546b9c9e648f5e2b25bb7c8350570787ff9befae
15
test_state_store.py
363
Add more tests for in-flight state query duplication. (#12033)
71,179
0
387
224
80
246,367
116
synapse
22
tests/storage/databases/test_state_store.py
Python
37
{ "docstring": "\n Tests that duplicate requests for state are deduplicated.\n\n This test:\n - requests some state (state group 42, 'all' state filter)\n - requests a subset of that state, before the first request finishes\n - checks to see that only one database query was made\n - completes the database query\n - checks that both requests see the correct retrieved state\n ", "language": "en", "n_whitespaces": 115, "n_words": 58, "vocab_size": 39 }
https://github.com/matrix-org/synapse.git
5
testSuccessiveHalving
def testSuccessiveHalving(self): stats = self.default_statistics() sched, mock_runner = self.schedulerSetup(stats["max_trials"]) big_bracket = sched._state["bracket"] cur_units = stats[str(stats["s_max"])]["r"] # The last bracket will downscale 4 times for x in range(stats["brack_count"] - 1): trials = big_bracket.current_trials() current_length = len(trials) for trl in trials: mock_runner._launch_trial(trl) # Provides results from 0 to 8 in order, keeping last one running for i, trl in enumerate(trials): action = sched.on_trial_result(mock_runner, trl, result(cur_units, i)) if i < current_length - 1: self.assertEqual(action, TrialScheduler.PAUSE) mock_runner.process_action(trl, action) self.assertEqual(action, TrialScheduler.CONTINUE) new_length = len(big_bracket.current_trials()) self.assertEqual(new_length, self.downscale(current_length, sched)) cur_units = int(cur_units * sched._eta) self.assertEqual(len(big_bracket.current_trials()), 1)
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
14
test_trial_scheduler.py
320
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
29,800
0
323
198
69
132,749
89
ray
33
python/ray/tune/tests/test_trial_scheduler.py
Python
20
{ "docstring": "Setup full band, then iterate through last bracket (n=81)\n to make sure successive halving is correct.", "language": "en", "n_whitespaces": 22, "n_words": 16, "vocab_size": 16 }
https://github.com/ray-project/ray.git
1
test_authorization_header
def test_authorization_header(self) -> None: # test a "normal" Authorization header self.assertEqual( _parse_auth_header( b'X-Matrix origin=foo,key="ed25519:1",sig="sig",destination="bar"' ), ("foo", "ed25519:1", "sig", "bar"), ) # test an Authorization with extra spaces, upper-case names, and escaped # characters self.assertEqual( _parse_auth_header( b'X-Matrix ORIGIN=foo,KEY="ed25\\519:1",SIG="sig",destination="bar"' ), ("foo", "ed25519:1", "sig", "bar"), ) self.assertEqual( _parse_auth_header( b'X-Matrix origin=foo,key="ed25519:1",sig="sig",destination="bar",extra_field=ignored' ), ("foo", "ed25519:1", "sig", "bar"), )
8afb7b55d0527f8c6af7690b162ebaabe9b5d9f5
10
test__base.py
131
Make handling of federation Authorization header (more) compliant with RFC7230 (#12774) The main differences are: - values with delimiters (such as colons) should be quoted, so always quote the origin, since it could contain a colon followed by a port number - should allow more than one space after "X-Matrix" - quoted values with backslash-escaped characters should be unescaped - names should be case insensitive
72,218
0
268
71
31
248,327
53
synapse
4
tests/federation/transport/server/test__base.py
Python
20
{ "docstring": "Tests that the Authorization header is parsed correctly.", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
https://github.com/matrix-org/synapse.git
1
find_free_port
def find_free_port(self) -> int: from ray.air._internal.util import find_free_port return find_free_port()
862d10c162421706f77f73428429379a8b22fc38
7
rollout_worker.py
36
[AIR] Remove ML code from `ray.util` (#27005) Removes all ML related code from `ray.util` Removes: - `ray.util.xgboost` - `ray.util.lightgbm` - `ray.util.horovod` - `ray.util.ray_lightning` Moves `ray.util.ml_utils` to other locations Closes #23900 Signed-off-by: Amog Kamsetty <[email protected]> Signed-off-by: Kai Fricke <[email protected]> Co-authored-by: Kai Fricke <[email protected]>
28,045
0
31
22
10
126,028
10
ray
7
rllib/evaluation/rollout_worker.py
Python
4
{ "docstring": "Finds a free port on the node that this worker runs on.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
https://github.com/ray-project/ray.git
2
test_relative_json
def test_relative_json(self): # this should work regardless of where th current working directory is. with tempfile.TemporaryDirectory() as tmp_dir: cwdir = os.getcwd() os.chdir(tmp_dir) unzipped_paths = _unzip_if_needed( [str(Path(self.relative_path) / "large.json")], "json" ) self.assertEqual( os.path.realpath(str(Path(unzipped_paths[0]).absolute())), os.path.realpath( str( Path(__file__).parent.parent.parent / self.relative_path / "large.json" ) ), ) assert all([Path(fpath).exists() for fpath in unzipped_paths]) os.chdir(cwdir)
569fe0109629048d08e1d9e023f7769f10bd2244
20
test_dataset_reader.py
213
[RLlib] improved unittests for dataset_reader and fixed bugs (#26458)
27,740
0
325
126
44
125,000
49
ray
23
rllib/offline/tests/test_dataset_reader.py
Python
19
{ "docstring": "Tests whether the unzip_if_needed function works correctly on relative json\n files", "language": "en", "n_whitespaces": 17, "n_words": 11, "vocab_size": 11 }
https://github.com/ray-project/ray.git
3
pub_connect
def pub_connect(self): if self.pub_sock: self.pub_close() ctx = zmq.Context.instance() self._sock_data.sock = ctx.socket(zmq.PUSH) self.pub_sock.setsockopt(zmq.LINGER, -1) if self.opts.get("ipc_mode", "") == "tcp": pull_uri = "tcp://127.0.0.1:{}".format( self.opts.get("tcp_master_publish_pull", 4514) ) else: pull_uri = "ipc://{}".format( os.path.join(self.opts["sock_dir"], "publish_pull.ipc") ) log.debug("Connecting to pub server: %s", pull_uri) self.pub_sock.connect(pull_uri) return self._sock_data.sock
d4e6111086ff713eb6609dc6c98cec98aded2564
15
zeromq.py
223
Refactor into transports and channels
53,906
0
195
129
33
215,281
40
salt
24
salt/transport/zeromq.py
Python
17
{ "docstring": "\n Create and connect this thread's zmq socket. If a publisher socket\n already exists \"pub_close\" is called before creating and connecting a\n new socket.\n ", "language": "en", "n_whitespaces": 52, "n_words": 23, "vocab_size": 20 }
https://github.com/saltstack/salt.git
2
x_forwarded_ip
def x_forwarded_ip(request): ip_address_list = request.headers.get('X-Forwarded-For') if ip_address_list: ip_address_list = ip_address_list.split(',') return ip_address_list[0]
9e8eb17497edf8f40d68f7dcb53b1b2c3576313c
11
request_ip_resolvers.py
58
Feat/django4 support (#7268) * feat: update requirements for the deps to install for django 4 * fix: django utils http deprecations * fix: signals deprecation from Django * fix: lang key deprecation for session * wip: middleware deprecation fixes * feat: add django 4 to the ci mix * fix: 3.6 is deprecated * fix: use the same name as existing convention * wip: fix the toolbar * fix: more tests * fix: issue with thread local in Django Django uses thread-locals internally to track the currently active language for the request. Python implements thread-local data through the threading.local class, but as of Django 3.x, multiple requests can be handled in a single thread and so thread-locals will no longer be unique to a single request. Django therefore provides asgiref.Local as a drop-in replacement. Authored-by: Vinit Kumar <[email protected]> Signed-off-by: Vinit Kumar <[email protected]> * fix: add correct version of package deps * revert: old style middlewares * fix: current user middleware issues * fix: django 4.0 is 3.8+ only * fix: issue with middlewares upgrade to the new convention * fix: port the middleware to new convention Authored-by: Vinit Kumar <[email protected]> Signed-off-by: Vinit Kumar <[email protected]> Co-authored-by: Mark Walker <[email protected]> * fix: isort linting issues * feat: port the middleware to the new format * Move django upper limit from 4 to 5 Co-authored-by: Mark Walker <[email protected]> Co-authored-by: Mark Walker <[email protected]>
17,350
0
35
32
10
82,330
12
django-cms
6
cms/utils/request_ip_resolvers.py
Python
5
{ "docstring": "\n Returns the IP Address contained in the 'HTTP_X_FORWARDED_FOR' header, if\n present. Otherwise, `None`.\n\n Should handle properly configured proxy servers.\n ", "language": "en", "n_whitespaces": 32, "n_words": 19, "vocab_size": 18 }
https://github.com/django-cms/django-cms.git
2
test_basic
def test_basic(self, ray_start_regular_shared): # "simple" contains three 32x32 RGB images. ds = ray.data.read_images("example://image-datasets/simple") assert ds.schema().names == [TENSOR_COLUMN_NAME] column_type = ds.schema().types[0] assert isinstance(column_type, ArrowTensorType) assert all(array.shape == (32, 32, 3) for array in ds.take())
fe3c2294f08fd27867d77d0e3dc3ebfeba0d6d05
10
test_dataset_image.py
115
[AIR - Datasets] Add experimental `read_images` (#29177) Users can't discover ImageFolderDatasource. This PR adds a more-discoverable way to read images. Signed-off-by: Balaji Veeramani <[email protected]> Co-authored-by: Balaji Veeramani <[email protected]>
28,818
0
82
72
29
128,822
33
ray
18
python/ray/data/tests/test_dataset_image.py
Python
6
{ "docstring": "Test basic `read_images` functionality.\n The folder \"simple\" contains three 32x32 RGB images.\n ", "language": "en", "n_whitespaces": 26, "n_words": 12, "vocab_size": 12 }
https://github.com/ray-project/ray.git
1
inplace_increment
def inplace_increment(x, val, f=None): return _cur_framework(x, f=f).inplace_increment(x, val)
ec8341197ccdd240a346a95c2a434e5ef9f9ef72
10
general.py
43
moved all inplace methods from gradients submodule to general submodule, as inplace ops are also relevant for non-Variable tensors.
53,647
0
14
28
8
213,228
8
ivy
5
ivy/core/general.py
Python
2
{ "docstring": "\n Perform in-place increment for the input variable.\n\n :param x: The variable to increment.\n :type x: variable\n :param val: The array to increment the variable with.\n :type val: array\n :param f: Machine learning framework. Inferred from inputs if None.\n :type f: ml_framework, optional\n :return: The variable following the in-place increment.\n ", "language": "en", "n_whitespaces": 77, "n_words": 49, "vocab_size": 30 }
https://github.com/unifyai/ivy.git
4
func_dump
def func_dump(func): if os.name == "nt": raw_code = marshal.dumps(func.__code__).replace(b"\\", b"/") code = codecs.encode(raw_code, "base64").decode("ascii") else: raw_code = marshal.dumps(func.__code__) code = codecs.encode(raw_code, "base64").decode("ascii") defaults = func.__defaults__ if func.__closure__: closure = tuple(c.cell_contents for c in func.__closure__) else: closure = None return code, defaults, closure
84afc5193d38057e2e2badf9c889ea87d80d8fbf
14
generic_utils.py
185
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
81,751
0
105
109
28
276,840
42
keras
20
keras/utils/generic_utils.py
Python
13
{ "docstring": "Serializes a user defined function.\n\n Args:\n func: the function to serialize.\n\n Returns:\n A tuple `(code, defaults, closure)`.\n ", "language": "en", "n_whitespaces": 40, "n_words": 17, "vocab_size": 17 }
https://github.com/keras-team/keras.git
8
_set_state
def _set_state(self, state): for child_attr, child_obj in self.__dict__.items(): # TODO(rchao): Retrieve non-variable states from the dict as well. # TODO(rchao): Give a warning for mismatches. if isinstance(child_obj, tf.Variable): child_obj.assign(state[child_attr]) elif saving_lib.is_container(child_obj): for k, contained_obj in enumerate(child_obj): if isinstance(contained_obj, tf.Variable): # Handling the case where `child_obj` is a list/tuple. contained_obj.assign(state[f"{child_attr}-{k}"]) elif isinstance(child_obj, dict) and isinstance( child_obj[contained_obj], tf.Variable ): # Handling the case where `child_obj` is a dict. child_obj[contained_obj].assign( state[f"{child_attr}-{contained_obj}"] )
ba5086fa31d24a9f61b46d4a844311b58dea7ff1
20
base_layer.py
192
Keras saving: A prototype of config-based (idempotent) saving and loading, with simple model state restoration added. It's done via the archive provided by `zipfile` package. Preliminary for review and the APIs and implementation are subject to changes. PiperOrigin-RevId: 470784761
83,088
0
379
111
49
279,673
69
keras
17
keras/engine/base_layer.py
Python
14
{ "docstring": "Experimental method for setting the state of this layer object.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
https://github.com/keras-team/keras.git
1
test_deepspeed_multigpu_single_file
def test_deepspeed_multigpu_single_file(tmpdir): model = BoringModel() checkpoint_path = os.path.join(tmpdir, "model.pt") trainer = Trainer(default_root_dir=tmpdir, fast_dev_run=True) trainer.fit(model) trainer.save_checkpoint(checkpoint_path) trainer = Trainer( default_root_dir=tmpdir, strategy=DeepSpeedStrategy(stage=3), gpus=1, fast_dev_run=True, precision=16 ) strategy = trainer.strategy assert isinstance(strategy, DeepSpeedStrategy) assert not strategy.load_full_weights with pytest.raises(MisconfigurationException, match="DeepSpeed was unable to load the checkpoint."): trainer.test(model, ckpt_path=checkpoint_path) trainer = Trainer( default_root_dir=tmpdir, strategy=DeepSpeedStrategy(stage=3, load_full_weights=True), gpus=1, fast_dev_run=True, precision=16, ) strategy = trainer.strategy assert isinstance(strategy, DeepSpeedStrategy) assert strategy.load_full_weights trainer.test(model, ckpt_path=checkpoint_path)
650c710efacd633fa283955145342bb64063c883
12
test_deepspeed_strategy.py
270
Rename training plugin test files & names to strategy (#11303)
69,594
0
167
175
41
241,567
64
lightning
27
tests/strategies/test_deepspeed_strategy.py
Python
25
{ "docstring": "Test to ensure that DeepSpeed loads from a single file checkpoint.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
https://github.com/Lightning-AI/lightning.git
7
get_model_modules
def get_model_modules(): _ignore_modules = [ "modeling_auto", "modeling_encoder_decoder", "modeling_marian", "modeling_mmbt", "modeling_outputs", "modeling_retribert", "modeling_utils", "modeling_flax_auto", "modeling_flax_encoder_decoder", "modeling_flax_utils", "modeling_speech_encoder_decoder", "modeling_flax_vision_encoder_decoder", "modeling_transfo_xl_utilities", "modeling_tf_auto", "modeling_tf_encoder_decoder", "modeling_tf_outputs", "modeling_tf_pytorch_utils", "modeling_tf_utils", "modeling_tf_transfo_xl_utilities", "modeling_tf_vision_encoder_decoder", "modeling_vision_encoder_decoder", ] modules = [] for model in dir(transformers.models): # There are some magic dunder attributes in the dir, we ignore them if not model.startswith("__"): model_module = getattr(transformers.models, model) for submodule in dir(model_module): if submodule.startswith("modeling") and submodule not in _ignore_modules: modeling_module = getattr(model_module, submodule) if inspect.ismodule(modeling_module): modules.append(modeling_module) return modules
b67fd797bec56b59e1cd3ad54fa2783f7d7b7cbc
17
check_repo.py
230
Add TFVisionEncoderDecoderModel (#14148) * Start the work on TFVisionEncoderDecoderModel * Expose TFVisionEncoderDecoderModel * fix import * Add modeling_tf_vision_encoder_decoder to _ignore_modules in get_model_modules() * reorder * Apply the fix for checkpoint loading as in #14016 * remove attention_mask + fix VISION_DUMMY_INPUTS * A minimal change to make TF generate() work for vision models as encoder in encoder-decoder setting * fix wrong condition: shape_list(input_ids) == 2 * add tests * use personal TFViTModel checkpoint (for now) * Add equivalence tests + projection layer * style * make sure projection layer can run * Add examples * Apply suggestions from code review Co-authored-by: Sylvain Gugger <[email protected]> * Clean comments (need to work on TODOs for PyTorch models) * Remove TF -> PT in check_pt_tf_equivalence for TFVisionEncoderDecoderModel * fixes * Revert changes in PT code. * Update tests/test_modeling_tf_vision_encoder_decoder.py Co-authored-by: Patrick von Platen <[email protected]> * Add test_inference_coco_en for TF test * fix quality * fix name * build doc * add main_input_name * Fix ckpt name in test * fix diff between master and this PR * fix doc * fix style and quality * fix missing doc * fix labels handling * Delete auto.rst * Add the changes done in #14016 * fix prefix * Apply suggestions from code review Co-authored-by: Sylvain Gugger <[email protected]> * make style Co-authored-by: ydshieh <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]> Co-authored-by: Patrick von Platen <[email protected]>
6,182
0
351
129
62
33,995
74
transformers
15
utils/check_repo.py
Python
34
{ "docstring": "Get the model modules inside the transformers library.", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 7 }
https://github.com/huggingface/transformers.git
3
_stop_wallet
async def _stop_wallet(self): if self.service is not None: self.service._close() peers_close_task: Optional[asyncio.Task] = await self.service._await_closed() if peers_close_task is not None: await peers_close_task ########################################################################################## # Key management ##########################################################################################
89f15f591cc3cc3e8ae40e95ffc802f7f2561ece
12
wallet_rpc_api.py
81
Merge standalone wallet into main (#9793) * wallet changes from pac * cat changes * pool tests * pooling tests passing * offers * lint * mempool_mode * black * linting * workflow files * flake8 * more cleanup * renamed * remove obsolete test, don't cast announcement * memos are not only bytes32 * trade renames * fix rpcs, block_record * wallet rpc, recompile settlement clvm * key derivation * clvm tests * lgtm issues and wallet peers * stash * rename * mypy linting * flake8 * bad initializer * flaky tests * Make CAT wallets only create on verified hints (#9651) * fix clvm tests * return to log lvl warn * check puzzle unhardened * public key, not bytes. api caching change * precommit changes * remove unused import * mypy ci file, tests * ensure balance before creating a tx * Remove CAT logic from full node test (#9741) * Add confirmations and sleeps for wallet (#9742) * use pool executor * rever merge mistakes/cleanup * Fix trade test flakiness (#9751) * remove precommit * older version of black * lint only in super linter * Make announcements in RPC be objects instead of bytes (#9752) * Make announcements in RPC be objects instead of bytes * Lint * misc hint'ish cleanup (#9753) * misc hint'ish cleanup * unremove some ci bits * Use main cached_bls.py * Fix bad merge in main_pac (#9774) * Fix bad merge at 71da0487b9cd5564453ec24b76f1ac773c272b75 * Remove unused ignores * more unused ignores * Fix bad merge at 3b143e705057d6c14e2fb3e00078aceff0552d7e * One more byte32.from_hexstr * Remove obsolete test * remove commented out * remove duplicate payment object * remove long sync * remove unused test, noise * memos type * bytes32 * make it clear it's a single state at a time * copy over asset ids from pacr * file endl linter * Update chia/server/ws_connection.py Co-authored-by: dustinface <[email protected]> Co-authored-by: Matt Hauff <[email protected]> Co-authored-by: Kyle Altendorf <[email protected]> Co-authored-by: dustinface <[email protected]>
21,556
0
97
46
19
102,613
26
chia-blockchain
9
chia/rpc/wallet_rpc_api.py
Python
6
{ "docstring": "\n Stops a currently running wallet/key, which allows starting the wallet with a new key.\n Each key has it's own wallet database.\n ", "language": "en", "n_whitespaces": 43, "n_words": 21, "vocab_size": 19 }
https://github.com/Chia-Network/chia-blockchain.git
1
should_autoscale
def should_autoscale(self) -> bool: return self._target_state.info.autoscaling_policy is not None
1fd2913abdcf376edd148692bfeb5962a6e1c478
9
deployment_state.py
32
[serve] Refactor checkpointing to write ahead target state (#26797)
27,880
0
23
19
9
125,454
9
ray
6
python/ray/serve/deployment_state.py
Python
5
{ "docstring": "\n Check if the deployment is under autoscaling\n ", "language": "en", "n_whitespaces": 22, "n_words": 7, "vocab_size": 7 }
https://github.com/ray-project/ray.git
9
assign_proto
def assign_proto(proto, name, val): is_repeated_field = hasattr(getattr(proto, name), 'extend') if is_repeated_field and not isinstance(val, list): val = [val] if isinstance(val, list): if isinstance(val[0], dict): for item in val: proto_item = getattr(proto, name).add() for k, v in six.iteritems(item): assign_proto(proto_item, k, v) else: getattr(proto, name).extend(val) elif isinstance(val, dict): for k, v in six.iteritems(val): assign_proto(getattr(proto, name), k, v) else: setattr(proto, name, val)
cc4d0564756ca067516f71718a3d135996525909
16
net_spec.py
230
Balanced joint maximum mean discrepancy for deep transfer learning
12,059
0
194
151
37
60,271
59
transferlearning
19
code/deep/BJMMD/caffe/python/caffe/net_spec.py
Python
17
{ "docstring": "Assign a Python object to a protobuf message, based on the Python\n type (in recursive fashion). Lists become repeated fields/messages, dicts\n become messages, and other types are assigned directly. For convenience,\n repeated fields whose values are not lists are converted to single-element\n lists; e.g., `my_repeated_int_field=3` is converted to\n `my_repeated_int_field=[3]`.", "language": "en", "n_whitespaces": 63, "n_words": 49, "vocab_size": 40 }
https://github.com/jindongwang/transferlearning.git
1
readlines
def readlines(self, sizehint=None, keepends=True): data = self.read() return data.splitlines(keepends)
8198943edd73a363c266633e1aa5b2a9e9c9f526
8
codecs.py
46
add python 3.10.4 for windows
56,380
0
30
28
9
221,366
9
XX-Net
7
python3.10.4/Lib/codecs.py
Python
3
{ "docstring": " Read all lines available on the input stream\n and return them as a list.\n\n Line breaks are implemented using the codec's decoder\n method and are included in the list entries.\n\n sizehint, if given, is ignored since there is no efficient\n way to finding the true end-of-line.\n\n ", "language": "en", "n_whitespaces": 109, "n_words": 46, "vocab_size": 40 }
https://github.com/XX-net/XX-Net.git
3
arc_cosine
def arc_cosine(value, default=_SENTINEL): try: return math.acos(float(value)) except (ValueError, TypeError): if default is _SENTINEL: raise_no_default("acos", value) return default
4885331509eeffe50f42d76b234996467b06170f
13
template.py
70
Fail template functions when no default specified (#71687)
99,468
0
58
42
15
300,608
17
core
10
homeassistant/helpers/template.py
Python
7
{ "docstring": "Filter and function to get arc cosine of the value.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
https://github.com/home-assistant/core.git
1
heawood_graph
def heawood_graph(create_using=None): G = LCF_graph(14, [5, -5], 7, create_using) G.name = "Heawood Graph" return G
dec723f072eb997a497a159dbe8674cd39999ee9
10
small.py
52
Docstrings for the small.py module (#5240) * added description for the first 5 small graphs * modified descriptions based on comment and added description for two more functions * added doctrings to all the functions * Minor touchups. Co-authored-by: Ross Barnowski <[email protected]>
41,728
0
27
32
13
176,158
15
networkx
5
networkx/generators/small.py
Python
4
{ "docstring": "\n Returns the Heawood Graph, a (3,6) cage.\n\n The Heawood Graph is an undirected graph with 14 nodes and 21 edges,\n named after Percy John Heawood [1]_.\n It is cubic symmetric, nonplanar, Hamiltonian, and can be represented\n in LCF notation as ``[5,-5]^7`` [2]_.\n It is the unique (3,6)-cage: the regular cubic graph of girth 6 with\n minimal number of vertices [3]_.\n\n Parameters\n ----------\n create_using : NetworkX graph constructor, optional (default=nx.Graph)\n Graph type to create. If graph instance, then cleared before populated.\n\n Returns\n -------\n G : networkx Graph\n Heawood Graph with 14 nodes and 21 edges\n\n References\n ----------\n .. [1] https://en.wikipedia.org/wiki/Heawood_graph\n .. [2] https://mathworld.wolfram.com/HeawoodGraph.html\n .. [3] https://www.win.tue.nl/~aeb/graphs/Heawood.html\n\n ", "language": "en", "n_whitespaces": 176, "n_words": 105, "vocab_size": 77 }
https://github.com/networkx/networkx.git
4
testRelativeLogdirWithNestedDir
def testRelativeLogdirWithNestedDir(self): local_dir_path = Path("/tmp/test_rel") if local_dir_path.exists(): local_dir = tempfile.mkdtemp(prefix=str(local_dir_path) + "_") else: local_dir = str(local_dir_path) tune.run( "rel_logdir", config={"a": tune.randint(0, 10)}, local_dir=local_dir, # Create a nested experiment directory. name="exp_dir/deep_exp_dir", ) # Copy the folder local_dir_moved = local_dir + "_moved" shutil.copytree(local_dir + "/exp_dir", local_dir_moved) # Load the trials. with self.assertRaises(ValueError): analysis = tune.ExperimentAnalysis(local_dir) # Using the subdir should work, however. analysis = tune.ExperimentAnalysis(local_dir + "/exp_dir") analysis_moved = tune.ExperimentAnalysis(local_dir_moved) configs = analysis.get_all_configs() configs_moved = analysis_moved.get_all_configs() config = configs[next(iter(configs))] config_moved = configs_moved[next(iter(configs_moved))] # Check, if the trial attributes can be loaded. self.assertEqual(len(configs), 1) self.assertEqual(len(configs_moved), 1) # Check, if the two configs are equal. self.assertDictEqual(config, config_moved) metric = "metric1" mode = "max" analysis_df = analysis.dataframe(metric, mode) analysis_moved_df = analysis_moved.dataframe(metric, mode) self.assertEqual(analysis_df.shape[0], 1) self.assertEqual(analysis_moved_df.shape[0], 1) # Drop the `logdir` column as this should be different # between the two trials. analysis_df.drop(columns="logdir", inplace=True) analysis_moved_df.drop(columns="logdir", inplace=True) self.assertEqual(analysis_df, analysis_moved_df) # Remove the files and directories. if shutil.rmtree.avoids_symlink_attacks: if local_dir_path.exists(): shutil.rmtree(local_dir) shutil.rmtree(local_dir_moved)
2a5d322e705df080e9254c9c9a3e187c1ea41c4e
14
test_trial_relative_logdir.py
505
[tune] Relative logdir paths in trials for ExperimentAnalysis in remote buckets (#25063) When running an experiment for example in the cloud and syncing to a bucket the logdir path in the trials will be changed when working with the checkpoints in the bucket. There are some workarounds, but the easier solution is to also add a rel_logdir containing the relative path to the trials/checkpoints that can handle any changes in the location of experiment results. As discussed with @Yard1 and @krfricke Co-authored-by: Antoni Baum <[email protected]> Co-authored-by: Kai Fricke <[email protected]>
32,294
0
530
299
102
141,206
153
ray
43
python/ray/tune/tests/test_trial_relative_logdir.py
Python
38
{ "docstring": "Using a nested directory for experiment name.\"\n\n This should raise an error as nested structures are not\n supported. It should work, however, to provide a nested\n path to the `ExperimentAnalysis` class or relocate the\n folder out of the nested structure.\n ", "language": "en", "n_whitespaces": 75, "n_words": 40, "vocab_size": 32 }
https://github.com/ray-project/ray.git
11
test_add_remove_actors
def test_add_remove_actors(self): workers = [] manager = AsyncRequestsManager( workers, max_remote_requests_in_flight_per_worker=2 ) if not ( ( len(manager._all_workers) == len(manager._remote_requests_in_flight) == len(manager._pending_to_actor) == len(manager._pending_remotes) == 0 ) ): raise ValueError("We should have no workers in this case.") assert not manager.call(lambda w: w.task()), ( "Task shouldn't have been " "launched since there are no " "workers in the manager." ) worker = RemoteRLlibActor.remote(sleep_time=0.1) manager.add_workers(worker) manager.call(lambda w: w.task()) if not ( len(manager._remote_requests_in_flight[worker]) == len(manager._pending_to_actor) == len(manager._all_workers) == len(manager._pending_remotes) == 1 ): raise ValueError("We should have 1 worker and 1 pending request") time.sleep(3) manager.get_ready() # test worker removal for i in range(2): manager.call(lambda w: w.task()) assert len(manager._pending_remotes) == i + 1 manager.remove_workers(worker) if not ((len(manager._all_workers) == 0)): raise ValueError("We should have no workers that we can schedule tasks to") if not ( (len(manager._pending_remotes) == 2 and len(manager._pending_to_actor) == 2) ): raise ValueError( "We should still have 2 pending requests in flight from the worker" ) time.sleep(3) result = manager.get_ready() if not ( len(result) == 1 and len(result[worker]) == 2 and len(manager._pending_remotes) == 0 and len(manager._pending_to_actor) == 0 ): raise ValueError( "We should have 2 ready results from the worker and no pending requests" )
eaed256d6863c529b8ada42514f7fba12d146f22
14
test_async_requests_manager.py
507
[RLlib] Async parallel execution manager. (#24423)
31,921
0
744
310
86
140,302
189
ray
27
rllib/execution/tests/test_async_requests_manager.py
Python
56
{ "docstring": "Tests that the async manager can properly add and remove actors", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
https://github.com/ray-project/ray.git
1
get_config
def get_config(self): json_word_counts = json.dumps(self.word_counts) json_word_docs = json.dumps(self.word_docs) json_index_docs = json.dumps(self.index_docs) json_word_index = json.dumps(self.word_index) json_index_word = json.dumps(self.index_word) return { "num_words": self.num_words, "filters": self.filters, "lower": self.lower, "split": self.split, "char_level": self.char_level, "oov_token": self.oov_token, "document_count": self.document_count, "word_counts": json_word_counts, "word_docs": json_word_docs, "index_docs": json_index_docs, "index_word": json_index_word, "word_index": json_word_index, }
84afc5193d38057e2e2badf9c889ea87d80d8fbf
9
text.py
203
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
81,462
0
232
121
40
275,783
44
keras
21
keras/preprocessing/text.py
Python
20
{ "docstring": "Returns the tokenizer configuration as Python dictionary.\n\n The word count dictionaries used by the tokenizer get serialized\n into plain JSON, so that the configuration can be read by other\n projects.\n\n Returns:\n A Python dictionary with the tokenizer configuration.\n ", "language": "en", "n_whitespaces": 84, "n_words": 38, "vocab_size": 30 }
https://github.com/keras-team/keras.git
1
send_direct
async def send_direct(self, channel, data): # We iterate over the channels to get reference to the websocket object # so we can disconnect incase of failure await channel.send(data)
6f5478cc029bc146e3980affa61dd7956c5cb416
8
channel.py
32
DataFrame transmission, strategy follower logic
34,784
0
56
17
25
150,520
28
freqtrade
5
freqtrade/rpc/replicate/channel.py
Python
2
{ "docstring": "\n Send data directly through direct_channel only\n\n :param direct_channel: The WebSocketChannel object to send data through\n :param data: The data to send\n ", "language": "en", "n_whitespaces": 50, "n_words": 21, "vocab_size": 14 }
https://github.com/freqtrade/freqtrade.git
2
align_arcface
def align_arcface(image, landmarks): image_size = (112, 112) dst = np.array([ [30.2946, 51.6963], [65.5318, 51.5014], [48.0252, 71.7366], [33.5493, 92.3655], [62.7299, 92.2041]], dtype=np.float32) if image_size[1] == 112: dst[:, 0] += 8.0 # dst = dst[:, ::-1] landmark5 = tf.stack([(landmarks[:, 36] + landmarks[:, 39]) / 2, (landmarks[:, 42] + landmarks[:, 45]) / 2, landmarks[:, 30], landmarks[:, 48], landmarks[:, 54]], 1)
7375ee364e0df2a417f92593e09557f1b2a3575a
14
arcface_handler.py
200
initialize ostec
1,576
0
217
284
45
9,219
57
insightface
12
reconstruction/ostec/core/arcface_handler.py
Python
20
{ "docstring": "\n Aligns 'image' with its corresponding 'landmarks' to a predefined template\n with similarity transformation. This is the tensorflow implementation of\n the default alignment procedure of ArcFace\n Args:\n image: a 4D float32 numpy array with shape [batch_size, image_height,\n image_width, 3].\n landmarks: 68 iBug landmark points of 'image'. [batch_size, 68, 2]\n\n Returns:\n 4-D float32 numpy array with shape [batch_size, 112, 112, 3]. Contains\n aligned version of 'image'\n\n ", "language": "en", "n_whitespaces": 118, "n_words": 64, "vocab_size": 47 }
https://github.com/deepinsight/insightface.git